A Log System Based on FlumeNG + Kafka + ElasticSearch + Kibana


Environment Preparation

1. Server Overview

hostname      | IP            | OS       | role   | installed components
node1.fek     | 192.168.2.161 | CentOS 7 | node1  | nginx, jdk1.8, FlumeNG, elasticsearch slave1
node2.fek     | 192.168.2.162 | CentOS 7 | node2  | jdk1.8, elasticsearch slave2
master.fek    | 192.168.2.163 | CentOS 7 | master | jdk1.8, elasticsearch master, kibana
master.kafka  | 192.168.2.151 | CentOS 7 | master | jdk1.8, kafka master, zookeeper
worker1.kafka | 192.168.2.152 | CentOS 7 | worker | jdk1.8, kafka worker, zookeeper
worker2.kafka | 192.168.2.153 | CentOS 7 | worker | jdk1.8, kafka worker, zookeeper

2. Server Environment Setup

Install JDK 1.8 on all three fek servers (omitted here; install it yourself).

Install nginx on node1.fek to simulate a web container whose logs need to be collected (omitted here; install it yourself).

Run the following on all three fek servers:

# Add entries to /etc/hosts
192.168.2.161 node1.fek
192.168.2.162 node2.fek
192.168.2.163 master.fek
192.168.2.151 master.kafka
192.168.2.152 worker1.kafka
192.168.2.153 worker2.kafka

# Stop and disable the firewall
[root@node1 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@node1 ~]# setenforce 0

# Set SELINUX to disabled so the change survives reboots
[root@node1 ~]# vim /etc/selinux/config
SELINUX=disabled

# Raise the process and file-descriptor limits
sudo vim /etc/security/limits.conf
*               soft    nproc           65536
*               hard    nproc           65536
*               soft    nofile          65536
*               hard    nofile          65536

sudo vim /etc/sysctl.conf
vm.max_map_count = 262144

sudo sysctl -p

# Reboot the server
[root@node1 ~]# reboot
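After the reboot, a quick sanity check confirms the new limits are in effect (a minimal sketch; the expected values follow the settings above):

ulimit -n                  # should print 65536
sysctl vm.max_map_count    # should print vm.max_map_count = 262144
getenforce                 # should print Disabled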

Elasticsearch Cluster Installation

Note: Elasticsearch refuses to run as root, so it must be installed and run as a non-root user; here we use the account user.

mkdir -p /opt/elasticsearch
chown user:user /opt/elasticsearch
su user

Download

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.0.tar.gz
tar -zxvf elasticsearch-6.3.0.tar.gz
cd elasticsearch-6.3.0/config
mkdir -p /opt/elasticsearch/data
mkdir -p /opt/elasticsearch/logs

Edit the configuration file elasticsearch.yml:

cluster.name: fek-cluster                    # cluster name
node.name: els1                              # node name; a purely descriptive label used to tell nodes apart in logs
path.data: /opt/elasticsearch/data           # default data directory
path.logs: /opt/elasticsearch/logs           # default log directory
network.host: 192.168.2.163                  # IP address of this node
http.port: 9200                              # port for external HTTP traffic (9300 serves intra-cluster transport)
discovery.zen.ping.unicast.hosts: ["node1.fek", "node2.fek", "master.fek"]   # addresses of the cluster nodes; hostnames are fine as long as every node can resolve them
discovery.zen.minimum_master_nodes: 2        # to avoid split-brain: at least (master-eligible nodes / 2) + 1

Edit jvm.options:

-Xms1g                                       # JVM minimum heap size
-Xmx1g                                       # JVM maximum heap size (keep the two equal)

For node1.fek and node2.fek, adjust the elasticsearch.yml parameters to match each node's actual situation, as in the sketch below.
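For example, node1.fek's elasticsearch.yml could look like the following (node.name els2 is an assumed label; only node.name and network.host need to differ from the master's file):

cluster.name: fek-cluster                    # must match the rest of the cluster
node.name: els2                              # assumed label; must be unique per node
path.data: /opt/elasticsearch/data
path.logs: /opt/elasticsearch/logs
network.host: 192.168.2.161                  # node1.fek's own IP
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1.fek", "node2.fek", "master.fek"]
discovery.zen.minimum_master_nodes: 2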

Start the cluster

./bin/elasticsearch -d

Visit http://192.168.2.163:9200/_cluster/health?pretty to check the cluster status:

{"cluster_name" : "fek-cluster","status" : "green","timed_out" : false,"number_of_nodes" : 3,"number_of_data_nodes" : 3,"active_primary_shards" : 0,"active_shards" : 0,"relocating_shards" : 0,"initializing_shards" : 0,"unassigned_shards" : 0,"delayed_unassigned_shards" : 0,"number_of_pending_tasks" : 0,"number_of_in_flight_fetch" : 0,"task_max_waiting_in_queue_millis" : 0,"active_shards_percent_as_number" : 100.0
}

Visit http://192.168.2.163:9200/_cluster/state/nodes?pretty to list the cluster nodes:

{"cluster_name" : "fek-cluster","compressed_size_in_bytes" : 9419,"nodes" : {"NaRMw2usS0q28ZscXMHHcQ" : {"name" : "node2.fek","ephemeral_id" : "1BF5Tiw_RbCq9FsbwuyWjA","transport_address" : "192.168.2.162:9300","attributes" : {"ml.machine_memory" : "1021931520","ml.max_open_jobs" : "20","xpack.installed" : "true","ml.enabled" : "true"}},"y3UhYgopT12alHMlJDJlWQ" : {"name" : "node1.fek","ephemeral_id" : "j5K2Re4QSW-GcJMVVXPP5g","transport_address" : "192.168.2.161:9300","attributes" : {"ml.machine_memory" : "1021931520","ml.max_open_jobs" : "20","xpack.installed" : "true","ml.enabled" : "true"}},"kO7CFyN3RKWLURRtrhwTMQ" : {"name" : "master.fek","ephemeral_id" : "JH2omxVpRVyqQN47RMENsQ","transport_address" : "192.168.2.163:9300","attributes" : {"ml.machine_memory" : "1021931520","ml.max_open_jobs" : "20","xpack.installed" : "true","ml.enabled" : "true"}}}
}
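As a quick write smoke test, you can also create a throwaway index, list the indices, and delete it again (test_index is an arbitrary name used here for illustration):

curl -XPUT 'http://192.168.2.163:9200/test_index?pretty'
curl 'http://192.168.2.163:9200/_cat/indices?v'
curl -XDELETE 'http://192.168.2.163:9200/test_index?pretty'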

Kafka Cluster Installation

Deployment of the Kafka cluster itself is covered in a separate article.
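The Flume configuration below publishes to a topic named fek_topic, so the topic must exist (or topic auto-creation must be enabled). A minimal sketch of creating and inspecting it, assuming Kafka is installed under /opt/kafka and ZooKeeper listens on port 2181 of master.kafka (the path, port, and replication settings are assumptions):

# create the topic the Flume sink will publish to
/opt/kafka/bin/kafka-topics.sh --create --zookeeper master.kafka:2181 \
    --replication-factor 2 --partitions 3 --topic fek_topic
# confirm partitions and replica placement
/opt/kafka/bin/kafka-topics.sh --describe --zookeeper master.kafka:2181 --topic fek_topic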

FlumeNG Installation

0. Consume messages from Kafka and write them into ES

mkdir -p /opt/app/
cd /opt/app/
wget .0-SNAPSHOT.jar
java -jar kafka-es-1.0-SNAPSHOT.jar

Source code: kafka-es. Depending on project needs, it can filter and clean the logs, and while writing to ES it can also asynchronously write to HDFS for big-data analysis.
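To confirm the kafka-es consumer has attached to the cluster, you can list the consumer groups on any Kafka node (a minimal check; the actual group name depends on how kafka-es is configured):

/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.2.151:9092 --list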

Kafka sits in the middle here because of its ability to absorb traffic peaks (peak shaving).

1. Configure FlumeNG. Here nginx on node1.fek simulates the application whose logs we collect; the nginx logs live in the /usr/local/nginx/logs directory.

mkdir -p /opt/flume/
cd /opt/flume/
wget http://archive.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz
tar -zxvf apache-flume-1.8.0-bin.tar.gz

Create flume-fek.conf. You can configure either a Spooling Directory Source (watches a directory; note that it expects files in the watched directory to be complete and immutable, so it suits rotated log files rather than a file nginx is still writing) or an Exec Source (tails a single file).

#Spooling Directory Source
agent.sources = sc_nginx
agent.channels = cl_nginx
agent.sinks = sk_nginx

agent.sources.sc_nginx.type = spooldir
agent.sources.sc_nginx.spoolDir = /usr/local/nginx/logs
agent.sources.sc_nginx.fileHeader = true

agent.channels.cl_nginx.type = memory
agent.channels.cl_nginx.capacity = 1000
agent.channels.cl_nginx.transactionCapacity = 100

agent.sinks.sk_nginx.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.sk_nginx.kafka.topic = fek_topic
agent.sinks.sk_nginx.kafka.bootstrap.servers = 192.168.2.151:9092,192.168.2.152:9092,192.168.2.153:9092
agent.sinks.sk_nginx.kafka.flumeBatchSize = 20
agent.sinks.sk_nginx.kafka.producer.acks = 1
agent.sinks.sk_nginx.kafka.producer.linger.ms = 1
agent.sinks.sk_nginx.kafka.producer.compression.type = snappy

agent.sources.sc_nginx.channels = cl_nginx
agent.sinks.sk_nginx.channel = cl_nginx

#Exec Source
agent.sources = sc_nginx
agent.channels = cl_nginx
agent.sinks = sk_nginx

agent.sources.sc_nginx.type = exec
agent.sources.sc_nginx.command = tail -F /usr/local/nginx/logs/access.log

agent.channels.cl_nginx.type = memory
agent.channels.cl_nginx.capacity = 1000
agent.channels.cl_nginx.transactionCapacity = 100

agent.sinks.sk_nginx.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.sk_nginx.kafka.topic = fek_topic
agent.sinks.sk_nginx.kafka.bootstrap.servers = 192.168.2.151:9092,192.168.2.152:9092,192.168.2.153:9092
agent.sinks.sk_nginx.kafka.flumeBatchSize = 20
agent.sinks.sk_nginx.kafka.producer.acks = 1
agent.sinks.sk_nginx.kafka.producer.linger.ms = 1
agent.sinks.sk_nginx.kafka.producer.compression.type = snappy

agent.sources.sc_nginx.channels = cl_nginx
agent.sinks.sk_nginx.channel = cl_nginx

Configuration file notes:

Config item | Purpose | Example
agent1 | name of the flume agent (node), specified at startup via the --name option |
agent1.sources | sources to listen on; several are allowed, separated by spaces; they collect data and send it to a channel | agent1.sources=s1 s2
agent1.channels | staging channels; several are allowed, separated by spaces; they hold the data collected by sources until a sink reads it | agent1.channels=c1
agent1.sinks | sinks; several are allowed, separated by spaces; they read data from a channel and deliver it to a target (e.g. Kafka, HDFS, or another flume agent) | agent1.sinks=k1
agent1.sources.s1.type | type of the source (s1 is the source name); it can watch a directory, a log file, a listening port, etc. Common types include avro, exec, netcat, spooldir and syslog; see flume.apache.org/FlumeUserGuide.html#flume-sources | agent1.sources.s1.type=spooldir, agent1.sources.s2.type=avro
agent1.sources.s1.channels | channel(s) the source writes its data to | agent1.sources.s1.channels=c1
agent1.channels.c1.type | type of the channel (c1 is the channel name); memory is the common in-memory channel; other types include JDBC, file and custom channels; see flume.apache.org/FlumeUserGuide.html#flume-channels | agent1.channels.c1.type=memory
agent1.sinks.k1.type | type of the sink (k1 is the sink name); logger writes events straight to the log; common types include avro, logger, HDFS, HBase and file-roll; see flume.apache.org/FlumeUserGuide.html#flume-sinks | agent1.sinks.k1.type=logger
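With the configuration in place, it helps to generate a little nginx traffic on node1.fek so the source has data to ship once the agent starts (assuming nginx listens on port 80):

# hit nginx a few times, then confirm the access log is growing
curl http://node1.fek/
tail -n 5 /usr/local/nginx/logs/access.log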

2. Start FlumeNG

cd /opt/flume/apache-flume-1.8.0-bin/
mkdir -p /opt/flume/apache-flume-1.8.0-bin/logs
bin/flume-ng agent --conf ./conf/ -f conf/flume-fek.conf -n agent > ./logs/start.log 2>&1 &
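Once the agent is running, a quick way to verify that events are reaching Kafka is to read the topic with the console consumer on one of the Kafka nodes (the /opt/kafka install path is an assumption):

/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.2.151:9092 \
    --topic fek_topic --from-beginning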

Startup command parameter notes:

Parameter | Purpose | Example
--conf or -c | configuration directory, containing flume-env.sh and the log4j configuration | --conf conf
--conf-file or -f | path to the agent configuration file | --conf-file conf/flume.conf
--name or -n | name of the agent (flume node) | --name agent1

Kibana Installation

Install Kibana on master.fek.

mkdir -p /opt/kibana/
cd /opt/kibana/
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.3.0-linux-x86_64.tar.gz
tar -zxvf kibana-6.3.0-linux-x86_64.tar.gz
mv kibana-6.3.0-linux-x86_64 kibana-6.3.0

Edit kibana-6.3.0/config/kibana.yml:

# host/IP this Kibana instance binds to
server.host: "master.fek"
# URL of the ES cluster
elasticsearch.url: "http://192.168.2.163:9200"

Start

cd kibana-6.3.0
./bin/kibana &

Visit: http://192.168.2.163:5601 (Kibana's default port is 5601).
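If the page does not load, Kibana's status API offers a quick check from the shell (the /api/status endpoint is available in Kibana 6.x):

curl http://192.168.2.163:5601/api/status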
