ELK Log Analysis System (Hands-On!)
Log servers
Improve security
Centralize log storage
Drawback: analyzing the logs is difficult
The ELK log analysis stack
Elasticsearch: storage and index pool
Logstash: log collector
Kibana: data visualization
Log processing steps
1. Centralize log management
2. Format the logs (Logstash) and ship them to Elasticsearch
3. Index and store the formatted data (Elasticsearch)
4. Present the data in a front end (Kibana)
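The four steps above can be sketched as a toy pipeline. This is an illustrative Python sketch with made-up helper names and an in-memory dict standing in for Elasticsearch, not the real ELK APIs:

```python
# Toy end-to-end sketch of the four processing steps; all names here are
# illustrative stand-ins, not real ELK APIs.

def collect(servers):
    """Step 1: centralize - merge log lines from many servers into one stream."""
    return [line for server in servers for line in server]

def fmt(line):
    """Step 2: format (Logstash) - turn a raw line into a structured event."""
    return {"message": line, "type": "system"}

def index(events, store):
    """Step 3: index and store (Elasticsearch) - here just an in-memory dict."""
    store.setdefault("system", []).extend(events)

store = {}
index([fmt(l) for l in collect([["boot ok"], ["disk full"]])], store)
# Step 4 (Kibana) would chart the contents of `store`.
print(len(store["system"]))
```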
Elasticsearch overview
Provides a distributed, multi-tenant full-text search engine
Elasticsearch concepts
Near real-time
Cluster
Node
Index: index (database) --> type (table) --> document (record)
Shards and replicas
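The last two concepts can be made concrete with a short sketch (illustrative helper names, not a real client API): the index/type/document hierarchy maps directly onto the REST path of a single document, and the number of shard copies in a cluster is primaries × (1 + replicas). Elasticsearch 5.x defaults to 5 primary shards and 1 replica per index:

```python
def doc_path(index, doc_type, doc_id):
    """REST path for one document: /index/type/id, mirroring database/table/record."""
    return "/{}/{}/{}".format(index, doc_type, doc_id)

def total_shards(primaries=5, replicas=1):
    """Total shard copies in the cluster: each primary plus its replica copies."""
    return primaries * (1 + replicas)

print(doc_path("index-demo", "test", "1"))  # the document created later in this lab
print(total_shards())                       # 10 with the 5.x defaults
```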
Logstash introduction
A powerful data-processing tool that handles data transport, format processing, and formatted output
Covers data input, data transformation (filtering, rewriting, etc.), and data output
Main Logstash components
Shipper
Indexer
Broker
Search and Storage
Web Interface
Kibana introduction
An open-source analytics and visualization platform for Elasticsearch
Search and view data stored in Elasticsearch indices
Perform advanced data analysis and present it in a variety of charts
Main Kibana features
Seamless integration with Elasticsearch
Data consolidation and complex data analysis
Benefits more team members
Flexible interfaces that make sharing easy
Simple configuration; visualizes multiple data sources
Simple data export
Lab environment
1. Install Elasticsearch on node1 and node2 (the steps are identical; only one node is shown)
[ ~]# vim /etc/hosts    ## configure name resolution
192.168.52.133 node1
192.168.52.134 node2
[ ~]# systemctl stop firewalld.service    ## stop the firewall
[ ~]# setenforce 0    ## put SELinux in permissive mode
[ ~]# java -version    ## confirm Java is available
[ ~]# mount.cifs //192.168.100.100/tools /mnt/tools/    ## mount the share
Password for //192.168.100.100/tools:
[ ~]# cd /mnt/tools/elk/
[ elk]# rpm -ivh elasticsearch-5.5.0.rpm    ## install
warning: elasticsearch-5.5.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:5.5.0-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
[ elk]# systemctl daemon-reload    ## reload systemd units
[ elk]# systemctl enable elasticsearch.service    ## enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[ elk]# cd /etc/elasticsearch/
[ elasticsearch]# cp elasticsearch.yml elasticsearch.yml.bak    ## back up the config
[ elasticsearch]# vim elasticsearch.yml    ## edit the config file
cluster.name: my-elk-cluster    ## cluster name
node.name: node1    ## node name; use node2 on the second node
path.data: /data/elk_data    ## data directory
path.logs: /var/log/elasticsearch/    ## log directory
bootstrap.memory_lock: false    ## do not lock memory at startup
network.host: 0.0.0.0    ## bind address for the service; all addresses
http.port: 9200    ## listen on port 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]    ## cluster discovery via unicast
[ elasticsearch]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml    ## verify the settings
cluster.name: my-elk-cluster
node.name: node1
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
[ elasticsearch]# mkdir -p /data/elk_data    ## create the data directory
[ elasticsearch]# chown elasticsearch.elasticsearch /data/elk_data/    ## grant ownership
[ elasticsearch]# systemctl start elasticsearch.service    ## start the service
[ elasticsearch]# netstat -ntap | grep 9200    ## confirm it is listening
tcp6       0      0 :::9200                 :::*                    LISTEN      83358/java
[ elasticsearch]#
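The `grep -v "^#"` step above simply strips comment lines so only the active settings remain. The same filter expressed in Python, with `sample_yml` as a stand-in for the file contents:

```python
# A stand-in for /etc/elasticsearch/elasticsearch.yml; only the non-comment
# lines are the settings that actually take effect.
sample_yml = """\
# ======== Elasticsearch Configuration ========
cluster.name: my-elk-cluster
# node settings
node.name: node1
http.port: 9200
"""

# Keep every non-empty line that does not start with '#', like grep -v "^#".
active = [line for line in sample_yml.splitlines()
          if line and not line.startswith("#")]
print(active)
```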
View node1's node information
View node2's node information
2. Check health and status in a browser
node1 health check
node2 health check
node1 status
node2 status
3. Install the Node.js dependency packages on node1 and node2 (the steps are identical; only one node is shown)
[ elasticsearch]# yum install gcc gcc-c++ make -y    ## install build tools
[ elasticsearch]# cd /mnt/tools/elk/
[ elk]# tar xf node-v8.2.1.tar.gz -C /opt/    ## unpack the source
[ elk]# cd /opt/node-v8.2.1/
[ node-v8.2.1]# ./configure    ## configure
[ node-v8.2.1]# make && make install    ## compile and install
4. Install the PhantomJS front-end framework on node1 and node2
[ node-v8.2.1]# cd /mnt/tools/elk/
[ elk]# tar xf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src/    ## unpack under /usr/local/src
[ elk]# cd /usr/local/src/phantomjs-2.1.1-linux-x86_64/bin/
[ bin]# cp phantomjs /usr/local/bin/    ## put the binary on the PATH so the system can find it
5. Install the elasticsearch-head data visualization plugin on node1 and node2
[ bin]# cd /mnt/tools/elk/
[ elk]# tar xf elasticsearch-head.tar.gz -C /usr/local/src/    ## unpack
[ elk]# cd /usr/local/src/elasticsearch-head/
[ elasticsearch-head]# npm install    ## install
npm WARN license should be a valid SPDX license expression
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for : wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 71 packages in 7.262s
[ elasticsearch-head]#
6. Edit the configuration file
[ elasticsearch-head]# cd ~
[ ~]# vim /etc/elasticsearch/elasticsearch.yml    ## append at the end of the file
http.cors.enabled: true    ## enable cross-origin access support; default is false
http.cors.allow-origin: "*"    ## domains allowed for cross-origin access
[ ~]# systemctl restart elasticsearch.service    ## restart
[ ~]# cd /usr/local/src/elasticsearch-head/
[ elasticsearch-head]# npm run start &    ## run the visualization service in the background
[1] 83664
[ elasticsearch-head]#
> start /usr/local/src/elasticsearch-head
> grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
[ elasticsearch-head]# netstat -ntap | grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      83358/java
[ elasticsearch-head]# netstat -ntap | grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      83674/grunt
[ elasticsearch-head]#
7. Connect from a browser and check the health status
node1
node2
8. Create an index on node1
[ ~]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'    ## create a document (and with it the index)
{
  "_index" : "index-demo",
  "_type" : "test",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "created" : true
}
[ ~]#
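The same indexing call can be expressed with Python's standard library instead of curl. This sketch only builds the request object so it can be inspected; actually sending it (`urlopen(req)`) requires the live node from this lab:

```python
import json
from urllib.request import Request

# Build the same PUT request the curl command issues. Sending it needs a
# running Elasticsearch node, so here we only construct and inspect it.
body = json.dumps({"user": "zhangsan", "mesg": "hello world"}).encode()
req = Request(
    "http://localhost:9200/index-demo/test/1",
    data=body,
    headers={"Content-Type": "application/json"},
    method="PUT",
)
print(req.get_method(), req.full_url)
```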
9. Install Logstash on the Apache server and connect it to Elasticsearch
[ ~]# systemctl stop firewalld.service
[ ~]# setenforce 0
[ ~]# yum install httpd -y    ## install the web service
[ ~]# systemctl start httpd.service    ## start it
[ ~]# java -version
[ ~]# mount.cifs //192.168.100.100/tools /mnt/tools/    ## mount the share
Password for //192.168.100.100/tools:
[ ~]# cd /mnt/tools/elk/
[ elk]# rpm -ivh logstash-5.5.1.rpm    ## install logstash
warning: logstash-5.5.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:5.5.1-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Successfully created system startup script for Logstash
[ elk]# systemctl start logstash.service    ## start the service
[ elk]# systemctl enable logstash.service    ## enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[ elk]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/    ## put the binary on the PATH
[ elk]#
10. Ship the system log file to Elasticsearch
[ elk]# chmod o+r /var/log/messages    ## give other users read permission
[ elk]# vim /etc/logstash/conf.d/system.conf    ## create the file
input {
    file{
        path => "/var/log/messages"    ## input file path
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {    ## output points at the node1 address
        hosts => ["192.168.13.129:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
[ elk]# systemctl restart logstash.service    ## restart the service
## You can also inspect the details in elasticsearch-head's data browser
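The `%{+YYYY.MM.dd}` in the `index` setting is a Logstash date pattern: each day's events land in their own index, which keeps retention and queries manageable. The Joda-style pattern corresponds to strftime's `%Y.%m.%d`, as this small sketch (illustrative helper name) shows:

```python
from datetime import date

def daily_index(prefix, day):
    """Equivalent of Logstash's 'prefix-%{+YYYY.MM.dd}' index naming."""
    return "{}-{}".format(prefix, day.strftime("%Y.%m.%d"))

print(daily_index("system", date(2019, 3, 5)))
```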
11. Install the Kibana data visualization tool on node1
[ ~]# cd /mnt/tools/elk/
[ elk]# rpm -ivh kibana-5.5.1-x86_64.rpm    ## install
warning: kibana-5.5.1-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:kibana-5.5.1-1                   ################################# [100%]
[ elk]# cd /etc/kibana/
[ kibana]# cp kibana.yml kibana.yml.bak    ## back up the config
[ kibana]# vim kibana.yml    ## edit the config file
server.port: 5601    ## port number
server.host: "0.0.0.0"    ## listen on all interfaces
elasticsearch.url: "http://192.168.13.129:9200"    ## address of the local node
kibana.index: ".kibana"    ## index name
[ kibana]# systemctl start kibana.service    ## start the service
[ kibana]# systemctl enable kibana.service    ## enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[ kibana]# netstat -ntap | grep 5601    ## check the port
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      84837/node
[ kibana]#
12. Access Kibana from a browser
13. On the Apache server, hook up the Apache log files and collect statistics
[ elk]# vim /etc/logstash/conf.d/apache_log.conf    ## create the config file
input {
    file{
        path => "/etc/httpd/logs/access_log"    ## input file
        type => "access"
        start_position => "beginning"
    }
    file{
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {    ## route the output based on the event type
        elasticsearch {
            hosts => ["192.168.13.129:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.13.129:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
[ elk]# logstash -f /etc/logstash/conf.d/apache_log.conf    ## run logstash with this config file
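The conditionals in the output block route each event to a different daily index based on its `type` field, which the two file inputs set. The routing logic boils down to this sketch (illustrative function name, not a real Logstash API):

```python
from datetime import date

def route(event, day):
    """Pick the target index the way the apache_log.conf conditionals do."""
    if event["type"] == "access":
        return "apache_access-" + day.strftime("%Y.%m.%d")
    if event["type"] == "error":
        return "apache_error-" + day.strftime("%Y.%m.%d")
    return None  # events of any other type are dropped by the output block

d = date(2019, 3, 5)
print(route({"type": "access"}, d))
print(route({"type": "error"}, d))
```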
14. Load the pages and check the statistics in Kibana
Only the error log exists at first
Access the Apache service from a browser
This generates the access log
## Select Management > Index Patterns > Create Index Pattern ## to register the two Apache log indices
Create the access log index pattern in Kibana
Create the error log index pattern in Kibana
View the access log statistics
View the error log statistics
The experiment succeeded!!!