Installing ELK

Install the Logstash dependency: JDK

Logstash requires a Java runtime: Logstash 1.5 and later needs at least Java 7, and the latest Java release is recommended. Since we only run Java programs rather than develop them, the JRE is sufficient. First, download the latest JRE from Oracle: http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html (the commands below actually unpack a full JDK tarball, which works just as well).

$ mkdir /usr/local/java

$ tar -zxf jdk-8u161-linux-x64.tar.gz -C /usr/local/java/

Set the JDK environment variables as follows:

$ vim ~/.bash_profile

export JAVA_HOME=/usr/local/java/jdk1.8.0_161

export PATH=$PATH:$JAVA_HOME/bin

export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH

$ java -version    (note: the flag is -version, not --version)

java version "1.8.0_161"

Java(TM) SE Runtime Environment (build 1.8.0_161-b12)

Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
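As a quick sanity check, the version banner can be parsed in the shell to confirm that Logstash's Java 7+ requirement is met. A minimal sketch; the sample banner line below stands in for the live `java -version 2>&1` output:

```shell
# Parse the version out of a `java -version` banner line and check it
# against Logstash's minimum (Java 7). The sample line stands in for:
#   java -version 2>&1 | head -n 1
ver_line='java version "1.8.0_161"'
ver=$(echo "$ver_line" | sed 's/.*"\(.*\)".*/\1/')   # -> 1.8.0_161
major=$(echo "$ver" | cut -d. -f2)                   # legacy 1.x scheme: major is the 2nd field
echo "Java major version: $major"
[ "$major" -ge 7 ] && echo "OK for Logstash"
```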

Install Logstash

$ wget https://download.elastic.co/logstash/logstash/logstash-6.2.3.tar.gz

$ tar -zxf logstash-6.2.3.tar.gz -C /usr/local/

After installation, run the following command:

$ /usr/local/logstash-6.2.3/bin/logstash -e 'input { stdin { } } output { stdout {} }'

Output:

Sending Logstash's logs to /usr/local/logstash-6.2.3/logs which is now configured via log4j2.properties

[2018-04-09T01:54:36,236][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/local/logstash-6.2.3/modules/netflow/configuration"}

[2018-04-09T01:54:36,267][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/local/logstash-6.2.3/modules/fb_apache/configuration"}

[2018-04-09T01:54:36,953][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified

[2018-04-09T01:54:37,799][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.3"}

[2018-04-09T01:54:38,385][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

[2018-04-09T01:54:40,664][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}

[2018-04-09T01:54:40,871][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x72564250 run>"}

The stdin plugin is now waiting for input:

[2018-04-09T01:54:40,981][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}

hello world

As you can see, Logstash echoes whatever we type back in a structured format. The -e flag lets Logstash take its configuration directly from the command line, which makes it easy to test a configuration repeatedly without writing a config file. Press Ctrl-C to exit the running Logstash.

Passing the configuration on the command line with -e is a common approach, but it becomes unwieldy once more settings are needed. In that case, create a configuration file and tell Logstash to use it.

For example, create a minimal test file logstash-simple.conf in the Logstash installation directory with the following content:

$ cat logstash-simple.conf

input { stdin { } }

output {

   stdout { codec => rubydebug }

}
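Rather than opening an editor, the same file can be written from the shell with a here-document:

```shell
# Write the minimal Logstash config without opening an editor.
cat > logstash-simple.conf <<'EOF'
input { stdin { } }
output {
    stdout { codec => rubydebug }
}
EOF
cat logstash-simple.conf
```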

Logstash configurations use input and output sections to define where collected logs come from and where they go. In this example, input defines a single stdin input and output a single stdout output. Whatever characters we type, Logstash returns them in a structured format; here the stdout output uses the codec setting to control how Logstash formats what it prints.

Use Logstash's -f flag to read the configuration file, then run the following to test:

$ echo "`date`  hello World"

Mon Apr  9 02:08:45 UTC 2018  hello World

$ /usr/local/logstash-6.2.3/bin/logstash -f logstash-simple.conf

Logstash startup completed

Mon Apr  9 02:08:45 UTC 2018  hello Worl   # this line is the output of the earlier echo "`date` hello World", pasted in here

{

    "@timestamp" => 2018-04-09T02:17:15.064Z,

       "message" => "Mon Apr  9 02:08:45 UTC 2018  hello Worl",

      "@version" => "1",

          "host" => "5ef8026aa3bf"

}

Install Elasticsearch

$ tar -zxf elasticsearch-6.2.3.tar.gz -C /usr/local/

Start Elasticsearch:

$ /usr/local/elasticsearch-6.2.3/bin/elasticsearch

Or run it in the background:

$ nohup /usr/local/elasticsearch-6.2.3/bin/elasticsearch &

At this point the process may simply be Killed because of out-of-memory (OOM): Elasticsearch claims a large amount of memory, so on a server with little RAM, reduce the JVM heap size before starting it:

$ vim config/jvm.options

Change the heap size on lines 22 and 23 to 512 MB:

-Xms512M

-Xmx512M
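The edit can also be scripted. A minimal sketch, operating on a stand-in file here rather than the real config/jvm.options:

```shell
# Rewrite the heap-size lines in a jvm.options-style file to 512 MB.
# A stand-in file with the 6.2.3 defaults (-Xms1g/-Xmx1g) is used here;
# point cfg at config/jvm.options to do it for real.
cfg=$(mktemp)
printf -- '-Xms1g\n-Xmx1g\n' > "$cfg"
sed -i 's/^-Xms.*/-Xms512M/; s/^-Xmx.*/-Xmx512M/' "$cfg"
cat "$cfg"
```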

Note: if the process still gets Killed, reduce these values further. Starting Elasticsearch as root then fails with the following error:

[2018-04-09T03:06:34,552][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]

org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root

at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.3.jar:6.2.3]

at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.3.jar:6.2.3]

at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.3.jar:6.2.3]

at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.3.jar:6.2.3]

at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.3.jar:6.2.3]

at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.3.jar:6.2.3]

at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.3.jar:6.2.3]

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.3.jar:6.2.3]

at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.3.jar:6.2.3]

at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.3.jar:6.2.3]

at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.3.jar:6.2.3]

... 6 more

For security reasons, Elasticsearch refuses to run as the root user, so create a new user:

$ groupadd tzhennan

$ useradd tzhennan -g tzhennan

Starting Elasticsearch as the new user then fails with a permission error, because the files are still owned by root:

Exception in thread "main" java.nio.file.AccessDeniedException: /usr/local/elasticsearch-6.2.3/config/jvm.options

at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)

at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)

at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)

at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)

at java.nio.file.Files.newByteChannel(Files.java:361)

at java.nio.file.Files.newByteChannel(Files.java:407)

at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)

at java.nio.file.Files.newInputStream(Files.java:152)

at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:58)

Fix this by giving the new user ownership of the installation directory:

$ chown -R tzhennan:tzhennan /usr/local/elasticsearch-6.2.3/

Check whether Elasticsearch is listening on port 9200:

$ netstat -anp | grep :9200

tcp        0      0 127.0.0.1:9200          0.0.0.0:*               LISTEN      1483/java
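In a startup script, the same check can be made machine-readable. A small sketch, using the netstat line shown above as sample input:

```shell
# Decide from a netstat line whether the ES HTTP port is up.
# The sample line stands in for: netstat -anp | grep :9200
line='tcp        0      0 127.0.0.1:9200          0.0.0.0:*               LISTEN      1483/java'
case "$line" in
  *:9200*LISTEN*) msg="elasticsearch is listening" ;;
  *)              msg="port 9200 not open" ;;
esac
echo "$msg"
```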

Next, in the Logstash installation directory, create a test file logstash-es-simple.conf that uses Elasticsearch as Logstash's backend. It defines both stdout and elasticsearch as outputs; this "multiple output" setup ensures results are shown on screen and also written to Elasticsearch:

input { stdin { } }

output {

    elasticsearch {

        hosts => "localhost:9200"

        user => "tzhennan"

        password => "xxx"

    }

    stdout { 

        codec => rubydebug 

    }

}
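By default this elasticsearch output writes events into daily logstash-YYYY.MM.dd indices. If you want a different naming scheme, the output also accepts an index setting (a sketch; the index name here is just an example):

```
elasticsearch {
    hosts => "localhost:9200"
    index => "mylogs-%{+YYYY.MM.dd}"   # example name; the default is logstash-%{+YYYY.MM.dd}
}
```

The default logstash-* pattern is what Kibana expects later in this walkthrough, so the examples here keep it.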

Run the command:

$ /usr/local/logstash-6.2.3/bin/logstash -f logstash-es-simple.conf

[2018-04-09T07:33:50,581][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash

[2018-04-09T07:33:51,886][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}

[2018-04-09T07:33:52,078][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x17ddf470 run>"}

The stdin plugin is now waiting for input:

[2018-04-09T07:33:52,202][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}

hello logstash

{

    "@timestamp" => 2018-04-09T07:35:03.141Z,

          "host" => "5ef8026aa3bf",

      "@version" => "1",

       "message" => "hello logstash"

}

Use curl to send a request and check whether Elasticsearch received the data:

$ curl 'http://localhost:9200/_search?pretty'

Output:

{

  "took" : 154,

  "timed_out" : false,

  "_shards" : {

    "total" : 5,

    "successful" : 5,

    "skipped" : 0,

    "failed" : 0

  },

  "hits" : {

    "total" : 1,

    "max_score" : 1.0,

    "hits" : [

      {

        "_index" : "logstash-2018.04.09",

        "_type" : "doc",

        "_id" : "WulUqWIB-5JaC2wTa7rt",

        "_score" : 1.0,

        "_source" : {

          "@timestamp" : "2018-04-09T07:35:03.141Z",

          "host" : "5ef8026aa3bf",

          "@version" : "1",

          "message" : "hello logstash"

        }

      }

    ]

  }

}
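For a scripted smoke test you can pull hits.total out of that response without jq. A minimal sketch, using a snippet of the pretty-printed JSON above as sample input:

```shell
# Extract the first "total" value from a pretty-printed _search response.
# The sample stands in for: curl -s 'http://localhost:9200/_search?pretty'
resp='"hits" : {
    "total" : 1,
    "max_score" : 1.0,'
total=$(printf '%s\n' "$resp" | sed -n 's/.*"total" : \([0-9]*\).*/\1/p' | head -n 1)
echo "documents indexed: $total"
```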

At this point we have successfully used Elasticsearch and Logstash to collect log data.

Problem:

Elasticsearch answers on 127.0.0.1:9200 but not on the server's public IP on port 9200. Add the following to elasticsearch.yml:

network.bind_host: 0.0.0.0

Note: once Elasticsearch binds to a non-loopback address, the 6.x bootstrap checks are enforced, so host limits such as vm.max_map_count (at least 262144) and the open-file limit usually need raising as well.

Install Elasticsearch plugins

$ cd /usr/local/elasticsearch-6.2.3/

Install the Head plugin:

$ ./bin/plugin install mobz/elasticsearch-head

Install the elasticsearch-kopf plugin:

$ ./bin/plugin install lmenezes/elasticsearch-kopf

After installation they can be seen in the plugins directory:

$ ls plugins

You can then browse the data stored in Elasticsearch at http://ip:9200/_plugin/kopf.

Note: the plugin commands above only work on Elasticsearch 2.x. Site plugins such as head and kopf were removed in Elasticsearch 5.0, so they cannot be installed this way on 6.2.3; run elasticsearch-head as a standalone application instead (regular plugins are installed with bin/elasticsearch-plugin).

Install Kibana

$ tar -zxf kibana-6.2.3-linux-x86_64.tar.gz -C /usr/local/

Start Kibana:

$ /usr/local/kibana-6.2.3-linux-x86_64/bin/kibana

1> Open Kibana at http://ip:5601. After logging in, first configure an index pattern. By default Kibana points at Elasticsearch and proposes the default time-based logstash-* index pattern; just click "Create".

2> Click "Discover" to search and browse the data in Elasticsearch. By default the search covers the last 15 minutes; the time range can be changed as needed.

Problem:

Kibana answers on 127.0.0.1:5601 but not on the server's public IP on port 5601. In kibana.yml, change:

server.host: "localhost"

to:

server.host: "0.0.0.0"

Configure Logstash as an Indexer

Configure Logstash as an indexer that stores its collected log data in Elasticsearch:

input {

    file {

        type => "syslog"

        path => ["/var/log/messages", "/var/log/syslog"]

    }

    syslog {

        type => "syslog"

        port => "5544"

    }

}

output {

  stdout { codec=> rubydebug }

  elasticsearch {

        hosts => "localhost:9200"

        user => "tzhennan"

        password => "xxx"

    }

}

Start it:

$ /usr/local/logstash-6.2.3/bin/logstash -f logstash-indexer.conf

Use echo to simulate writing a log entry:

$ echo "`date` local system test" >> /var/log/messages

...

References:

http://blog.51cto.com/baidu/1676798

https://my.oschina.net/itblog/blog/547250
