
Check the listening ports on the machine running Logstash:
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1670/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1897/master         
tcp6       0      0 192.168.59.138:9200    :::*                    LISTEN      2018/java           
tcp6       0      0 :::10514                :::*                    LISTEN      2077/java           
tcp6       0      0 192.168.59.138:9300    :::*                    LISTEN      2018/java           
tcp6       0      0 :::22                   :::*                    LISTEN      1670/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1897/master         
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      2077/java

Notice that port 9600 is bound to 127.0.0.1, not the IP address we configured.
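A quick way to pull the bind address for a given port out of the `netstat` output, sketched with awk over one sample line (the port number 9600 here is just the Logstash HTTP API default from the listing above):

```shell
# Extract the bind address of port 9600 from a netstat-style line.
# $4 is the "Local Address" column; the last colon-separated field is
# the port, and the field before it is the address.
sample='tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      2077/java'
echo "$sample" | awk '{n = split($4, a, ":"); if (a[n] == "9600") print a[1]}'
# → 127.0.0.1
```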
# vim /etc/logstash/logstash.yml
Change the setting as follows:
http.host: "192.168.161.162"
# systemctl restart logstash
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1670/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1897/master         
tcp6       0      0 192.168.161.162:9200    :::*                    LISTEN      2018/java           
tcp6       0      0 :::10514                :::*                    LISTEN      2215/java           
tcp6       0      0 192.168.161.162:9300    :::*                    LISTEN      2018/java           
tcp6       0      0 :::22                   :::*                    LISTEN      1670/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1897/master         
tcp6       0      0 192.168.161.162:9600    :::*                    LISTEN      2215/java

Now open 192.168.59.131:5601 in a browser and configure the index pattern in Kibana.



Go back to Discover again:

# vim /etc/logstash/conf.d/system.conf
input {
  file {
    path => "/var/log/messages"     # path of the log file to collect
    type => "systemlog"      # event type tag
    start_position => "beginning"    # where Logstash starts reading the file. The default is the end, i.e. the process tails the file like `tail -F`. To import existing data, set this to "beginning" so Logstash reads from the start, like `less +F`.
    stat_interval => "2"  # how often Logstash checks the watched file for updates; the default is 1 second
  }
}
output {
  elasticsearch {
    hosts => ["192.168.59.131:9200"]      # target Elasticsearch hosts
    index => "logstash-systemlog-%{+YYYY.MM.dd}"    # index name
  }
}

# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/system.conf -t     # check the config file for syntax errors
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-03-14 12:00:19.746 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-03-14 12:00:19.765 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[INFO ] 2018-03-14 12:00:19.840 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2018-03-14 12:00:19.843 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2018-03-14 12:00:20.563 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] 2018-03-14 12:00:22.085 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

# ll /var/log/messages
-rw-------. 1 root root 791209 12月 27 11:43 /var/log/messages
# The file is mode 600, while Logstash runs as the logstash user, so it cannot read this log. We need to change the file's permissions, otherwise collection fails with a permission-denied error (check /var/log/logstash/logstash-plain.log for it).
# chmod 644 /var/log/messages
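The effect of that `chmod` can be sketched on a throwaway file (nothing here touches the real log):

```shell
# A 600 file is readable only by its owner; 644 adds world-read,
# which is what lets the logstash user open the file.
f=$(mktemp)
chmod 600 "$f"
stat -c '%a' "$f"    # → 600
chmod 644 "$f"
stat -c '%a' "$f"    # → 644
rm -f "$f"
```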
# systemctl restart logstash

Wait a moment and another index will appear:
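The `%{+YYYY.MM.dd}` sprintf in the elasticsearch output's `index` setting resolves to a date, so the new index name can be previewed with plain `date`. Note this is only a rough local-time equivalent: Logstash itself derives the date from each event's @timestamp, which is normally in UTC.

```shell
# Preview today's index name; Logstash derives the real date from the
# event @timestamp (usually UTC), so this is approximate.
date "+logstash-systemlog-%Y.%m.%d"
```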

Then just repeat the same steps as before!
View the logs in the Discover page:

A closer look at the two:
The logs on the system itself:
less /var/log/messages
And the same logs as collected in Kibana:

For details on building visualizations from the data, see:
https://www.extlight.com/2017/10/31/Kibana-基础入门/
For the record, the full stack installed in this series is ELK v6.2.2.

The new features are quite powerful. I have been studying K8S-related topics recently and will definitely wire them up with Kibana later on!
Since enabling Logstash, though, the page lag and resource consumption have been noticeable. Let's try Filebeat to address this.
Install Filebeat on another node.
# ls
filebeat-6.2.2-linux-x86_64.tar.gz
# rpm -ivh filebeat-6.2.2-x86_64.rpm
# vim /etc/filebeat/filebeat.yml
Add or change the following:
filebeat.prospectors:
- input_type: log
  enabled: true
  paths:
    - /var/log/*.log
output.elasticsearch:
  hosts: ["192.168.59.131:9200"]

Start the service:
# systemctl start filebeat
# ps aux | grep filebeat
root      2141  3.3  1.5 372612 15304 ?        Ssl  22:24   0:00 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

Back in Kibana again:

A Filebeat index has now been generated.
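One detail worth noting about the Filebeat config above: the `/var/log/*.log` glob only matches top-level files whose names end in `.log`; it covers neither `/var/log/messages` nor logs in subdirectories such as nginx's. A sketch against a throwaway directory shows what the pattern actually expands to:

```shell
# Build a fake log directory and expand the same kind of glob against it.
d=$(mktemp -d)
touch "$d/boot.log" "$d/yum.log" "$d/messages"
mkdir -p "$d/nginx" && touch "$d/nginx/access.log"
ls "$d"/*.log        # lists boot.log and yum.log only
rm -rf "$d"
```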

And with that, we have successfully collected exactly the logs we wanted!