With the setup guide complete, you should now have a fully working ELK log analysis stack, so let's see how to use it in practice and configure log visualization in the Kibana web UI.
At the end of the setup guide we ran logstash -f /etc/logstash/conf.d/elk.conf to collect the system and security logs. That created the system and secure indices, stored in Elasticsearch by type, which you can confirm with the elasticsearch-head plugin. We also installed Kibana and used it to display the sample data; it is not just flashy but genuinely useful. Now let's look at how to visualize the data that is already in Elasticsearch.
First open the Kibana web UI, click the settings entry (Management) in the left menu, then click Index Patterns under the Kibana section, click the create button at the top left, and, as shown in the figure, create the index patterns nagios-system-* and nagios-secure-*.
Then choose the time filter field and finish creating the pattern.
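If your Kibana version exposes the saved objects API (6.5 and later), the same index patterns can also be created from the command line; a minimal sketch, assuming Kibana listens on 192.168.73.133:5601 (adjust the host and port to your deployment):
curl -X POST 'http://192.168.73.133:5601/api/saved_objects/index-pattern/nagios-system' \
    -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
    -d '{"attributes":{"title":"nagios-system-*","timeFieldName":"@timestamp"}}'
curl -X POST 'http://192.168.73.133:5601/api/saved_objects/index-pattern/nagios-secure' \
    -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
    -d '{"attributes":{"title":"nagios-secure-*","timeFieldName":"@timestamp"}}'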
Once the index patterns are created, click Discover at the top of the left menu. The patterns you just created appear on the left; below them you can add the fields you want displayed and filter on them. The final result looks like the figure, showing all of the information from the logs we just collected.
Now that index patterns are working, let's ship the nginx, apache, messages, and secure logs to the front end for display (if Nginx is already installed just edit its config; otherwise install it first).
Edit the Nginx configuration file and add the following inside the http block:
[root@elk-master ~]# vim /usr/local/nginx/conf/nginx.conf
log_format json '{"@timestamp":"$time_iso8601",'
'"@version":"1",'
'"client":"$remote_addr",'
'"url":"$uri",'
'"status":"$status",'
'"domian":"$host",'
'"host":"$server_addr",'
'"size":"$body_bytes_sent",'
'"responsetime":"$request_time",'
'"referer":"$http_referer",'
'"ua":"$http_user_agent"'
'}';
Change access_log to use the json format just defined:
access_log logs/elk.access.log json;
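After saving, it is worth checking the syntax and reloading Nginx, then confirming the new log lines are valid JSON (the binary path below assumes Nginx was installed under /usr/local/nginx, matching the config path above):
/usr/local/nginx/sbin/nginx -t         # check the configuration syntax
/usr/local/nginx/sbin/nginx -s reload  # reload without dropping connections
curl -s http://127.0.0.1/ > /dev/null  # generate one access-log entry
tail -n 1 /usr/local/nginx/logs/elk.access.log   # should print a single-line JSON document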
Next, edit the Apache configuration file:
[root@elk-master ~]# vim /etc/httpd/conf/httpd.conf
LogFormat "{ \ \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \ \"@version\": \"1\", \ \"tags\":[\"apache\"], \ \"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \ \"clientip\": \"%a\", \ \"duration\": %D, \ \"status\": %>s, \ \"request\": \"%U%q\", \ \"urlpath\": \"%U\", \ \"urlquery\": \"%q\", \ \"bytes\": %B, \ \"method\": \"%m\", \ \"site\": \"%{Host}i\", \ \"referer\": \"%{Referer}i\", \ \"useragent\": \"%{User-agent}i\" \ }" ls_apache_json 修改输出格式为上面定义的json格式CustomLog logs/access_log ls_apache_json
Edit the Logstash configuration file to collect these logs:
[root@elk-master ~]# vim /etc/logstash/conf.d/elk.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
}
Run it and see the result:
[root@elk-master ~]# nohup logstash -f /etc/logstash/conf.d/elk.conf &
In head you can see that indices for all of these logs now exist. Next, go back to Kibana and create index patterns for them (using the same method as above) and look at how they are displayed.
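The same check can also be done from the shell; the sketch below assumes Logstash 5.x or later for the --config.test_and_exit flag:
logstash -f /etc/logstash/conf.d/elk.conf --config.test_and_exit   # optional: validate the pipeline syntax before starting it
curl -s 'http://192.168.73.133:9200/_cat/indices?v' | grep nagios  # the four nagios-* indices should be listed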
Advantages:
If the log volume is large, you can use Kafka as the buffer queue; compared with Redis it is better suited to high throughput.
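For reference, a hedged sketch of what the output side could look like with Kafka instead of Redis, assuming a broker at 192.168.73.134:9092, a topic named nagios_logs, and the logstash-output-kafka plugin (all of these are assumptions, not part of this deployment):
output {
    kafka {
        bootstrap_servers => "192.168.73.134:9092"    # assumed Kafka broker address
        topic_id => "nagios_logs"                     # assumed topic name
        codec => json
    }
}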
Install Redis on CentOS 7.6
# yum install -y redis
Edit the Redis configuration file:
# vim /etc/redis.conf
Change the following settings:
bind 192.168.73.133
daemonize yes
save ""
#save 900 1
#save 300 10
#save 60 10000
requirepass root123    # set the authentication password
Start the Redis service:
# systemctl restart redis
Test whether Redis started successfully:
Connect and run info; if it returns without errors, Redis is working.
[root@elk-master ~]# redis-cli -h 192.168.73.133
192.168.73.133:6379> info
# Server
redis_version:3.2.12
...(output truncated)
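Note that because requirepass is set above, info will return a NOAUTH error unless you authenticate first: either connect with redis-cli -h 192.168.73.133 -a root123, or run AUTH root123 after connecting.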
Run Logstash with the redis-out.conf configuration file:
# logstash -f /etc/logstash/conf.d/redis-out.conf
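The contents of this redis-out.conf are not shown here; a minimal sketch of what such a test file might contain, assuming events are typed on stdin and pushed to the Redis list elk-test in db 1 (the same key and db the next config reads from):
input { stdin { } }
output {
    redis {
        host => "192.168.73.133"
        port => "6379"
        password => 'root123'
        db => '1'
        data_type => "list"
        key => 'elk-test'
    }
    stdout { codec => rubydebug }    # also echo each event to the terminal
}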
Once it is running, type some content into Logstash (you can then connect from a client on your local machine and check the result).
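To confirm the events actually landed in Redis, check the list with redis-cli (the -n flag selects db 1, matching the assumption above):
redis-cli -h 192.168.73.133 -a root123 -n 1 llen elk-test        # number of queued events
redis-cli -h 192.168.73.133 -a root123 -n 1 lrange elk-test 0 0  # peek at the first event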
Once the test succeeds, edit the redis-in.conf configuration file to ship the data stored in Redis into Elasticsearch:
# vim /etc/logstash/conf.d/redis-in.conf
input {
    redis {
        host => "192.168.73.133"
        port => "6379"
        password => 'root123'
        db => '1'
        data_type => "list"
        key => 'elk-test'
        # batch_count is how many events are pulled from the queue per read; the default is 125,
        # and Logstash errors out if Redis holds fewer than that, so set it to 1 while testing
        batch_count => 1
    }
}
output {
    elasticsearch {
        hosts => ['192.168.73.133:9200']
        index => 'redis-test-%{+YYYY.MM.dd}'
    }
}
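This file can be run the same way to confirm the pipeline works end to end; the index name redis-test-* comes from the config above:
# logstash -f /etc/logstash/conf.d/redis-in.conf
# curl -s 'http://192.168.73.133:9200/_cat/indices?v' | grep redis-test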
Now modify the earlier elk.conf so that all of the monitored log sources are written to Redis first, and then shipped from Redis on to Elasticsearch.
# vim /etc/logstash/conf.d/elk.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
}
output {
    if [type] == "http" {
        redis {
            host => "192.168.73.133"
            password => 'root123'
            port => "6379"
            db => "2"
            data_type => "list"
            key => 'nagios_http'
        }
    }
if [type] == "nginx" {
redis {
host => "192.168.73.133"
password => 'root123'
port => "6379"
db => "2"
data_type => "list"
key => 'nagios_nginx'
}
}
if [type] == "secure" {
redis {
host => "192.168.73.133"
password => 'root123'
port => "6379"
db => "2"
data_type => "list"
key => 'nagios_secure'
}
}
if [type] == "system" {
redis {
host => "192.168.73.133"
password => 'root123'
port => "6379"
db => "2"
data_type => "list"
key => 'nagios_system'
}
}
}
Run Logstash with this shipper configuration file (elk.conf):
# logstash -f /etc/logstash/conf.d/elk.conf
Check in Redis whether the data has been written (sometimes the watched log files produce no new entries, so nothing ends up in Redis; hit Nginx and httpd a few times to generate traffic).
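One quick way to check is to list the keys and queue lengths in db 2 directly:
redis-cli -h 192.168.73.133 -a root123 -n 2 keys 'nagios_*'
redis-cli -h 192.168.73.133 -a root123 -n 2 llen nagios_system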
Next, read the data back out of Redis and write it into Elasticsearch (this needs another host; here we use 192.168.73.135).
# vim /etc/logstash/conf.d/redis-out.conf
input {
    redis {
        type => "system"
        host => "192.168.73.133"
        password => 'root123'
        port => "6379"
        db => "2"
        data_type => "list"
        key => 'nagios_system'
        batch_count => 1
    }
    redis {
        type => "http"
        host => "192.168.73.133"
        password => 'root123'
        port => "6379"
        db => "2"
        data_type => "list"
        key => 'nagios_http'
        batch_count => 1
    }
    redis {
        type => "nginx"
        host => "192.168.73.133"
        password => 'root123'
        port => "6379"
        db => "2"
        data_type => "list"
        key => 'nagios_nginx'
        batch_count => 1
    }
    redis {
        type => "secure"
        host => "192.168.73.133"
        password => 'root123'
        port => "6379"
        db => "2"
        data_type => "list"
        key => 'nagios_secure'
        batch_count => 1
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.73.133:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
}
Note: the input is collected on the client side, while the output still goes to Elasticsearch on 192.168.73.133. If you want to store it on the current host instead, change hosts in the output section to localhost, and if you also want to view it in Kibana, deploy Kibana on this host as well. Why do it this way? Loose coupling: logs are collected on the client and written to Redis (on the server or locally), and the output side only has to talk to the ES server. Run the command and see the result:
[root@elk-master ~]# nohup logstash -f /etc/logstash/conf.d/redis-out.conf &
The result is the same as writing directly to the ES server (except that the logs are first stored in the Redis database and then pulled back out of Redis).
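Once the indexer on 192.168.73.135 is running, the queues in db 2 should drain back toward zero while the nagios-* indices keep growing; one quick check:
redis-cli -h 192.168.73.133 -a root123 -n 2 llen nagios_nginx   # should shrink as events are shipped to ES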
Two conventions worth keeping: log paths should stay fixed, and the log format should be JSON wherever possible.
Finally, note that Elasticsearch keeps logs forever, so old indices need to be deleted periodically. The command below deletes the indices from a given number of days ($n) ago:
curl -X DELETE http://xx.xx.com:9200/logstash-*-`date +%Y.%m.%d -d "-$n days"`
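A sketch of how this could be wrapped into a daily cleanup script for the nagios-* indices used above, assuming a 30-day retention (the retention value, host, and index list are assumptions to adjust):
#!/bin/bash
# delete the indices that are exactly n days old; run daily (e.g. from cron) to enforce the retention window
n=30
old_date=$(date +%Y.%m.%d -d "-${n} days")
for idx in nagios-system nagios-secure nagios-http nagios-nginx; do
    curl -s -X DELETE "http://192.168.73.133:9200/${idx}-${old_date}"
done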