ELK Technology Stack

nginx Access Logs

Two sample entries from an nginx access log:

172.30.0.8 - - [26/Jun/2020:14:39:30 +0800] "GET //app/app/access_token?app_id=ce571941c2b7e4fb&rand=IRWDg_qd8LQk7ovExvLR8h8dBntkwYEW&signature=cf28da70ed09fff4d6d4e72ffe5baa0a56df2695 HTTP/1.1" 500 87 "-" "Yii2-Curl-Agent" "-"
172.30.0.5 - - [26/Jun/2020:14:39:30 +0800] "GET /devops/app/main?app_id=ce571941c2b7e4fb HTTP/1.0" 302 0 "http://local.fn.wiiqq.com/fn//devops/app/list?auth_code=zgqFYeJ6iuN2zoocxal7Cr6oPCQNz__Z&state=wii" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" "172.30.0.1"

These entries are produced by the following log_format directive in the nginx configuration:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

Logstash Input

Logstash reads the nginx access log, filters each line and extracts fields according to a grok pattern, and creates an index for the log data in Elasticsearch.

Grok Expressions

The Logstash distribution already includes a number of commonly used grok expressions; they can be browsed on GitHub at https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns
The grok pattern for Apache common-format logs is %{COMMONAPACHELOG}. The following logstash.conf uses it to parse the nginx access log:

input {
  file {
    # Tail the nginx access log; read it from the start the first time it is seen.
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
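# Note (not in the original configuration): the file input records how far it
# has read in a sincedb file, so start_position => "beginning" only applies
# to files it has never seen before.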
filter {
  grok {
    # Parse each line with the built-in Apache common-log pattern.
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
  date {
    # Use the parsed timestamp as the event's @timestamp.
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
  mutate {
    # Store the response code and byte count as integers.
    convert => ["response", "integer"]
    convert => ["bytes", "integer"]
  }
}
output {
  elasticsearch {
    hosts => "localhost"
  }
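  # Optional addition (not in the original example): also print each parsed
  # event to the console with the rubydebug codec, which makes it easy to
  # verify the grok, date and mutate filters above while testing.
  stdout {
    codec => rubydebug
  }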
}

This configuration matches each log line against the grok pattern, assigns the parsed timestamp to the event, and converts the data types of certain fields as needed. Start Logstash with the configuration file:

bin/logstash -f logstash.conf

Once Logstash starts, its output can be seen in the console.

Kibana Visualization

Start Kibana and open http://localhost:5601:

bin/kibana

From the indexed log data the following visualizations can be built:

- a Count metric bucketed by ip
- a Date Histogram with an Average metric on the bytes field
- a Date Histogram with a Count metric
- a Date Histogram with a Count metric and a Split Series sub-aggregation on the clientip field
- a Gauge with a Count metric

These visualizations can then be combined into a dashboard.
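The same data can also be checked outside of Kibana against Elasticsearch's REST API. The sketch below assumes the default Logstash index naming (logstash-YYYY.MM.dd) and the fields produced by the configuration above; depending on the Elasticsearch version, the histogram interval parameter is either interval or fixed_interval.

# List the indices created by Logstash.
curl 'localhost:9200/_cat/indices/logstash-*?v'

# Average of the bytes field per minute, the same aggregation that backs the
# Date Histogram + Average(bytes) visualization described above.
curl -H 'Content-Type: application/json' 'localhost:9200/logstash-*/_search?size=0' -d '
{
  "aggs": {
    "per_minute": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1m" },
      "aggs": {
        "avg_bytes": { "avg": { "field": "bytes" } }
      }
    }
  }
}'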