1. Background
The Kong gateway needs to integrate filebeat so that Kong's access logs are pushed to the company Kafka cluster. Since Kong may be deployed on clusters running different Kubernetes versions (1.13-1.30), filebeat is integrated into the gateway by running multiple containers in the same workload (a sidecar-style setup).
2. Overall approach
First, the shared log directory must be created dynamically before the containers start, so the filebeat container does not fail on a missing directory. Second, Kong's on-disk log path must be mounted to a hostPath on the node. Third, the filebeat pod must dynamically obtain the Kong pod IP. Finally, the filebeat container must be able to reach the broker service address of the target Kafka cluster.
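The directory convention these steps rely on, one shared log directory per Kong pod keyed by namespace and pod IP, can be sketched as follows (a minimal illustration; the helper name and sample values are hypothetical):

```python
# Illustration of the per-pod log directory convention described above:
# each Kong pod writes its access logs under <base>/<namespace>/<podIP>;
# the init container pre-creates this directory and filebeat later reads
# from it. The function name and sample values are hypothetical.

def pod_log_dir(base: str, namespace: str, pod_ip: str) -> str:
    """Build the shared on-disk log directory path for one Kong pod."""
    return f"{base}/{namespace}/{pod_ip}"

print(pod_log_dir("/data/web_logs", "kong", "10.244.1.23"))
# /data/web_logs/kong/10.244.1.23
```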
3. Changes
3.1 Helm chart deployment.yaml template: add the filebeat container
- name: filebeat
  image: "{{ .Values.filebeat.image.repository }}:{{ .Values.filebeat.image.tag }}"
  imagePullPolicy: {{ .Values.filebeat.image.pullPolicy }}
  resources:
    {{- toYaml .Values.filebeat.resources | nindent 12 }}
  volumeMounts:
    - name: web-logs
      mountPath: /data/web_logs
      readOnly: true
    - name: filebeat-config
      mountPath: /etc/filebeat
  env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
3.2 values.yaml: add the init container, the custom log volume mounts, and the filebeat settings
initContainers:
  - name: create-log-dir
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/google-containers/busybox:1.27
    command: [ "sh", "-c", "mkdir -p /data/web_logs/${NAMESPACE}/${POD_IP} && chmod 755 /data/web_logs/${NAMESPACE}/${POD_IP}" ]
    securityContext:
      runAsUser: 0
    env:
      # NAMESPACE and POD_IP are expanded by the shell in the command above
      - name: NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
    volumeMounts:
      - name: web-logs
        mountPath: /data/web_logs
userDefinedVolumes:
  # New log volume
  - name: web-logs
    hostPath:
      path: /data/web_logs/${NAMESPACE}/${POD_IP}
      type: DirectoryOrCreate
  - name: filebeat-config
    configMap:
      name: filebeat-config
userDefinedVolumeMounts:
  - name: web-logs
    mountPath: /data/web_logs
    readOnly: false
  - name: filebeat-config
    mountPath: /usr/share/filebeat/filebeat.yml
    subPath: filebeat.yml
filebeat:
  enabled: true
  kafkaHosts: ["nm-bigdata-kafka01:9092","nm-bigdata-kafka02:9092","nm-bigdata-kafka03:9092","nm-bigdata-kafka04:9092","nm-bigdata-kafka05:9092"]
  kafkaTopic: "ucop-kong-proxy-access"
  # New Kafka authentication settings
  kafkaAuth:
    enabled: false            # whether to enable authentication
    username: "user"          # Kafka username
    password: "pass"          # Kafka password
  kafkaSSL:
    enabled: false            # whether to enable SSL
    certificateAuthorities: ["/etc/certs/kafka/ca.pem"]  # CA certificate paths
    certificate: "/etc/certs/kafka/client.pem"           # client certificate path
    key: "/etc/certs/kafka/client.key"                   # client private key path
  image:
    repository: hub.tech.21cn.com/library/elastic/filebeat
    tag: 8.12.0
    pullPolicy: IfNotPresent
  resources:
    limits:
      cpu: 200m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi
3.3 The filebeat collection settings are mounted as a ConfigMap; filebeat-configmap.yaml is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    app: {{ template "kong.fullname" . }}
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: log
        id: kong-access
        enabled: true
        fields:
          app: kong-proxy
        fields_under_root: true
        paths:
          - /data/web_logs/*/*_access.log
        processors:
          - copy_fields:
              fail_on_error: false
              ignore_missing: true
              fields:
                - from: log.file.path
                  to: source
    output.kafka:
      enabled: true
      hosts: {{ .Values.filebeat.kafkaHosts | toJson }}   # Kafka cluster addresses
      topic: {{ .Values.filebeat.kafkaTopic | quote }}
      required_acks: 1
      compression: gzip
      # Add the following when Kafka requires authentication
      {{- if .Values.filebeat.kafkaAuth.enabled }}
      username: {{ .Values.filebeat.kafkaAuth.username | quote }}
      password: {{ .Values.filebeat.kafkaAuth.password | quote }}
      {{- end }}
      {{- if .Values.filebeat.kafkaSSL.enabled }}
      ssl.certificate_authorities: {{ .Values.filebeat.kafkaSSL.certificateAuthorities | toJson }}
      ssl.certificate: {{ .Values.filebeat.kafkaSSL.certificate | quote }}
      ssl.key: {{ .Values.filebeat.kafkaSSL.key | quote }}
      {{- end }}
    logging.level: debug
    path.data: /var/lib/filebeat
Note that filebeat must be pointed at the log path as seen from the container filesystem; using the host filesystem path instead will result in no logs being collected.
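To double-check that the input glob above matches the container-side layout but not a host-side path with extra directory levels, a quick sketch (the sample paths are hypothetical):

```python
# Sanity-check that filebeat's input glob matches the container-side log
# layout. PurePosixPath.match gives per-segment glob matching, similar to
# filebeat's path globbing; the sample paths are hypothetical.
from pathlib import PurePosixPath

pattern = "/data/web_logs/*/*_access.log"

# Path as seen inside the filebeat container (volume mounted at /data/web_logs):
container_path = PurePosixPath("/data/web_logs/kong/proxy_access.log")
# Path as seen on the host (extra <namespace>/<podIP> directory levels):
host_path = PurePosixPath("/data/web_logs/kong/10.244.1.23/kong/proxy_access.log")

print(container_path.match(pattern))  # True  -> filebeat picks this up
print(host_path.match(pattern))       # False -> a host-style path would be missed
```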
3.4 Code changes for the filebeat configuration page in the Kong gateway admin UI (omitted)
4. Test deployment
4.1 Start a single-instance Kafka in the local k8s cluster; the manifests are as follows:
kafka.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  namespace: kafka
spec:
  ports:
    - port: 9092
      nodePort: 39093
  selector:
    app: kafka
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-deployment
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/bitnami/kafka:3.4.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-service:2181
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://10.251.90.7:9092
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: ALLOW_PLAINTEXT_LISTENER
              value: "yes"
zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-service
  namespace: kafka
spec:
  ports:
    - port: 2181
  selector:
    app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-deployment
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: registry.cn-hangzhou.aliyuncs.com/images-speed-up/zookeeper:3.8.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 2181
The Kafka environment variable KAFKA_ADVERTISED_LISTENERS is the address the Kafka broker advertises to clients, in the form of service IP + port; simply kubectl apply the manifests to bring everything up.
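As a quick illustration of the advertised-listener format (protocol://host:port), the value can be pulled apart with a standard URL parser; the sample value mirrors the manifest above:

```python
# Split a Kafka advertised-listener string into host and port.
# The value mirrors KAFKA_ADVERTISED_LISTENERS in the manifest above;
# clients must be able to reach this host:port directly.
from urllib.parse import urlparse

listener = "PLAINTEXT://10.251.90.7:9092"
parsed = urlparse(listener)

print(parsed.hostname, parsed.port)  # 10.251.90.7 9092
```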
4.2 Kafka command lines for listing topics and inspecting consumed messages (run from the Kafka scripts directory):
4.2.1 Consume a given topic from the beginning, at most 10 messages
kafka-console-consumer.sh --bootstrap-server 172.19.0.93:39988 --topic ucop-kong-proxy-access --from-beginning --max-messages 10
4.2.2 List all topics
kafka-topics.sh --bootstrap-server localhost:9092 --list
4.2.3 Tail new messages in real time
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic ucop-kong-proxy-access
4.2.4 Produce test messages for consumption
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic ucop-kong-proxy-access
4.4 Use curl to simulate a call through the Kong gateway
curl -i http://<kong-svc>
4.5 Check the local filebeat logs and the Kafka consumer output; if new messages appear on the topic, they have been pushed to the Kafka cluster successfully.
4.6 Normalize the log output format by configuring Kong's nginx_http_log_format parameter, as follows:
accesslog "$time_iso8601\t$server_addr\t$remote_addr\t\"$http_x_forwarded_for\"\t$host\t$scheme\t\"$request_uri\"\t$request_length\t\"$http_referer\"\t$bytes_sent\t$body_bytes_sent\t\"$http_user_agent\"\t\"$request_body\"\t$request_time\t$status\t$upstream_host\t$upstream_addr\t\"$upstream_uri\"\t$upstream_response_time\t$upstream_connect_time\t$upstream_header_time\t$upstream_status"
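The tab-separated format above can be split back into named fields downstream; a minimal parsing sketch (field names are taken from the nginx variables in the format string, and the sample line is fabricated purely for illustration):

```python
# Parse one access-log line produced by the tab-separated format above.
# Field names mirror the nginx variables in the format string; the sample
# line is fabricated for illustration only.

FIELDS = [
    "time_iso8601", "server_addr", "remote_addr", "http_x_forwarded_for",
    "host", "scheme", "request_uri", "request_length", "http_referer",
    "bytes_sent", "body_bytes_sent", "http_user_agent", "request_body",
    "request_time", "status", "upstream_host", "upstream_addr",
    "upstream_uri", "upstream_response_time", "upstream_connect_time",
    "upstream_header_time", "upstream_status",
]

def parse_access_line(line: str) -> dict:
    """Split a tab-separated access-log line into a field dict."""
    values = line.rstrip("\n").split("\t")
    return dict(zip(FIELDS, values))

sample = "\t".join([
    "2024-05-01T10:00:00+08:00", "10.0.0.1", "10.0.0.2", '"-"',
    "api.example.com", "http", '"/ping"', "120", '"-"',
    "512", "300", '"curl/8.0"', '"-"',
    "0.003", "200", "upstream", "10.0.0.3:8000",
    '"/ping"', "0.002", "0.001", "0.002", "200",
])

rec = parse_access_line(sample)
print(rec["status"], rec["host"])  # 200 api.example.com
```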
This completes the core flow of integration, deployment, and testing; if anything is missing, corrections in the comments are welcome.
Originality statement: this article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.
For infringement concerns, contact cloudcommunity@tencent.com for removal.