In this article we will see how to configure Filebeat to run as a DaemonSet in our Kubernetes cluster, shipping logs to an Elasticsearch backend. We are using Filebeat instead of FluentD or FluentBit because it is an extremely lightweight utility with first-class support for Kubernetes, which makes this a production-ready configuration.

Filebeat will run as a DaemonSet in our Kubernetes cluster.

Deploy the following manifest to create the permissions required by the Filebeat pods:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging # must match the namespace the ServiceAccount was created in
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
From a security standpoint, we should keep the ClusterRole's permissions as restricted as possible: if any pod associated with this service account is ever compromised, the attacker cannot gain access to the entire cluster or the applications running in it. A quick way to sanity-check the granted permissions is shown below.
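As a minimal sketch (assuming everything was created in the logging namespace as above), we can impersonate the service account with kubectl auth can-i and confirm that only the read verbs we granted are allowed:

root$ kubectl auth can-i list pods --as=system:serviceaccount:logging:filebeat        # expected: yes
root$ kubectl auth can-i watch namespaces --as=system:serviceaccount:logging:filebeat # expected: yes
root$ kubectl auth can-i delete pods --as=system:serviceaccount:logging:filebeat      # expected: no
root$ kubectl auth can-i get secrets --as=system:serviceaccount:logging:filebeat      # expected: no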
Use the following manifest to create the ConfigMap that will be consumed by the Filebeat pods:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      # inputs:
      #   path: ${path.config}/inputs.d/*.yml
      #   reload.enabled: true
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: true
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          include_annotations: ["artifact.spinnaker.io/name","ad.datadoghq.com/tags"]
          include_labels: ["app.kubernetes.io/name"]
          labels.dedot: true
          annotations.dedot: true
          templates:
            - condition:
                equals:
                  kubernetes.namespace: myapp # Set the namespace your app runs in; add more conditions for additional namespaces.
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline:
                    pattern: '^[A-Za-z ]+[0-9]{2} (?:[01]\d|2[0123]):(?:[012345]\d):(?:[012345]\d)' # Timestamp regex for the app logs; change it to match your format.
                    negate: true
                    match: after
            - condition:
                equals:
                  kubernetes.namespace: elasticsearch
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline:
                    pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}|^[0-9]{4}-[0-9]{2}-[0-9]{2}T'
                    negate: true
                    match: after
    processors:
      - add_cloud_metadata: ~
      - drop_fields:
          when:
            has_fields: ['kubernetes.labels.app']
          fields:
            - 'kubernetes.labels.app'
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
There are a few important things to understand about the Filebeat ConfigMap:

- filebeat.autodiscover uses the kubernetes provider with hints.enabled: true, so Filebeat discovers containers dynamically as pods come and go, and individual pods can adjust how their own logs are collected through annotations (see the sketch below).
- Each template is gated by an equals condition on kubernetes.namespace, so only the namespaces we list are collected; add one template per namespace.
- The multiline settings fold continuation lines (for example stack traces) into the event that started with a matching timestamp, so the pattern must match the first line of your application's log format.
- include_annotations and include_labels control which pod metadata is attached to each event, and the dedot options replace dots in label and annotation names so they do not become nested objects in Elasticsearch.
- add_cloud_metadata enriches events with cloud provider metadata, while drop_fields removes the kubernetes.labels.app field whenever it is present.
- output.elasticsearch reads its host and port from environment variables injected by the DaemonSet, falling back to elasticsearch:9200.
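For illustration, here is a hedged sketch of how a pod can use hint annotations to override the template defaults without touching the ConfigMap. The deployment name and image are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp        # hypothetical app running in the myapp namespace
  namespace: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: myapp
  template:
    metadata:
      labels:
        app.kubernetes.io/name: myapp
      annotations:
        # Filebeat hint annotations (co.elastic.logs/*) override the autodiscover template:
        co.elastic.logs/multiline.pattern: '^\d{4}-\d{2}-\d{2}'  # lines not starting with a date are continuations
        co.elastic.logs/multiline.negate: "true"
        co.elastic.logs/multiline.match: "after"
    spec:
      containers:
      - name: myapp
        image: myorg/myapp:latest   # hypothetical image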
Use the following manifest to deploy the Filebeat DaemonSet:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      containers:
      - name: filebeat
        image: elastic/filebeat:6.5.4
        args: [
          "-c", "/usr/share/filebeat/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch.elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/filebeat/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
          # Only needed if you enable filebeat.config.inputs in the ConfigMap above;
          # marked optional so the pods can start without it.
          optional: true
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
Let's look at what is happening here:

- The pods run with serviceAccountName: filebeat, so they get exactly the read-only RBAC permissions we created earlier.
- runAsUser: 0 is needed because the container log files under /var/lib/docker/containers on the host are typically readable only by root.
- /var/lib/docker/containers is mounted read-only; this is where Filebeat harvests the JSON log files written by the Docker daemon.
- The data volume is a hostPath that persists Filebeat's registry of read offsets, so a pod restart does not re-ship logs that were already sent.
- The ELASTICSEARCH_HOST and ELASTICSEARCH_PORT environment variables feed the output.elasticsearch section of the ConfigMap; here they point at the elasticsearch service in the elasticsearch namespace.
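As a quick check that the registry really persists, we can look inside a running pod; a minimal sketch using one of the pod names from the listing further below:

root$ kubectl -n logging exec filebeat-4kchs -- ls /usr/share/filebeat/data
root$ kubectl -n logging exec filebeat-4kchs -- head -c 300 /usr/share/filebeat/data/registry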
Also note the following setting in the manifest:
...
tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
...
This ensures that our Filebeat DaemonSet also schedules a pod on the master nodes. Once the Filebeat DaemonSet is deployed, we can check whether our pods are being scheduled correctly.
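To roll everything out, apply the three manifests shown above in order. The file names here are hypothetical; use whichever names you saved them under:

root$ kubectl apply -f filebeat-rbac.yaml       # Namespace, ServiceAccount, ClusterRole, ClusterRoleBinding
root$ kubectl apply -f filebeat-configmap.yaml  # the filebeat-config ConfigMap
root$ kubectl apply -f filebeat-daemonset.yaml  # the Filebeat DaemonSet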
root$ kubectl -n logging get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
filebeat-4kchs 1/1 Running 0 6d 100.96.8.2 ip-10-10-30-206.us-east-2.compute.internal <none> <none>
filebeat-6nrpc 1/1 Running 0 6d 100.96.7.6 ip-10-10-29-252.us-east-2.compute.internal <none> <none>
filebeat-7qs2s 1/1 Running 0 6d 100.96.1.6 ip-10-10-30-161.us-east-2.compute.internal <none> <none>
filebeat-j5xz6 1/1 Running 0 6d 100.96.5.3 ip-10-10-28-186.us-east-2.compute.internal <none> <none>
filebeat-pskg5 1/1 Running 0 6d 100.96.64.4 ip-10-10-29-142.us-east-2.compute.internal <none> <none>
filebeat-vjdtg 1/1 Running 0 6d 100.96.65.3 ip-10-10-30-118.us-east-2.compute.internal <none> <none>
filebeat-wm24j 1/1 Running 0 6d 100.96.0.4 ip-10-10-28-162.us-east-2.compute.internal <none> <none>
root$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-10-28-162.us-east-2.compute.internal Ready master 6d v1.14.8 10.10.28.162 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-28-186.us-east-2.compute.internal Ready node 6d v1.14.8 10.10.28.186 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-29-142.us-east-2.compute.internal Ready master 6d v1.14.8 10.10.29.142 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-29-252.us-east-2.compute.internal Ready node 6d v1.14.8 10.10.29.252 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-30-118.us-east-2.compute.internal Ready master 6d v1.14.8 10.10.30.118 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-30-161.us-east-2.compute.internal Ready node 6d v1.14.8 10.10.30.161 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-30-206.us-east-2.compute.internal Ready node 6d v1.14.8 10.10.30.206 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
If we tail the logs of one of the pods, we can clearly see that it connects to Elasticsearch and has started harvesters for the files. This can be seen in the snippet below:
2019-11-19T06:22:03.435Z INFO log/input.go:138 Configured paths: [/var/lib/docker/containers/c2b29f5e06eb8affb2cce7cf2501f6f824a2fd83418d09823faf4e74a5a51eb7/*.log]
2019-11-19T06:22:03.435Z INFO input/input.go:114 Starting input of type: docker; ID: 4134444498769889169
2019-11-19T06:22:04.786Z INFO input/input.go:149 input ticker stopped
2019-11-19T06:22:04.786Z INFO input/input.go:167 Stopping Input: 4134444498769889169
2019-11-19T06:22:19.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641680,"time":{"ms":16}},"total":{"ticks":2471920,"time":{"ms":180},"value":2471920},"user":{"ticks":1830240,"time":{"ms":164}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549390018}},"memstats":{"gc_next":47281968,"memory_alloc":29021760,"memory_total":156062982472}},"filebeat":{"events":{"added":111,"done":111},"harvester":{"closed":2,"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":108,"batches":15,"total":108},"read":{"bytes":69},"write":{"bytes":123536}},"pipeline":{"clients":1847,"events":{"active":0,"filtered":3,"published":108,"total":111},"queue":{"acked":108}}},"registrar":{"states":{"current":87,"update":111},"writes":{"success":18,"total":18}},"system":{"load":{"1":0.98,"15":1.71,"5":1.59,"norm":{"1":0.0613,"15":0.1069,"5":0.0994}}}}}}
2019-11-19T06:22:49.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641720,"time":{"ms":44}},"total":{"ticks":2472030,"time":{"ms":116},"value":2472030},"user":{"ticks":1830310,"time":{"ms":72}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549420018}},"memstats":{"gc_next":47281968,"memory_alloc":38715472,"memory_total":156072676184}},"filebeat":{"events":{"active":12,"added":218,"done":206},"harvester":{"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":206,"batches":24,"total":206},"read":{"bytes":102},"write":{"bytes":269666}},"pipeline":{"clients":1847,"events":{"active":12,"published":218,"total":218},"queue":{"acked":206}}},"registrar":{"states":{"current":87,"update":206},"writes":{"success":24,"total":24}},"system":{"load":{"1":1.22,"15":1.7,"5":1.58,"norm":{"1":0.0763,"15":0.1063,"5":0.0988}}}}}}
2019-11-19T06:23:19.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641750,"time":{"ms":28}},"total":{"ticks":2472110,"time":{"ms":72},"value":2472110},"user":{"ticks":1830360,"time":{"ms":44}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549450017}},"memstats":{"gc_next":47281968,"memory_alloc":43140256,"memory_total":156077100968}},"filebeat":{"events":{"active":-12,"added":43,"done":55},"harvester":{"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":55,"batches":12,"total":55},"read":{"bytes":51},"write":{"bytes":70798}},"pipeline":{"clients":1847,"events":{"active":0,"published":43,"total":43},"queue":{"acked":55}}},"registrar":{"states":{"current":87,"update":55},"writes":{"success":12,"total":12}},"system":{"load":{"1":0.99,"15":1.67,"5":1.49,"norm":{"1":0.0619,"15":0.1044,"5":0.0931}}}}}}
2019-11-19T06:23:25.261Z INFO log/harvester.go:255 Harvester started for file: /var/lib/docker/containers/ccb7dc75ecc755734f6befc4965b9fdae74d59810914101eadf63daa69eb62e2/ccb7dc75ecc755734f6befc4965b9fdae74d59810914101eadf63daa69eb62e2-json.log
2019-11-19T06:23:49.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641780,"time":{"ms":28}},"total":{"ticks":2472310,"time":{"ms":196},"value":2472310},"user":{"ticks":1830530,"time":{"ms":168}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":21},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549480018}},"memstats":{"gc_next":47789200,"memory_alloc":31372376,"memory_total":156086697176,"rss":-1064960}},"filebeat":{"events":{"active":16,"added":170,"done":154},"harvester":{"open_files":16,"running":14,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":153,"batches":24,"total":153},"read":{"bytes":115},"write":{"bytes":207569}},"pipeline":{"clients":1847,"events":{"active":16,"filtered":1,"published":169,"total":170},"queue":{"acked":153}}},"registrar":{"states":{"current":87,"update":154},"writes":{"success":25,"total":25}},"system":{"load":{"1":0.87,"15":1.63,"5":1.41,"norm":{"1":0.0544,"15":0.1019,"5":0.0881}}}}}}
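We can also confirm from the Elasticsearch side that events are being indexed. A minimal sketch, assuming the service is named elasticsearch in the elasticsearch namespace, as implied by the ELASTICSEARCH_HOST value in the DaemonSet:

root$ kubectl -n elasticsearch port-forward svc/elasticsearch 9200:9200 &
root$ curl -s 'http://localhost:9200/_cat/indices/filebeat-*?v'
# expect one timestamped index per day, e.g. filebeat-6.5.4-2019.11.19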
Once all the pods are running, we can create an index pattern of type filebeat-* in Kibana; Filebeat indexes are timestamped by default. As soon as the index pattern is created, all the available searchable fields can be seen and imported. Finally, we can search our application logs and build dashboards on top of them when needed. Using a JSON logger in our applications is highly recommended, since it makes log processing much easier and messages can be parsed cleanly; a sketch of this follows below.
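For illustration, a hedged sketch of the JSON logging approach. The log line and annotation values are hypothetical; the co.elastic.logs/json.* hints are picked up because hints.enabled: true is set in our autodiscover configuration (with older Filebeat releases the same json options can instead be set directly in the template config):

# A structured log line as the application writes it to stdout:
{"timestamp":"2019-11-19T06:22:03Z","level":"info","logger":"orders","message":"order created","order_id":4711}

# Pod annotations telling Filebeat to decode each line as JSON into top-level fields:
co.elastic.logs/json.keys_under_root: "true"   # lift JSON keys to the top of the event
co.elastic.logs/json.add_error_key: "true"     # tag events whose lines fail to parse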
This concludes the setup of our logging stack. All of the configuration provided here has been tested in production and is ready to deploy. Give it a try, and feel free to share your own Kubernetes practices with us.

Original article: https://appfleet.com/blog/part-2-efk-stack-on-kubernetes/