Kubernetes documentation: https://kubernetes.io/docs/home/
You must connect to the correct host. Failing to do so may result in a score of zero.
Context
[candidate@base] $ ssh cks001091
Running a CIS benchmark tool against the kubeadm-created cluster found several issues that must be resolved immediately.
Task
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the kubelet:
1.1.1 Ensure that the anonymous-auth argument is set to false
1.1.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow
Note: use Webhook authentication/authorization where possible.
Fix all of the following violations that were found against etcd:
2.1.1 Ensure that the --client-cert-auth argument is set to true
candidate@base:~$ ssh cks001091
candidate@master01:~$ sudo -i
root@master01:~#
root@master01:~# kubectl get pod
E0318 10:13:28.546257 9014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://11.0.1.111:6443/api?timeout=32s\": dial tcp 11.0.1.111:6443: connect: connection refused"
E0318 10:13:28.547769 9014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://11.0.1.111:6443/api?timeout=32s\": dial tcp 11.0.1.111:6443: connect: connection refused"
E0318 10:13:28.549668 9014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://11.0.1.111:6443/api?timeout=32s\": dial tcp 11.0.1.111:6443: connect: connection refused"
E0318 10:13:28.551336 9014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://11.0.1.111:6443/api?timeout=32s\": dial tcp 11.0.1.111:6443: connect: connection refused"
E0318 10:13:28.552830 9014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://11.0.1.111:6443/api?timeout=32s\": dial tcp 11.0.1.111:6443: connect: connection refused"
The connection to the server 11.0.1.111:6443 was refused - did you specify the right host or port?
root@master01:~# cd /var/lib/kubelet/
root@master01:/var/lib/kubelet# ll
total 52
drwxrwxr-x 9 root root 4096 Mar 18 10:12 ./
drwxr-xr-x 51 root root 4096 Nov 3 16:57 ../
drwx------ 2 root root 4096 Oct 27 18:54 checkpoints/
-rw-r--r-- 1 root root 1106 Mar 18 10:12 config.yaml
-rw------- 1 root root 62 Oct 27 18:54 cpu_manager_state
drwxr-xr-x 2 root root 4096 Mar 18 10:12 device-plugins/
-rw-r--r-- 1 root root 174 Oct 27 19:01 kubeadm-flags.env
-rw-r--r-- 1 root root 0 Aug 13 2024 .kubelet-keep
-rw------- 1 root root 61 Oct 27 18:54 memory_manager_state
drwxr-xr-x 2 root root 4096 Oct 27 18:54 pki/
drwxr-x--- 3 root root 4096 Oct 27 19:20 plugins/
drwxr-x--- 2 root root 4096 Mar 18 10:14 plugins_registry/
drwxr-x--- 2 root root 4096 Mar 18 10:12 pod-resources/
drwxr-x--- 12 root root 4096 Mar 18 10:14 pods/
root@master01:/var/lib/kubelet# cp config.yaml /opt/
root@master01:/var/lib/kubelet# vim config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false              # changed from true to false
  webhook:
    cacheTTL: 0s
    enabled: true               # enable webhook authentication
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook                 # use Webhook authorization
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
root@master01:~# cd /etc/kubernetes/manifests
root@master01:/etc/kubernetes/manifests# ll
total 24
drwxrwxr-x 2 root root 4096 Mar 18 10:12 ./
drwxrwxr-x 6 root root 4096 Nov 3 17:11 ../
-rw------- 1 root root 2552 Mar 18 10:12 etcd.yaml
-rw------- 1 root root 3896 Mar 18 10:12 kube-apiserver.yaml
-rw------- 1 root root 3417 Oct 27 19:01 kube-controller-manager.yaml
-rw-r--r-- 1 root root 0 Aug 13 2024 .kubelet-keep
-rw------- 1 root root 1487 Oct 27 19:01 kube-scheduler.yaml
root@master01:/etc/kubernetes/manifests# cp etcd.yaml /opt/
root@master01:/etc/kubernetes/manifests# vim etcd.yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/etcd.advertise-client-urls: https://11.0.1.111:2379
creationTimestamp: null
labels:
component: etcd
tier: control-plane
name: etcd
namespace: kube-system
spec:
containers:
- command:
- etcd
- --advertise-client-urls=https://11.0.1.111:2379
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true   # changed from false to true to enable client certificate authentication
- --data-dir=/var/lib/etcd
Remember to switch back to the candidate user and return to the base node afterwards.
root@master01:/etc/kubernetes/manifests# systemctl daemon-reload && systemctl restart kubelet
root@master01:/etc/kubernetes/manifests#
root@master01:/etc/kubernetes/manifests# kubectl get pod
NAME READY STATUS RESTARTS AGE
amd-gpu-6cbfd6c6d6-l5464 1/1 Running 7 (27m ago) 134d
cpu-65cf4d685c-lvnqk 1/1 Running 7 (27m ago) 134d
nvidia-gpu-64c4d44986-x5qgh 1/1 Running 7 (27m ago) 134d
root@master01:/etc/kubernetes/manifests#
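As a quick sanity check after the restart (a sketch; the loopback address and the default kubelet port 10250 are assumptions, not shown in the transcript):
curl -sk https://127.0.0.1:10250/pods                        # should now return "Unauthorized" with anonymous auth disabled
grep -E 'enabled:|mode:' /var/lib/kubelet/config.yaml        # confirm anonymous enabled: false and mode: Webhook
grep 'client-cert-auth' /etc/kubernetes/manifests/etcd.yaml  # confirm --client-cert-auth=true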
root@master01:/etc/kubernetes/manifests# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001091 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000040
Context
You must use SSL files stored in a TLS Secret to secure access to a web server.
Task
In the clever-cactus namespace, create a TLS Secret named clever-cactus for the existing Deployment named clever-cactus.
Use the following SSL files:
Certificate: ~/ca-cert/web.k8s.local.crt
Key: ~/ca-cert/web.k8s.local.key
The Deployment is already configured to use the TLS Secret.
Do not modify the existing Deployment.
candidate@base:~$ ssh cks000040
candidate@master01:~$ kubectl get deployments.apps -n clever-cactus
NAME READY UP-TO-DATE AVAILABLE AGE
clever-cactus 0/1 1 0 134d
candidate@master01:~$
candidate@master01:~$ kubectl get pod -n clever-cactus
NAME READY STATUS RESTARTS AGE
clever-cactus-d58976778-8qxqf 0/1 ContainerCreating 0 134d
candidate@master01:~$
candidate@master01:~$ kubectl -n clever-cactus describe pod clever-cactus-d58976778-8qxqf
······
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 134d (x60 over 134d) kubelet MountVolume.SetUp failed for volume "tls-certificate" : secret "clever-cactus" not found
Warning FailedMount 134d (x63 over 134d) kubelet MountVolume.SetUp failed for volume "tls-key" : secret "clever-cactus" not found
Warning FailedMount 134d (x13 over 134d) kubelet MountVolume.SetUp failed for volume "tls-certificate" : secret "clever-cactus" not found
Warning FailedMount 134d (x14 over 134d) kubelet MountVolume.SetUp failed for volume "tls-key" : secret "clever-cactus" not found
Create the TLS Secret:
candidate@master01:~$ kubectl create secret tls clever-cactus -n clever-cactus --cert=ca-cert/web.k8s.local.crt --key=ca-cert/web.k8s.local.key
secret/clever-cactus created
candidate@master01:~$ kubectl -n clever-cactus get secrets
NAME TYPE DATA AGE
clever-cactus kubernetes.io/tls 2 25s
# Wait about three minutes; once the Pod detects that the Secret exists, it is restarted and comes up.
candidate@master01:~$ kubectl -n clever-cactus get pod
NAME READY STATUS RESTARTS AGE
clever-cactus-d58976778-8qxqf 1/1 Running 0 134d
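Optionally, the Secret can be checked against the provided certificate (a sketch; assumes openssl is available on the host):
kubectl -n clever-cactus get secret clever-cactus -o jsonpath='{.type}'        # kubernetes.io/tls
kubectl -n clever-cactus get secret clever-cactus -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject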
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks001094
Task
Analyze and edit the given Dockerfile /cks/docker/Dockerfile
and fix one instruction in the file that has a prominent security/best-practice issue.
Analyze and edit the given manifest file /cks/docker/deployment.yaml
and fix one field in the file that has a prominent security/best-practice issue.
Note: do not add or remove configuration settings; only modify existing settings so that neither file has the security/best-practice issue any more.
Note: if you need an unprivileged user to run anything, use the user nobody with user ID 65535.
candidate@base:~$ ssh cks001094
candidate@master01:~$ vim /cks/docker/Dockerfile
FROM ubuntu:20.04
RUN apt-get install -y wget curl gcc gcc-c++ make openssl-devel pcre-devel gd-devel \
iproute net-tools telnet && \
yum clean all && \
rm -rf /var/cache/apt/*
ADD nginx-1.15.5.tar.gz /
RUN cd nginx-1.15.5 && \
./configure --prefix=/usr/local/nginx \
--with-http_ssl_module \
--with-http_stub_status_module && \
make -j 4 && make install && \
mkdir /usr/local/nginx/conf/vhost && \
cd / && rm -rf nginx* && \
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
COPY sunnydale.sh .
ENTRYPOINT ["/sunnydale.sh"]
USER nobody       # changed from root to the nobody user required by the task
CMD ["./sunnydale.sh"]
ENV PATH $PATH:/usr/local/nginx/sbin
COPY nginx.conf /usr/local/nginx/conf/nginx.conf
WORKDIR /usr/local/nginx
EXPOSE 80
CMD ["nginx","-g","daemon off;"]
candidate@master01:~$ vim /cks/docker/deployment.yaml
·······
securityContext:
  runAsUser: 65535              # ensure the required user ID 65535
  readOnlyRootFilesystem: true  # read-only root filesystem
  privileged: false             # do not run privileged
  capabilities:
    drop: ["all"]
    add:
    - NET_BIND_SERVICE
····
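Not required by the task, but a client-side validation of the edited manifest can catch YAML mistakes (a sketch):
kubectl apply -f /cks/docker/deployment.yaml --dry-run=client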
candidate@master01:~$ exit
logout
Connection to cks001094 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000026
Context
A Pod is misbehaving and poses a security threat to the system.
Task
A Pod belonging to the application ollama is behaving abnormally. It is reading data directly from the system's memory via the sensitive file /dev/mem.
First, identify the misbehaving Pod that is accessing /dev/mem.
Next, scale the misbehaving Pod's Deployment down to zero replicas.
Note:
Other than scaling down the replicas, do not modify anything else in the Deployment.
Do not modify any other Deployment.
Do not delete any Deployment.
candidate@base:~$ ssh cks000026
candidate@node02:~$ sudo -i
root@node02:~#
root@node02:~# cd /etc/falco
root@node02:/etc/falco# vim falco_rules.yaml   # for reference: lines 395-408 of the template rules file
- list: allowed_outbound_destination_domains
items: [google.com, www.yahoo.com]
- rule: Unexpected outbound connection destination
desc: Detect any outbound connection to a destination outside of an allowed set of ips, networks, or domain names
condition: >
outbound and not
((fd.sip in (allowed_outbound_destination_ipaddrs)) or
(fd.snet in (allowed_outbound_destination_networks)) or
(fd.sip.name in (allowed_outbound_destination_domains)))
enabled: false
output: Disallowed outbound connection destination (command=%proc.cmdline pid=%proc.pid connection=%fd.name user=%user.name user_loginuid=%user.loginuid container_id=%container.id image=%container.image.repository)
priority: NOTICE
tags: [host, container, network, mitre_command_and_control, TA0011]
root@node02:/etc/falco#
root@node02:/etc/falco# vim falco_rules.local.yaml
# Your custom rules!
- list: dev-file
items: [/dev/mem]
- rule: devmem
desc: devmem
condition: >
fd.name in (dev-file)
output: >
Shell (container_id=%container.id)
priority: NOTICE
tags: [file]
root@node02:/etc/falco# falco -M 30 -r /etc/falco/falco_rules.local.yaml > devmem.log
Tue Mar 18 11:12:43 2025: Falco version: 0.34.0 (x86_64)
Tue Mar 18 11:12:43 2025: Falco initialized with configuration file: /etc/falco/falco.yaml
Tue Mar 18 11:12:43 2025: Loading rules from file /etc/falco/falco_rules.local.yaml
Rules match ignored syscall: warning (ignored-evttype):
Loaded rules match the following events: write, mlock2, fsconfig, send, getsockname, getpeername, setsockopt, recv, sendmmsg, recvmmsg, semop, getrlimit, page_fault, sendfile, fstat, io_uring_enter, getdents64, mlock, pwrite, ppoll, mlockall, io_uring_register, copy_file_range, getegid, fstat64, semget, munlock, signaldeliver, access, stat64, epoll_wait, lseek, poll, munmap, mprotect, getdents, lstat64, pluginevent, getresgid, getresuid, geteuid, getuid, semctl, munlockall, mmap, read, splice, brk, switch, pwritev, getgid, nanosleep, preadv, writev, readv, pread, mmap2, getcwd, select, llseek, lstat, stat, futex
These events might be associated with syscalls undefined on your architecture (please take a look here: https://marcin.juszkiewicz.com.pl/download/tables/syscalls.html). If syscalls are instead defined, you have to run Falco with `-A` to catch these events
Tue Mar 18 11:12:43 2025: The chosen syscall buffer dimension is: 8388608 bytes (8 MBs)
Tue Mar 18 11:12:43 2025: Starting health webserver with threadiness 2, listening on port 8765
Tue Mar 18 11:12:43 2025: Enabled event sources: syscall
Tue Mar 18 11:12:43 2025: Opening capture with Kernel module
Syscall event drop monitoring:
- event drop detected: 0 occurrences
- num times actions taken: 0
root@node02:/etc/falco#
root@node02:/etc/falco# cat devmem.log
11:12:50.180930443: Notice Shell (container_id=8348dccac054)
11:12:50.184015793: Notice Shell (container_id=8348dccac054)
11:12:50.184016518: Notice Shell (container_id=8348dccac054)
11:13:00.185975987: Notice Shell (container_id=8348dccac054)
11:13:00.187674496: Notice Shell (container_id=8348dccac054)
11:13:00.187675127: Notice Shell (container_id=8348dccac054)
11:13:10.189412955: Notice Shell (container_id=8348dccac054)
11:13:10.192991952: Notice Shell (container_id=8348dccac054)
11:13:10.192993061: Notice Shell (container_id=8348dccac054)
Events detected: 9
Rule counts by severity:
NOTICE: 9
Triggered rules by rule name:
devmem: 9
root@node02:/etc/falco#
During the exam, try crictl ps first; if it errors out, use docker ps instead. The last column of the output is the Pod name.
root@node02:/etc/falco# crictl ps | grep 8348dccac054
8348dccac0549 27a71e19c9562 2 hours ago Running busybox 7 f2891635649f6 cpu-65cf4d685c-lvnqk
root@node02:/etc/falco#
root@node02:/etc/falco# kubectl get pod,deploy | grep cpu
pod/cpu-65cf4d685c-lvnqk 1/1 Running 7 (112m ago) 134d
deployment.apps/cpu 1/1 1 1 134d
root@node02:/etc/falco# kubectl scale deployment cpu --replicas 0
deployment.apps/cpu scaled
root@node02:/etc/falco#
root@node02:/etc/falco# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
amd-gpu 1/1 1 1 134d
cpu 0/0 0 0 134d
nvidia-gpu 1/1 1 1 134d
root@node02:/etc/falco#
root@node02:/etc/falco# exit
logout
candidate@node02:~$ exit
logout
Connection to cks000026 closed.
candidate@base:~$
[candidate@base] $ ssh cks001097
Context
You must update an existing Pod to ensure the immutability of its containers.
Task
Modify the Deployment named secdep in the sec-ns namespace so that its containers:
⚫ run with user ID 30000
⚫ use a read-only root filesystem
⚫ disallow privilege escalation
You can find the Deployment's manifest file at ~/sec-ns_deployment.yaml.
Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/security-context/
candidate@base:~$ ssh cks001097
candidate@master01:~$ kubectl get -f sec-ns_deployment.yaml
NAME READY UP-TO-DATE AVAILABLE AGE
secdep 1/1 1 1 134d
candidate@master01:~$ vim sec-ns_deployment.yaml
kind: Deployment
metadata:
name: secdep
namespace: sec-ns
spec:
replicas: 1
selector:
matchLabels:
app: secdep
template:
metadata:
labels:
app: secdep
spec:
containers:
- name: sec-ctx-demo-1
image: busybox:1.28
imagePullPolicy: IfNotPresent
command: [ "sh", "-c", "sleep 12h" ]
securityContext:                     # add the security context required by the task
  allowPrivilegeEscalation: false    # disallow privilege escalation
  readOnlyRootFilesystem: true       # read-only root filesystem
  runAsUser: 30000                   # run with user ID 30000, as required by the task
volumeMounts:
- name: sec-ctx-vol-1
mountPath: /data/demo1
- name: sec-ctx-demo-2
image: busybox
imagePullPolicy: IfNotPresent
command: [ "sh", "-c", "sleep 12h" ]
securityContext:                     # add the security context required by the task
  allowPrivilegeEscalation: false    # disallow privilege escalation
  readOnlyRootFilesystem: true       # read-only root filesystem
  runAsUser: 30000                   # run with user ID 30000, as required by the task
volumeMounts:
- name: sec-ctx-vol-2
mountPath: /data/demo2
volumes:
- name: sec-ctx-vol-1
emptyDir: {}
- name: sec-ctx-vol-2
emptyDir: {}
candidate@master01:~$ kubectl apply -f sec-ns_deployment.yaml
deployment.apps/secdep configured
candidate@master01:~$
candidate@master01:~$ kubectl -n sec-ns get pod
NAME READY STATUS RESTARTS AGE
secdep-697685948b-krg2k 2/2 Running 0 21s
secdep-7478777cc8-5546d 2/2 Terminating 16 (3h1m ago) 134d
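To confirm the new Pod runs with the required settings (a sketch; the Pod name is the one shown above):
kubectl -n sec-ns exec secdep-697685948b-krg2k -c sec-ctx-demo-1 -- id -u        # expect 30000
kubectl -n sec-ns exec secdep-697685948b-krg2k -c sec-ctx-demo-1 -- touch /test  # expect "Read-only file system"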
candidate@master01:~$ exit
logout
Connection to cks001097 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks001098
Context
You must implement auditing for a kubeadm-configured cluster.
Task
First, reconfigure the cluster's API server so that:
⚫ the basic audit policy located at /etc/kubernetes/logpolicy/sample-policy.yaml is used
⚫ logs are stored at /var/log/kubernetes/audit-logs.txt
⚫ at most 2 log files are retained, with a maximum retention of 10 days
Note: the basic policy only specifies what is not logged.
Then, edit and extend the basic policy to log:
⚫ persistentvolumes events at the RequestResponse level
⚫ the request body of configmaps events in the front-apps namespace
⚫ ConfigMap and Secret changes in all namespaces at the Metadata level
⚫ all other requests at the Metadata level
Note: make sure the API server uses the extended policy.
Reference: https://kubernetes.io/zh-cn/docs/tasks/debug/debug-cluster/audit/
candidate@base:~$ ssh cks001098
candidate@master01:~$ sudo -i
root@master01:~#
root@master01:~# cp /etc/kubernetes/logpolicy/sample-policy.yaml /opt/
root@master01:~# vim /etc/kubernetes/logpolicy/sample-policy.yaml
root@master01:~# cat /etc/kubernetes/logpolicy/sample-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
# Don't log watch requests by the "system:kube-proxy" on endpoints or services
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core API group
resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
userGroups: ["system:authenticated"]
nonResourceURLs:
- "/api*" # Wildcard matching.
- "/version"
# Note: do not delete the original rules above; only append the rules required by the task below.
# persistentvolumes events at the RequestResponse level
- level: RequestResponse
resources:
- group: ""
resources: ["persistentvolumes"]
# request body of configmaps events in the front-apps namespace
- level: Request
resources:
- group: ""
resources: ["configmaps"]
namespaces: ["front-apps namespace"]
# ConfigMap and Secret changes in all namespaces at the Metadata level
- level: Metadata
resources:
- group: ""
resources: ["secrets", "configmaps"]
# all other requests at the Metadata level
- level: Metadata
omitStages:
- "RequestReceived"
root@master01:~# cp /etc/kubernetes/manifests/kube-apiserver.yaml /opt/
root@master01:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 11.0.1.111:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=11.0.1.111
- --allow-privileged=true
- --authorization-mode=Node,RBAC
    - --audit-policy-file=/etc/kubernetes/logpolicy/sample-policy.yaml   # audit policy file
    - --audit-log-path=/var/log/kubernetes/audit-logs.txt                # audit log path
    - --audit-log-maxbackup=2                                            # max number of retained log files
    - --audit-log-maxage=10                                              # max age of log files in days
······
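If the policy file and the log directory are not already mounted into the kube-apiserver Pod, matching hostPath volumes and volumeMounts must also be added (see the reference link). A quick way to check (a sketch):
grep -E 'logpolicy|audit-logs|hostPath' /etc/kubernetes/manifests/kube-apiserver.yaml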
root@master01:~# systemctl daemon-reload && systemctl restart kubelet.service
root@master01:~#
root@master01:~# kubectl get pod -A   # wait about 2 minutes, then check again to make sure the cluster is healthy
NAME READY STATUS RESTARTS AGE
amd-gpu-6cbfd6c6d6-l5464 1/1 Running 7 (3h44m ago) 134d
nvidia-gpu-64c4d44986-x5qgh 1/1 Running 7 (3h44m ago) 134d
root@master01:~# tail /var/log/kubernetes/audit-logs.txt   # verify
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"a6c4f229-78be-41d7-a682-6f5597841f9c","stage":"ResponseComplete","requestURI":"/rAgent":"kubelet/v1.31.0 (linux/amd64) kubernetes/9edcffc","objectRef":{"resource":"nodes","name":"master01","apiVersion":"v1"},"responseStatus":{"metadata":ision":"allow","authorization.k8s.io/reason":""}}
·····
root@master01:~#
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001098 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000031
Context
You must implement NetworkPolicies to control cross-namespace traffic to existing Deployments.
Task
First, to block all ingress traffic, create a NetworkPolicy named deny-policy in the prod namespace.
PS: the prod namespace is labeled env: prod
Then, to allow ingress traffic only from Pods in the prod namespace, create a NetworkPolicy named allow-from-prod in the data namespace.
Use the prod namespace's label to allow the traffic.
PS: the data namespace is labeled env: data
Note: do not modify or delete any namespace or Pod. Only create the required NetworkPolicies.
Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/
candidate@base:~$ ssh cks000031
Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/#default-policies
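Before writing the policies, it helps to confirm the namespace labels the task refers to (a sketch):
kubectl get ns prod data --show-labels   # expect env=prod and env=data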
candidate@master01:~$ vim deny-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-policy
namespace: prod
spec:
podSelector: {}
policyTypes:
- Ingress
candidate@master01:~$ kubectl apply -f deny-policy.yaml
networkpolicy.networking.k8s.io/deny-policy created
candidate@master01:~$
candidate@master01:~$ vim allow-from-prod.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-prod
namespace: data
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
env: prod
candidate@master01:~$ kubectl apply -f allow-from-prod.yaml
networkpolicy.networking.k8s.io/allow-from-prod created
candidate@master01:~$
candidate@master01:~$ exit
logout
Connection to cks000031 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000032
Context
You must expose a web application using HTTPS routing.
Task
Create an Ingress resource named web in the prod02 namespace and configure it as follows:
⚫ Route traffic for the host web.k8sng.local and all paths to the existing web Service.
⚫ Use the existing web-cert Secret to enable TLS termination.
⚫ Redirect HTTP requests to HTTPS.
PS: you can test the Ingress configuration with the following command:
[candidate@cks000032] $ curl -Lk https://web.k8sng.local
Reference: https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/#tls
candidate@base:~$ ssh cks000032
candidate@master01:~$ kubectl -n prod02 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web ClusterIP 10.108.163.130 <none> 80/TCP 134d
candidate@master01:~$
candidate@master01:~$ kubectl -n prod02 get secrets
NAME TYPE DATA AGE
web-cert kubernetes.io/tls 2 134d
candidate@master01:~$
candidate@master01:~$ vim ingress-web.yaml   # TLS configuration as in the reference link
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web            # Ingress name
  namespace: prod02    # target namespace
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # redirect HTTP to HTTPS
spec:
tls:
- hosts:
    - web.k8sng.local        # host name
    secretName: web-cert     # existing Secret
rules:
- host: web.k8sng.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
            name: web        # name of the backing Service
port:
number: 80
candidate@master01:~$
candidate@master01:~$ kubectl apply -f ingress-web.yaml
ingress.networking.k8s.io/web created
candidate@master01:~$
candidate@master01:~$ kubectl -n prod02 get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
web nginx web.k8sng.local 10.110.73.189 80, 443 5m28s
candidate@master01:~$
candidate@master01:~$ curl -Lk https://web.k8sng.local
Hello World ^_^   # as expected
candidate@master01:~$
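The HTTP-to-HTTPS redirect can also be checked explicitly (a sketch; the 308 status code assumes ingress-nginx defaults):
curl -I http://web.k8sng.local   # expect HTTP 308 Permanent Redirect pointing at https://web.k8sng.local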
candidate@master01:~$ exit
logout
Connection to cks000032 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000033
Context
A security audit found that a Deployment has a non-compliant ServiceAccount token, which could lead to a security vulnerability.
Task
First, modify the existing stats-monitor-sa ServiceAccount in the monitoring namespace to turn off automatic mounting of API credentials.
Then, modify the existing stats-monitor Deployment in the monitoring namespace
to inject a ServiceAccount token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.
Use a projected volume named token to inject the ServiceAccount token, and make sure it is mounted read-only.
PS: the Deployment's manifest file can be found at:
~/stats-monitor/deployment.yaml
Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-service-account/
candidate@base:~$ ssh cks000033
candidate@master01:~$ kubectl -n monitoring edit serviceaccounts stats-monitor-sa
apiVersion: v1
automountServiceAccountToken: false   # add this line, then :wq to save and quit
kind: ServiceAccount
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"stats-monitor-sa","namespace":"monitoring"},"secrets":[{"name":"stats-monitor-sa-token"}]}
creationTimestamp: "2024-11-03T09:50:22Z"
name: stats-monitor-sa
namespace: monitoring
resourceVersion: "84538"
uid: 12451c09-4d2e-4bd4-9666-6658e5eaba5c
secrets:
- name: stats-monitor-sa-token
candidate@master01:~$ vim stats-monitor/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: stats-monitor
namespace: monitoring
labels:
app: stats-monitor
spec:
replicas: 1
selector:
matchLabels:
app: stats-monitor
template:
metadata:
labels:
app: stats-monitor
spec:
containers:
- name: nginx
image: vicuu/nginx:hello
imagePullPolicy: IfNotPresent
volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount/token   # mount path required by the task
          name: token        # the projected volume
          readOnly: true     # mounted read-only
- mountPath: /usr/local/apache2/conf/httpd.conf
name: httpcf
subPath: httpd.conf
serviceAccountName: stats-monitor-sa
automountServiceAccountToken: false
volumes:
      - name: token          # projected volume name
projected:
sources:
- serviceAccountToken:
path: token
- configMap:
items:
- key: httpd.conf
path: httpd.conf
name: httpcf
name: httpcf
candidate@master01:~$
candidate@master01:~$ kubectl apply -f stats-monitor/deployment.yaml
deployment.apps/stats-monitor configured
candidate@master01:~$ kubectl -n monitoring get pod
NAME READY STATUS RESTARTS AGE
stats-monitor-548fbdcb46-zmqf4 1/1 Running 0 33s
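To double-check both parts of the task (a sketch; the Pod name is the one shown above):
kubectl -n monitoring get sa stats-monitor-sa -o jsonpath='{.automountServiceAccountToken}'   # expect false
kubectl -n monitoring exec stats-monitor-548fbdcb46-zmqf4 -- ls -l /var/run/secrets/kubernetes.io/serviceaccount/token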
candidate@master01:~$ exit
logout
Connection to cks000033 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000034
Context
The kubeadm-configured cluster was recently upgraded, but one node was kept on a slightly older version because of workload compatibility issues.
Task
Upgrade the cluster node node02 to match the version of the control plane node.
Connect to this worker node using the following command:
[candidate@cks000034] ssh node02
PS: do not modify any running workloads in the cluster.
After completing the task, do not forget to exit this worker node.
[candidate@node02] exit
candidate@base:~$ ssh cks000034
candidate@master01:~$ ssh node02   # then connect to node02
candidate@node02:~$ sudo -i
root@node02:~#
candidate@node02:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
base Ready <none> 143d v1.31.1
master01 Ready control-plane 143d v1.31.1
node02 Ready <none> 143d v1.31.0
root@node02:~# apt install kubelet=1.31.1-1.1
·····
(Reading database ... 120457 files and directories currently installed.)
Preparing to unpack .../kubelet_1.31.1-1.1_amd64.deb ...
Unpacking kubelet (1.31.1-1.1) over (1.31.0-1.1) ...
Setting up kubelet (1.31.1-1.1) ...
root@node02:~# systemctl daemon-reload   # reload unit files, then restart the kubelet service
root@node02:~# systemctl restart kubelet
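A quick local check before looking at the node list (a sketch):
kubelet --version   # should now report Kubernetes v1.31.1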
root@node02:~# kubectl get node
NAME STATUS ROLES AGE VERSION
base Ready <none> 143d v1.31.1
master01 Ready control-plane 143d v1.31.1
node02 Ready <none> 143d v1.31.1   # upgrade complete
root@node02:~# exit
logout
candidate@node02:~$ exit
logout
Connection to node02 closed.
candidate@master01:~$ exit
logout
Connection to cks000034 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000035
Task
The alpine Deployment in the alpine namespace has three containers running different versions of the alpine image.
First, find out which version of the alpine image contains the libcrypto3 package at version 3.1.4-r5.
Next, use the pre-installed bom tool to create an SPDX document at ~/alpine.spdx for the image version you identified.
Finally, update the alpine Deployment and remove the container that uses the identified image version.
The Deployment's manifest file can be found at ~/alipine-deployment.yaml.
PS: do not modify any other containers of the Deployment.
candidate@base:~$ ssh cks000035
candidate@master01:~$ cat /home/candidate/alipine-deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
name: alpine
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: alpine
name: alpine
namespace: alpine
spec:
replicas: 1
selector:
matchLabels:
run: alpine
template:
metadata:
labels:
run: alpine
spec:
containers:
      - name: alpine-a   # container alpine-a
image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.20.0
imagePullPolicy: IfNotPresent
args:
- /bin/sh
- -c
- while true; do sleep 360000; done
      - name: alpine-b   # container alpine-b
image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1
imagePullPolicy: IfNotPresent
args:
- /bin/sh
- -c
- while true; do sleep 360000; done
      - name: alpine-c   # container alpine-c
image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.16.9
imagePullPolicy: IfNotPresent
args:
- /bin/sh
- -c
- while true; do sleep 360000; done
candidate@master01:~$
candidate@master01:~$ kubectl get pod -n alpine
NAME READY STATUS RESTARTS AGE
alpine-5b9c8fd489-wcjqm 3/3 Running 24 (4h7m ago) 136d
Find out which version of the alpine image contains libcrypto3 version 3.1.4-r5:
candidate@master01:~$ kubectl -n alpine exec -it alpine-5b9c8fd489-wcjqm -c alpine-a -- apk list | grep libcrypto3
libcrypto3-3.3.0-r2 x86_64 {openssl} (Apache-2.0) [installed]
candidate@master01:~$
candidate@master01:~$ kubectl -n alpine exec -it alpine-5b9c8fd489-wcjqm -c alpine-b -- apk list | grep libcrypto3
libcrypto3-3.1.4-r5 x86_64 {openssl} (Apache-2.0) [installed]
candidate@master01:~$
candidate@master01:~$ kubectl -n alpine exec -it alpine-5b9c8fd489-wcjqm -c alpine-c -- apk list | grep libcrypto3
candidate@master01:~$
candidate@master01:~$ bom generate --image registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1 --output alpine.spdx
INFO bom v0.6.0: Generating SPDX Bill of Materials
INFO Processing image reference: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1
INFO Reference registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1 points to a single image
INFO Generating single image package for registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1
INFO Package describes image registry.cn-qingdao.aliyuncs.com/containerhub/alpine:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0
INFO Image manifest lists 1 layers
INFO Scan of container image returned 15 OS packages in layer #0
WARN Document has no name defined, automatically set to SBOM-SPDX-05dca8b2-5bb2-41dc-98a1-f613d843a5e1
INFO Package SPDXRef-Package-sha256-6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0 has 1 relationships defined
INFO Package SPDXRef-Package-registry.cn-qingdao.aliyuncs.com-containerhub-alpine-6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0-sha256-4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8 has 15 relationships defined
candidate@master01:~$
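The generated SPDX document can be spot-checked for the package that triggered the task (a sketch; assumes the OS packages detected by bom are listed by name):
grep -i libcrypto3 ~/alpine.spdx | head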
candidate@master01:~$ vim alipine-deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
name: alpine
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: alpine
name: alpine
namespace: alpine
spec:
replicas: 1
selector:
matchLabels:
run: alpine
template:
metadata:
labels:
run: alpine
spec:
containers:
- name: alpine-a
image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.20.0
imagePullPolicy: IfNotPresent
args:
- /bin/sh
- -c
- while true; do sleep 360000; done
      # the alpine-b container has been removed here
- name: alpine-c
image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.16.9
imagePullPolicy: IfNotPresent
args:
- /bin/sh
- -c
- while true; do sleep 360000; done
candidate@master01:~$
candidate@master01:~$ kubectl apply -f alipine-deployment.yaml
namespace/alpine unchanged
deployment.apps/alpine configured
candidate@master01:~$
candidate@master01:~$ kubectl -n alpine get pod
NAME READY STATUS RESTARTS AGE
alpine-5b9c8fd489-wcjqm 3/3 Terminating 24 (4h16m ago) 136d
alpine-75997c7d75-4v2jp 2/2 Running 0 16s   # as expected
candidate@master01:~$
candidate@master01:~$ exit
logout
Connection to cks000035 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000036
Context
For compliance, all user namespaces enforce the restricted Pod Security Standard.
Task
The confidential namespace contains a Deployment that does not comply with the restricted Pod Security Standard. As a result, its Pod cannot be scheduled.
Modify this Deployment to comply with the standard, and verify that the Pod runs correctly.
PS: the Deployment's manifest file can be found at ~/nginx-unprivileged.yaml.
candidate@base:~$ ssh cks000036
Delete the Deployment, then recreate it to surface the error message:
candidate@master01:~$ kubectl -n confidential get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-unprivileged-deployment 0/1 0 0 136d
candidate@master01:~$
candidate@master01:~$ kubectl delete -f nginx-unprivileged.yaml
deployment.apps "nginx-unprivileged-deployment" deleted
candidate@master01:~$
candidate@master01:~$ kubectl apply -f nginx-unprivileged.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/nginx-unprivileged-deployment created
candidate@master01:~$
candidate@master01:~$ vim nginx-unprivileged.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-unprivileged-deployment
namespace: confidential
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginxinc/nginx-unprivileged
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
        # per the warning message, complete the security context
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
runAsNonRoot: true
seccompProfile:
type: "RuntimeDefault"
candidate@master01:~$ kubectl apply -f nginx-unprivileged.yaml
deployment.apps/nginx-unprivileged-deployment configured
candidate@master01:~$
candidate@master01:~$ kubectl -n confidential get pod
NAME READY STATUS RESTARTS AGE
nginx-unprivileged-deployment-8db94f657-7mnn2 1/1 Running 0 13s
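The enforcement that produced the original warning comes from the namespace's Pod Security Admission label, which can be confirmed with (a sketch):
kubectl get ns confidential --show-labels   # expect pod-security.kubernetes.io/enforce=restricted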
candidate@master01:~$
candidate@master01:~$ exit
logout
Connection to cks000036 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000037
Task
Perform the following tasks to secure the cluster node cks000037:
Remove the user developer from the docker group.
PS: do not remove the user from any other group.
Reconfigure and restart the Docker daemon to ensure that the socket file at /var/run/docker.sock is owned by the root group.
Reconfigure and restart the Docker daemon to ensure that it does not listen on any TCP port.
PS: after finishing the work, make sure the Kubernetes cluster remains healthy.
candidate@base:~$ ssh cks000037
candidate@node02:~$ sudo -i
root@node02:~#
root@node02:~# id developer
uid=1001(developer) gid=0(root) groups=0(root),40(src),100(users),998(docker)
root@node02:~#
root@node02:~# gpasswd -d developer docker
Removing user developer from group docker
root@node02:~#
root@node02:~# id developer
uid=1001(developer) gid=0(root) groups=0(root),40(src),100(users)
root@node02:~#
root@node02:~# vim /usr/lib/systemd/system/docker.socket
root@node02:~#
root@node02:~# cat /usr/lib/systemd/system/docker.socket
[Unit]
Description=Docker Socket for the API
[Socket]
# If /var/run is not implemented as a symlink to /run, you may need to
# specify ListenStream=/var/run/docker.sock instead.
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=root   # changed to the root group
[Install]
WantedBy=sockets.target
root@node02:~# vim /usr/lib/systemd/system/docker.service
···
# removed -H tcp://0.0.0.0:2375 so the daemon no longer listens on a TCP port
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
···
root@node02:~# systemctl daemon-reload && systemctl restart docker.socket docker.service
root@node02:~# ls -l /var/run/docker.sock
srw-rw---- 1 root root 0 Mar 20 15:16 /var/run/docker.sock
root@node02:~#
root@node02:~# ss -tunlp | grep 2375
root@node02:~#
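Finally, confirm Docker is healthy again and the cluster is unaffected (a sketch):
systemctl is-active docker docker.socket   # both should report active
kubectl get nodes                          # all nodes should be Ready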
root@node02:~# exit
logout
candidate@node02:~$ exit
logout
Connection to cks000037 closed.
candidate@base:~$
https://docs.cilium.io/en/stable/network/kubernetes/policy/#ciliumnetworkpolicy
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000039
Context
For this task, refer to:
CiliumNetworkPolicy
Task
Use Cilium to perform the following tasks to secure the internal and external network traffic of an existing application.
PS: you can use a browser to access the Cilium documentation.
First, create an L4 CiliumNetworkPolicy named nodebb in the nodebb namespace and configure it as follows:
⚫ Allow all Pods running in the ingress-nginx namespace to access the Pods of the nodebb Deployment
⚫ Require mutual authentication
Then, extend the network policy created in the previous step as follows:
⚫ Allow the host to access the Pods of the nodebb Deployment
⚫ Do not use mutual authentication
Reference: https://docs.cilium.io/en/stable/security/policy/language/#labels-dependent-layer-4-rule
candidate@base:~$ ssh cks000039
The nodebb Pods are labeled app=nodebb:
candidate@master01:~$ kubectl -n nodebb get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nodebb-5ddc48575c-nh29n 1/1 Running 6 (6h25m ago) 135d app=nodebb,pod-template-hash=5ddc48575c
candidate@master01:~$
candidate@master01:~$ vim cilium.yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "nodebb"
namespace: nodebb
spec:
endpointSelector:
matchLabels:
app: nodebb
ingress:
- fromEndpoints:
- matchLabels:
      k8s:io.kubernetes.pod.namespace: ingress-nginx
authentication:
mode: "required"
- fromEntities:
- host
candidate@master01:~$
candidate@master01:~$ kubectl apply -f cilium.yaml
ciliumnetworkpolicy.cilium.io/nodebb created
candidate@master01:~$
candidate@master01:~$ kubectl -n nodebb get ciliumnetworkpolicies.cilium.io
NAME AGE
nodebb 49s
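If needed, the realized policy can also be inspected from a Cilium agent Pod (a sketch; assumes the agent DaemonSet is named cilium in kube-system):
kubectl -n kube-system exec ds/cilium -- cilium policy get | grep -B2 -A5 nodebb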
candidate@master01:~$ exit
logout
Connection to cks000039 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks001094
Context
You must fully integrate container image scanning into a kubeadm-configured cluster.
Task
Given the incomplete configuration located in /etc/kubernetes/epconfig,
and a functional container image scanner with the HTTPS endpoint https://image-bouncer-webhook.default.svc:1323/image_policy,
perform the following tasks to implement a validating admission controller.
First, reconfigure the API server to enable all admission plugins needed to support the provided AdmissionConfiguration.
Next, reconfigure the ImagePolicyWebhook so that it rejects images when the backend is unavailable.
Finally, to test the configuration, deploy the test resource defined in ~/web1.yaml, which uses an image that should be rejected.
You may delete and recreate this resource as needed.
candidate@base:~$ ssh cks001094
candidate@master01:~$ sudo -i
root@master01:~#
root@master01:~# cd /etc/kubernetes/epconfig
root@master01:/etc/kubernetes/epconfig#
root@master01:/etc/kubernetes/epconfig# cp image-policy-config.yaml /opt/
root@master01:/etc/kubernetes/epconfig# vim image-policy-config.yaml
imagePolicy:
kubeConfigFile: /etc/kubernetes/epconfig/kube-config.yaml
allowTTL: 50
denyTTL: 50
retryBackoff: 500
  defaultAllow: false   # changed from true to false so images are rejected when the backend fails
root@master01:/etc/kubernetes/epconfig#
root@master01:/etc/kubernetes/epconfig# cp kube-config.yaml /opt/
root@master01:/etc/kubernetes/epconfig# vim kube-config.yaml   # add the webhook server address
apiVersion: v1
kind: Config
clusters:
- cluster:
certificate-authority: /etc/kubernetes/epconfig/server.crt
    server: https://image-bouncer-webhook.default.svc:1323/image_policy   # the endpoint required by the task
name: bouncer_webhook
contexts:
- context:
cluster: bouncer_webhook
user: api-server
name: bouncer_validator
current-context: bouncer_validator
preferences: {}
users:
- name: api-server
user:
client-certificate: /etc/kubernetes/pki/front-proxy-client.crt
client-key: /etc/kubernetes/pki/front-proxy-client.key
root@master01:/etc/kubernetes/epconfig#
Enable the ImagePolicyWebhook admission plugin:
root@master01:~# cd /etc/kubernetes/manifests/
root@master01:/etc/kubernetes/manifests# cp kube-apiserver.yaml /opt/
root@master01:/etc/kubernetes/manifests# vim kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 11.0.1.111:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=11.0.1.111
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook   # add the ImagePolicyWebhook plugin
·····
root@master01:/etc/kubernetes/manifests# systemctl daemon-reload && systemctl restart kubelet
root@master01:/etc/kubernetes/manifests# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
alpine alpine-75997c7d75-4v2jp 2/2 Running 0 150m
calico-apiserver calico-apiserver-6b9cd8cf69-kv9r4 0/1 Running 14 (17s ago) 143d
····
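The transcript stops here, but the task also asks for a test: applying the provided manifest should now be rejected by the webhook (a sketch; run it from the candidate home directory where web1.yaml lives):
kubectl apply -f ~/web1.yaml   # expect an admission error showing the image was denied by the ImagePolicyWebhook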
root@master01:/etc/kubernetes/manifests# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001094 closed.
candidate@base:~$
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks001092
Context
For testing purposes, the Kubernetes API server of the kubeadm-created cluster has been temporarily configured to allow unauthenticated and unauthorized access.
Task
First, configure the cluster's API server as follows to secure it:
⚫ Disable anonymous authentication
⚫ Use the authorization mode Node,RBAC
⚫ Use the admission controller NodeRestriction
Note: all kubectl configuration contexts/files are also configured to use unauthenticated and unauthorized access.
You do not have to change them, but be aware that once the cluster is hardened, that kubectl configuration will no longer work.
You can use the cluster's original kubectl configuration file at /etc/kubernetes/admin.conf to access the secured cluster.
Then, delete the ClusterRoleBinding system:anonymous to clean up.
candidate@base:~$ ssh cks001092
candidate@master01:~$ sudo -i
root@master01:~#
root@master01:~# cp /etc/kubernetes/manifests/kube-apiserver.yaml /opt/
root@master01:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
root@master01:~#
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 11.0.1.111:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=11.0.1.111
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction   # changed AlwaysAdmit to NodeRestriction
·····
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --anonymous-auth=false   # changed from the default true to false
····
root@master01:~# systemctl daemon-reload && systemctl restart kubelet.service
root@master01:~#
root@master01:~# kubectl --kubeconfig /etc/kubernetes/admin.conf delete clusterrolebindings.rbac.authorization.k8s.io system:anonymous
clusterrolebinding.rbac.authorization.k8s.io "system:anonymous" deleted
root@master01:~#
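A couple of checks to confirm the hardening took effect (a sketch; the API server address is the one shown earlier in the transcript):
kubectl --kubeconfig /etc/kubernetes/admin.conf get clusterrolebinding system:anonymous   # expect NotFound
curl -sk https://11.0.1.111:6443/api   # expect 401 Unauthorized now that anonymous access is disabled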
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001092 closed.
candidate@base:~$
Finally, good luck to everyone on the exam!