Solution
kubectl config use-context k8s
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,daemonsets,statefulsets
kubectl create serviceaccount cicd-token -n app-team1
kubectl create rolebinding cicd-token --serviceaccount=app-team1:cicd-token --clusterrole=deployment-clusterrole -n app-team1
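As an optional sanity check (not part of the graded task), kubectl auth can-i can confirm the binding grants exactly the intended verb:
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1 # expect yes
kubectl auth can-i delete deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1 # expect no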
Reference: https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/
Set the node named ek8s-node-1 to unavailable and reschedule all pods running on it.
kubectl config use-context ek8s
kubectl cordon ek8s-node-1 # mark the node unschedulable
kubectl drain ek8s-node-1 --ignore-daemonsets # evict the pods on the node
Note: if drain reports an error, add the options the message suggests, e.g. --delete-local-data --force
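An optional way to confirm the node is cordoned and drained:
kubectl get node ek8s-node-1 # STATUS should show Ready,SchedulingDisabled
kubectl get pods -A -o wide | grep ek8s-node-1 # only DaemonSet pods should remain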
The existing Kubernetes cluster is running version 1.20.0. Upgrade all Kubernetes control plane and node components on the master node only to version 1.20.1.
Make sure to drain the master node before the upgrade and uncordon it afterwards.
Also upgrade kubelet and kubectl on the master node.
kubectl config use-context ek8s
kubectl drain k8s-master-0 --ignore-daemonsets
ssh mk8s-master-0
sudo -i
apt install kubeadm=1.20.1-00 -y
kubeadm upgrade plan
kubeadm upgrade apply v1.20.1 --etcd-upgrade=false # the task says not to upgrade etcd
kubectl uncordon k8s-master-0
# upgrade kubelet and kubectl
apt install kubelet=1.20.1-00 kubectl=1.20.1-00 -y
systemctl restart kubelet
# verify the upgrade result
kubectl get node
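Optionally, also confirm the upgraded component versions on the master:
kubeadm version -o short # expect v1.20.1
kubelet --version # expect Kubernetes v1.20.1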
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to /data/backup/etcd-snapshot.db.
Then restore the existing previous snapshot located at /data/backup/etcd-snapshot-previous.db.
# Backup
ETCDCTL_API=3 etcdctl snapshot save /data/backup/etcd-snapshot.db --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key
# Restore
# First stop the kube-apiserver and etcd containers
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
mv /var/lib/etcd/ /var/lib/etcd.bak
ETCDCTL_API=3 etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests
Note: if the backup command complains that the certificate files are missing, run exit to return to the previous step and try again.
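Optionally, etcdctl can inspect a snapshot before restoring it, and the control plane should come back once the manifests are moved back:
ETCDCTL_API=3 etcdctl snapshot status /data/backup/etcd-snapshot-previous.db --write-out=table # shows hash, revision, total keys
kubectl get pods -n kube-system | grep etcd # wait for the etcd pod to be Running again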
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/configure-upgrade-etcd/
Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace my-app.
Ensure the new NetworkPolicy allows pods in namespace my-app to connect to port 8080 in namespace big-corp.
Further ensure the new NetworkPolicy:
does not allow access to pods that are not listening on port 8080
does not allow access from pods that are not in namespace my-app
kubectl config use-context hk8s
vi networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: big-corp
    ports:
    - protocol: TCP
      port: 8080
kubectl apply -f networkpolicy.yaml
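An optional check that the policy carries the intended selector and rule:
kubectl describe networkpolicy allow-port-from-namespace -n my-app # From: namespaceSelector name=big-corp, Port 8080/TCP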
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/
Reconfigure the existing deployment front-end, adding a port specification named http that exposes port 80/tcp of the existing container nginx.
Create a new service named front-end-svc that exposes the container port http.
Configure the service to expose the individual pods via a NodePort on the nodes they are scheduled on.
kubectl config use-context k8s
kubectl edit deployment front-end
…
containers:
- image: nginx
  imagePullPolicy: Always
  name: nginx
  ports:
  - name: http
    protocol: TCP
    containerPort: 80
…
kubectl expose deployment front-end --port=80 --target-port=80 --type=NodePort --name=front-end-svc
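Optional verification that the service is of type NodePort and targets the named port:
kubectl get svc front-end-svc # TYPE NodePort, PORT(S) 80:<nodePort>/TCP
kubectl describe svc front-end-svc | grep -i port # check Port, TargetPort, NodePort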
Create a new nginx ingress resource as follows:
Name: pong
Namespace: ing-internal
Expose the service hello on path /hello, using service port 5678
kubectl config use-context k8s
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
kubectl apply -f ingress.yaml # assuming the manifest above was saved as ingress.yaml
kubectl get ingress -n ing-internal
curl -kL <ingress IP address>/hello
Reference: https://kubernetes.io/zh/docs/concepts/services-networking/ingress/
Scale the deployment loadbalancer to 5 pods.
kubectl config use-context k8s
kubectl scale deployment loadbalancer --replicas=5
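Optionally confirm the new replica count:
kubectl get deployment loadbalancer # READY should show 5/5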
Schedule a pod as follows:
Name: nginx-kusc00401
Image: nginx
Node selector: disk=ssd
kubectl config use-context k8s
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: ssd
kubectl get po nginx-kusc00401 -o wide
Reference: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/
Check how many worker nodes are ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt
kubectl config use-context k8s
kubectl describe node $(kubectl get nodes | grep -w Ready | awk '{print $1}') | grep Taint | grep -vc NoSchedule > /opt/KUSC00402/kusc00402.txt
Note: if you cannot remember this one-liner, counting manually also works (see the sketch below).
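A manual sketch of the same count (<node> and <count> are placeholders to fill in by hand):
kubectl get nodes # count the nodes whose STATUS is Ready
kubectl describe node <node> | grep Taints # check each Ready node for a NoSchedule taint
echo <count> > /opt/KUSC00402/kusc00402.txt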
Create a pod named kucc4 that runs one app container for each of the following images (there may be 1-4 images):
nginx + redis + memcached
kubectl config use-context k8s
apiVersion: v1
kind: Pod
metadata:
  name: kucc4
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
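Assuming the manifest above is saved as kucc4.yaml, apply it and verify all three containers start:
kubectl apply -f kucc4.yaml
kubectl get pod kucc4 # expect READY 3/3, STATUS Running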
Create a persistent volume named app-data with capacity 2Gi and access mode ReadWriteOnce. The volume type is hostPath, located at /srv/app-data.
kubectl config use-context hk8s
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/srv/app-data"
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
Create a new PersistentVolumeClaim:
Name: pv-volume
Class: csi-hostpath-sc
Capacity: 10Mi
Create a new pod that mounts the PersistentVolumeClaim as a volume:
Name: web-server
Image: nginx
Mount path: /usr/share/nginx/html
Configure the new pod to have ReadWriteOnce access on the volume.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record this change.
kubectl config use-context ok8s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    ports:
    - containerPort: 80
      name: http-server
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pv-volume
# Expand the PVC capacity
kubectl edit pvc pv-volume --save-config
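In the editor, only the requested size needs to change (the storage class must support volume expansion); afterwards the claim should report the new capacity:
# change spec.resources.requests.storage from 10Mi to 70Mi, save and quit, then:
kubectl get pvc pv-volume # CAPACITY should eventually show 70Mi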
Reference: https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
Monitor the logs of pod bar and:
extract the log lines corresponding to the error file-not-found
write those log lines to /opt/KUTR00101/bar
kubectl config use-context k8s
kubectl logs bar | grep file-not-found > /opt/KUTR00101/bar
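Optionally confirm the extracted lines landed in the file:
cat /opt/KUTR00101/bar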
Reference: https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
Integrate an existing pod into Kubernetes' built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good way to achieve this.
Add a sidecar container named sidecar, using the busybox image, to the existing pod legacy-app. The new sidecar container must run the following command:
/bin/sh -c tail -n+1 -f /var/log/legacy-app.log
Use a volume mounted at /var/log to make the log file legacy-app.log available to the sidecar container.
Note: do not change the spec of the existing container other than adding the required volume mount.
kubectl config use-context k8s
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/legacy-app.log;
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
kubectl get pod legacy-app -o yaml > legacy-app.yaml # export, then edit the file to add the sidecar
kubectl delete pod legacy-app
kubectl apply -f legacy-app.yaml
Reference: https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/
Note: containers cannot be added to a running Pod, so export the yaml first, modify it, then apply.
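Once the pod is recreated, the sidecar should expose the legacy log through the normal logging tooling:
kubectl logs legacy-app -c sidecar # should stream the contents of legacy-app.log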
Using the pod label name=cpu-utilizer, find the pod consuming the most CPU at runtime and write the name of the pod with the highest CPU usage to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
kubectl config use-context k8s
kubectl top pod -l name=cpu-utilizer --sort-by="cpu"
echo "<podname>" > /opt/KUR00401.txt # 将第一个 Pod 名称写到文件
A Kubernetes worker node named wk8s-node-0 is in NotReady state. Investigate why this is the case and take appropriate action to bring the node back to Ready state, making sure any changes are permanent.
kubectl config use-context wk8s
kubectl get node
ssh wk8s-node-0
sudo -i
systemctl status kubelet # kubelet is typically stopped or failed
systemctl start kubelet
systemctl enable kubelet # ensure it starts on boot, making the change permanent
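Back on the workstation, confirm the node has recovered:
exit # leave the root shell, then exit the ssh session
kubectl get node wk8s-node-0 # expect STATUS Ready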