First, let's look at a pair of Kubernetes concepts: stateful versus stateless services.

Stateless services are already well served by the Deployment object we used in earlier articles. A stateless application has few requirements beyond staying up: delete any of a Deployment's Pods and the service is unaffected. But for more complex applications, for example ones whose instances depend on each other or that need to persist data, a Deployment is not enough, so Kubernetes provides another workload object: StatefulSet.

The core job of a StatefulSet is to record this kind of state in some way and restore it to the new Pod whenever a Pod is recreated. It includes all the capabilities of a Deployment's ReplicaSet controller, and on top of them it can control Pod startup order and assign each Pod a unique, stable identity. Concretely, it provides:
- stable, unique network identifiers: the pod name and its DNS name survive rescheduling
- ordered, graceful deployment, scaling, and rolling updates (configurable via podManagementPolicy)
- optionally, stable per-Pod persistent storage via volumeClaimTemplates
[root@yygh-de state]# vim statefulset.yaml
apiVersion: v1
kind: Service   # a headless Service that gives each pod a stable DNS name
metadata:
  name: stateful-tomcat
  labels:
    app: stateful-tomcat
spec:
  ports:
  - port: 8123
    name: web
    targetPort: 8080
  clusterIP: None   # NodePort: reachable via any node IP + NodePort; ClusterIP: reachable inside the cluster via the cluster IP or the service DNS name; clusterIP: None: allocate no cluster IP at all, i.e. a "headless" service, which exists to provide stable DNS names
  selector:
    app: stateful-tomcat
---
apiVersion: apps/v1
kind: StatefulSet   # the controller
metadata:
  name: stateful-tomcat
spec:
  selector:
    matchLabels:
      app: stateful-tomcat   # has to match .spec.template.metadata.labels
  serviceName: "stateful-tomcat"   # important: a Service with exactly this name must already exist
  replicas: 3   # default is 1
  template:
    metadata:
      labels:
        app: stateful-tomcat   # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: tomcat
        image: tomcat:7
        ports:
        - containerPort: 8080
          name: web
[root@yygh-de state]# kubectl get svc,statefulset
NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/kubernetes        ClusterIP   10.96.0.1    <none>        443/TCP    15d
service/stateful-tomcat   ClusterIP   None         <none>        8123/TCP   4h24m

NAME                               READY   AGE
statefulset.apps/stateful-tomcat   3/3     4h24m
[root@yygh-de state]# kubectl get pod -l app=stateful-tomcat
NAME                READY   STATUS    RESTARTS   AGE
stateful-tomcat-0   1/1     Running   0          4h24m
stateful-tomcat-1   1/1     Running   0          3h38m
Delete one of the pods: after the controller recreates it, the pod name and its service DNS name are exactly the same as before. This is how the identity (state) is preserved:
[root@yygh-de state]# kubectl get statefulset
NAME              READY   AGE
stateful-tomcat   3/3     4h32m
[root@yygh-de state]# kubectl get pod -l app=stateful-tomcat -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
stateful-tomcat-0   1/1     Running   0          17h   10.244.66.68    yygh-te   <none>           <none>
stateful-tomcat-1   1/1     Running   0          21h   10.244.66.121   yygh-te   <none>           <none>
stateful-tomcat-2   1/1     Running   0          21h   10.244.66.122   yygh-te   <none>           <none>
[root@yygh-de ~]# kubectl delete pod stateful-tomcat-0
pod "stateful-tomcat-0" deleted
[root@yygh-de ~]# kubectl get pod -l app=stateful-tomcat -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
stateful-tomcat-0   1/1     Running   0          11s   10.244.66.66    yygh-te   <none>           <none>
stateful-tomcat-1   1/1     Running   0          21h   10.244.66.121   yygh-te   <none>           <none>
stateful-tomcat-2   1/1     Running   0          21h   10.244.66.122   yygh-te   <none>           <none>
# When a pod fails and is pulled up again, its name stays fixed while its IP changes
# (the pod may not even land on the same node as before). The names follow a fixed
# ordinal order and never change.
[root@yygh-de ~]# kubectl exec -it stateful-tomcat-0 -- bash
root@stateful-tomcat-0:/usr/local/tomcat# curl stateful-tomcat-2.stateful-tomcat.default:8080
root@stateful-tomcat-0:/usr/local/tomcat# ping stateful-tomcat-2.stateful-tomcat.default
PING stateful-tomcat-2.stateful-tomcat.default.svc.cluster.local (10.244.66.122) 56(84) bytes of data.
64 bytes from stateful-tomcat-2.stateful-tomcat.default.svc.cluster.local (10.244.66.122): icmp_seq=1 ttl=63 time=0.063 ms
64 bytes from stateful-tomcat-2.stateful-tomcat.default.svc.cluster.local (10.244.66.122): icmp_seq=2 ttl=63 time=0.123 ms
64 bytes from stateful-tomcat-2.stateful-tomcat.default.svc.cluster.local (10.244.66.122): icmp_seq=3 ttl=63 time=0.122 ms
64 bytes from stateful-tomcat-2.stateful-tomcat.default.svc.cluster.local (10.244.66.122): icmp_seq=4 ttl=63 time=0.130 ms
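The ping output above shows the fixed naming pattern a headless Service gives StatefulSet pods: <pod-name>.<service-name>.<namespace>.svc.cluster.local, where the pod name is the StatefulSet name plus an ordinal. As a quick illustrative sketch (the helper below is not a Kubernetes API, just string construction matching the manifest in this article):

```python
# Sketch: how the stable per-pod DNS names seen above are composed.
# The pattern is fixed by Kubernetes; pod_fqdn() is a hypothetical helper.

def pod_fqdn(statefulset: str, ordinal: int, service: str,
             namespace: str = "default") -> str:
    """Build the stable DNS name for the pod with the given ordinal."""
    return f"{statefulset}-{ordinal}.{service}.{namespace}.svc.cluster.local"

# The three replicas from this article's manifest:
for i in range(3):
    print(pod_fqdn("stateful-tomcat", i, "stateful-tomcat"))
# The last line printed matches the name we curled and pinged above:
# stateful-tomcat-2.stateful-tomcat.default.svc.cluster.local
```

Because the name encodes the ordinal, peers can address a specific replica (for example, a primary at ordinal 0) without ever knowing its IP.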
[root@yygh-de ~]# kubectl explain statefulset.spec
podManagementPolicy<string>
podManagementPolicy controls how pods are created during initial scale up,
when replacing pods on nodes, or when scaling down. The default policy is
`OrderedReady`, where pods are created in increasing order (pod-0, then
pod-1, etc) and the controller will wait until each pod is ready before
continuing. When scaling down, the pods are removed in the opposite order.
The alternative policy is `Parallel` which will create pods in parallel to
match the desired scale without waiting, and on scale down will delete all
pods at once.
# OrderedReady: start the pods one at a time, in ordinal order
# Parallel: start all pods at the same time
[root@yygh-de state]# vim statefulset.yaml
apiVersion: v1
kind: Service   # a headless Service that gives each pod a stable DNS name
metadata:
  name: stateful-tomcat
  labels:
    app: stateful-tomcat
spec:
  ports:
  - port: 8123
    name: web
    targetPort: 8080
  clusterIP: None   # headless service: no cluster IP is allocated, only stable DNS names
  selector:
    app: stateful-tomcat
---
apiVersion: apps/v1
kind: StatefulSet   # the controller
metadata:
  name: stateful-tomcat
spec:
  selector:
    matchLabels:
      app: stateful-tomcat   # has to match .spec.template.metadata.labels
  serviceName: "stateful-tomcat"   # important: a Service with exactly this name must already exist
  podManagementPolicy: Parallel   # start all pods at the same time
  replicas: 3   # default is 1
  template:
    metadata:
      labels:
        app: stateful-tomcat   # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: tomcat
        image: tomcat:7
        ports:
        - containerPort: 8080
          name: web
[root@yygh-de state]# kubectl apply -f statefulset.yaml
[root@yygh-de ~]# kubectl get pod -l app=stateful-tomcat -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
stateful-tomcat-0   1/1     Running   0          2m39s   10.244.66.126   yygh-te   <none>           <none>
stateful-tomcat-1   1/1     Running   0          2m39s   10.244.66.76    yygh-te   <none>           <none>
stateful-tomcat-2   1/1     Running   0          2m39s   10.244.66.65    yygh-te   <none>           <none>
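Note that all three pods now show the same AGE, confirming they started in parallel. One thing the manifests above do not cover is storage: the Tomcat pods keep no data, so nothing survives a restart beyond the name. The other half of a StatefulSet's stable identity is per-pod persistent storage via volumeClaimTemplates, which creates one PersistentVolumeClaim per ordinal (data-stateful-tomcat-0, and so on) and re-binds it to the same pod after recreation. A minimal sketch, assuming the cluster has a default StorageClass and the mount path is just an example:

```yaml
# Sketch only: fields to add under the StatefulSet spec above.
# Assumes a default StorageClass exists in the cluster.
  template:
    spec:
      containers:
      - name: tomcat
        volumeMounts:
        - name: data                        # must match a claim template name
          mountPath: /usr/local/tomcat/logs # example path; adjust to your app
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```

With this in place, deleting stateful-tomcat-0 and letting the controller recreate it reattaches the same PVC, so the pod gets back not just its name but its data.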
If there are any mistakes in this article, corrections are very welcome; and if you run into any tricky ops problems, feel free to bring them up and discuss them with everyone.