Kubernetes clusters fall into two broad topologies:
Single master: one master node plus several worker nodes. Simple to set up, but the master is a single point of failure; generally used for test environments.
Multi master: several master nodes plus several worker nodes. More complex to set up but highly available; used for production environments.
There are three ways to install Kubernetes: minikube, kubeadm, and binary packages.
minikube: a tool for quickly standing up a single-node Kubernetes (not recommended here)
kubeadm: a tool for quickly bootstrapping a Kubernetes cluster
binary packages: download the binaries from the official site and install each component by hand; more involved, but helpful for understanding how Kubernetes fits together
We will install with kubeadm.
No. | Host address | Role | OS | Spec |
---|---|---|---|---|
1 | 192.168.100.100 | master | CentOS 7.6 | 2 CPU, 3 GB RAM, 20 GB disk |
2 | 192.168.100.101 | node1 | CentOS 7.6 | 2 CPU, 3 GB RAM, 20 GB disk |
3 | 192.168.100.102 | node2 | CentOS 7.6 | 2 CPU, 3 GB RAM, 20 GB disk |
Installing the operating system itself is not covered here; plenty of guides are available online.
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
# Add host-name resolution on all three machines
cat >> /etc/hosts <<EOF
192.168.100.100 master
192.168.100.101 node1
192.168.100.102 node2
EOF
# Start the time-sync service
systemctl start chronyd
# Enable it at boot
systemctl enable chronyd
4. Disable the firewall (CentOS 6 uses iptables, CentOS 7 uses firewalld)
# kubernetes and docker generate many iptables rules of their own; to keep them
# from clashing with system rules, turn the system firewall off entirely
# Stop the firewall
systemctl stop firewalld
# Disable it at boot
systemctl disable firewalld
# Stop iptables (if present)
systemctl stop iptables
# Disable it at boot
systemctl disable iptables
# Edit /etc/selinux/config and set SELINUX to disabled
# Note: a reboot is required for this change to take effect
SELINUX=disabled
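The edit above can also be scripted, and `setenforce 0` additionally relaxes SELinux for the current boot so you do not have to reboot immediately (a sketch; the paths are the CentOS 7 defaults):

```shell
# Persist the change across reboots by rewriting /etc/selinux/config in place
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Relax SELinux immediately for the running system (permissive until reboot)
setenforce 0

# Verify the current mode
getenforce
```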
# Open /etc/fstab and comment out the line containing the swap partition
# Note: a reboot is required for this change to take effect
/dev/mapper/centos-root / xfs defaults 0 0
UUID=dfba36c7-cbf4-4e53-9a2c-af7c1ea381e7 /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
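The fstab edit can also be done non-interactively, and `swapoff -a` disables swap for the running system so a reboot is not strictly needed right away (a sketch; the sed pattern assumes the stock CentOS fstab layout):

```shell
# Comment out any active swap entry in /etc/fstab
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab

# Turn off all swap devices immediately
swapoff -a

# Verify: the Swap row should show 0 total
free -m
```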
# Adjust kernel parameters to enable bridge filtering and IP forwarding
# Edit /etc/sysctl.d/kubernetes.conf and add the following:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# Load the bridge-filter module first; the bridge sysctls above only exist once it is loaded
[root@master ~]# modprobe br_netfilter
# Then reload the configuration (note: plain "sysctl -p" only reads /etc/sysctl.conf,
# so pass the file explicitly)
[root@master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
# Check that the module loaded successfully
[root@master ~]# lsmod | grep br_netfilter
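A module loaded with `modprobe` does not survive a reboot. One common way to make it load at boot on CentOS 7 is a systemd modules-load drop-in (an optional extra step, not part of the original walkthrough):

```shell
# systemd reads /etc/modules-load.d/*.conf at boot and modprobes each listed module
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF
```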
# 1. Install ipset and ipvsadm
yum install ipset ipvsadm -y
# 2. Write the modules to load into a script file
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# 3. Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# 4. Run it
/bin/bash /etc/sysconfig/modules/ipvs.modules
# 5. Check that the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Add the Aliyun docker-ce repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# List the available docker-ce versions
yum list docker-ce --showduplicates
# Install a specific version (18.06 is on the validated list for kubernetes 1.18)
yum install docker-ce-18.06.3.ce-3.el7 -y
# Docker defaults to the cgroupfs cgroup driver, while kubernetes recommends systemd;
# also configure an Aliyun registry mirror to speed up pulls
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
systemctl enable docker
# Verify the installation
docker version
# Add the Aliyun Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install matching versions of kubelet, kubeadm and kubectl
# (--setopt=obsoletes=0 keeps yum from pulling in a newer release)
yum install --setopt=obsoletes=0 -y kubelet-1.18.17 kubeadm-1.18.17 kubectl-1.18.17
# Edit /etc/sysconfig/kubelet to set kubelet's cgroup driver and the proxy mode
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
# Enable kubelet at boot (kubeadm init will start it)
systemctl enable kubelet
While kubeadm runs, it automatically pulls the Kubernetes component images in the background. Those images are hosted on k8s.gcr.io, which is blocked, so they cannot be downloaded directly; fetching them ahead of time lets the cluster installation proceed smoothly. First list the images the target version needs:
kubeadm config images list
The command output looks like this:
[root@node1 ~]# kubeadm config images list
I0326 18:46:59.015283   32344 version.go:252] remote version is much newer: v1.20.5; falling back to: stable-1.18
W0326 18:47:05.964186   32344 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.17
k8s.gcr.io/kube-controller-manager:v1.18.17
k8s.gcr.io/kube-scheduler:v1.18.17
k8s.gcr.io/kube-proxy:v1.18.17
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
The output shows the exact image versions required. Because k8s.gcr.io is blocked, these images cannot be pulled directly, but Aliyun mirrors them: every image name beginning with k8s.gcr.io can be fetched by substituting that prefix with
registry.aliyuncs.com/google_containers
and then retagging the result. For example, to obtain k8s.gcr.io/kube-apiserver:v1.18.17:
# Pull the image using the Aliyun prefix
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.17
# Retag it to the k8s.gcr.io name
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.17 k8s.gcr.io/kube-apiserver:v1.18.17
# Remove the mirror-prefixed image
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.17
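Repeating those three commands for every image is tedious; a short loop over the list reported by `kubeadm config images list` does the same work (a sketch; the image list matches the v1.18.17 output shown above):

```shell
#!/bin/bash
# Pull each image from the Aliyun mirror, retag it to its k8s.gcr.io name,
# then drop the mirror-prefixed tag.
images=(
  kube-apiserver:v1.18.17
  kube-controller-manager:v1.18.17
  kube-scheduler:v1.18.17
  kube-proxy:v1.18.17
  pause:3.2
  etcd:3.4.3-0
  coredns:1.6.7
)

for img in "${images[@]}"; do
  docker pull "registry.aliyuncs.com/google_containers/${img}"
  docker tag  "registry.aliyuncs.com/google_containers/${img}" "k8s.gcr.io/${img}"
  docker rmi  "registry.aliyuncs.com/google_containers/${img}"
done
```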
That completes the image download. An image can also be saved to a tar file so it never needs downloading again:
docker save -o kube-apiserver.tar k8s.gcr.io/kube-apiserver:v1.18.17
To reuse it later, upload the tar file to the target server and import it into the local docker image store:
docker load -i kube-apiserver.tar
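All of the retagged images can also be archived in one go by filtering the local image list for the k8s.gcr.io prefix (a sketch; the archive name is arbitrary):

```shell
# Collect every local k8s.gcr.io image and write them into a single tar archive
docker save -o k8s-v1.18.17-images.tar \
  $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^k8s.gcr.io/')

# Later, on an offline machine, restore them all at once
docker load -i k8s-v1.18.17-images.tar
```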
Initialize the cluster on the master node:
kubeadm init \
--kubernetes-version=v1.18.17 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.100.100    # change this to your own master's IP address
Alternatively, point kubeadm at the Aliyun image repository directly:
kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.18.17 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.100.100    # change this to your own master's IP address
Create the kubectl configuration (run this on every machine that should be able to use kubectl; on machines where it is skipped, kubectl cannot reach the cluster):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On each worker node, run the kubeadm join command that kubeadm init printed (your token and hash will differ):
kubeadm join 192.168.100.100:6443 --token 1coiqe.i0zt321f61aanqf9 \
    --discovery-token-ca-cert-hash sha256:asjdf8972345hlk;jfds9yg3245h322397fdsoifaowiufew
Kubernetes supports many network plugins, such as flannel, calico, and canal; any one of them will do. Here we use flannel.
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Because this file is often hard to download, the full contents of kube-flannel.yml are reproduced here:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
kubectl apply -f kube-flannel.yml
Applying the manifest causes the flannel images to be pulled in the kube-system namespace; the pull progress can be inspected with (the pod-name suffix will differ on your cluster; find yours with kubectl get pods -n kube-system):
kubectl describe pod kube-flannel-ds-amd64-54n98 -n kube-system
Once the images have been pulled, run the same command again:
kubectl describe pod kube-flannel-ds-amd64-54n98 -n kube-system
The output looks like this:
Name:         kube-flannel-ds-amd64-54n98
Namespace:    kube-system
Priority:     0
Node:         master/192.168.100.100
Start Time:   Fri, 26 Mar 2021 16:36:56 +0800
Labels:       app=flannel
              controller-revision-hash=56bf6995cf
              pod-template-generation=3
              tier=node
Annotations:  <none>
Status:       Running
IP:           192.168.100.100
IPs:
  IP:           192.168.100.100
Controlled By:  DaemonSet/kube-flannel-ds-amd64
Init Containers:
  install-cni:
    Container ID:  docker://2e3f779eb93e47f79fc325b6a31b4bffa381f743a582caab0a826af75b512530
    Image:         quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 26 Mar 2021 16:48:37 +0800
      Finished:     Fri, 26 Mar 2021 16:48:37 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-ksh5r (ro)
Containers:
  kube-flannel:
    Container ID:  docker://01122e880b4df0fa3a63f90fad1b476cb3140c2402ad4b5741633de35090bec8
    Image:         quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Running
      Started:      Fri, 26 Mar 2021 16:48:38 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-amd64-54n98 (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-ksh5r (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  flannel-token-ksh5r:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  flannel-token-ksh5r
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:          <none>
Finally, check the node status:
kubectl get nodes
The output:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   3h5m   v1.18.17
node1    Ready    <none>   3h4m   v1.18.17
node2    Ready    <none>   3h3m   v1.18.17
With that, the Kubernetes cluster setup is complete.