Deploying Kubernetes v1.25.3 (k8s) with the containerd Container Runtime

Original post by 秋意零, published 2022-11-15 16:53:07, in the column YeTechLog.

Preface

<p align="center"><font color="EA5607"><b>Hi everyone, I'm 秋意临.</b></font></p>

Today I'm sharing a deployment of Kubernetes v1.25.3 (the latest version as of November 2022). Since v1.24, Dockershim has been removed from the Kubernetes project, so our **container runtime** (the software responsible for running containers) is no longer Docker. This article uses containerd as the **container runtime**.

Several common **container runtimes** for Kubernetes (see the official Kubernetes documentation for usage details):

  • containerd
  • CRI-O
  • Docker Engine
  • Mirantis Container Runtime

1. Preparation

The setup used in this article is as follows:

| OS | CPU | RAM | IP | NIC | Hostname |
|--|--|--|--|--|--|
| Linux | 2 | 4G | 192.168.200.5 | NAT | master |
| Linux | 2 | 4G | 192.168.200.6 | NAT | node |

Minimum configuration: no fewer than 2 CPU cores and no less than 2 GB of RAM.
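These minimums can be checked with a short script (a sketch, not part of the original article; it reads the core count and memory total from standard Linux interfaces):

```shell
# Preflight: at least 2 CPU cores and 2 GB of RAM (thresholds from the table above)
cpus=$(nproc)
mem_kb=$(awk '/^MemTotal:/{print $2}' /proc/meminfo)
echo "CPU cores: ${cpus}, RAM: $((mem_kb / 1024)) MB"
[ "$cpus" -ge 2 ] || echo "WARNING: fewer than 2 CPU cores"
[ "$mem_kb" -ge $((2 * 1024 * 1024)) ] || echo "WARNING: less than 2 GB RAM"
```

kubeadm enforces similar minimums in its own preflight checks during `kubeadm init`.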

<font color="red">**Pay attention to which node each command is run on.**</font>

2. Environment Configuration (run on all nodes)

Set the hostname

```shell
# master node
hostnamectl set-hostname master
bash   # start a new shell so the prompt picks up the new hostname

# node
hostnamectl set-hostname node
bash
```

Configure the hosts mapping

```shell
cat >> /etc/hosts << EOF
192.168.200.5 master
192.168.200.6 node
EOF
```

Disable the firewall

```shell
systemctl stop firewalld
systemctl disable firewalld
```

Disable SELinux

```shell
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
```

Disable swap

Swap must be disabled for the kubelet to work properly.

```shell
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
```
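The `sed` above comments out every line in /etc/fstab that mentions swap (`&` re-inserts the whole matched line after the `#`). A self-contained illustration against a throwaway sample fstab (the device names are made up):

```shell
# Demonstrate the swap-commenting sed on a temporary copy instead of the real fstab
fstab=$(mktemp)
cat > "$fstab" << 'EOF'
/dev/mapper/centos-root /    xfs   defaults 0 0
/dev/mapper/centos-swap swap swap  defaults 0 0
EOF
sed -i 's/.*swap.*/#&/' "$fstab"   # same command as above, against the sample file
cat "$fstab"                       # the swap line is commented out; the root line is untouched
rm -f "$fstab"
```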

Forward IPv4 and let iptables see bridged traffic

For iptables on a Linux node to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration.

```shell
# Forward IPv4 and let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
lsmod | grep br_netfilter   # verify the br_netfilter module is loaded

# Set the required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
```

Configure time synchronization

```shell
# Remove the default CentOS repos and configure the Aliyun Centos-7.repo
rm -rf /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

# Option 1: install and configure chrony for time synchronization
IP=`ip addr | grep 'state UP' -A2 | grep inet | egrep -v '(127.0.0.1|inet6|docker)' | awk '{print $2}' | tr -d "addr:" | head -n 1 | cut -d / -f1`
yum install -y chrony
sed -i '3,6s/^/#/g' /etc/chrony.conf
sed -i "7s|^|server $IP iburst|g" /etc/chrony.conf
echo "allow all" >> /etc/chrony.conf
echo "local stratum 10" >> /etc/chrony.conf
systemctl restart chronyd
systemctl enable chronyd
timedatectl set-ntp true
sleep 5
systemctl restart chronyd
chronyc sources

# Option 2: one-shot sync; note: the time reverts after a reboot
yum install ntpdate -y
ntpdate ntp1.aliyun.com
```

3. Installing containerd (run on all nodes)

3.1 Install containerd

Download the containerd package

First visit https://github.com/, search for containerd, open the project and find Releases, then scroll down to the tar package for the matching version, as shown:

(screenshots omitted)
```shell
tar Cvzxf /usr/local containerd-1.6.9-linux-amd64.tar.gz

# Run containerd under systemd
vi /etc/systemd/system/containerd.service
```

```
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
```
```shell
# Reload units and start containerd
systemctl daemon-reload
systemctl enable --now containerd

# Verify
ctr version

# Generate the default configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
```

3.2 Install runc

```shell
# Download runc from: https://github.com/opencontainers/runc/releases

# Install
install -m 755 runc.amd64 /usr/local/sbin/runc

# Verify
runc -v
```

3.3 Install the CNI plugins

```shell
# Download the CNI plugins from: https://github.com/containernetworking/plugins/releases
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
```

3.4 Configure a registry mirror

```shell
# Reference: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration

# Set config_path = "/etc/containerd/certs.d"
sed -i 's/config_path = .*/config_path = "\/etc\/containerd\/certs.d"/g' /etc/containerd/config.toml
mkdir /etc/containerd/certs.d/docker.io -p

cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"

[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF

systemctl daemon-reload && systemctl restart containerd
```

4. cgroup Drivers (run on all nodes)

On Linux, control groups (cgroups) are used to constrain the resources allocated to processes.

Both the kubelet and the underlying container runtime need to interact with cgroups to manage resources for Pods and containers, e.g. to set requests and limits for CPU and memory.

To interact with cgroups, the kubelet and the container runtime each need a cgroup driver. The critical point is that **the kubelet and the container runtime must use the same cgroup driver** with the same configuration.

```shell
# Change SystemdCgroup = false to SystemdCgroup = true
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
sed -i 's/sandbox_image = .*/sandbox_image = "registry.aliyuncs.com\/google_containers\/pause:3.8"/g' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml   # verify the change

systemctl daemon-reload
systemctl restart containerd
```
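Both substitutions can be sanity-checked in isolation. A self-contained sketch against a two-line sample of the relevant `config.toml` entries (the sample is trimmed down; the real file has many more lines, and `|` is used as the sed delimiter here to avoid escaping slashes):

```shell
# Apply the same two edits to a hypothetical sample of config.toml
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
sandbox_image = "k8s.gcr.io/pause:3.6"
SystemdCgroup = false
EOF
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' "$cfg"
sed -i 's|sandbox_image = .*|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"|g' "$cfg"
cat "$cfg"   # both lines now carry the new values
rm -f "$cfg"
```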

5. Installing crictl (run on all nodes)

Kubernetes manages containers with crictl, not with ctr.

crictl is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node.

```shell
# Configure crictl to talk to the containerd runtime.
tar -vzxf crictl-v1.25.0-linux-amd64.tar.gz
mv crictl /usr/local/bin/

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: true
EOF

systemctl restart containerd
```

6. Deploying the Cluster with kubeadm

6.1 Install kubeadm, kubelet, and kubectl (run on all nodes)

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install --nogpgcheck kubelet-1.25.3 kubeadm-1.25.3 kubectl-1.25.3 -y
systemctl enable kubelet
```
  • **If yum errors out during installation** like this:
```shell
# Details: https://blog.csdn.net/Dan1374219106/article/details/112450922

[root@master ~]# yum install --nogpgcheck kubelet
Loaded plugins: fastestmirror
Existing lock /var/run/yum.pid: another copy is running as pid 8721.
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: yum
    Memory :  44 M RSS (444 MB VSZ)
    Started: Fri Nov 11 20:40:32 2022 - 02:07 ago
    State  : Traced/Stopped, pid: 8721

# Fix
[root@master ~]# rm -f /var/run/yum.pid
```

6.1.1 Configure IPVS

```shell
# Reference: https://cloud.tencent.com/developer/article/1717552

# Install ipset and ipvsadm
yum install ipset ipvsadm -y

# IPVS has been merged into the mainline kernel, so enabling ipvs mode for
# kube-proxy only requires loading the following kernel modules
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Run the module-loading script
/bin/bash /etc/sysconfig/modules/ipvs.modules

# Check that the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Configure the kubelet
cat >> /etc/sysconfig/kubelet << EOF
# KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
```

6.2 kubeadm Initialization (run on the master node)

```shell
# Check the kubeadm version; here GitVersion is "v1.25.3"
[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:55:36Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

# Generate the default configuration file
kubeadm config print init-defaults > kubeadm.yaml
vi kubeadm.yaml
```

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.5  # change to the host's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master   # change to the hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # change to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: 1.25.3  # match whatever version your kubeadm reports
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16   ## set the pod subnet
scheduler: {}

### Added content: set the kubelet cgroup driver to systemd
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
```



```shell
# Pull the images
kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.25.3

# Initialize
kubeadm init --config kubeadm.yaml
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.5:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
```
```shell
# Run on the master node
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Run on the node
[root@node ~]# kubeadm join 192.168.200.5:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
```
```shell
# Run on the master node
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   3m25s   v1.25.4
node     NotReady   <none>          118s    v1.25.4
```

Both nodes report NotReady because no Pod network has been deployed yet; the next section takes care of that.

6.3 Deploying the Network (run on the master node)

6.3.1 Notes

While testing for this post, pulling images with ctr was extremely slow, so docker is used to pull them instead. First install docker-ce on the node and pull the images the calico network plugin needs, then package them with docker save and upload the archives to the master node. The steps are as follows:

<font color="red" size="5">Note:</font> the calico.yaml fetched from the usual download URL may not work. Opening the file shows image version v3.14.2, which did not work in my testing. The error is: resource mapping not found for name: "bgpconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"

If you run into this error, download and use the calico.yaml file provided with this blog.

```shell
yum install -y wget
wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml --no-check-certificate

# Deploying this calico.yaml errors out; use the calico file provided with this blog instead
[root@master ~]# kubectl apply -f calico.yaml
configmap/calico-config configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers configured
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
resource mapping not found for name: "bgpconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "bgppeers.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
# ...the same "resource mapping not found" / "ensure CRDs are installed first" pair
# repeats for each of the remaining projectcalico.org CRDs
```

6.3.2 Steps (downloading calico)

Follow the WeChat official account CloudLog无名小歌 or 秋意临 and reply "calico" to get the download.

**Install docker-ce on the node** and pull the images as follows

```shell
# Install docker-ce
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce

# Configure registry mirrors
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors":["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker

# Pull and package the images
docker pull docker.io/calico/node:v3.24.4
docker save -o calico_node_v3.24.4.tar docker.io/calico/node:v3.24.4

docker pull docker.io/calico/cni:v3.24.4
docker save -o calico_cni_v3.24.4.tar docker.io/calico/cni:v3.24.4

docker pull docker.io/calico/kube-controllers:v3.24.4
docker save -o calico_kube-controllers_v3.24.4.tar docker.io/calico/kube-controllers:v3.24.4
```

Run on the master node

Since containerd has the concept of namespaces, and Kubernetes uses the k8s.io namespace, the images must be imported into it.

```shell
# Import the images into the k8s.io namespace
ctr -n k8s.io image import calico_node_v3.24.4.tar
ctr -n k8s.io image import calico_cni_v3.24.4.tar
ctr -n k8s.io image import calico_kube-controllers_v3.24.4.tar

# Check that the images landed in the k8s.io namespace
[root@master ~]# crictl images
...
IMAGE                                                             TAG                 IMAGE ID            SIZE
docker.io/calico/cni                                              v3.24.4             0b046c51c02a8       198MB
docker.io/calico/kube-controllers                                 v3.24.4             0830ebe059a9e       71.4MB
docker.io/calico/node                                             v3.24.4             32c45127e587f       226MB
registry.aliyuncs.com/google_containers/coredns                   v1.9.3              5185b96f0becf       14.8MB
registry.aliyuncs.com/google_containers/etcd                      3.5.4-0             a8a176a5d5d69       102MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.25.3             0346dbd74bcb9       34.2MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.25.3             6039992312758       31.3MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.25.3             beaaf00edd38a       20.3MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.25.3             6d23ec0e8b87e       15.8MB
registry.aliyuncs.com/google_containers/pause                     3.8                 4873874c08efc       311kB

# Run the calico.yaml provided with this blog, then wait a few minutes
[root@master ~]# kubectl apply -f calico.yaml
[root@master ~]# kubectl get pod -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-c676cc86f-ddp44          1/1     Running   0          87m
kube-system   coredns-c676cc86f-mg278          1/1     Running   0          87m
kube-system   etcd-master                      1/1     Running   0          87m
kube-system   kube-apiserver-master            1/1     Running   0          87m
kube-system   kube-controller-manager-master   1/1     Running   0          87m
kube-system   kube-proxy-75svm                 1/1     Running   0          87m
kube-system   kube-proxy-7bl66                 1/1     Running   0          87m
kube-system   kube-scheduler-master            1/1     Running   0          87m
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   87m   v1.25.3
node     Ready    <none>          86m   v1.25.3
```

And with that, the Kubernetes cluster is up and running!
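As a quick smoke test (not part of the original walkthrough; the Deployment name, image, and replica count below are arbitrary), you can run a trivial workload and check that Pods get scheduled to the node and receive addresses from the 10.244.0.0/16 pod subnet configured earlier:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-smoke-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-smoke-test
  template:
    metadata:
      labels:
        app: nginx-smoke-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.23
        ports:
        - containerPort: 80
```

Apply it with `kubectl apply -f smoke-test.yaml`, inspect Pod placement and IPs with `kubectl get pods -o wide`, and clean up with `kubectl delete deployment nginx-smoke-test`.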

Summary

I'm 秋意临 — like, comment, and share, and come join the **cloud community**.

(⊙o⊙) See you next time!

References

containerd:https://github.com/containerd/containerd/blob/main/docs/getting-started.md

kubernetes:https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/

Original-content statement: this article was published in the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

For infringement concerns, contact cloudcommunity@tencent.com for removal.
