The previous article covered how to build an offline installation package from an online environment, but for reasons of length the offline-deployment part was not covered in enough depth. This article walks through the problems and caveats of offline deployment. (Previous article: 天行1st, 公众号: 编码如写诗, 《信创:鲲鹏(arm64)+麒麟(kylin v10)离线部署k8s和kubesphere(含离线部署新方式)》)
This article mainly covers two things: the offline deployment procedure itself, and the problems encountered along the way and how to handle them.
Server configuration
Hostname | IP | CPU | OS | Cores | Memory | Role |
---|---|---|---|---|---|---|
master-1 | 192.168.10.2 | Kunpeng-920 | Kylin V10 SP2 | 32 | 64 GB | Offline environment, KubeSphere/k8s master |
master-2 | 192.168.10.3 | Kunpeng-920 | Kylin V10 SP2 | 32 | 64 GB | Offline environment, KubeSphere/k8s master |
master-3 | 192.168.10.4 | Kunpeng-920 | Kylin V10 SP2 | 32 | 64 GB | Offline environment, KubeSphere/k8s master |
Software versions involved in this environment: KubeSphere v3.3.1, Kubernetes v1.22.12, Docker 24.0.7, Harbor v2.7.1 (arm64 builds throughout).
The pre-built offline packages
In fact kubesphere.tar.gz alone is enough; it is split into separate packages here so the process is easier to follow and demonstrate. Baidu Netdisk link: https://pan.baidu.com/s/1lKtCRqxGMUxyumd4XIz4Bg?pwd=4ct2
2.1 Uninstall podman
Run on all nodes:
yum remove podman -y
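Since this has to be run on every node, you can fan it out over ssh; a minimal sketch, assuming root ssh access and the node IPs from the table above:
#!/bin/bash
# Remove podman on every node (node IPs taken from the server table; adjust as needed)
for host in 192.168.10.2 192.168.10.3 192.168.10.4; do
    ssh root@"$host" "yum remove podman -y"
done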
2.2 Install k8s dependency packages
Run on all nodes. Install the dependency packages needed by k8s: upload k8s-init-Kylin_V10-arm.tar.gz, extract it, and run install.sh.
install.sh content:
[root@node1 k8s-init]# cat install.sh
#!/bin/bash
#
rpm -ivh *.rpm --force --nodeps
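To confirm the dependency rpms actually landed, you can spot-check a few packages that k8s typically needs; the package names below are assumptions, not an exhaustive list of what k8s-init ships:
#!/bin/bash
# Spot-check dependencies commonly required by kubeadm/kubelet;
# adjust the list to match the rpms bundled in k8s-init.
for pkg in socat conntrack-tools ebtables ipset; do
    rpm -q "$pkg" >/dev/null 2>&1 && echo "OK: $pkg" || echo "MISSING: $pkg"
done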
Sections 2.3–2.6 describe the individual steps; they are presented separately to make clear which services are being installed. In practice this whole stage is executed as one step in 2.7.
2.3 Install docker
Run this only on the one node where ks will be installed: upload docker-24.0.7-arm.tar.gz, extract it, and run install.sh. The content of install.sh is:
[root@node1 kubesphere]# ls
create_project_harbor.sh docker-24.0.7-arm.tar.gz harbor-arm.tar.gz install.sh k8s-init k8s-init-Kylin_V10-arm.tar.gz ks3.3.1-images.tar.gz ks3.3.1-offline push-images.sh
[root@node1 kubesphere]# tar zxf docker-24.0.7-arm.tar.gz
[root@node1 kubesphere]# cd docker/
[root@node1 docker]# ls
docker-24.0.7.tgz docker-compose docker.service install.sh
[root@node1 docker]# cat install.sh
#!/bin/bash
#
tar zxf docker-24.0.7.tgz
cp -p docker/* /usr/bin
cp docker.service /etc/systemd/system/
chmod +x /etc/systemd/system/docker.service
systemctl daemon-reload
systemctl start docker
systemctl enable docker.service
cp docker-compose /usr/local/bin
cp docker-compose /usr/bin
chmod +x /usr/local/bin/docker-compose
chmod +x /usr/bin/docker-compose
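After the script finishes, a quick sanity check with standard docker commands:
systemctl is-active docker                      # expect "active"
docker version --format '{{.Server.Version}}'   # expect 24.0.7
docker-compose version                          # compose binary copied to /usr/bin and /usr/local/bin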
2.4 Install harbor
Upload harbor-arm.tar.gz, extract it, and run install.sh. The content of install.sh is:
[root@node1 kubesphere]# tar zxf harbor-arm.tar.gz
[root@node1 kubesphere]# cd harbor/
[root@node1 harbor]# ls
create_project_harbor.sh harbor-offline-installer-arm-v2.7.1.tar.gz helm-push_0.10.4_linux_arm64.tar.gz helm.sh install.sh ssl.zip
[root@node1 harbor]# cat install.sh
#!/bin/bash
#
read -p "请输入harbor仓库要按照的IP地址:" IP
do_ssl() {
unzip ssl.zip
mkdir -p /etc/ssl/registry/ssl
cp ssl/* /etc/ssl/registry/ssl/
}
do_docker() {
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json <<EOF
{
"log-opts": {
"max-size": "5m",
"max-file":"3"
},
"registry-mirrors": ["https://registry.docker-cn.com","http://hub-mirror.c.163.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries": ["dockerhub.kubekey.local","core.harbor.dked:30002"]
}
EOF
systemctl daemon-reload && systemctl restart docker
sleep 10
}
do_harbor() {
tar zxf harbor-offline-installer-arm-v2.7.1.tar.gz
mv harbor /opt
cd /opt/harbor/
./install.sh --with-chartmuseum --with-trivy
echo "$IP dockerhub.kubekey.local" >> /etc/hosts
echo "$IP core.harbor.dked" >> /etc/hosts
docker login -u admin -p Harbor12345 core.harbor.dked:30002
docker login -u admin -p Harbor12345 dockerhub.kubekey.local
}
do_ssl
do_docker
do_harbor
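Once install.sh completes, you can confirm the Harbor stack is up and that the logins at the end of the script succeeded:
cd /opt/harbor && docker-compose ps             # all harbor containers should be Up/healthy
docker login -u admin -p Harbor12345 dockerhub.kubekey.local   # repeat manually if the scripted login failed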
2.5 Push images to harbor
Run create_project_harbor.sh and push-images.sh.
create_project_harbor.sh:
#!/usr/bin/env bash
url="https://dockerhub.kubekey.local" #修改url的值为https://dockerhub.kubekey.local
user="admin"
passwd="Harbor12345"
harbor_projects=(
kubesphereio
kubesphere
other
csmp
)
for project in "${harbor_projects[@]}"; do
echo "creating $project"
curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k #curl命令末尾加上 -k
done
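To verify the projects were created, you can list them back through the same Harbor v2.0 API (again with -k because of the self-signed certificate):
curl -k -s -u admin:Harbor12345 "https://dockerhub.kubekey.local/api/v2.0/projects" | grep -o '"name":"[^"]*"'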
push-images.sh:
[root@node1 kubesphere]# cat push-images.sh
#!/bin/bash
#
docker load < ks3.3.1-images.tar.gz
docker login -u admin -p Harbor12345 dockerhub.kubekey.local
docker push dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1
docker push dockerhub.kubekey.local/kubesphereio/alpine:3.14
docker push dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12
docker push dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0
docker push dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0
docker push dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/cni:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/node:v3.23.2
docker push dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0
docker push dockerhub.kubekey.local/kubesphereio/fluent-bit:v1.8.11
docker push dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
docker push dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
docker push dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2
docker push dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0
docker push dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0
docker push dockerhub.kubekey.local/kubesphereio/elasticsearch-oss:6.8.22
docker push dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
docker push dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
docker push dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0
docker push dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
docker push dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0
docker push dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
docker push dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
docker push dockerhub.kubekey.local/kubesphereio/docker:19.03
docker push dockerhub.kubekey.local/kubesphereio/pause:3.5
docker push dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0
docker push dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
docker push dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0
docker push dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
docker push dockerhub.kubekey.local/kubesphereio/log-sidecar-injector:1.1
docker push dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
docker push dockerhub.kubekey.local/kubesphereio/elasticsearch-curator:v5.7.6
docker push dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
docker push dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
docker push dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
docker push dockerhub.kubekey.local/kubesphereio/redis:5.0.14-alpine
docker push dockerhub.kubekey.local/kubesphereio/haproxy:2.3
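After the pushes finish, a quick way to confirm the repositories landed in the kubesphereio project is the Harbor repositories API:
# List the repositories now present in the kubesphereio project (should match the pushes above)
curl -k -s -u admin:Harbor12345 \
  "https://dockerhub.kubekey.local/api/v2.0/projects/kubesphereio/repositories?page_size=100" \
  | grep -o '"name":"[^"]*"'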
2.6 Install k8s and kubesphere
Upload ks3.3.1-offline, edit the node information in kubesphere-v331-v12212.yaml, then run install.sh.
[root@node1 ks3.3.1-offline]# cat install.sh
#!/usr/bin/env bash
rm -rf ./kubekey/pki/*
./kk create cluster -f kubesphere-v331-v12212.yaml --with-packages
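When kk reports completion, a quick check that all three masters joined (standard kubectl commands):
kubectl get nodes -o wide                       # master-1/2/3 should all be Ready
kubectl get pods -n kubesphere-system           # ks-installer should be Running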
2.7 Combined execution of 2.3–2.6
Sections 2.3–2.6 can be run in one step: upload kubesphere.tar.gz, extract it, edit the node information in ks3.3.1-offline/kubesphere-v331-v12212.yaml, then run install.sh:
[root@node1 kubesphere]# cat install.sh
#!/usr/bin/env bash
read -p "Please edit the IP addresses in the cluster config file ks3.3.1-offline/kubesphere-v331-v12212.yaml first. Edited already? (yes/no) " B

do_k8s_init(){
    echo "-------- initializing dependency packages --------"
    tar zxf k8s-init-Kylin_V10-arm.tar.gz
    cd k8s-init && ./install.sh
    cd -
    # rm -rf k8s-init
}

install_docker(){
    echo "-------- installing docker --------"
    tar zxf docker-24.0.7-arm.tar.gz
    cd docker && ./install.sh
    cd -
}

install_harbor(){
    echo "-------- installing harbor --------"
    tar zxf harbor-arm.tar.gz
    cd harbor && ./install.sh
    cd -
    echo "-------- pushing images --------"
    source create_project_harbor.sh
    source push-images.sh
    echo "-------- image push finished --------"
}

install_ks(){
    echo "-------- installing kubesphere --------"
    # tar zxf ks3.3.1-offline.tar.gz
    cd ks3.3.1-offline && ./install.sh
}

if [ "$B" = "yes" ] || [ "$B" = "y" ]; then
    do_k8s_init
    install_docker
    install_harbor
    install_ks
else
    echo "Please edit the cluster config file first"
    exit 1
fi
After roughly ten minutes you should see the deployment-success message. If errors occur, troubleshoot using the steps below.
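A convenient way to follow progress and spot errors is to tail the ks-installer log and look for pods that are not yet Running; these are standard kubectl commands, not part of the packaged scripts:
# Follow the KubeSphere installer log until the success banner appears
kubectl logs -n kubesphere-system deploy/ks-installer -f
# List pods that are still not Running when the install appears stuck
kubectl get pods -A | grep -v Running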
Some problems were not recorded while building the offline package because I was busy fixing them at the time. If you run into issues during packaging or installation, feel free to leave a comment.
Readers have reported that installation fails with a coredns architecture mismatch, and docker inspect on the image indeed shows "Architecture": "amd64"; strangely, neither of my two clusters hit this error. Pulling coredns/coredns from hub.docker.com on a local Windows machine with --platform arm64, only 1.11.0 is recognized as arm; every other tag comes down as amd64, which I have not been able to explain. If you run into this, pull the 1.11.0 image and retag it as 1.8.0.
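A sketch of that retag workaround, run on the machine with registry access (image and tag names assumed from the push list in 2.5):
# Pull the arm64 1.11.0 image, retag it as 1.8.0 and push it into the private registry
docker pull --platform linux/arm64 coredns/coredns:1.11.0
docker tag coredns/coredns:1.11.0 dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
docker push dockerhub.kubekey.local/kubesphereio/coredns:1.8.0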
The deployment log showed the installation stuck at deploying minio.
Cause:
Kunpeng + Kylin V10 ships with nfs by default, and nfs needs to be added to the services started at boot. If starting nfs-server on Kylin V10 SP2/SP3 fails, troubleshoot it as follows; the key is to start its dependency service nfs-idmapd.service.
# list the dependency services of nfs-server
systemctl list-dependencies nfs-server.service
# start the nfs dependency service
systemctl start nfs-idmapd.service
# enable at boot
systemctl enable nfs-server rpcbind --now
# client side
If other dependency services of nfs are also failing, you can download the matching rpm packages for your version from the Kylin repository:
https://update.cs2c.com.cn/NS/V10/V10SP2/os/adv/lic/base/aarch64/Packages/
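After enabling the services, you can verify that nfs-server and its dependencies are healthy:
systemctl is-active nfs-server rpcbind nfs-idmapd
systemctl list-dependencies nfs-server.service --no-pager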
The logging plugin was enabled during the ks installation, but no es-related images are bundled, because we use opensearch. If you use es, download the arm es image yourself and substitute it.
fluent-bit issue: the default arm version is 1.8.11, which fails on Kunpeng + Kylin because of the system page size; use version 2.0.6 instead.
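A hedged sketch of switching to 2.0.6: on the internet-connected machine, retag an arm64 fluent-bit 2.0.6 into the private registry, then point the FluentBit custom resource at the new image (the CR name and namespace below are assumptions based on a default KubeSphere 3.3.1 logging install):
docker pull --platform linux/arm64 fluent/fluent-bit:2.0.6
docker tag fluent/fluent-bit:2.0.6 dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6
docker push dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6
# then update spec.image of the FluentBit custom resource to the new tag (names assumed)
kubectl -n kubesphere-logging-system edit fluentbit fluent-bit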
When using opensearch, the content of the fluent-bit secret must be replaced; for details see 天行1st, 公众号: 编码如写诗, 《KubeSphere3.3.1更换ES为OpenSearch》. The resulting [Output] section looks like this:
[Output]
    Name                 es
    Match_Regex          (?:kube|service)\.(.*)
    Host                 opensearch-cluster-data.kubesphere-logging-system.svc
    Port                 9200
    HTTP_User            admin
    HTTP_Passwd          admin
    Logstash_Format      true
    Logstash_Prefix      ks-whizard-logging
    Time_Key             @timestamp
    Suppress_Type_Name   true
    tls                  On
    tls.verify           false
This article has supplemented the offline installation steps on Kunpeng CPUs with the ky10.aarch64 (Kylin V10) operating system, along with some of the problems encountered.
Offline installation can be boiled down to: upload kubesphere.tar.gz, extract it, edit the node information in ks3.3.1-offline/kubesphere-v331-v12212.yaml, and run install.sh.
I hope this contributes a small bit to the development of home-grown technology. Go China! Go kubesphere!
Click "Read the original" to get the pre-built ks offline installation package.