Part of a zero-cost IT-operations learning series: a k8s test environment deployed with the kubespray multi-master high-availability scheme. This environment was built and tested on CentOS Stream 9; coverage of other Linux distributions may follow in a later post.
[root@node1 minio]# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready control-plane 23h v1.28.6 192.168.2.129 <none> CentOS Stream 9 5.14.0-435.el9.x86_64 docker://20.10.20
node2 Ready control-plane 23h v1.28.6 192.168.2.158 <none> CentOS Stream 9 5.14.0-435.el9.x86_64 docker://20.10.20
node3 Ready <none> 23h v1.28.6 192.168.2.234 <none> CentOS Stream 9 5.14.0-435.el9.x86_64 docker://20.10.20
[root@node1 minio]#
| Name | Version |
|---|---|
| Linux | CentOS Stream release 9 |
| kubespray | kubespray-2.24.1 |
| kubernetes | v1.28.6 |
[root@node3 ~]# cat /etc/os-release
NAME="CentOS Stream"
VERSION="9"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="9"
PLATFORM_ID="platform:el9"
PRETTY_NAME="CentOS Stream 9"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:centos:centos:9"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
[root@node3 ~]#
Achieving a highly available k8s deployment by modifying kubespray
Principle and architecture
As a container cluster system, Kubernetes already provides application-layer high availability: health checks plus restart policies give Pods self-healing, the scheduler spreads Pods across nodes and maintains the desired replica count, and when a node fails the affected Pods are automatically recreated on other nodes.
For the cluster itself, high availability involves two further layers: the etcd database and the Kubernetes master components. A kubeadm-built cluster starts only a single etcd instance, which is a single point of failure, so an etcd cluster should be deployed instead.
The master node is the control center of the cluster; it keeps the cluster in a healthy working state by continuously communicating with kubelet and kube-proxy on the worker nodes. If the master fails, no cluster management is possible through kubectl or the API.
The master runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. The latter two already achieve high availability through leader election, so master HA is mainly about kube-apiserver. Since the apiserver exposes an HTTP API, making it highly available works much like for a web server: put a load balancer in front of it, which also enables horizontal scaling.
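The leader election mentioned above can be observed directly in a running cluster: each elected component holds a Lease object in the kube-system namespace (requires cluster access; output varies by cluster):

```
kubectl -n kube-system get leases kube-controller-manager kube-scheduler
```

The HOLDER column shows which control-plane node currently owns each component's leadership.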
GitHub地址:https://github.com/kubernetes-sigs/kubespray
Check that the packages are available for installation via yum:
[root@node1 ~]# yum list|grep haproxy
ghc-io-streams-haproxy.x86_64 1.0.1.0-18.el9 epel
ghc-io-streams-haproxy-devel.x86_64 1.0.1.0-18.el9 epel
ghc-io-streams-haproxy-doc.noarch 1.0.1.0-18.el9 epel
ghc-io-streams-haproxy-prof.x86_64 1.0.1.0-18.el9 epel
haproxy.x86_64 2.4.22-3.el9 appstream
pcp-pmda-haproxy.x86_64 6.2.1-1.el9 appstream
[root@node1 ~]#
[root@node1 ~]# yum list|grep keepalived
keepalived.x86_64 2.2.8-3.el9 appstream
[root@node1 ~]# yum list|grep nginx
collectd-nginx.x86_64 5.12.0-24.el9 epel
lemonldap-ng-nginx.noarch 2.18.2-1.el9 epel
munin-nginx.noarch 2.0.75-1.el9 epel
nginx.x86_64 1:1.20.1-16.el9 appstream
nginx-all-modules.noarch 1:1.20.1-16.el9 appstream
nginx-core.x86_64 1:1.20.1-16.el9 appstream
nginx-filesystem.noarch 1:1.20.1-16.el9 appstream
nginx-mod-fancyindex.x86_64 0.5.2-3.el9 epel
nginx-mod-http-image-filter.x86_64 1:1.20.1-16.el9 appstream
nginx-mod-http-perl.x86_64 1:1.20.1-16.el9 appstream
nginx-mod-http-xslt-filter.x86_64 1:1.20.1-16.el9 appstream
nginx-mod-mail.x86_64 1:1.20.1-16.el9 appstream
nginx-mod-modsecurity.x86_64 1.0.3-8.el9 epel
nginx-mod-stream.x86_64 1:1.20.1-16.el9 appstream
nginx-mod-vts.x86_64 0.2.1-1.el9 epel
pcp-pmda-nginx.x86_64 6.2.1-1.el9 appstream
python3-certbot-nginx.noarch 2.9.0-1.el9 epel
sympa-nginx.x86_64 6.2.72-2.el9 epel
[root@node1 ~]#
Official reference: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/yum_module.html
Studying and modifying the code to match the design goals
Add code to install nginx, keepalived, and haproxy. Code path:
kubespray normally deploys components from downloaded binaries; to keep things simple here, these three packages are installed online with yum instead.
vim roles/kubernetes/node/tasks/loadbalancer/haproxy.yml
- name: Install a list of packages (nginx, keepalived, haproxy)
  ansible.builtin.yum:
    name:
      - nginx
      - keepalived
      - haproxy
    state: present
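To try the same installation outside the playbook first, the yum module can be run ad hoc against the control-plane group of a standard kubespray inventory (the inventory path here is an assumption; adjust to yours):

```
ansible kube_control_plane -i inventory/prod/hosts.yaml -b \
  -m ansible.builtin.yum -a "name=nginx,keepalived,haproxy state=present"
```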
This follows the official Ansible reference example for the yum module.
YAML file path:
vim roles/kubernetes/node/tasks/loadbalancer/haproxy.yml
Add the following tasks:
- name: Keepalived | Make Keepalived directory
  file:
    path: "{{ keepalived_config_dir }}"
    state: directory
    mode: 0755
    owner: root

- name: Keepalived | Write Keepalived configuration
  template:
    src: "loadbalancer/keepalived.conf.j2"
    dest: "{{ keepalived_config_dir }}/keepalived.conf"
    owner: root
    mode: 0755
    backup: yes

- name: Keepalived | Get checksum from config
  stat:
    path: "{{ keepalived_config_dir }}/keepalived.conf"
    get_attributes: no
    get_checksum: yes
    get_mime: no
  register: keepalived_stat

- name: Keepalived | Write static pod
  template:
    src: manifests/keepalived.manifest.j2
    dest: "{{ kube_manifest_dir }}/keepalived.yml"
    mode: 0640
Configuration file
vim roles/kubespray-defaults/defaults/main/main.yml
Add a global configuration path:
# keepalived configuration
keepalived_config_dir: "/etc/keepalived"
vim roles/kubernetes/node/templates/loadbalancer/keepalived.conf.j2
The template is as follows:
[root@node1 kubespray-2.24.1]# cat roles/kubernetes/node/templates/loadbalancer/keepalived.conf.j2
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"   # keepalived acts on the script's exit status
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33              # change to the actual NIC name
    virtual_router_id 51         # VRRP router ID; must be unique per instance
    priority 100                 # election priority; the highest-priority node becomes MASTER
    advert_int 1                 # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        {{ k8s_lb_vip }}
    }
    track_script {
        check_nginx
    }
}
[root@node1 kubespray-2.24.1]#
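The template above references /etc/keepalived/check_nginx.sh, which is not part of upstream kubespray and is not shown elsewhere in this post. A minimal sketch of such a health-check script (an assumption about its contents, not the author's actual file):

```
#!/usr/bin/env bash
# Minimal health check invoked by keepalived's vrrp_script.
# A non-zero exit status marks this node unhealthy, so keepalived
# fails the VIP over to a backup node.

# "[n]ginx" keeps grep from matching its own command line
if ps -ef | grep -q "[n]ginx: master"; then
    exit 0    # nginx master process present: stay MASTER
else
    exit 1    # nginx down: let keepalived demote this node
fi
```

Remember to make the script executable (chmod +x) on every master node.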
vim inventory/prod/group_vars/all/all.yml
Add the following:
# load balancer VIP
k8s_lb_vip: 192.168.2.88/24
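Note that k8s_lb_vip carries its prefix length, which keepalived accepts verbatim in virtual_ipaddress. Where a bare address is needed, for example in a health-check URL, the prefix can be stripped with shell parameter expansion (a small illustration, not kubespray code):

```shell
k8s_lb_vip="192.168.2.88/24"   # value from inventory/prod/group_vars/all/all.yml
vip="${k8s_lb_vip%/*}"         # drop the shortest suffix starting at "/"
echo "$vip"                    # -> 192.168.2.88
```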
vim roles/kubernetes/node/templates/loadbalancer/nginx.conf.j2
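The nginx template content is not shown in this post. A minimal sketch of a stream-proxy template for kube-apiserver might look like the following; the module path, listen port 8443, and the inventory variables are assumptions, not the verbatim file:

```
load_module /usr/lib64/nginx/modules/ngx_stream_module.so;

events { }

stream {
    upstream kube_apiserver {
        least_conn;
        {% for host in groups['kube_control_plane'] %}
        server {{ hostvars[host]['ip'] }}:6443;
        {% endfor %}
    }
    server {
        listen 8443;               # must not collide with the local apiserver's 6443
        proxy_pass kube_apiserver;
    }
}
```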
The haproxy template is already provided by kubespray out of the box:
vim roles/kubernetes/node/templates/loadbalancer/haproxy.cfg.j2
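For reference, a TCP-mode haproxy configuration for the apiserver typically follows the pattern sketched below. This is an illustration of the pattern, not the verbatim kubespray template; the port and the inventory variables are assumptions:

```
defaults
    mode tcp
    timeout connect 5s
    timeout client  4h
    timeout server  4h

frontend kube_apiserver
    bind 0.0.0.0:8443            # LB port; must differ from 6443 on colocated masters
    default_backend kube_apiserver_backend

backend kube_apiserver_backend
    balance roundrobin
    option tcp-check
    {% for host in groups['kube_control_plane'] %}
    server {{ host }} {{ hostvars[host]['ip'] }}:6443 check
    {% endfor %}
```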
Original-content notice: this article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.
In case of infringement, contact cloudcommunity@tencent.com for removal.