Kubernetes (7): Binary Deployment of k8s (v1.18.4)

Author: alexhuiwang · Published 2021-04-09 · Column: Ops Blog
Binary Deployment of k8s (v1.18.4)

Deployment Notes

| Software | Download URL | Notes |
| --- | --- | --- |
| centos7.7+ | https://mirrors.aliyun.com/centos/7.7.1908/isos/x86_64/CentOS-7-x86_64-Minimal-1908.iso | Host operating system |
| kubernetes-server | https://dl.k8s.io/v1.18.4/kubernetes-server-linux-amd64.tar.gz | k8s server binaries |
| etcd | https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz | k8s data store |
| cfssl | https://pkg.cfssl.org/R1.2/cfssl_linux-amd64, https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64, https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 | Certificate issuing tool |
| docker | https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz | Container runtime |
| cni | https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz | Network environment |

Deployment Plan

| Host | IP | Role | Software deployed | Notes |
| --- | --- | --- | --- | --- |
| centos7-node4 | 192.168.56.14 | master | kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd | Master scale-out covered later |
| centos7-node5 | 192.168.56.15 | node | kubelet, kube-proxy, docker, etcd | Master scale-out covered later |
| centos7-node6 | 192.168.56.16 | node | kubelet, kube-proxy, docker, etcd | Master scale-out covered later |

System Initialization (run on all nodes)

  • Software is installed under /data by default

# Update yum repos
yum -y install wget && wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo && yum -y install epel-release 
# Disable selinux, firewalld and swap
sed -i 's/enforcing/disabled/' /etc/selinux/config
systemctl disable firewalld && systemctl stop firewalld
sed -ri 's/.*swap.*/#&/' /etc/fstab && swapoff -a
# Set hostnames and name resolution
cat >> /etc/hosts << EOF 
192.168.56.14 centos7-node4
192.168.56.15 centos7-node5
192.168.56.16 centos7-node6
192.168.56.14 k8s-master
192.168.56.15 k8s-node1
192.168.56.16 k8s-node2
192.168.56.17 k8s-master2
EOF
# Pass bridged IPv4 traffic to iptables chains
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf   # apply the settings
# Time synchronization
yum install chrony -y && systemctl enable chronyd && systemctl start chronyd
timedatectl set-timezone Asia/Shanghai && timedatectl set-ntp yes
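A quick optional sanity check that the settings above took effect (all standard CentOS 7 commands):

grep SELINUX= /etc/selinux/config           # expect SELINUX=disabled
free -m | awk '/Swap/{print $2}'            # expect 0 once swap is off
sysctl net.bridge.bridge-nf-call-iptables   # expect 1
chronyc tracking | head -3                  # confirms time sync is active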

Deploying the etcd Cluster

etcd is a distributed key-value store that Kubernetes uses for all of its data, so an etcd database must be prepared first. To avoid a single point of failure, etcd should be deployed as a cluster: the 3 nodes used here tolerate the loss of 1 machine; a 5-node cluster would tolerate the loss of 2.

| Node hostname | Node name | IP |
| --- | --- | --- |
| centos7-node4 | etcd-1 | 192.168.56.14 |
| centos7-node5 | etcd-2 | 192.168.56.15 |
| centos7-node6 | etcd-3 | 192.168.56.16 |

Note: to save machines, etcd is co-located with the k8s nodes here. It can also be deployed outside the k8s cluster, as long as the apiserver can reach it.

Generating the etcd Certificate Configuration

  • Prepare the cfssl certificate tooling; it generates certificates from JSON files and is easier to use than openssl

# Install the tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
  • Prepare the CA and certificate configuration

mkdir ~/TLS/{etcd,k8s} && cd ~/TLS/etcd
# Self-signed CA config file
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",        
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

# Self-signed CA CSR file
cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "ca": {
     "expiry": "87600h"
  },
  "names": [
    {
      "C": "CN",
      "L": "BJ",
      "ST": "BeiJing"
    }
  ]
}
EOF

# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem

# Issue the etcd HTTPS server certificate

cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
        "192.168.56.14",
        "192.168.56.15",
        "192.168.56.16"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",            
            "ST": "BeiJing"
        }
    ]
}
EOF

Note: the IPs in the hosts field above are the internal cluster IPs of every etcd node; not one can be missing. To make later scale-out easier, it is wise to reserve a few extra IPs.

Issuing the Certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
ls server*pem  # the issued certificate files
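Optionally, inspect the issued certificate to confirm the SANs cover every etcd node IP; cfssl-certinfo was installed alongside cfssl above:

cfssl-certinfo -cert server.pem   # the "sans" field in the JSON output should list all three etcd IPs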

Deploying the etcd Cluster

Single-node configuration

# Prepare the installation paths
mkdir /data/etcd/{bin,cfg,ssl,data} -p
# Fetch the binaries
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz && tar xf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/etcd* /data/etcd/bin/
# Config file for the current node, 192.168.56.14
cat > /data/etcd/cfg/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/data/etcd/data/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.56.14:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.56.14:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.56.14:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.56.14:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.56.14:2380,etcd-2=https://192.168.56.15:2380,etcd-3=https://192.168.56.16:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# systemd unit file (identical on all nodes)
cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/data/etcd/cfg/etcd.conf 
ExecStart=/data/etcd/bin/etcd \\
    --cert-file=/data/etcd/ssl/server.pem \\
    --key-file=/data/etcd/ssl/server-key.pem \\
    --peer-cert-file=/data/etcd/ssl/server.pem \\
    --peer-key-file=/data/etcd/ssl/server-key.pem \\
    --trusted-ca-file=/data/etcd/ssl/ca.pem \\
    --peer-trusted-ca-file=/data/etcd/ssl/ca.pem \\
    --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Copy the certificate files
mv ~/TLS/etcd/*pem /data/etcd/ssl
# Start the current node
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd

Configuration file fields:
  • ETCD_NAME: node name, unique within the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
  • ETCD_LISTEN_CLIENT_URLS: client listen address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
  • ETCD_ADVERTISE_CLIENT_URLS: advertised client address
  • ETCD_INITIAL_CLUSTER: cluster member addresses
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: join state; "new" for a new cluster, "existing" to join an existing one

Configuring the other two nodes

  1. Distribute the files (from 192.168.56.14 to nodes 15 and 16)

scp -rp /data/etcd 192.168.56.15:/data 
scp -rp /data/etcd 192.168.56.16:/data 

  2. Modify the configuration files

On node 2 and node 3, edit etcd.conf and change the node name and the IPs to the current server's:

vi /data/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"   # change: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/data/etcd/data/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.56.14:2380"     # change to the current server IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.56.14:2379"   # change to the current server IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.56.14:2380"   # change to the current server IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.56.14:2379"         # change to the current server IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.56.14:2380,etcd-2=https://192.168.56.15:2380,etcd-3=https://192.168.56.16:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Finally, start etcd and enable it at boot:

systemctl daemon-reload && systemctl enable etcd && systemctl start etcd

Verifying the etcd Deployment

Run the following on any node:

ETCDCTL_API=3 /data/etcd/bin/etcdctl --cacert=/data/etcd/ssl/ca.pem --cert=/data/etcd/ssl/server.pem --key=/data/etcd/ssl/server-key.pem --endpoints="https://192.168.56.14:2379,https://192.168.56.15:2379,https://192.168.56.16:2379" endpoint health

Expected healthy output:

https://192.168.56.15:2379 is healthy: successfully committed proposal: took = 11.567437ms
https://192.168.56.14:2379 is healthy: successfully committed proposal: took = 11.946454ms
https://192.168.56.16:2379 is healthy: successfully committed proposal: took = 13.121313ms

Troubleshooting

1. Check /var/log/messages, or journalctl -xe -f -u etcd
2. If the config files are correct, the usual remaining culprits are network connectivity and firewall rules; open the required ports (2379/2380)
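For a closer look at membership and which member currently leads, the same binary offers endpoint status (available in etcd 3.4):

ETCDCTL_API=3 /data/etcd/bin/etcdctl --cacert=/data/etcd/ssl/ca.pem --cert=/data/etcd/ssl/server.pem --key=/data/etcd/ssl/server-key.pem --endpoints="https://192.168.56.14:2379,https://192.168.56.15:2379,https://192.168.56.16:2379" endpoint status --write-out=table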

Installing Docker on All Nodes

# Download and unpack the docker binaries
wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz && tar xf docker-19.03.9.tgz 
# Distribute the executables
scp docker/* 192.168.56.15:/usr/bin/
scp docker/* 192.168.56.16:/usr/bin/
mv docker/* /usr/bin/
# systemd unit for docker (the other two nodes need this as well)
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

Docker configuration and startup

# Configure the Aliyun registry mirror and the storage path (graph)
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF 
{
  "graph": "/data/docker",
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
# Start the service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
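A quick optional check that the daemon is up and using the intended storage root (docker info supports Go-template formatting):

docker info --format '{{.ServerVersion}} {{.DockerRootDir}}'   # expect: 19.03.9 /data/docker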

Installing and Deploying the k8s Master Node

The master node being deployed here is 192.168.56.14.

Generating the k8s Certificate Configuration

cd ~/TLS/k8s
cat > ca-config.json <<EOF
{
    "signing":{
        "default":{
            "expiry":"87600h"
        },
        "profiles":{
            "kubernetes":{
                "expiry":"87600h",
                "usages":[
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN":"kubernetes",
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"Beijing",
            "ST":"Beijing",
            "O":"k8s",
            "OU":"System"
        }
    ]
}
EOF

# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Self-signing the apiserver Certificate

cd ~/TLS/k8s
cat > server-csr.json << EOF
{
    "CN":"kubernetes",
    "hosts":[
        "10.0.0.1",
        "172.0.0.1",
        "127.0.0.1",
        "192.168.56.13",
        "192.168.56.14",
        "192.168.56.15",
        "192.168.56.16",
        "192.168.56.17",
        "192.168.56.18",
        "192.168.56.19",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"k8s",
            "OU":"System"
        }
    ]
}
EOF

Note: the IPs in the hosts field above must include every Master/LB/VIP IP; not one can be missing. To make later scale-out easier, it is wise to reserve a few extra IPs.

Generating the apiserver Certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
ls *pem

Installing the apiserver

# Create the installation directories
mkdir -p /data/kubernetes/{cfg,bin,ssl,logs}
# Download and copy the binaries
wget https://dl.k8s.io/v1.18.4/kubernetes-server-linux-amd64.tar.gz && tar xf kubernetes-server-linux-amd64.tar.gz 
cp kubernetes/server/bin/kube-apiserver /data/kubernetes/bin/
cp kubernetes/server/bin/kube-controller-manager /data/kubernetes/bin/
cp kubernetes/server/bin/kube-scheduler /data/kubernetes/bin/ 
cp kubernetes/server/bin/kubectl /usr/bin/

Creating the apiserver configuration file

cat > /data/kubernetes/cfg/kube-apiserver.conf << EOF 
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--etcd-servers=https://192.168.56.14:2379,https://192.168.56.15:2379,https://192.168.56.16:2379 \\
--bind-address=192.168.56.14 \\
--secure-port=6443 \\
--advertise-address=192.168.56.14 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\ 
--token-auth-file=/data/kubernetes/cfg/token.csv \\ 
--service-node-port-range=30000-32767 \\ 
--kubelet-client-certificate=/data/kubernetes/ssl/server.pem \\ 
--kubelet-client-key=/data/kubernetes/ssl/server-key.pem \\ 
--tls-cert-file=/data/kubernetes/ssl/server.pem \\ 
--tls-private-key-file=/data/kubernetes/ssl/server-key.pem \\ 
--client-ca-file=/data/kubernetes/ssl/ca.pem \\ 
--service-account-key-file=/data/kubernetes/ssl/ca-key.pem \\ 
--etcd-cafile=/data/etcd/ssl/ca.pem \\
--etcd-certfile=/data/etcd/ssl/server.pem \\ 
--etcd-keyfile=/data/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/data/kubernetes/logs/k8s-audit.log"
EOF

# Copy the certificates
mv ~/TLS/k8s/*pem /data/kubernetes/ssl/

Notes on the flags:
  • --logtostderr: logging switch (false writes logs to --log-dir)
  • --v: log level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listen address
  • --secure-port: https secure port
  • --advertise-address: cluster advertise address
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP range
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authorization; enables RBAC and Node self-management
  • --enable-bootstrap-token-auth: enables the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range for NodePort Services
  • --kubelet-client-xxx: client certificate for apiserver access to kubelet
  • --tls-xxx-file: apiserver https certificates
  • --etcd-xxxfile: certificates for connecting to the etcd cluster
  • --audit-log-xxx: audit log settings

Enabling the TLS Bootstrap Mechanism

TLS bootstrapping: once the apiserver has TLS authentication enabled, the kubelet and kube-proxy on every node must present valid CA-signed certificates to talk to it. When there are many nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping, which issues client certificates automatically: the kubelet, acting as a low-privilege user, requests a certificate from the apiserver, and the apiserver signs it dynamically. This is the strongly recommended approach on nodes; currently it is used only for kubelet, while kube-proxy still gets a single certificate issued by us.

  • Create the token file referenced in the apiserver configuration above

cat > /data/kubernetes/cfg/token.csv << EOF
2b4b65d2e33e24dc0beafddda6dd4b23,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token,user name,UID,user group. The token can also be generated yourself and swapped in: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
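For example, an illustrative one-liner that mints a fresh token and rewrites token.csv in the format above:

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /data/kubernetes/cfg/token.csv
# Remember to restart kube-apiserver after changing the token file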

Managing the apiserver with systemd

  • Create the unit file

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-apiserver.conf
ExecStart=/data/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

  • Start and enable at boot

systemctl daemon-reload && systemctl start kube-apiserver && systemctl enable kube-apiserver
  • Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Deploying kube-controller-manager

Create the configuration file

cat > /data/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\ 
--service-cluster-ip-range=10.0.0.0/24 \\ 
--cluster-signing-cert-file=/data/kubernetes/ssl/ca.pem \\ 
--cluster-signing-key-file=/data/kubernetes/ssl/ca-key.pem \\ 
--root-ca-file=/data/kubernetes/ssl/ca.pem \\ 
--service-account-private-key-file=/data/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

  • --master: connect to the apiserver over the local insecure port 8080
  • --leader-elect: automatic leader election when multiple replicas run (HA)
  • --cluster-signing-cert-file / --cluster-signing-key-file: the CA that automatically signs kubelet certificates; must match the apiserver's CA

Managing controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/data/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
  • Start and enable at boot

systemctl daemon-reload && systemctl start kube-controller-manager && systemctl enable kube-controller-manager

Deploying kube-scheduler

Create the configuration file

cat > /data/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

  • --master: connect to the apiserver over the local insecure port 8080
  • --leader-elect: automatic leader election when multiple replicas run (HA)

Managing kube-scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-scheduler.conf
ExecStart=/data/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
  • Start and enable at boot

systemctl daemon-reload && systemctl start kube-scheduler && systemctl enable kube-scheduler

At this point the master deployment is complete. Check the cluster status:

kubectl get cs               

Output like the following confirms the master is healthy:

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}  

Deploying Worker Nodes

Current node: 192.168.56.14 (the master also doubles as a worker node).

  • Software required:
    • kubelet
    • kube-proxy

Preparing the Base Packages

# Create the installation directories
mkdir -p /data/kubernetes/{cfg,bin,ssl,logs}
# Download and copy the binaries
wget https://dl.k8s.io/v1.18.4/kubernetes-server-linux-amd64.tar.gz && tar xf kubernetes-server-linux-amd64.tar.gz 
cp kubernetes/server/bin/kube-proxy /data/kubernetes/bin/
cp kubernetes/server/bin/kubelet /data/kubernetes/bin/

Deploying kubelet

Create the kubelet configuration file

cat > /data/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/data/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/data/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/data/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/data/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

  • --hostname-override: display name, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: empty path, generated automatically; later used to talk to the apiserver
  • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
  • --config: configuration parameters file
  • --cert-dir: directory where kubelet certificates are generated
  • --pod-infra-container-image: image for the pod infrastructure (pause) container

Create the parameters file
cat > /data/kubernetes/cfg/kubelet-config.yml << EOF 
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /data/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
Generate the bootstrap.kubeconfig file

KUBE_APISERVER="https://192.168.56.14:6443"   # apiserver IP:PORT
TOKEN="2b4b65d2e33e24dc0beafddda6dd4b23"      # must match token.csv
# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=/data/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" --token=${TOKEN} --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user="kubelet-bootstrap" --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Copy the generated config into cfg
cp bootstrap.kubeconfig /data/kubernetes/cfg
Managing kubelet with systemd

  • Create the unit file

cat > /usr/lib/systemd/system/kubelet.service << EOF 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/data/kubernetes/cfg/kubelet.conf
ExecStart=/data/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

  • Start kubelet and enable it at boot

systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet

If anything fails, check the logs promptly; most problems turn out to be formatting mistakes in /data/kubernetes/cfg/kubelet-config.yml.

Approving the kubelet Certificate Request and Joining the Cluster

  • List kubelet certificate requests

kubectl get csr  

Output:

NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-zJmrG00TW4zKRNPKoNo3ag0ojgPwEM2M3ARCsvVVyiI   60s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
  • Approve the certificate request to admit the node into the cluster

 kubectl certificate approve node-csr-zJmrG00TW4zKRNPKoNo3ag0ojgPwEM2M3ARCsvVVyiI
  • List nodes

kubectl get node

Output:

NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   2s    v1.18.4

Note: the node shows NotReady because the network plugin has not been deployed yet.

Deploying kube-proxy

Create the configuration file

cat > /data/kubernetes/cfg/kube-proxy.conf << EOF 
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--config=/data/kubernetes/cfg/kube-proxy-config.yml"
EOF
Create the parameters file
cat > /data/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /data/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
EOF
Generate the kube-proxy.kubeconfig file

  • Issue the certificate

cd ~/TLS/k8s/
cat > kube-proxy-csr.json << EOF
{
    "CN":"system:kube-proxy",
    "hosts":[

    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"k8s",
            "OU":"System"
        }
    ]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  • Generate the kubeconfig

KUBE_APISERVER="https://192.168.56.14:6443"
kubectl config set-cluster kubernetes --certificate-authority=/data/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=./kube-proxy.pem --client-key=./kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Copy the config into cfg
cp kube-proxy.kubeconfig /data/kubernetes/cfg/
Managing kube-proxy with systemd

  • Create the unit file

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-proxy.conf
ExecStart=/data/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
  • Start and enable at boot

systemctl daemon-reload && systemctl start kube-proxy && systemctl enable kube-proxy
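A quick way to confirm kube-proxy is up: query its metrics port (10249, from kube-proxy-config.yml above). The /proxyMode path is assumed to be served by this kube-proxy version:

curl -s http://127.0.0.1:10249/proxyMode   # should print the active mode, e.g. iptables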

Deploying the CNI Network

wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz && mkdir /opt/cni/bin -p
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin/
  • Deploy flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
kubectl apply -f kube-flannel.yml
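The node only flips to Ready once the flannel pod on it is running; a quick check (the app=flannel label is the one used by the stock kube-flannel.yml):

kubectl -n kube-system get pods -l app=flannel -o wide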
  • Check deployment status

kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   79m   v1.18.4

Authorizing the apiserver to Access kubelet

  • Create the manifest

cat > apiserver-to-kubelet-rbac.yaml  <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

## Apply the authorization
kubectl apply -f apiserver-to-kubelet-rbac.yaml
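Without this binding, commands that go through the apiserver to the kubelet, such as kubectl logs and kubectl exec, are rejected with a 403. A quick illustrative check after applying it (picks whatever pod happens to be first in kube-system):

POD=$(kubectl -n kube-system get pods -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system logs "$POD" --tail=5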

Adding Worker Nodes

Syncing Files and Configuration

  • Copy the node-related configuration from 192.168.56.14 to 192.168.56.15 and 192.168.56.16 (create the target directories on the new nodes first if they do not exist)

# Copy kubelet and kube-proxy
scp /data/kubernetes/bin/kubelet root@192.168.56.15:/data/kubernetes/bin/
scp /data/kubernetes/bin/kube-proxy root@192.168.56.15:/data/kubernetes/bin/
scp /data/kubernetes/bin/kubelet root@192.168.56.16:/data/kubernetes/bin/
scp /data/kubernetes/bin/kube-proxy root@192.168.56.16:/data/kubernetes/bin/
# Copy the CNI plugins
scp -rp /opt/cni/ root@192.168.56.15:/opt
scp -rp /opt/cni/ root@192.168.56.16:/opt
# Copy the certificates
scp /data/kubernetes/ssl/ca.pem 192.168.56.15:/data/kubernetes/ssl/
scp /data/kubernetes/ssl/ca.pem 192.168.56.16:/data/kubernetes/ssl/
# Copy the configuration files
scp /data/kubernetes/cfg/kube-proxy* 192.168.56.15:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/kube-proxy* 192.168.56.16:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/kubelet* 192.168.56.15:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/kubelet* 192.168.56.16:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/bootstrap.kubeconfig 192.168.56.15:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/bootstrap.kubeconfig 192.168.56.16:/data/kubernetes/cfg/
# Copy the systemd unit files
scp /usr/lib/systemd/system/kubelet.service 192.168.56.15:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kubelet.service 192.168.56.16:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.15:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.16:/usr/lib/systemd/system/

Deleting Certificates and kubeconfig Files

On the new nodes, the files just copied over include credentials that were generated for 192.168.56.14; delete them:

rm /data/kubernetes/cfg/kubelet.kubeconfig
rm -f /data/kubernetes/ssl/kubelet*

Note: these files are generated automatically once a certificate request is approved and differ per node, so they must be deleted and regenerated.

Configuring the New Nodes

  • Modify the kubelet and kube-proxy configuration files (use k8s-node1 on 192.168.56.15 and k8s-node2 on 192.168.56.16)

vi /data/kubernetes/cfg/kubelet.conf 
--hostname-override=k8s-node1
vi /data/kubernetes/cfg/kube-proxy-config.yml 
hostnameOverride: k8s-node1
  • Start kubelet and kube-proxy and enable them at boot

systemctl daemon-reload && systemctl start kubelet && systemctl start kube-proxy
systemctl enable kubelet && systemctl enable kube-proxy

Admitting the Nodes on the Master

  • List the pending node CSRs

kubectl get csr  
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-63HqXs5ifBWopOS6dZAO8bRJ8PImXljxbOt-2wV5hHg   7m57s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-t6XNO793xatm4gCwQiYH4QDOeIY4yMx8C0SUXSNye7c   38s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
  • Approve the nodes

kubectl certificate approve node-csr-63HqXs5ifBWopOS6dZAO8bRJ8PImXljxbOt-2wV5hHg
kubectl certificate approve node-csr-t6XNO793xatm4gCwQiYH4QDOeIY4yMx8C0SUXSNye7c
  • Check status

kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    <none>   3h32m   v1.18.4
k8s-node1    Ready    <none>   106s    v1.18.4
k8s-node2    Ready    <none>   105s    v1.18.4

If a newly added node does not become Ready, re-apply kube-flannel.yml.

Deploying the Dashboard and CoreDNS

Deploying the Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
vim recommended.yaml
# Change the kubernetes-dashboard Service to type NodePort (30001) so it is reachable from outside:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
# Deploy the dashboard
kubectl apply -f recommended.yaml
# Check status
kubectl get pods,svc -n kubernetes-dashboard
  • Create a dashboard access token

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')     # fetch the token

You can now browse to https://NodeIP:30001, NodeIP being the host's IP.

  • Chrome rejects the dashboard's default certificate, so re-issue a self-signed certificate

# Generate the certificate with cfssl, still on the 192.168.56.14 master node
cd ~/TLS/k8s/
cat > dashboard-csr.json <<EOF
{
    "CN":"system:kubernetes-dashboard",
    "hosts":[

    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"k8s",
            "OU":"System"
        }
    ]
}
EOF
# Issue the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare kubernetes-dashboard
# The generated certificates:
ll kubernetes-dashboard*pem
-rw------- 1 root root 1675 Jul  9 10:53 kubernetes-dashboard-key.pem
-rw-r--r-- 1 root root 1415 Jul  9 10:53 kubernetes-dashboard.pem
# Copy the certificates to /data/kubernetes/ssl
cp kubernetes-dashboard*pem /data/kubernetes/ssl/
  • Delete the default secret and create a new one from the self-signed certificate

# Delete the default secret
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
# Create a new secret from the self-signed certificate
kubectl create secret generic kubernetes-dashboard-certs --from-file=/data/kubernetes/ssl/kubernetes-dashboard-key.pem --from-file=/data/kubernetes/ssl/kubernetes-dashboard.pem -n kubernetes-dashboard
  • Modify recommended.yaml to point the dashboard at the new certificate

vim recommended.yaml
args:
            - --auto-generate-certificates
            - --tls-key-file=kubernetes-dashboard-key.pem
            - --tls-cert-file=kubernetes-dashboard.pem
            - --namespace=kubernetes-dashboard
# Re-apply the manifest
kubectl apply -f recommended.yaml
# View the token; with it you can log in at https://NodeIP:30001 (NodeIP is the host IP)
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')  

Deploying CoreDNS

CoreDNS provides Service name resolution inside the cluster.

# Fetch the coredns deployment scripts
git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes/
# Edit the deploy script to pin the cluster DNS IP (must match clusterDNS in kubelet-config.yml)
vim deploy.sh
if [[ -z $CLUSTER_DNS_IP ]]; then
  # Default IP to kube-dns IP
  # CLUSTER_DNS_IP=$(kubectl get service --namespace kube-system kube-dns -o jsonpath="{.spec.clusterIP}")
  CLUSTER_DNS_IP=10.0.0.2
fi

# Run the deployment
yum -y install epel-release jq
./deploy.sh | kubectl apply -f - 
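The CoreDNS pods should then come up in kube-system; the k8s-app=kube-dns label below is the one used by the coredns/deployment manifests:

kubectl -n kube-system get pods -l k8s-app=kube-dns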
  • DNS resolution test

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh 
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

High-Availability Architecture

As a container cluster system, Kubernetes already provides application-layer high availability: health checks plus restart policies give pods self-healing, the scheduler spreads pods across nodes while maintaining the desired replica count, and pods are automatically restarted on other nodes when a node fails.

For the cluster itself, high availability involves two further layers: the etcd database and the Kubernetes master components. etcd HA is already achieved with the 3-node cluster above; this section covers making the master highly available.

The master acts as the control center, maintaining cluster health by continuously talking to the kubelet and kube-proxy on the worker nodes. If the master fails, no cluster management is possible via kubectl or the API.

The master runs three services: kube-apiserver, kube-controller-manager and kube-scheduler. The latter two already achieve HA on their own through leader election, so master HA mainly concerns kube-apiserver. Since it serves an HTTP API, making it highly available is like any web server: put a load balancer in front of it, and it can also be scaled horizontally.

Multi-master architecture diagram (image omitted): worker nodes reach multiple apiservers through a load-balanced VIP, built below with Nginx + Keepalived.

Scale-out Procedure

New host: centos7-node7 (192.168.56.17), role k8s-master2

  • System initialization
  • Install docker

wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
tar xf docker-19.03.9.tgz
mv docker/* /usr/bin/
mkdir /data/docker
mkdir /etc/docker
  • Create the etcd certificate directory (mkdir /data/etcd/ssl -p)
  • Copy files from master-1 to the new machine

# Create the directories
mkdir /data/kubernetes/{ssl,bin,cfg,logs} -pv
# CNI
scp -rp /opt/cni/ 192.168.56.17:/opt
# Certificates
scp -rp /data/etcd/ssl/* 192.168.56.17:/data/etcd/ssl/
scp -rp /data/kubernetes/ssl/* 192.168.56.17:/data/kubernetes/ssl/
# Binaries
scp -rp /data/kubernetes/bin/kube* 192.168.56.17:/data/kubernetes/bin
# Configuration files
scp /data/kubernetes/cfg/* 192.168.56.17:/data/kubernetes/cfg/

# systemd unit files
scp -rp /usr/lib/systemd/system/kube* 192.168.56.17:/usr/lib/systemd/system/
scp -rp /usr/lib/systemd/system/docker.service 192.168.56.17:/usr/lib/systemd/system/
  • Modify the configuration files

$ vim /data/kubernetes/cfg/kube-apiserver.conf
--bind-address=192.168.56.17 \
--advertise-address=192.168.56.17 \
$ vim /data/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2
$ vim /data/kubernetes/cfg/kube-proxy-config.yml 
hostnameOverride: k8s-master2
  • Start the services

systemctl daemon-reload && systemctl start docker && systemctl start kube-apiserver && systemctl start kube-controller-manager && systemctl start kube-scheduler && systemctl start kubelet && systemctl start kube-proxy
systemctl enable kube-apiserver && systemctl enable docker && systemctl enable kube-controller-manager && systemctl enable kube-scheduler && systemctl enable kubelet && systemctl enable kube-proxy
  • Check the cluster status from master-1

$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}  
  • Admit the new master node's kubelet from k8s-master1

$ kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-wAP8aDK22Olbn5G34KDaH9xvAn49UyE2DkacElw4SFE   2m19s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
$ kubectl certificate approve node-csr-wAP8aDK22Olbn5G34KDaH9xvAn49UyE2DkacElw4SFE
$ kubectl get node
  • At this point the dashboard, logged in with the earlier credentials, cannot see cluster resources; create a dedicated admin user

# Create the authorization manifest
cat > admin.yml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Apply the authorization
kubectl apply -f admin.yml
# Fetch the token
$ kubectl get secret -A | grep admin-user
kubernetes-dashboard   admin-user-token-678nn             kubernetes.io/service-account-token   3      26s
$ kubectl describe secret admin-user-token-678nn -n kubernetes-dashboard

Deploying the Nginx Load Balancer

Nginx is a mainstream web server and reverse proxy; here it provides layer-4 load balancing for the apiservers.

Keepalived is a mainstream high-availability tool that implements active/standby failover by binding a floating VIP. In this topology, Keepalived decides whether to fail over (move the VIP) based on the state of the Nginx process: if the Nginx master node dies, the VIP automatically binds to the Nginx backup node, keeping the VIP reachable and Nginx highly available.

Resource Plan

| Node hostname | IP | Role | Software |
| --- | --- | --- | --- |
| centos7-node8 | 192.168.56.18 | nginx+keepalived | nginx, keepalived |
| centos7-node9 | 192.168.56.19 | nginx+keepalived | nginx, keepalived |

Installation and Configuration

  • Install the packages

yum install epel-release -y
yum install nginx keepalived -y

  • Configure
    • nginx configuration

$ vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}
# Layer-4 load balancing for the two master apiservers
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver { 
        server 192.168.56.14:6443; 
        server 192.168.56.17:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include  /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name _;
        location / {
        }
    }
}   
  • keepalived configuration

$ vim /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
      acassen@firewall.loc
      failover@firewall.loc
      sysadmin@firewall.loc

   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
   script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.56.111/24
    }
    track_script {
        check_nginx
    }
}

Note: virtual_router_id is the VRRP route ID and must be unique per instance. priority 100 applies to the master; set it to 90 on the backup server, so the two machines differ.

  • The health-check script referenced in the config above

$ vim /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then 
    exit 1
else
    exit 0
fi

$ chmod +x /etc/keepalived/check_nginx.sh
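To exercise failover, stop Nginx on the node currently holding the VIP and watch the address move to the backup (interface eth1 as configured above):

systemctl stop nginx                        # run on the current VIP holder
ip addr show eth1 | grep 192.168.56.111     # run on the backup node; the VIP should now appear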
  • Start the services and enable them at boot

systemctl daemon-reload && systemctl start nginx && systemctl start keepalived && systemctl enable nginx && systemctl enable keepalived
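Finally, a sanity check that the apiserver answers through the VIP. The -k flag skips certificate verification, since 192.168.56.111 is not among the IPs reserved in the apiserver certificate's hosts list; even an RBAC 403 response proves the load-balanced path works:

curl -k https://192.168.56.111:6443/version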