Before Helm, deploying an application to Kubernetes meant creating the Deployment, Service, ConfigMap, and so on one by one, which is tedious. And as more projects are split into microservices, deploying and managing complex applications in containers becomes even harder. Helm packages these resources together and supports versioned, controlled releases, which greatly simplifies deploying and managing Kubernetes applications.
Helm is the official package manager for Kubernetes, comparable to YUM on CentOS; it encapsulates the deployment workflow. Helm has several important concepts: chart, release, and repository.
Helm (v2) consists of two components: the Helm client and the Tiller server.
The Helm client is responsible for creating and managing charts and releases and for talking to Tiller. The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API Server.
As a package manager for Kubernetes, Helm provides the following features (a command sketch follows the list):
- Create new charts
- Package charts into tgz archives
- Upload charts to a chart repository, or download charts from one (the official chart repository is https://hub.helm.sh)
- Install and uninstall charts in a Kubernetes cluster
- Manage the release lifecycle of charts installed with Helm
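The features above map onto a handful of CLI commands. A minimal sketch with Helm v2; the chart, repository name, and URL are placeholders:
# Create a new chart skeleton
$ helm create mychart
# Package the chart into a tgz archive
$ helm package mychart
# Add a chart repository and download a chart from it (URL and names are placeholders)
$ helm repo add myrepo https://example.com/charts
$ helm fetch myrepo/somechart
# Install a chart into the cluster, then uninstall it
$ helm install ./mychart
$ helm delete RELEASE_NAME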
This guide installs Helm v2.16.12.
# Download
$ wget https://get.helm.sh/helm-v2.16.12-linux-amd64.tar.gz
# Extract
$ tar zxvf helm-v2.16.12-linux-amd64.tar.gz
# Put helm on the system PATH
$ cd linux-amd64/
$ cp helm /usr/local/bin/
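At this point the client installation can be checked; the Tiller server is not installed yet, so only the client version is queried:
# Confirm that helm is on the PATH and print the client version
$ helm version --client
$ helm help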
To install the server-side Tiller, this machine also needs a working kubectl and a kubeconfig file, so that kubectl can reach the API Server from it. Here, node1 already has kubectl configured.
Because the Kubernetes API Server has RBAC enabled, a service account named tiller must be created for Tiller and bound to an appropriate role; see Role-based Access Control in the Helm documentation for details. For simplicity, the built-in cluster-admin ClusterRole is bound to it here. Create rbac-config.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Create the resources:
$ kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Deploy Tiller into the Kubernetes cluster:
$ helm init --service-account tiller --skip-refresh
$ kubectl get pod -n kube-system
tiller-deploy-565984b594-vtr9h 1/1 Running 0 17m
$ helm version
Client: &version.Version{SemVer:"v2.16.12", GitCommit:"47f0b88409e71fd9ca272abc7cd762a56a1c613e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.12", GitCommit:"47f0b88409e71fd9ca272abc7cd762a56a1c613e", GitTreeState:"clean"}
The official Helm chart hub: https://artifacthub.io/
# Create the chart directory
$ mkdir hello-world
$ cd hello-world
# Create the self-describing Chart.yaml; it must define name and version
$ cat << 'EOF' > Chart.yaml
name: hello-world
version: 1.0.0
EOF
# Create the template files used to generate Kubernetes resource manifests
$ mkdir templates
$ cat << 'EOF' > templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hub.adaixuezhang.cn/library/myapp:v1
          ports:
            - containerPort: 80
              protocol: TCP
EOF
$ cat << 'EOF' > templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: hello-world
EOF
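Before creating a release, the chart can be checked for obvious mistakes. A minimal sketch, run from inside the hello-world directory:
# Validate the chart structure and templates
$ helm lint .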
Create a release:
# The command helm install RELATIVE_PATH_TO_CHART creates a release
$ helm install .
$ helm --help
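Once a release exists it can be inspected and removed. A short sketch, where RELEASE_NAME stands for the name printed by helm install:
# List all releases managed by Tiller
$ helm ls
# Show the resources and status of a single release
$ helm status RELEASE_NAME
# Delete a release (--purge also frees the release name for reuse)
$ helm delete --purge RELEASE_NAME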
# Configuration lives in values.yaml at the chart root (next to Chart.yaml, not under templates/)
$ cat << 'EOF' > values.yaml
image:
  repository: hub.adaixuezhang.cn/library/myapp
  tag: 'v1'
EOF
# Values defined in this file are available in the templates through the .Values object
$ cat << 'EOF' > templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 80
              protocol: TCP
EOF
# Values in values.yaml can be overridden at deploy time with --values YAML_FILE_PATH or --set key1=value1,key2=value2
$ helm install . --set image.tag='v2'
$ helm upgrade -f values.yaml test .
# Because templates generate the Kubernetes manifests dynamically, it is very useful to preview the rendered output
# Use --dry-run --debug to print the generated manifests without actually deploying
$ helm install . --dry-run --debug --set image.tag='v2'
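After an upgrade such as the one above, earlier revisions remain available. A sketch, assuming the release is named test as in the helm upgrade command:
# Show the revision history of the release
$ helm history test
# Roll back to revision 1 if the new values misbehave
$ helm rollback test 1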
Preparation:
# Fetch the Helm chart in advance
$ helm fetch stable/kubernetes-dashboard
$ tar zxvf kubernetes-dashboard-1.11.1.tgz
$ cd kubernetes-dashboard/
$ ls
Chart.yaml README.md templates values.yaml
# Prepare k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 in advance; the image has to be downloaded on a machine with Internet access
Create kubernetes-dashboard.yaml:
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
        - k8s.frognew.com
rbac:
  clusterAdminRole: true
Deploy:
$ helm install . -n kubernetes-dashboard \
--namespace kube-system \
-f kubernetes-dashboard.yaml
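The release can be verified right after installation (the release name matches the -n flag above):
$ helm status kubernetes-dashboard
$ kubectl -n kube-system get pods | grep dashboard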
Check:
$ kubectl get svc -n kube-system
kubernetes-dashboard ClusterIP 10.98.209.153 <none> 443/TCP 7m46s
# Change the Service type to NodePort so the dashboard can be reached from outside
$ kubectl -n kube-system edit svc kubernetes-dashboard
spec:
  ...
  type: NodePort
$ kubectl get svc -n kube-system
kubernetes-dashboard NodePort 10.98.209.153 <none> 443:31128/TCP 10m
Access the dashboard at https://host1:31128.
Now configure the dashboard.
Choose token-based login:
Get the kubernetes-dashboard-token secret used for login:
$ kubectl -n kube-system get secret |grep kubernetes-dashboard-token
kubernetes-dashboard-token-zk5h5 kubernetes.io/service-account-token 3 14m
$ kubectl -n kube-system describe secret kubernetes-dashboard-token-zk5h5
Name: kubernetes-dashboard-token-zk5h5
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: 67e98daf-5ad2-48bb-99cb-3645bfe47782
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Imtmck13OU5WbUhwSXJhX3RrNkZHWk1sTjI4T0pfeWVEOGJLN0tqa1p1U2cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi16azVoNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY3ZTk4ZGFmLTVhZDItNDhiYi05OWNiLTM2NDViZmU0Nzc4MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.Va3Cv5Px0M1CJvcj6u2ssr--2vNNMqKlHD3o0KZx7ZQs4QOt8HOC0_fn93s7N7qAMUGil0Oi9oOXG08EWH6sHX4V2w0HsYNTseUgjXmcxxPpoVzBZCTMeWd7GNGBSaH3DlVV_pVSnuWSpyIqGwiOC1CUJuufVNp1GLaUuk5J4CqniR-1Jtu2_Qab0wWVexJoK6hJQ-c1cvGPXIaeLNp09PMMgi-CdOrgqCdWAQhD3O-VHaGZzMRKDfwMal-IZ0ZE7xTGhmwHTjWI67tcoJxAQWwLY3vmH52QaBv_kPSsBBq73wAf0-T8Y1PA1x4MsgFUGBs8pgzvhr7giC8eKh4tsA
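For convenience, the token can also be extracted in one line instead of being copied out of the describe output (a sketch; the secret name is looked up by its kubernetes-dashboard-token prefix):
$ kubectl -n kube-system get secret \
    $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}') \
    -o jsonpath='{.data.token}' | base64 -d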
Paste the token into the token field and log in.
Logging in with the token returned a 404; the dashboard logs show the following error:
2020/10/01 12:16:28 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Analysis: Heapster is a cluster monitoring and performance-analysis tool; HPA, the Dashboard, and kubectl top all rely on the data it collects. However, Heapster has been deprecated since Kubernetes 1.8 in favor of metrics-server.
Workaround: install Heapster.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
  - kind: ServiceAccount
    name: heapster
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: heapster
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
        - name: heapster
          # image: k8s.gcr.io/heapster-amd64:v1.5.4  (the default Google image is replaced below with the Aliyun mirror, since the Google registry may not be reachable)
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.4
          command:
            - /heapster
            - --source=kubernetes:https://kubernetes.default?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an add-on, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 8082
  selector:
    k8s-app: heapster
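Assuming the manifest above is saved as heapster.yaml (the filename is just a choice), apply it and check that metrics start to flow:
$ kubectl apply -f heapster.yaml
$ kubectl -n kube-system get pods -l k8s-app=heapster
# Once Heapster has collected data, kubectl top and the dashboard graphs should work
$ kubectl top node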
# Clone the project
$ git clone https://github.com/prometheus-operator/kube-prometheus.git
Edit grafana-service.yaml to expose Grafana via NodePort:
$ cd kube-prometheus/manifests
$ vim grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort          # added
  ports:
    - name: http
      port: 3000
      targetPort: http
      nodePort: 30100     # added
  selector:
    app: grafana
Edit prometheus-service.yaml to expose Prometheus via NodePort:
$ vim prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
    - name: web
      port: 9090
      targetPort: web
      nodePort: 30200
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
Edit alertmanager-service.yaml to expose Alertmanager via NodePort:
$ vim alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
    - name: web
      port: 9093
      targetPort: web
      nodePort: 30300
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP
Deploy:
# Apply the manifests
$ kubectl apply -f manifests/setup/
$ kubectl apply -f manifests/
# Check the deployment status
$ kubectl -n monitoring get pods
# Note: some pods may fail to start because their images cannot be pulled; download and import those images manually (use kubectl -n monitoring describe pods POD_NAME to see which)
# Verify that metrics are available
$ kubectl top pods -n kube-system
Prometheus address: http://host1:30200
Check the targets page:
The Prometheus web UI supports basic queries; for example, the CPU usage of each Pod in the cluster can be queried with:
sum by (pod_name)( rate(container_cpu_usage_seconds_total{image!="", pod_name!=""}[1m] ) )
Grafana address: http://host1:30100
Default credentials: admin / admin; change the password to admin12345.
First add the data source and import a dashboard template: choose Prometheus as the data source and keep the default parameters.
Horizontal Pod Autoscaling (HPA) automatically scales the number of Pods in a ReplicationController, Deployment, or ReplicaSet based on CPU utilization.
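A minimal sketch of creating an HPA for the hello-world Deployment from earlier (the thresholds are arbitrary example values and require working cluster metrics):
# Scale hello-world between 1 and 10 replicas, targeting 50% average CPU utilization
$ kubectl autoscale deployment hello-world --cpu-percent=50 --min=1 --max=10
# Watch the autoscaler's current and target metrics
$ kubectl get hpa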
Deploy EFK (Elasticsearch, Fluentd, Kibana) with Helm.
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ kubectl create namespace efk
$ helm fetch incubator/elasticsearch
$ helm install --name els1 --namespace efk -f values.yaml incubator/elasticsearch
# Verify Elasticsearch from inside the cluster (replace Elasticsearch:Port with the els1 service address and port)
$ kubectl run cirros-$RANDOM --rm -it --image=cirros -- /bin/sh
$ curl Elasticsearch:Port/_cat/nodes
$ helm fetch stable/fluentd-elasticsearch
$ vim values.yaml
# Change the Elasticsearch endpoint inside
$ helm install --name flu1 --namespace=efk -f values.yaml stable/fluentd-elasticsearch
$ helm fetch stable/kibana --version 0.14.8 # keep the Kibana and Elasticsearch versions aligned
$ vim values.yaml
# Change the Elasticsearch endpoint inside
$ helm install --name kib1 --namespace efk -f values.yaml stable/kibana --version 0.14.8
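To confirm the stack is running, list the pods and services in the efk namespace and port-forward to Kibana (the service name and port below are placeholders; check kubectl get svc for the real values):
$ kubectl -n efk get pods
$ kubectl -n efk get svc
# Forward a local port to the Kibana service, then open http://localhost:5601
$ kubectl -n efk port-forward svc/KIBANA_SVC_NAME 5601:KIBANA_SVC_PORT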