Author: Xu Tao. Originally published on the WeChat public account Cloud Services and SRE Architect Community (ai-cloud-ops).
There are three ways to expose a service running in a k8s cluster to external clients: a NodePort Service, a LoadBalancer Service, and an Ingress.
Before you can use an ingress, an ingress controller must first be deployed in the k8s cluster. The ingress controller itself normally needs a LoadBalancer in front of it, giving a basic access flow like this:
Internet <-> public cloud LoadBalancer <-> k8s ingress controller (ingress) <-> k8s service <-> k8s pods
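On a public cloud, that LoadBalancer is normally obtained simply by declaring the ingress controller's Service with type LoadBalancer. A minimal sketch (the labels match the ingress-nginx service used later in this article; it only works where the cloud provider can provision external load balancers):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer          # the cloud provider provisions an external LB and fills in EXTERNAL-IP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443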
When you build a k8s cluster yourself on a few bare-metal or virtual machines, there is usually no public-cloud LoadBalancer available, so the workaround is to expose the ingress controller through a NodePort service. The access flow then becomes:
client (/etc/hosts) <-> node IP + port <-> k8s ingress controller (ingress) <-> k8s service <-> k8s pods
The sections below walk backwards along this access flow, describing how each component is deployed and how to verify it.
The example uses the publicly available luksa/kubia image, a Node.js web service that listens on port 8080 and returns the pod's hostname to the caller. The pod definition file is as follows:
kubia-replicaset.yaml
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
Deploy the ReplicaSet and check the pod status, then exec into a pod and use curl to verify the service the pods provide:
# kubectl apply -f kubia-replicaset.yaml
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kubia-4mcp9 1/1 Running 1 20h 10.244.1.32 slcaa872.us.abc.com
kubia-ftmdz 1/1 Running 1 20h 10.244.2.31 slcaa873.us.abc.com
kubia-zwh72 1/1 Running 1 20h 10.244.1.33 slcaa872.us.abc.com
# kubectl exec -it kubia-4mcp9 bash
root@kubia-4mcp9:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 747324 27980 ? Ssl Sep19 0:00 node app.js
root 84 0.0 0.0 20252 3268 pts/2 Ss 09:52 0:00 bash
root 93 0.0 0.0 17500 2072 pts/2 R+ 09:52 0:00 ps aux
root@kubia-4mcp9:/# curl --noproxy "*" localhost:8080
You've hit kubia-4mcp9
root@kubia-4mcp9:/# curl --noproxy "*" 10.244.2.31:8080
You've hit kubia-ftmdz
root@kubia-4mcp9:/# exit
A Service selects its pods with a label selector; see the configuration file kubia-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
Deploy and verify the Service; its Endpoints show how it is wired to the backing pods:
# kubectl apply -f kubia-svc.yaml
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
kubia ClusterIP 10.107.59.127 <none> 80/TCP 21h
[root@slcaa871 Chapter05]# kubectl describe svc kubia
Name: kubia
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=kubia
Type: ClusterIP
IP: 10.107.59.127
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.1.32:8080,10.244.1.33:8080,10.244.2.31:8080
Session Affinity: None
Events: <none>
root@kubia-4mcp9:/# curl --noproxy "*" 10.107.59.127
You've hit kubia-ftmdz
root@kubia-4mcp9:/# curl --noproxy "*" 10.244.1.32:8080
You've hit kubia-4mcp9
root@kubia-4mcp9:/# exit
An ingress controller is a proxy service deployed inside the k8s cluster. Several ingress controllers exist; here we use the most common one, the Nginx ingress controller. Deploying it involves a namespace, a service account, a cluster role binding, configmaps, and a series of other objects. Fortunately, the current release bundles all of these definitions into a single file named mandatory.yaml, so applying that one file creates everything. The file is too large to reproduce here.
When the k8s cluster has no public-cloud LoadBalancer, the ingress controller has to be exposed with a NodePort service instead. The ingress-nginx project provides a bare-metal service-nodeport.yaml; applying it creates a NodePort-type service. Because this file does not set explicit node ports in its ports list, k8s picks a port at random, and every worker node reserves that port for forwarding requests to the backing service (see the sketch after the listing for pinning the ports).
service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
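If predictable node ports are preferred over randomly assigned ones, the ports list can pin them with explicit nodePort values (they must fall in the default 30000-32767 range). A minimal sketch, reusing the 30880/30087 ports this demo cluster happened to be assigned:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30880          # any free port in 30000-32767 works; 30880/30087 are just this cluster's values
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
    nodePort: 30087
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx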
Deploy and verify the ingress controller. Once the controller is running, you can exec into its pod and hit the default Nginx server:
# kubectl apply -f mandatory.yaml
# kubectl get ns
NAME STATUS AGE
default Active 16d
ingress-nginx Active 10s
kube-node-lease Active 16d
kube-public Active 16d
kube-system Active 16d
# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-79f6884cf6-rwjx5 1/1 Running 0 2m58s
# kubectl exec -it nginx-ingress-controller-79f6884cf6-rwjx5 -n ingress-nginx bash
www-data@nginx-ingress-controller-79f6884cf6-rwjx5:/etc/nginx$ curl localhost
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
www-data@nginx-ingress-controller-79f6884cf6-rwjx5:/etc/nginx$ curl localhost/nginx_status
Active connections: 3
server accepts handled requests
265 265 264
Reading: 0 Writing: 1 Waiting: 2
www-data@nginx-ingress-controller-79f6884cf6-rwjx5:/etc/nginx$ exit
# kubectl apply -f service-nodeport.yaml
service/ingress-nginx created
# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-controller-79f6884cf6-rwjx5 1/1 Running 0 12m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx NodePort 10.104.151.10 <none> 80:30880/TCP,443:30087/TCP 7s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress-controller 1/1 1 1 12m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-controller-79f6884cf6 1 1 1 12m
Note the service that was created, along with its CLUSTER-IP and PORT(S), and check its status:
# kubectl exec -it kubia-4mcp9 bash
root@kubia-4mcp9:/# curl 10.104.151.10
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
root@kubia-4mcp9:/# curl 10.107.59.127
You've hit kubia-4mcp9
root@kubia-4mcp9:/# exit
exit
[root@slcaa871 ingress-controller-demo]# curl --noproxy "*" slcaa873:30880
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
Accessing the ingress controller directly returns a 404, and reaching it through a worker node's NodePort (30880) also returns a 404. This is because no ingress has been defined yet, so the ingress controller has no backend service to route these requests to.
An ingress maps a host name and path to a backend service, much like a virtual host definition in a web server; see ingress-myapp.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.kubia.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubia
          servicePort: 80
The serviceName in this file is kubia, the Service deployed earlier. Applying the file adds the corresponding host name, path, and backend service to the Nginx configuration inside the ingress controller, and from that point the backend service is reachable from outside. Note that myapp.kubia.com, the host name referenced by the ingress, is defined in the client's /etc/hosts file.
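The manifest above uses the extensions/v1beta1 API that matched the cluster in this walkthrough; on newer clusters (v1.19 and later) that API group has been removed, and an equivalent manifest would look roughly like this (a sketch using the networking.k8s.io/v1 schema):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
spec:
  ingressClassName: nginx      # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: myapp.kubia.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia
            port:
              number: 80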
# kubectl apply -f ingress-myapp.yaml
# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
ingress-myapp myapp.kubia.com 80 44s
# kubectl describe ingress ingress-myapp
Name: ingress-myapp
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
myapp.kubia.com
/ kubia:80 (10.244.1.32:8080,10.244.1.33:8080,10.244.2.31:8080)
# kubectl exec -it nginx-ingress-controller-79f6884cf6-rwjx5 -n ingress-nginx bash
www-data@nginx-ingress-controller-79f6884cf6-rwjx5:/etc/nginx$ curl -H "HOST: myapp.kubia.com" localhost
You've hit kubia-zwh72
# curl --noproxy "*" slcaa873:30880
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
# grep kubia /etc/hosts
10.242.18.72 slcaa873.us.abc.com slcaa873 myapp.kubia.com
# curl --noproxy "*" myapp.kubia.com:30880
You've hit kubia-4mcp9
When the k8s cluster sits behind a corporate proxy server, proxy handling needs special attention, or failures will appear that are hard to diagnose. Access to external services must go through the proxy, while access to the cluster's own services must bypass it. The Docker configuration below is one example:
vi /etc/systemd/system/docker.service.d/docker-sysconfig.conf
[Service]
Environment="HTTP_PROXY=http://www-proxy.us.abc.com:80/"
Environment="NO_PROXY=localhost,127.0.0.0/8,10.96.0.0/12,10.244.0.0/16"
As this walkthrough shows, even a simple HTTP service on a k8s cluster has a long access path, and a misconfiguration anywhere along that path makes the service unreachable from outside. Although k8s itself provides rich service discovery, load balancing, and self-healing features, making good use of this cloud operating system is not a simple matter.
Xu Tao has worked in the operations department of China's civil aviation information company, on HP China's performance optimization team, and on Oracle's systems architecture and performance services team; he currently works in Oracle's database development organization.