There are several ways to expose a Kubernetes Service to external clients; the configurations tested below proxy to a NodePort, to the Service (cluster) IP, and directly to the Pod IPs. The first two go through iptables forwarding, while the third bypasses iptables entirely. The goal of this test is to measure the performance overhead of each approach.

To keep the test accurate and complete, the test tools and test matrix are as follows:
Test case | Pods | Payload size | Avg QPS |
---|---|---|---|
1 | 1 | 4k | |
2 | 1 | 100k | |
3 | 10 | 4k | |
4 | 10 | 100k | |
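The fixed-size pages in the test matrix can be created ahead of time on the backend; a minimal sketch with `dd` (file names follow the URLs used by wrk below; the exact byte sizes and serving them from the pods' web root are assumptions):

```shell
# Create the fixed-size payload files referenced in the test matrix.
# 4k = 4 KiB, 100k = 100 KiB (exact sizes are an assumption).
dd if=/dev/zero of=4k.html bs=1024 count=4 2>/dev/null
dd if=/dev/zero of=100k.html bs=1024 count=100 2>/dev/null
wc -c 4k.html 100k.html
```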
```
root@VM-4-6-ubuntu:/etc/nginx# kubectl get node
NAME        STATUS                     ROLES     AGE   VERSION
10.0.4.12   Ready                      <none>    3d    v1.10.5-qcloud-rev1
10.0.4.3    Ready                      <none>    3d    v1.10.5-qcloud-rev1
10.0.4.5    Ready                      <none>    3d    v1.10.5-qcloud-rev1
10.0.4.6    Ready,SchedulingDisabled   <none>    12m   v1.10.5-qcloud-rev1
10.0.4.7    Ready                      <none>    3d    v1.10.5-qcloud-rev1
10.0.4.9    Ready                      <none>    3d    v1.10.5-qcloud-rev1
```
```shell
./wrk -c 200 -d 20 -t 10 http://carytest.pod.com/10k.html    # single pod
./wrk -c 1000 -d 20 -t 100 http://carytest.pod.com/4k.html   # 10 pods
```
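The four cases can be driven from a single loop; this sketch only prints each wrk command line (the per-case `-c`/`-t` values mirror the single-pod and 10-pod invocations above and are otherwise an assumption):

```shell
# Print one wrk command per test case: "connections threads page".
for tc in "200 10 4k.html" "200 10 100k.html" \
          "1000 100 4k.html" "1000 100 100k.html"; do
  set -- $tc
  cmd="./wrk -c $1 -d 20 -t $2 http://carytest.pod.com/$3"
  echo "$cmd"   # replace echo with eval to actually run the benchmark
done
```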
Test case | Pods | Payload size | Avg QPS |
---|---|---|---|
1 | 1 | 4k | 12498 |
2 | 1 | 100k | 2037 |
3 | 10 | 4k | 82752 |
4 | 10 | 100k | 7743 |
Test case | Pods | Payload size | Avg QPS |
---|---|---|---|
1 | 1 | 4k | 12568 |
2 | 1 | 100k | 2040 |
3 | 10 | 4k | 81752 |
4 | 10 | 100k | 7824 |
Test case | Pods | Payload size | Avg QPS |
---|---|---|---|
1 | 1 | 4k | 12332 |
2 | 1 | 100k | 2028 |
3 | 10 | 4k | 76973 |
4 | 10 | 100k | 5676 |
During the load tests, with 4k payloads the application's load stayed between 80% and 100%; with 100k payloads it stayed between 20% and 30%. In the 100k case the pressure was all in network transfer and never reached the service backend.
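The network-bound conclusion can be sanity-checked with simple arithmetic: QPS times payload size gives the body throughput on the wire (QPS values taken from the first result table above; HTTP header overhead ignored):

```shell
# Approximate wire throughput = QPS * payload size * 8 bits.
# 4k case is CPU-bound on the backend; 100k case pushes ~6 Gbit/s,
# which is where the network, not the service, becomes the limit.
awk 'BEGIN {
  printf "10 pods, 4k:   %.2f Gbit/s\n", 82752 * 4 * 1024 * 8 / 1e9;
  printf "10 pods, 100k: %.2f Gbit/s\n", 7743 * 100 * 1024 * 8 / 1e9;
}'
```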
```nginx
user nginx;
worker_processes 50;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 100000;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    upstream panda-pod {
        #ip_hash;

        # Pod IPs
        #server 10.0.4.12:30734 max_fails=2 fail_timeout=30s;
        #server 172.16.1.5:80 max_fails=2 fail_timeout=30s;
        #server 172.16.2.3:80 max_fails=2 fail_timeout=30s;
        #server 172.16.3.5:80 max_fails=2 fail_timeout=30s;
        #server 172.16.4.6:80 max_fails=2 fail_timeout=30s;
        #server 172.16.4.5:80 max_fails=2 fail_timeout=30s;
        #server 172.16.3.6:80 max_fails=2 fail_timeout=30s;
        #server 172.16.1.4:80 max_fails=2 fail_timeout=30s;
        #server 172.16.0.7:80 max_fails=2 fail_timeout=30s;
        #server 172.16.0.6:80 max_fails=2 fail_timeout=30s;
        #server 172.16.2.2:80 max_fails=2 fail_timeout=30s;

        # Service IP
        #server 172.16.255.121:80 max_fails=2 fail_timeout=30s;

        # NodePort
        server 10.0.4.12:30734 max_fails=2 fail_timeout=30s;
        server 10.0.4.3:30734 max_fails=2 fail_timeout=30s;
        server 10.0.4.5:30734 max_fails=2 fail_timeout=30s;
        server 10.0.4.7:30734 max_fails=2 fail_timeout=30s;
        server 10.0.4.9:30734 max_fails=2 fail_timeout=30s;

        keepalive 256;
    }

    server {
        listen 80;
        server_name carytest.pod.com;
        # root /usr/share/nginx/html;
        charset utf-8;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            proxy_pass http://panda-pod;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        }

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
```