Kuryr-Kubernetes Deployment, Installation, and Test Demo

Author: Nujil
Published: 2024-09-19 10:26:05

Before We Begin

The OpenStack Kuryr project aims to enable native Neutron-based networking in Kubernetes environments. With Kuryr-Kubernetes, users can choose to run OpenStack VMs and Kubernetes Pods on the same Neutron network, or use separate network segments and route between them.

Kuryr-Kubernetes components to configure

  • Kuryr-K8s-Controller (on the OpenStack control node): watches the Kubernetes API for changes to Kubernetes resources, monitoring create, update, and delete events for Pods, Services, and other resources. Based on these events it talks to OpenStack Neutron and other components to create, update, or delete the corresponding network resources such as networks, subnets, and ports.
  • kuryr-cni (on the Kubernetes worker nodes): an executable invoked by the CNI that forwards the calls to kuryr-daemon.
  • kuryr-daemon (on every Kubernetes node): watches the pods created on the node it runs on, handles the requests coming from the CNI, and attaches the VIFs. It currently consists of two processes, a Watcher and a Server (see the sketch after this list).
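In the standalone DevStack deployment used below (KURYR_K8S_CONTAINERIZED_DEPLOYMENT=False), these components normally run as systemd services. A minimal sketch for checking them, assuming DevStack's default devstack@<service> unit names:

# Assumption: non-containerized deployment with DevStack's default unit names
systemctl status devstack@kuryr-kubernetes   # kuryr-k8s-controller
systemctl status devstack@kuryr-daemon       # kuryr-daemon (CNI watcher/server)
# Follow the controller log while creating pods/services
journalctl -fu devstack@kuryr-kubernetes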

Kuryr-Kubernetes deployment modes

Nested mode

  • Use case: deployments where Kubernetes runs inside OpenStack VMs
  • Prerequisite: Neutron must have the Trunk plugin enabled, i.e. trunk-port capability
  • How it works: Kuryr-Kubernetes requires the main interface of each K8s worker VM to be a trunk port. Each Pod then gets a subport of that trunk attached into its network namespace (a sketch follows this list)
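A minimal sketch of what the trunk prerequisite looks like from the Neutron side, assuming the trunk service plugin is enabled; <vm-port-id> and <subport-id> are placeholders for real port IDs, and this mode was not exercised in the standalone test below:

# Check that the trunk extension is enabled
openstack extension list --network | grep -i trunk

# Make the K8s worker VM's main port a trunk parent port
openstack network trunk create --parent-port <vm-port-id> k8s-worker-trunk

# Kuryr attaches pod subports along these lines (segmentation IDs are chosen per pod)
openstack network trunk set --subport port=<subport-id>,segmentation-type=vlan,segmentation-id=101 k8s-worker-trunk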

Standalone mode

Mainly used for testing and verification: OpenStack, Kubernetes, and the Kuryr-Kubernetes components all run on a single machine. This deployment mode is used in this test.

Test environment:

This kuryr-kubernetes capability test was carried out on an ESXi-based Ubuntu VM (172.21.104.121) with a DevStack environment installed. Environment details:

  • Guest OS: Ubuntu 22.04.4
  • VM hardware configuration

  • Devstack:

DevStack is a set of scripts and utilities to quickly deploy an OpenStack cloud from git source trees.

Git: https://opendev.org/openstack-dev/devstack

  • Openstack:

Installed and Configured during Devstack Installation

stack@ubuntu-121:~$ openstack --version
openstack 6.6.0
  • Kubernetes:

Installed and Configured during Devstack Installation

stack@ubuntu-121:~$ kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.9
  • Kuryr-kubernetes components

Kubernetes integration with OpenStack networking

The OpenStack Kuryr project enables native Neutron-based networking in Kubernetes. With Kuryr-Kubernetes you will be able to choose to run both OpenStack VMs and Kubernetes Pods on the same Neutron network.

https://opendev.org/openstack/kuryr-kubernetes

Installation process:

DevStack must be installed as a non-root user. In the steps below, a root or stack label indicates which user each step is executed as.

Create the stack user

root

# Create the stack user with /opt/stack as its home directory
sudo useradd -s /bin/bash -d /opt/stack -m stack

# Make the home directory traversable
sudo chmod +x /opt/stack

# Grant passwordless sudo
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
sudo -u stack -i

Switch some apt sources to a domestic mirror

Note ⚠️:

During installation, dependencies and component packages are pulled from various foreign apt and pip sources, so the process is quite sensitive to network conditions and needs to be monitored. It may fail partway through because of download timeouts; check the exact error, and in some cases you have to manually download the package that timed out and then rerun the installation (a workaround sketch follows).
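When a single pip package keeps timing out, one workaround (a generic pip technique, not something taken from this installation log) is to pre-download it into a local directory and let the retry install from there; <package-name> is a placeholder:

# Download the failing package (and its dependencies) into a local cache
pip download <package-name> -d ~/pip-cache
# Later installs can resolve it offline from that cache
pip install --no-index --find-links=$HOME/pip-cache <package-name>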

root

# Back up the default sources file
cp /etc/apt/sources.list /etc/apt/sources.list-bak

# Point the cn.archive.ubuntu.com entries at the Aliyun mirror
sed -i 's#http://cn.archive.ubuntu.com/#http://mirrors.aliyun.com/#g' /etc/apt/sources.list

# Update & upgrade apt
sudo apt update
sudo apt upgrade

Update the pip index

stack

# Change the pip index (run as the stack user):
mkdir ~/.pip
cd ~/.pip/
vim pip.conf

# Add:
[global]
index-url=https://mirrors.aliyun.com/pypi/simple/
[install]
trusted-host=mirrors.aliyun.com

Add a setting to bashrc to fix git pull errors

stack

vim ~/.bashrc
# Add:
export GNUTLS_CPUID_OVERRIDE=0x1
source ~/.bashrc

Download DevStack

stack

stack@ubuntu-121:~$ pwd
/opt/stack

git clone https://opendev.org/openstack-dev/devstack

Download the local.conf configuration file:

stack

sudo wget https://opendev.org/openstack/kuryr-kubernetes/src/branch/master/devstack/local.conf.sample
mv local.conf.sample local.conf

Edit the local.conf configuration parameters:

stack

vim local.conf

[[local|localrc]]
HOST_IP=172.21.104.121
enable_plugin kuryr-kubernetes https://opendev.org/openstack/kuryr-kubernetes

# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False

# Credentials
ADMIN_PASSWORD=passwd
DATABASE_PASSWORD=passwd
RABBIT_PASSWORD=passwd
SERVICE_PASSWORD=passwd
SERVICE_TOKEN=passwd

# disable services, to conserve the resources usage
disable_service cinder
disable_service dstat
disable_service n-novnc
disable_service horizon
# If you plan to run tempest tests on devstack, you should comment out/remove
# below line
disable_service tempest

# Neutron services
# ================
enable_plugin neutron https://opendev.org/openstack/neutron
enable_service neutron-tag-ports-during-bulk-creation

# Kuryr K8S-Endpoint driver Octavia provider
# ==========================================
# Kuryr uses LBaaS to provide the Kubernetes services
# functionality.
# In case Octavia is used for LBaaS, you can choose the
# Octavia's Load Balancer provider.
# KURYR_EP_DRIVER_OCTAVIA_PROVIDER=default
# Uncomment the next lines to enable ovn provider. Note only one mode is
# supported on ovn-octavia. As the member subnet must be added when adding
# members, it must be set to L2 mode
KURYR_EP_DRIVER_OCTAVIA_PROVIDER=ovn
KURYR_K8S_OCTAVIA_MEMBER_MODE=L2
KURYR_ENFORCE_SG_RULES=False
KURYR_LB_ALGORITHM=SOURCE_IP_PORT

# Octavia LBaaSv2
LIBS_FROM_GIT+=python-octaviaclient
enable_plugin octavia https://opendev.org/openstack/octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
enable_service o-da
# OVN octavia provider plugin
enable_plugin ovn-octavia-provider https://opendev.org/openstack/ovn-octavia-provider

# CRI
# ===
# If you already have either CRI-O or Docker configured, running and with its
# socket writable by the stack user, you can omit the following lines.
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
# We are using CRI-O by default. The version should match K8s version:
CONTAINER_ENGINE="crio"
CRIO_VERSION="1.28"

# Kubernetes
# ==========
#
# Kubernetes is installed by kubeadm (which is installed from proper
# repository).
# If you already have a Kubernetes deployment, you can use it instead and omit
# enabling the Kubernetes service.
# TODO(gryf): review the part whith existsing cluster for kubelet
#             configuration instead of runing it via devstack - it need to be
#             configured for use our CNI.
#
# The default is, again, for devstack to run the Kubernetes services:
enable_service kubernetes-master

# If you have the 6443 port already bound to another service, you will need to
# have kubernetes API server bind to another port. In order to do that,
# uncomment and set a different port number in:
# KURYR_K8S_API_PORT="6443"
#
# If, however, you are reusing an existing deployment, you should uncomment and
# set an ENV var so that the Kubelet devstack runs can find the API server:
#
# TODO(gryf): revisit this scenario. Do we even support this in devstack?
#
# KURYR_K8S_API_URL="http (or https, if K8S is SSL/TLS enabled)://k8s_api_ip:k8s_api_port"
#
# If kubernetes API server is 'https' enabled, set path of the ssl cert files
# KURYR_K8S_API_CERT="/etc/kubernetes/certs/kubecfg.crt"
# KURYR_K8S_API_KEY="/etc/kubernetes/certs/kubecfg.key"
# KURYR_K8S_API_CACERT="/etc/kubernetes/certs/ca.crt"
enable_service kubernetes-master

# Kuryr watcher
# =============
#
# Just like the Kubelet, you'll want to have the watcher enabled. It is the
# part of the codebase that connects to the Kubernetes API server to read the
# resource events and convert them to Neutron actions
enable_service kuryr-kubernetes
# Kuryr Daemon
# ============
#
# Kuryr can run CNI plugin in daemonized way - i.e. kubelet will run kuryr CNI
# driver and the driver will pass requests to Kuryr daemon running on the node,
# instead of processing them on its own. This limits the number of Kubernetes
# API requests (as only Kuryr Daemon will watch for new pod events) and should
# increase scalability in environments that often delete and create pods.
# Since Rocky release this is a default deployment configuration.
enable_service kuryr-daemon

# Containerized Kuryr
# ===================
#
# Kuryr can be installed on Kubernetes as a pair of Deployment
# (kuryr-controller) and DaemonSet (kuryr-cni) or as systemd services. If you
# want DevStack to deploy Kuryr services as pods on Kubernetes, comment (or
# remove) next line.
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=False

# Increase Octavia amphorae timeout so that the first LB amphora has time to
# build and boot
#IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"

[[post-config|$OCTAVIA_CONF]]
[controller_worker]
amp_active_retries=9999
[api_settings]
enabled_provider_drivers = amphora:'Octavia Amphora driver',ovn:'Octavia OVN driver'

Run stack.sh to install:

stack

cd devstack
./stack.sh

# If the installation aborts with an error, fix it and clean the environment before re-running:
./unstack.sh
./clean.sh

Installation complete

The following log output indicates that DevStack (including OpenStack and Kubernetes) installed successfully:

This is your host IP address: 172.21.104.121
This is your host IPv6 address: ::1
Keystone is serving at http://172.21.104.121/identity/
The default users are: admin and demo
The password: passwd

Verify system components

stack

Verify the OpenStack components:

Verify the Kubernetes components:
stack@ubuntu-121:~$ kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
ubuntu-121   Ready    control-plane   9d    v1.28.2

stack@ubuntu-121:~$ kubectl get pods -A
NAMESPACE     NAME                                 READY   STATUS    RESTARTS        AGE
default       test-cc4b79c94-9fq6b                 1/1     Running   233 (18m ago)   9d
kube-system   kube-apiserver-ubuntu-121            1/1     Running   1 (9d ago)      9d
kube-system   kube-controller-manager-ubuntu-121   1/1     Running   1               9d
kube-system   kube-scheduler-ubuntu-121            1/1     Running   1               9d

Configure & test kuryr-kubernetes functionality

stack

Kubernetes: create a Pod

Start a pod from the busybox image:
kubectl create deployment --image busybox test -- sleep 3600

stack@ubuntu-121:~$ kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS        AGE   IP           NODE         NOMINATED NODE   READINESS GATES
test-cc4b79c94-9fq6b   1/1     Running   233 (27m ago)   9d    10.0.0.117   ubuntu-121   <none>           <none>
Confirm that the IP address used by the Kubernetes pod was indeed allocated to an OpenStack Neutron port:
stack@ubuntu-121:~$ openstack port list | grep 10.0.0.117
| 0b426f05-89bb-4e7a-a4de-042e3a86168d | default/test-cc4b79c94-9fq6b                         | fa:16:3e:85:0a:c0 | ip_address='10.0.0.117', subnet_id='670fc5c1-f783-4a1c-9f02-ca82d58a0b58'                          | ACTIVE |
Check /etc/kuryr/kuryr.conf to confirm the subnet and network IDs used by kuryr:
cat /etc/kuryr/kuryr.conf

# Excerpt of the [neutron_defaults] section:
[neutron_defaults]
lbaas_activation_timeout = 1200
external_svc_net = 77d0a0ad-a3f5-4625-802c-8e0affba6ea1
ovs_bridge = br-int
pod_security_groups = 99d66fd5-d261-428a-95fe-1321cad77793,a7994714-af04-44af-83eb-a10f4e7cce41
service_subnets = c4c5a2fd-7916-465f-bde8-5a297a4fbfa2
service_subnet = c4c5a2fd-7916-465f-bde8-5a297a4fbfa2
pod_subnets = 670fc5c1-f783-4a1c-9f02-ca82d58a0b58
pod_subnet = 670fc5c1-f783-4a1c-9f02-ca82d58a0b58
project = 61c0d928b168418994287292e615467a

Check the CIDRs used by Kubernetes services and pods:
kubectl get configmap kubeadm-config -n kube-system -o yaml
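To pull just the pod and service CIDRs out of that ConfigMap, a short jsonpath/grep sketch (the field names assume a standard kubeadm ClusterConfiguration):

kubectl get configmap kubeadm-config -n kube-system \
  -o jsonpath='{.data.ClusterConfiguration}' | grep -E 'podSubnet|serviceSubnet'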

OpenStack: create a VM

Identify the admin project

Note: when binding a security group to the OpenStack VM, the default security group under the admin project must be specified.

stack@ubuntu-121:~$ openstack project list
+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| 107c9a12da5348ab999e19e3bf0a7976 | invisible_to_admin |
| 28883fbfc06a434a8599e46796a4564a | alt_demo           |
| 61c0d928b168418994287292e615467a | k8s                |
| 756401e171294146ae06bc9aedcd1ff9 | demo               |
| 7c28a495f1eb403c90d60b9ca9c0f6d4 | service            |
| 907ed2f443fb478f9c32aaec0b0ea31a | admin              |
+----------------------------------+--------------------+
Identify the ID of the default security group under the admin project
stack@ubuntu-121:~$ openstack security group list
+--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+
| ID                                   | Name                                    | Description            | Project                          | Tags |
+--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+
| 19bcc076-076a-425c-bceb-f4ed992fabf6 | default                                 | Default security group | 907ed2f443fb478f9c32aaec0b0ea31a | []   |
| 24a3c833-b346-4c26-9f1c-872108a59d33 | default                                 | Default security group | 756401e171294146ae06bc9aedcd1ff9 | []   |
| 32bc9516-61d2-4527-a24a-072eda9fee32 | lb-health-mgr-sec-grp                   | lb-health-mgr-sec-grp  | 907ed2f443fb478f9c32aaec0b0ea31a | []   |
| 3c8c617a-5b13-482a-88b1-082a4c2b4e2f | service_pod_access                      | service_pod_access     | 61c0d928b168418994287292e615467a | []   |
| 75896286-9505-431c-8c49-ea45f1da6ee8 | lb-mgmt-sec-grp                         | lb-mgmt-sec-grp        | 907ed2f443fb478f9c32aaec0b0ea31a | []   |
| 99d66fd5-d261-428a-95fe-1321cad77793 | default                                 | Default security group | 61c0d928b168418994287292e615467a | []   |
| a7994714-af04-44af-83eb-a10f4e7cce41 | octavia_pod_access                      | octavia_pod_access     | 61c0d928b168418994287292e615467a | []   |
| bdd3d977-cff8-41c4-b8d3-b9c20206c3f6 | default                                 | Default security group | 7c28a495f1eb403c90d60b9ca9c0f6d4 | []   |
| c0d0dd74-37d6-4059-81b9-4119e065c417 | lb-9f2fa0af-140a-4bc4-b894-71c7e7b42d59 |                        | 907ed2f443fb478f9c32aaec0b0ea31a | []   |
+--------------------------------------+-----------------------------------------+------------------------+----------------------------------+------+
Identify the network ID used by kuryr:
stack@ubuntu-121:~$ openstack network list --long
Identify the cirros image ID
stack@ubuntu-121:~$ openstack image list
+--------------------------------------+--------------------------+--------+
| ID                                   | Name                     | Status |
+--------------------------------------+--------------------------+--------+
| 61786759-cbac-4eab-bc01-8872a509bb79 | amphora-x64-haproxy      | active |
| e9f842df-69fe-4162-bb18-3bb3e6729c32 | cirros-0.6.2-x86_64-disk | active |
+--------------------------------------+--------------------------+--------+
Create a "vm-cirros" instance from the cirros image
openstack server create --image e9f842df-69fe-4162-bb18-3bb3e6729c32 --flavor ds1G --security-group 19bcc076-076a-425c-bceb-f4ed992fabf6 --nic net-id=cbf27f84-d5f0-4057-86d8-71a20e495834 vm-cirros
Confirm the newly created vm-cirros
stack@ubuntu-121:~$ openstack server list
+--------------------------------------+----------------------------------------------+--------+------------------------------------------------------+--------------------------+------------+
| ID                                   | Name                                         | Status | Networks                                             | Image                    | Flavor     |
+--------------------------------------+----------------------------------------------+--------+------------------------------------------------------+--------------------------+------------+
| 5f81414a-92d1-4c40-9f77-21eb0125e842 | vm-cirros                                    | ACTIVE | k8s-pod-net=10.0.0.104                               | cirros-0.6.2-x86_64-disk | ds1G       |
Add rules allowing ICMP to the security groups used by the VM and by the Pod
First, identify the security groups of the Neutron port that belongs to the Kubernetes Pod:

stack@ubuntu-121:~$ openstack port list
stack@ubuntu-121:~$ openstack port show 0b426f05-89bb-4e7a-a4de-042e3a86168d

Result: security_group_ids:
    99d66fd5-d261-428a-95fe-1321cad77793, 
    a7994714-af04-44af-83eb-a10f4e7cce41
    
The security group ID used when creating the OpenStack VM (vm-cirros) is:
    19bcc076-076a-425c-bceb-f4ed992fabf6

Add rules to these security groups to allow ICMP traffic:
openstack security group rule create --protocol icmp --ingress 19bcc076-076a-425c-bceb-f4ed992fabf6
openstack security group rule create --protocol icmp --egress 19bcc076-076a-425c-bceb-f4ed992fabf6
openstack security group rule create --protocol icmp --ingress 99d66fd5-d261-428a-95fe-1321cad77793
openstack security group rule create --protocol icmp --egress 99d66fd5-d261-428a-95fe-1321cad77793
openstack security group rule create --protocol icmp --egress a7994714-af04-44af-83eb-a10f4e7cce41
openstack security group rule create --protocol icmp --ingress a7994714-af04-44af-83eb-a10f4e7cce41

Confirm the rules were added:
# For example:
openstack security group show 19bcc076-076a-425c-bceb-f4ed992fabf6 | grep icmp

Ping test between the OpenStack VM and the Kubernetes Pod to verify connectivity:

Confirm the OpenStack subnet allocation:
stack@ubuntu-121:~$ openstack subnet list --long

stack@ubuntu-121:~$ openstack subnet show 670fc5c1-f783-4a1c-9f02-ca82d58a0b58
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| allocation_pools     | 10.0.0.65-10.0.0.125                 |
| cidr                 | 10.0.0.64/26                         |
| created_at           | 2024-04-30T08:41:46Z                 |
| description          |                                      |
| dns_nameservers      |                                      |
| dns_publish_fixed_ip | None                                 |
| enable_dhcp          | False                                |
| gateway_ip           | 10.0.0.126                           |
| host_routes          |                                      |
| id                   | 670fc5c1-f783-4a1c-9f02-ca82d58a0b58 |
| ip_version           | 4                                    |
| ipv6_address_mode    | None                                 |
| ipv6_ra_mode         | None                                 |
| name                 | k8s-pod-subnet-IPv4                  |
| network_id           | cbf27f84-d5f0-4057-86d8-71a20e495834 |
| project_id           | 61c0d928b168418994287292e615467a     |
| revision_number      | 2                                    |
| segment_id           | None                                 |
| service_types        |                                      |
| subnetpool_id        | 324ac087-06b9-4d88-b1fa-df96f6d1545e |
| tags                 |                                      |
| updated_at           | 2024-04-30T08:41:50Z                 |
+----------------------+--------------------------------------+
Enter the Kubernetes Pod and run a ping test:

Note: after entering the busybox pod, the ping test ran into a permission problem that has not been resolved yet (a possible workaround is sketched below).

stack@ubuntu-121:~$ kubectl exec -it test-cc4b79c94-9fq6b -- sh
/ #
/ # whoami
root
/ # ping 10.0.0.104
PING 10.0.0.104 (10.0.0.104): 56 data bytes
ping: permission denied (are you root?)
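This busybox error usually means the ping binary cannot open its ICMP socket inside the container. Two hedged workarounds, neither verified in this test: grant the container CAP_NET_RAW, or widen the node's unprivileged-ICMP group range.

# Option 1 (assumption: the container lacks CAP_NET_RAW) - add to the pod spec:
#   securityContext:
#     capabilities:
#       add: ["NET_RAW"]

# Option 2 (assumption: busybox ping falls back to unprivileged ICMP sockets) - on the node:
sudo sysctl -w net.ipv4.ping_group_range="0 2147483647"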


Other pods are created in the later LB tests, so we use one of those pods for the ping test instead:

stack@ubuntu-121:~/devstack$ openstack server list
+------------------------------+------------------------------+--------+------------------------------+--------------------------+------------+
| ID                           | Name                         | Status | Networks                     | Image                    | Flavor     |
+------------------------------+------------------------------+--------+------------------------------+--------------------------+------------+
| 5f81414a-92d1-4c40-9f77-     | vm-cirros                    | ACTIVE | k8s-pod-net=10.0.0.104       | cirros-0.6.2-x86_64-disk | ds1G       |
| 21eb0125e842                 |                              |        |                              |                          |            |
| 1fffae63-501e-48c9-9136-     | amphora-c5ce48cf-0a37-4c9b-  | ACTIVE | k8s-service-net=10.0.0.188;  | amphora-x64-haproxy      | m1.amphora |
| 0b5a0412f75a                 | b673-ba59f7911eda            |        | lb-mgmt-net=192.168.0.48     |                          |            |
+------------------------------+------------------------------+--------+------------------------------+--------------------------+------------+


stack@ubuntu-121:~/devstack$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS        AGE     IP           NODE         NOMINATED NODE   READINESS GATES
demo-7c8d858ccf-2jzd7              1/1     Running   0               5d21h   10.0.0.77    ubuntu-121   <none>           <none>
demo-7c8d858ccf-87946              1/1     Running   0               5d21h   10.0.0.106   ubuntu-121   <none>           <none>
lb-testing-demo-6f9bfdd47c-mtzf9   1/1     Running   0               4d21h   10.0.0.66    ubuntu-121   <none>           <none>
lb-testing-demo-6f9bfdd47c-rfdqp   1/1     Running   0               4d21h   10.0.0.118   ubuntu-121   <none>           <none>
test-cc4b79c94-9fq6b               1/1     Running   501 (12m ago)   20d     10.0.0.117   ubuntu-121   <none>           <none>

stack@ubuntu-121:~/devstack$ kubectl exec -it demo-7c8d858ccf-87946 -- /bin/sh
~ $ ping 10.0.0.104
PING 10.0.0.104 (10.0.0.104) 56(84) bytes of data.
64 bytes from 10.0.0.104: icmp_seq=1 ttl=64 time=56.7 ms
64 bytes from 10.0.0.104: icmp_seq=2 ttl=64 time=16.5 ms
64 bytes from 10.0.0.104: icmp_seq=3 ttl=64 time=6.81 ms
Enter the OpenStack VM "vm-cirros" and run a ping test
stack@ubuntu-121:~$ virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id   Name                State
-----------------------------------
 2    instance-00000005   running
 3    instance-0000000a   running

virsh # console instance-0000000a
Connected to domain 'instance-0000000a'
Escape character is ^] (Ctrl + ])

login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.

# On first login, eth0 was not configured with 10.0.0.104, so configure it manually:
sudo ip a del 169.254.141.228 dev eth0
sudo ip a a 10.0.0.104/26 dev eth0

$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:DC:2E:90
          inet addr:10.0.0.104  Bcast:0.0.0.0  Mask:255.255.255.192
          inet6 addr: fe80::f816:3eff:fedc:2e90/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1442  Metric:1
          RX packets:286 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6948 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:26328 (25.7 KiB)  TX bytes:2329524 (2.2 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:71 errors:0 dropped:0 overruns:0 frame:0
          TX packets:71 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6916 (6.7 KiB)  TX bytes:6916 (6.7 KiB)

$ ping 10.0.0.117
PING 10.0.0.117 (10.0.0.117) 56(84) bytes of data.
64 bytes from 10.0.0.117: icmp_seq=1 ttl=64 time=23.2 ms
64 bytes from 10.0.0.117: icmp_seq=2 ttl=64 time=13.7 ms
64 bytes from 10.0.0.117: icmp_seq=3 ttl=64 time=10.7 ms
64 bytes from 10.0.0.117: icmp_seq=4 ttl=64 time=7.03 ms

Kubernetes services networking in the Kuryr-Kubernetes solution

Kuryr-Kubernetes uses OpenStack Octavia as OpenStack's load-balancing solution and, through it, provides network connectivity to Kubernetes Services.

Mapping:

Kubernetes Entities:

How Kubernetes entities are deployed with native Kubernetes networking:

How Kubernetes entities are deployed with Kuryr's default configuration:

Octavia-related background

Running Kuryr on top of OpenStack Octavia requires the following components:

  • Nova
  • Neutron
  • Glance
  • Barbican (if TLS offloading functionality is enabled)
  • Keystone
  • Rabbit
  • MySQL

Amphorae (load balancer Nova VM)

Octavia works by instantiating a compute resource, i.e. a Nova VM, and running HAProxy inside. These single load balancer Nova VMs are called Amphorae. Running Kuryr with Octavia means that each Kubernetes service running in the cluster needs at least one Load Balancer VM, i.e. an Amphora, e.g. [amphora-c5ce48cf-0a37-4c9b-b673-ba59f7911eda] (see the command sketch after the list below).

  • Each Amphora has a separate linux network namespace where HAProxy runs and that is connected to the Kuryr services network. [k8s-service-net=10.0.0.188]
  • The VM host network namespace is used by Octavia to reconfigure and monitor the Load Balancer, which it talks to via HAProxy’s control unix domain socket. [lb-mgmt-net=192.168.0.48]
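The amphora backing a load balancer can also be inspected directly with the Octavia CLI (admin credentials are required, and this only applies to the amphora provider):

openstack loadbalancer amphora list
openstack loadbalancer amphora show <amphora-id>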

Octavia handles all communication with the Amphorae (the load-balancer Nova VMs) through Load Balancer drivers.

By default, Kuryr-Kubernetes uses the Amphora Load Balancer driver for all communication with the Amphorae. Kuryr also supports the OVN Octavia driver. [The OVN provider is what the later tests use.]
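A quick way to confirm which provider is in use, shown as a sketch against the load balancers created later in this test:

# Providers enabled in this Octavia deployment
openstack loadbalancer provider list
# Provider of a specific load balancer (default/demo is created in the ClusterIP test below)
openstack loadbalancer show default/demo -c provider -c vip_address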

OVN as Provider Driver for Octavia

When Kuryr creates a load balancer with the OVN provider driver, the load-balancing behaviour is executed by the virtual switch data-path engine, so no extra VMs need to be created.

Usage modes:

Layer2:

Kuryr will tell Octavia to add a Neutron port to the pod network for each load balancer. Communication from a Service to its Pod members and back therefore goes directly over L2. The drawback of this approach is the extra consumption of Neutron ports in the Pods subnet, which has to be sized accordingly.

Layer3:

Kuryr will tell Octavia not to add a Neutron port to the pod network for each load balancer. Instead, it relies on the pod and the service subnets being routable. This means that the communication from Pods to Services and back will go through the router. Depending on the SDN of your choice, this may have performance implications.
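Since this test runs with KURYR_K8S_OCTAVIA_MEMBER_MODE=L2, each OVN load balancer should get an extra Neutron port on the pod network; a hedged way to observe that (the network name k8s-pod-net is taken from this deployment) is:

# Ports on the pod network; with L2 member mode the service load balancers
# appear here in addition to the pod ports
openstack port list --network k8s-pod-net --long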

Advantages of OVN (compared with the Amphora provider driver)
  • Lower resource requirements: no extra VMs need to be created (Kuryr does not need a load-balancer VM per service).
  • Faster provisioning: service creation is quicker because each service is implemented with OpenFlow rules instead of a VM.
  • OVN provides virtualized networking for both VMs and containers and can be used in Kuryr-Kubernetes.
  • Load balancing is distributed across all nodes instead of being centralized in an Amphora VM.
Limitations of OVN
  • Only TCP, UDP, and SCTP are currently supported; Layer-7 load balancing is not.
  • Health monitoring: only the TCP and UDP-CONNECT protocols are supported.
  • Only a 1:1 protocol mapping between Listeners and their associated Pools is supported, i.e. a TCP listener can only be associated with a TCP pool.
  • IPv6 support is untested.
  • Mixing IPv4 and IPv6 members is not supported.
  • SOURCE_IP_PORT is the only supported load-balancing algorithm; ROUND_ROBIN and LEAST_CONNECTIONS are not yet supported.

OVN provider driver configuration used in this test

vim /etc/kuryr/kuryr.conf

[kubernetes]
token_file = ""
api_root = https://172.21.104.121:6443
endpoints_driver_octavia_provider = ovn
enable_manager = False
vif_pool_driver = noop
pod_vif_driver = neutron-vif
watch_retry_timeout = 1200
enabled_handlers = vif,endpoints,service,kuryrloadbalancer,kuryrport
service_security_groups_driver = default
pod_security_groups_driver = default
pod_subnets_driver = default
port_debug = True
ssl_client_key_file = /etc/kubernetes/pki/kuryr-client.key
ssl_client_crt_file = /etc/kubernetes/pki/apiserver-kubelet-client.crt


[octavia_defaults]
timeout_member_data = 0
timeout_client_data = 0
lb_algorithm = SOURCE_IP_PORT
enforce_sg_rules = False
member_mode = L2
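Changes to /etc/kuryr/kuryr.conf only take effect after the Kuryr services are restarted; a sketch assuming the non-containerized DevStack unit names mentioned earlier (adjust if Kuryr runs as pods instead):

# Assumption: Kuryr runs as DevStack systemd units, not as a containerized deployment
sudo systemctl restart devstack@kuryr-kubernetes
sudo systemctl restart devstack@kuryr-daemon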

'ClusterIP' Service Connection Testing (Default Service Type):

Step 1: Create the demo deployment and confirm that a Neutron port and IP address have been allocated

stack@ubuntu-121:~/devstack$ kubectl create deployment demo --image=quay.io/kuryr/demo
deployment.apps/demo created

stack@ubuntu-121:~/devstack$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS        AGE   IP           NODE         NOMINATED NODE   READINESS GATES
demo-7c8d858ccf-2jzd7   1/1     Running   0               40s   10.0.0.77    ubuntu-121   <none>           <none>
test-cc4b79c94-9fq6b    1/1     Running   359 (37m ago)   15d   10.0.0.117   ubuntu-121   <none>           <none>

stack@ubuntu-121:~/devstack$ openstack port list
+--------------------------------------+------------------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------+--------+
| ID                                   | Name                                                 | MAC Address       | Fixed IP Addresses                                                                                 | Status |
+--------------------------------------+------------------------------------------------------+-------------------+----------------------------------------------------------------------------------------------------+--------+
| 07695adb-8326-48a1-ac35-3a96ad283161 | default/demo-7c8d858ccf-2jzd7                        | fa:16:3e:76:46:d3 | ip_address='10.0.0.77', subnet_id='670fc5c1-f783-4a1c-9f02-ca82d58a0b58'                           | ACTIVE |

Step 2: Scale the demo deployment to 2 pods

stack@ubuntu-121:~/devstack$ kubectl scale deploy/demo --replicas=2
deployment.apps/demo scaled

stack@ubuntu-121:~/devstack$ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS        AGE     IP           NODE         NOMINATED NODE   READINESS GATES
demo-7c8d858ccf-2jzd7   1/1     Running   0               2m33s   10.0.0.77    ubuntu-121   <none>           <none>
demo-7c8d858ccf-87946   1/1     Running   0               25s     10.0.0.106   ubuntu-121   <none>           <none>
test-cc4b79c94-9fq6b    1/1     Running   359 (39m ago)   15d     10.0.0.117   ubuntu-121   <none>           <none>

Step 3: Test connectivity between the two pods:

stack@ubuntu-121:~/devstack$ kubectl exec -it demo-7c8d858ccf-2jzd7 -- /bin/sh
~ $ curl 10.0.0.106:8080
demo-7c8d858ccf-87946: HELLO! I AM ALIVE!!!

~ $ ping 10.0.0.106
PING 10.0.0.106 (10.0.0.106) 56(84) bytes of data.
64 bytes from 10.0.0.106: icmp_seq=1 ttl=64 time=14.0 ms
64 bytes from 10.0.0.106: icmp_seq=2 ttl=64 time=9.44 ms
64 bytes from 10.0.0.106: icmp_seq=3 ttl=64 time=1.64 ms

Step 4: Load balancers before exposing the service (one is created by the system by default)

stack@ubuntu-121:~/devstack$ openstack loadbalancer list
+----------------------------------+--------------------+----------------------------------+-------------+---------------------+------------------+----------+
| id                               | name               | project_id                       | vip_address | provisioning_status | operating_status | provider |
+----------------------------------+--------------------+----------------------------------+-------------+---------------------+------------------+----------+
| 9f2fa0af-140a-4bc4-b894-         | default/kubernetes | 61c0d928b168418994287292e615467a | 10.0.0.129  | ACTIVE              | ONLINE           | amphora  |
| 71c7e7b42d59                     |                    |                                  |             |                     |                  |          |
+----------------------------------+--------------------+----------------------------------+-------------+---------------------+------------------+----------+
stack@ubuntu-121:~/devstack$ openstack loadbalancer listener list
+---------------------------+---------------------------+------------------------+-----------------------------+----------+---------------+----------------+
| id                        | default_pool_id           | name                   | project_id                  | protocol | protocol_port | admin_state_up |
+---------------------------+---------------------------+------------------------+-----------------------------+----------+---------------+----------------+
| 192df822-b10a-4a91-8b8e-  | 287da703-8aa5-4418-a7ab-  | default/kubernetes:443 | 61c0d928b168418994287292e61 | HTTPS    |           443 | True           |
| aebf50693aad              | 4bd1527be380              |                        | 5467a                       |          |               |                |
+---------------------------+---------------------------+------------------------+-----------------------------+----------+---------------+----------------+
stack@ubuntu-121:~/devstack$ openstack loadbalancer pool list
+------------------------------+------------------------+------------------------------+---------------------+----------+--------------+----------------+
| id                           | name                   | project_id                   | provisioning_status | protocol | lb_algorithm | admin_state_up |
+------------------------------+------------------------+------------------------------+---------------------+----------+--------------+----------------+
| 287da703-8aa5-4418-a7ab-     | default/kubernetes:443 | 61c0d928b168418994287292e615 | ACTIVE              | HTTPS    | ROUND_ROBIN  | True           |
| 4bd1527be380                 |                        | 467a                         |                     |          |              |                |
+------------------------------+------------------------+------------------------------+---------------------+----------+--------------+----------------+

Step 5: Expose the demo service and inspect its details

# When no type is specified, expose creates a ClusterIP service by default

stack@ubuntu-121:~/devstack$ kubectl expose deploy/demo --port=80 --target-port=8080
service/demo exposed

stack@ubuntu-121:~/devstack$ kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR
demo         ClusterIP   10.0.0.145   <none>        80/TCP    5h55m   app=demo
kubernetes   ClusterIP   10.0.0.129   <none>        443/TCP   15d     <none>

stack@ubuntu-121:~/devstack$ kubectl describe svc demo
Name:              demo
Namespace:         default
Labels:            app=demo
Annotations:       <none>
Selector:          app=demo
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.0.0.145
IPs:               10.0.0.145
Port:              <unset>  80/TCP
TargetPort:        8080/TCP
Endpoints:         10.0.0.106:8080,10.0.0.77:8080
Session Affinity:  None
Events:            <none>

stack@ubuntu-121:~/devstack$ kubectl get svc demo -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-05-16T02:09:30Z"
  finalizers:
  - kuryr.openstack.org/service-finalizer
  labels:
    app: demo
  name: demo
  namespace: default
  resourceVersion: "301748"
  uid: f730620a-54d0-4803-b7d7-83bca4b2f90f
spec:
  clusterIP: 10.0.0.145
  clusterIPs:
  - 10.0.0.145
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: demo
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Step 6: Inspect the created kuryrloadbalancer "demo"

stack@ubuntu-121:~/devstack$ kubectl get klb demo -o yaml
apiVersion: openstack.org/v1
kind: KuryrLoadBalancer
metadata:
  creationTimestamp: "2024-05-16T02:09:30Z"
  finalizers:
  - kuryr.openstack.org/kuryrloadbalancer-finalizers
  generation: 8
  name: demo
  namespace: default
  ownerReferences:
  - apiVersion: v1
    kind: Service
    name: demo
    uid: f730620a-54d0-4803-b7d7-83bca4b2f90f
  resourceVersion: "301765"
  uid: 72ff1596-eee8-43ed-8e90-2309a9cc3a27
spec:
  endpointSlices:
  - endpoints:
    - addresses:
      - 10.0.0.106
      conditions:
        ready: true
      targetRef:
        kind: Pod
        name: demo-7c8d858ccf-87946
        namespace: default
        uid: 1169dd72-5c84-469a-8c04-af02d273dbc9
    - addresses:
      - 10.0.0.77
      conditions:
        ready: true
      targetRef:
        kind: Pod
        name: demo-7c8d858ccf-2jzd7
        namespace: default
        uid: c3c95c5f-35ca-49e2-bf04-7e18c88fd164
    ports:
    - port: 8080
      protocol: TCP
  ip: 10.0.0.145
  ports:
  - port: 80
    protocol: TCP
    targetPort: "8080"
  project_id: 61c0d928b168418994287292e615467a
  provider: ovn
  security_groups_ids:
  - 99d66fd5-d261-428a-95fe-1321cad77793
  - a7994714-af04-44af-83eb-a10f4e7cce41
  subnet_id: c4c5a2fd-7916-465f-bde8-5a297a4fbfa2
  timeout_client_data: 0
  timeout_member_data: 0
  type: ClusterIP
status:
  listeners:
  - id: e0097e45-48c9-4a45-b4c2-e418dc0821c7
    loadbalancer_id: e81000c7-571b-4583-b04f-9da85065c074
    name: default/demo:TCP:80
    port: 80
    project_id: 61c0d928b168418994287292e615467a
    protocol: TCP
  loadbalancer:
    id: e81000c7-571b-4583-b04f-9da85065c074
    ip: 10.0.0.145
    name: default/demo
    port_id: 1dc2b465-6660-454b-9b0d-b318ab0fae0a
    project_id: 61c0d928b168418994287292e615467a
    provider: ovn
    security_groups:
    - 99d66fd5-d261-428a-95fe-1321cad77793
    - a7994714-af04-44af-83eb-a10f4e7cce41
    subnet_id: c4c5a2fd-7916-465f-bde8-5a297a4fbfa2
  members:
  - id: b1e73fe5-758e-473d-896c-4d33b1cf2e27
    ip: 10.0.0.106
    name: default/demo-7c8d858ccf-87946:8080
    pool_id: 9c89bdcf-461a-44a6-80cf-5f46accfaf8c
    port: 8080
    project_id: 61c0d928b168418994287292e615467a
    subnet_id: 670fc5c1-f783-4a1c-9f02-ca82d58a0b58
  - id: 857a9685-0dfc-4033-a72c-8438618c70ed
    ip: 10.0.0.77
    name: default/demo-7c8d858ccf-2jzd7:8080
    pool_id: 9c89bdcf-461a-44a6-80cf-5f46accfaf8c
    port: 8080
    project_id: 61c0d928b168418994287292e615467a
    subnet_id: 670fc5c1-f783-4a1c-9f02-ca82d58a0b58
  pools:
  - id: 9c89bdcf-461a-44a6-80cf-5f46accfaf8c
    listener_id: e0097e45-48c9-4a45-b4c2-e418dc0821c7
    loadbalancer_id: e81000c7-571b-4583-b04f-9da85065c074
    name: default/demo:TCP:80
    project_id: 61c0d928b168418994287292e615467a
    protocol: TCP

Step 7: Confirm the created load balancer:
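The screenshot for this step is not reproduced here; the load balancer created for the demo service can be confirmed with commands along these lines (the pool name is taken from the KuryrLoadBalancer status above):

openstack loadbalancer list
openstack loadbalancer listener list
openstack loadbalancer pool list
openstack loadbalancer member list default/demo:TCP:80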

Step 8: Test load balancing between the two pods behind the demo service

# Before testing, add rules allowing TCP traffic to the security groups
openstack security group rule create --protocol tcp --ingress 99d66fd5-d261-428a-95fe-1321cad77793
openstack security group rule create --protocol tcp --egress 99d66fd5-d261-428a-95fe-1321cad77793
openstack security group rule create --protocol tcp --egress a7994714-af04-44af-83eb-a10f4e7cce41
openstack security group rule create --protocol tcp --ingress a7994714-af04-44af-83eb-a10f4e7cce41
# Pick a pod, exec into it, and run the test commands
stack@ubuntu-121:~/devstack$ kubectl exec -it demo-7c8d858ccf-87946 -- /bin/sh
~ $ curl 10.0.0.145
demo-7c8d858ccf-87946: HELLO! I AM ALIVE!!!

~ $ curl 10.0.0.145
demo-7c8d858ccf-2jzd7: HELLO! I AM ALIVE!!!

'LoadBalancer' Service Connection Testing

Two ways to assign the external IP:

  • Pool: external IPs are allocated from a predefined pool of addresses
  • User: the user specifies the external IP address (see the sketch after this list)
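A minimal sketch of the User method, assuming Kuryr honours spec.loadBalancerIP and that 172.24.4.10 is a hypothetical free address on the external subnet (the rest of this test uses the Pool method instead):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: lb-user-ip-demo        # hypothetical name, not part of this test
spec:
  type: LoadBalancer
  loadBalancerIP: 172.24.4.10  # user-specified external IP (assumption)
  selector:
    app: lb-testing-demo
  ports:
  - port: 80
    targetPort: 8080
EOF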

This test used the Pool method:

# Official example:
[neutron_defaults]
external_svc_net = <id of external network>
# 'external_svc_subnet' field is optional, set this field in case
# multiple subnets attached to 'external_svc_net'
external_svc_subnet = <id of external subnet>

# Configuration in the test environment
#/etc/kuryr/kuryr.conf

[neutron_defaults]
lbaas_activation_timeout = 1200
external_svc_net = 77d0a0ad-a3f5-4625-802c-8e0affba6ea1
ovs_bridge = br-int
pod_security_groups = 99d66fd5-d261-428a-95fe-1321cad77793,a7994714-af04-44af-83eb-a10f4e7cce41
service_subnets = c4c5a2fd-7916-465f-bde8-5a297a4fbfa2
service_subnet = c4c5a2fd-7916-465f-bde8-5a297a4fbfa2
pod_subnets = 670fc5c1-f783-4a1c-9f02-ca82d58a0b58
pod_subnet = 670fc5c1-f783-4a1c-9f02-ca82d58a0b58
project = 61c0d928b168418994287292e615467a

Step 1: Create the lb-testing-demo deployment and confirm that a Neutron port and IP address have been allocated

stack@ubuntu-121:~/devstack$ kubectl create deployment lb-testing-demo --image=quay.io/kuryr/demo
deployment.apps/lb-testing-demo created

stack@ubuntu-121:~/devstack$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS        AGE     IP           NODE         NOMINATED NODE   READINESS GATES
demo-7c8d858ccf-2jzd7              1/1     Running   0               24h     10.0.0.77    ubuntu-121   <none>           <none>
demo-7c8d858ccf-87946              1/1     Running   0               24h     10.0.0.106   ubuntu-121   <none>           <none>
lb-testing-demo-6f9bfdd47c-rfdqp   1/1     Running   0               2m45s   10.0.0.118   ubuntu-121   <none>           <none>
test-cc4b79c94-9fq6b               1/1     Running   383 (49m ago)   16d     10.0.0.117   ubuntu-121   <none>           <none>

Step 2: Scale the lb-testing-demo deployment to 2 pods and test connectivity

stack@ubuntu-121:~/devstack$ kubectl scale deploy/lb-testing-demo --replicas=2
deployment.apps/lb-testing-demo scaled


stack@ubuntu-121:~/devstack$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS        AGE    IP           NODE         NOMINATED NODE   READINESS GATES
demo-7c8d858ccf-2jzd7              1/1     Running   0               24h    10.0.0.77    ubuntu-121   <none>           <none>
demo-7c8d858ccf-87946              1/1     Running   0               24h    10.0.0.106   ubuntu-121   <none>           <none>
lb-testing-demo-6f9bfdd47c-mtzf9   1/1     Running   0               22s    10.0.0.66    ubuntu-121   <none>           <none>
lb-testing-demo-6f9bfdd47c-rfdqp   1/1     Running   0               5m6s   10.0.0.118   ubuntu-121   <none>           <none>
test-cc4b79c94-9fq6b               1/1     Running   383 (51m ago)   16d    10.0.0.117   ubuntu-121   <none>           <none>


stack@ubuntu-121:~/devstack$ kubectl exec -it lb-testing-demo-6f9bfdd47c-mtzf9 -- /bin/sh
~ $ ping 10.0.0.118
PING 10.0.0.118 (10.0.0.118) 56(84) bytes of data.
64 bytes from 10.0.0.118: icmp_seq=1 ttl=64 time=0.952 ms
64 bytes from 10.0.0.118: icmp_seq=2 ttl=64 time=0.439 ms

~ $ curl 10.0.0.66:8080
lb-testing-demo-6f9bfdd47c-mtzf9: HELLO! I AM ALIVE!!!
~ $ curl 10.0.0.118:8080
lb-testing-demo-6f9bfdd47c-rfdqp: HELLO! I AM ALIVE!!!

Step 3: Expose the lb-testing-demo service and inspect its details

stack@ubuntu-121:~/devstack$ kubectl expose deploy/lb-testing-demo --port=80 --target-port=8080 --type=LoadBalancer
service/lb-testing-demo exposed

stack@ubuntu-121:~/devstack$ kubectl get svc -o wide
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE   SELECTOR
demo              ClusterIP      10.0.0.145   <none>         80/TCP         8h    app=demo
kubernetes        ClusterIP      10.0.0.129   <none>         443/TCP        16d   <none>
lb-testing-demo   LoadBalancer   10.0.0.150   172.24.4.115   80:32533/TCP   4m    app=lb-testing-demo

stack@ubuntu-121:~/devstack$ kubectl describe svc lb-testing-demo
Name:                     lb-testing-demo
Namespace:                default
Labels:                   app=lb-testing-demo
Annotations:              <none>
Selector:                 app=lb-testing-demo
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.0.150
IPs:                      10.0.0.150
LoadBalancer Ingress:     172.24.4.115
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32533/TCP
Endpoints:                10.0.0.118:8080,10.0.0.66:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason               Age   From              Message
  ----    ------               ----  ----              -------
  Normal  KuryrServiceSkipped  2m8s  kuryr-controller  Skipping Service default/lb-testing-demo without Endpoints
  Normal  KuryrEnsureLB        2m5s  kuryr-controller  Provisioning a load balancer
  Normal  KuryrEnsuredLB       114s  kuryr-controller  Load balancer provisioned
  Normal  KuryrEnsureFIP       110s  kuryr-controller  Associating floating IP to the load balancer
  Normal  KuryrEnsuredLB       96s   kuryr-controller  Load balancer provisioned
  
stack@ubuntu-121:~/devstack$ kubectl get svc lb-testing-demo -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-05-16T10:15:59Z"
  finalizers:
  - kuryr.openstack.org/service-finalizer
  labels:
    app: lb-testing-demo
  name: lb-testing-demo
  namespace: default
  resourceVersion: "308246"
  uid: db781d98-accc-4657-88cc-9c26e3221354
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.0.0.150
  clusterIPs:
  - 10.0.0.150
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 32533
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: lb-testing-demo
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 172.24.4.115

Step 4: Inspect the created kuryrloadbalancer "lb-testing-demo"

stack@ubuntu-121:~/devstack$ kubectl get klb lb-testing-demo -o yaml
apiVersion: openstack.org/v1
kind: KuryrLoadBalancer
metadata:
  creationTimestamp: "2024-05-16T10:16:00Z"
  finalizers:
  - kuryr.openstack.org/kuryrloadbalancer-finalizers
  generation: 9
  name: lb-testing-demo
  namespace: default
  ownerReferences:
  - apiVersion: v1
    kind: Service
    name: lb-testing-demo
    uid: db781d98-accc-4657-88cc-9c26e3221354
  resourceVersion: "308252"
  uid: 04b9fbbb-72be-4861-9856-bf96a4fad33b
spec:
  endpointSlices:
  - endpoints:
    - addresses:
      - 10.0.0.118
      conditions:
        ready: true
      targetRef:
        kind: Pod
        name: lb-testing-demo-6f9bfdd47c-rfdqp
        namespace: default
        uid: b4e24cc7-af40-49e8-9767-c12f5e457eca
    - addresses:
      - 10.0.0.66
      conditions:
        ready: true
      targetRef:
        kind: Pod
        name: lb-testing-demo-6f9bfdd47c-mtzf9
        namespace: default
        uid: 1c471f0f-cf19-489d-87cb-14aea982422f
    ports:
    - port: 8080
      protocol: TCP
  ip: 10.0.0.150
  ports:
  - port: 80
    protocol: TCP
    targetPort: "8080"
  project_id: 61c0d928b168418994287292e615467a
  provider: ovn
  security_groups_ids:
  - 99d66fd5-d261-428a-95fe-1321cad77793
  - a7994714-af04-44af-83eb-a10f4e7cce41
  subnet_id: c4c5a2fd-7916-465f-bde8-5a297a4fbfa2
  timeout_client_data: 0
  timeout_member_data: 0
  type: LoadBalancer
status:
  listeners:
  - id: d7c3a677-78f1-4306-a06f-c563f98fb6ac
    loadbalancer_id: 5c3a079a-65e7-40d9-9d07-2b022c6b4996
    name: default/lb-testing-demo:TCP:80
    port: 80
    project_id: 61c0d928b168418994287292e615467a
    protocol: TCP
  loadbalancer:
    id: 5c3a079a-65e7-40d9-9d07-2b022c6b4996
    ip: 10.0.0.150
    name: default/lb-testing-demo
    port_id: d05dc8e8-b1ec-4513-8e21-18d01b4f7dc4
    project_id: 61c0d928b168418994287292e615467a
    provider: ovn
    security_groups:
    - 99d66fd5-d261-428a-95fe-1321cad77793
    - a7994714-af04-44af-83eb-a10f4e7cce41
    subnet_id: c4c5a2fd-7916-465f-bde8-5a297a4fbfa2
  members:
  - id: 9834fdb8-86ff-40ff-ba7c-05e9f1d45090
    ip: 10.0.0.118
    name: default/lb-testing-demo-6f9bfdd47c-rfdqp:8080
    pool_id: 08f6603a-e2e3-46ed-85a9-37e0f0648214
    port: 8080
    project_id: 61c0d928b168418994287292e615467a
    subnet_id: 670fc5c1-f783-4a1c-9f02-ca82d58a0b58
  - id: b7a5494b-6dd1-468f-b9e8-8ef4fa79f211
    ip: 10.0.0.66
    name: default/lb-testing-demo-6f9bfdd47c-mtzf9:8080
    pool_id: 08f6603a-e2e3-46ed-85a9-37e0f0648214
    port: 8080
    project_id: 61c0d928b168418994287292e615467a
    subnet_id: 670fc5c1-f783-4a1c-9f02-ca82d58a0b58
  pools:
  - id: 08f6603a-e2e3-46ed-85a9-37e0f0648214
    listener_id: d7c3a677-78f1-4306-a06f-c563f98fb6ac
    loadbalancer_id: 5c3a079a-65e7-40d9-9d07-2b022c6b4996
    name: default/lb-testing-demo:TCP:80
    project_id: 61c0d928b168418994287292e615467a
    protocol: TCP
  service_pub_ip_info:
    alloc_method: pool
    ip_addr: 172.24.4.115
    ip_id: c5717c48-4720-4c96-82d4-d1c5c4e53d88

Step 5: Confirm the created load balancer:
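As in the ClusterIP test, the screenshot for this step is omitted; the load balancer and its floating IP can be confirmed with commands along these lines (names and addresses are taken from the KuryrLoadBalancer status above):

openstack loadbalancer list
openstack loadbalancer member list default/lb-testing-demo:TCP:80
openstack floating ip list | grep 172.24.4.115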

Step 6: From a pod of the lb-testing-demo service, test load balancing between its two pods

stack@ubuntu-121:~/devstack$ kubectl get pods
NAME                               READY   STATUS    RESTARTS        AGE
demo-7c8d858ccf-2jzd7              1/1     Running   0               24h
demo-7c8d858ccf-87946              1/1     Running   0               24h
lb-testing-demo-6f9bfdd47c-mtzf9   1/1     Running   0               30m
lb-testing-demo-6f9bfdd47c-rfdqp   1/1     Running   0               35m
test-cc4b79c94-9fq6b               1/1     Running   384 (22m ago)   16d


# From one of the lb-testing-demo service's own pods:
stack@ubuntu-121:~/devstack$ kubectl exec -it lb-testing-demo-6f9bfdd47c-mtzf9 -- /bin/sh
~ $ curl 172.24.4.115
lb-testing-demo-6f9bfdd47c-mtzf9: HELLO! I AM ALIVE!!!

~ $ curl 172.24.4.115
lb-testing-demo-6f9bfdd47c-rfdqp: HELLO! I AM ALIVE!!!

Step 7: From a pod of the demo service, test load balancing between the two pods of the lb-testing-demo service

代码语言:javascript
代码运行次数:0
运行
复制
# curl both the EXTERNAL-IP (172.24.4.115) and the CLUSTER-IP (10.0.0.150)

stack@ubuntu-121:~/devstack$ kubectl exec -it demo-7c8d858ccf-2jzd7 -- /bin/sh
~ $ curl 172.24.4.115
lb-testing-demo-6f9bfdd47c-rfdqp: HELLO! I AM ALIVE!!!
~ $ curl 172.24.4.115
lb-testing-demo-6f9bfdd47c-rfdqp: HELLO! I AM ALIVE!!!
~ $ curl 172.24.4.115
lb-testing-demo-6f9bfdd47c-rfdqp: HELLO! I AM ALIVE!!!
~ $ curl 172.24.4.115
lb-testing-demo-6f9bfdd47c-mtzf9: HELLO! I AM ALIVE!!!
~ $ curl 172.24.4.115
lb-testing-demo-6f9bfdd47c-mtzf9: HELLO! I AM ALIVE!!!
~ $ curl 172.24.4.115
lb-testing-demo-6f9bfdd47c-mtzf9: HELLO! I AM ALIVE!!!


~ $ curl 10.0.0.150
lb-testing-demo-6f9bfdd47c-rfdqp: HELLO! I AM ALIVE!!!
~ $ curl 10.0.0.150
lb-testing-demo-6f9bfdd47c-mtzf9: HELLO! I AM ALIVE!!!
~ $ curl 10.0.0.150
lb-testing-demo-6f9bfdd47c-mtzf9: HELLO! I AM ALIVE!!!
~ $ curl 10.0.0.150
lb-testing-demo-6f9bfdd47c-rfdqp: HELLO! I AM ALIVE!!!

Official documentation:

https://docs.openstack.org/kuryr-kubernetes/latest/index.html

https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/basic.html

https://docs.openstack.org/devstack/latest/

https://docs.openstack.org/kuryr-kubernetes/latest/installation/services.html

https://docs.openstack.org/ovn-octavia-provider/latest/admin/driver.html

Git sources:

https://opendev.org/openstack-dev/devstack

https://opendev.org/openstack/kuryr-kubernetes

