With the spread of Kubernetes, container environments are enough for most workloads, but some special workloads that modify the kernel still need to run in virtual machines; managing containers and VMs uniformly under K8s is the coming trend.
As the diagram below shows, K8s hides the details of the underlying physical servers and network; on top of the K8s resource plane, Open vSwitch (via the Kube-OVN network plugin) builds a multi-tenant, VPC-isolated environment in which containers and VMs can talk to each other inside a VPC.
Project address: https://github.com/laoyang103/qdcloud — open source is hard work; a Star to cheer us on is welcome ^_^
The deployment uses two virtual machines, configured as follows:
The K8s cluster gets its shared internet access (including sites outside China) through the VPN server, which is configured as follows:
[root@vpn-node1 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e2:b4:47 brd ff:ff:ff:ff:ff:ff
inet 10.64.1.214/24 brd 10.64.1.255 scope global noprefixroute dynamic ens33
valid_lft 547sec preferred_lft 547sec
inet6 fe80::775f:86f1:125c:dffb/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e2:b4:51 brd ff:ff:ff:ff:ff:ff
inet 10.16.255.254/16 brd 10.16.255.255 scope global noprefixroute ens34
valid_lft forever preferred_lft forever
inet6 fe80::9e85:8258:89eb:1aa3/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Enable kernel forwarding (temporarily) and configure an iptables SNAT rule to share internet access, so the K8s cluster and the users inside it can get online
[root@vpn-node1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@vpn-node1 ~]# iptables -t nat -A POSTROUTING -s 10.16.0.0/16 -j MASQUERADE
[root@vpn-node1 ~]# iptables-save
.... (it is enough to see the following rule in the output)
-A POSTROUTING -s 10.16.0.0/16 -j MASQUERADE
Turn off the firewall, SELinux, and other security measures (temporarily)
[root@vpn-node1 ~]# systemctl stop firewalld
[root@vpn-node1 ~]# setenforce 0
[root@vpn-node1 ~]# systemctl disable firewalld
[root@vpn-node1 ~]# iptables -F
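Both the ip_forward flag and the SNAT rule above are lost on reboot, and setenforce 0 only lasts until the next boot. A minimal sketch to make them persistent, assuming a CentOS 7-style system where the iptables-services package is available:
# persist kernel forwarding across reboots
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-forward.conf
sysctl --system
# persist the SNAT rule (writes /etc/sysconfig/iptables)
yum -y install iptables-services
service iptables save
# keep SELinux out of enforcing mode after reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config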
Install K3s
Using the VPN server as a jump host, connect to the K8s cluster node (IP address 10.16.255.1/16, gateway 10.16.255.254) and test internet access
root@k8s-node1:~# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:6d:5a:b6 brd ff:ff:ff:ff:ff:ff
altname enp2s1
inet 10.16.255.1/16 brd 10.16.255.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe6d:5ab6/64 scope link
valid_lft forever preferred_lft forever
root@k8s-node1:~/qdcloud# ping qq.com
PING qq.com (113.108.81.189) 56(84) bytes of data.
64 bytes from 113.108.81.189 (113.108.81.189): icmp_seq=1 ttl=51 time=57.5 ms
--- qq.com ping statistics ---
Clone the code
root@k8s-node1:~# git clone https://github.com/laoyang103/qdcloud
root@k8s-node1:~# cd qdcloud/
root@k8s-node1:~/qdcloud# ls
doc Dockerfile init.sh install.sh lib mkimage.sh pom.xml README.md src
Install K3s 1.28.8. K3s is a lightweight distribution of K8s, well suited to single-node deployments and resource-constrained environments.
# Install k3s v1.28.8 from the Rancher China mirror
export INSTALL_K3S_VERSION=v1.28.8+k3s1
export INSTALL_K3S_MIRROR=cn
# Drop the default network plugin flannel and the default ingress controller traefik
# Allow up to 5000 pods per node and write the kubeconfig to ~/.kube/config
wget https://rancher-mirror.rancher.cn/k3s/k3s-install.sh
sh k3s-install.sh --flannel-backend=none --disable-network-policy --disable=traefik --write-kubeconfig-mode 644 --write-kubeconfig ~/.kube/config --kubelet-arg=max-pods=5000
Wait for the K3s pods to show up in Pending status; without a CNI, Pending is as far as they can get
root@k8s-node1:~/qdcloud# kubectl get pod -A -w
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system metrics-server-54fd9b65b-hhg2k 0/1 Pending 0 0s
kube-system coredns-6799fbcd5-dfw4m 0/1 Pending 0 0s
kube-system local-path-provisioner-6c86858495-s26sr 0/1 Pending 0 0s
kube-system metrics-server-54fd9b65b-hhg2k 0/1 Pending 0 0s
kube-system local-path-provisioner-6c86858495-s26sr 0/1 Pending 0 0s
kube-system coredns-6799fbcd5-dfw4m 0/1 Pending 0 0s
Deploy Kube-OVN
Install the Kube-OVN 1.12 network plugin (the CNI that provides VPC multi-tenant isolation)
# Download the kube-ovn install script (already modified: pod network 10.42.0.0/16, service network 10.43.0.0/16)
wget http://stu.jxit.net.cn:88/qdcloud/kube-ovn-1.12-k3s-1.28.8-install.sh
bash -x kube-ovn-1.12-k3s-1.28.8-install.sh
Wait until all pods are Running. This step is where things most often go wrong, usually when pulling images from registries outside China.
root@k8s-node1:~# kubectl get pod -A -w
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system ovn-central-78595449dc-vphsq 1/1 Running 0 107s
kube-system ovs-ovn-m7zrq 1/1 Running 0 107s
kube-system kube-ovn-controller-6f8d5b5dd4-rr75g 1/1 Running 0 55s
kube-system kube-ovn-cni-7vchp 1/1 Running 0 55s
kube-system kube-ovn-pinger-z2bb2 1/1 Running 0 38s
kube-system coredns-6799fbcd5-c6mzv 1/1 Running 0 38s
kube-system kube-ovn-monitor-76c9c9544b-nqsk9 1/1 Running 0 55s
kube-system metrics-server-54fd9b65b-f4296 1/1 Running 0 39s
kube-system local-path-provisioner-6c86858495-mthsl 1/1 Running 0 39s
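Once everything is Running, a quick sanity check is to list the subnets Kube-OVN created; assuming the install script's defaults noted above, ovn-default should carry the 10.42.0.0/16 pod network. The pinger label below is assumed from the standard Kube-OVN manifests:
# list the subnets created by the installer; ovn-default should be 10.42.0.0/16
kubectl get subnet
# check cluster-wide connectivity as probed by kube-ovn-pinger
kubectl -n kube-system logs -l app=kube-ovn-pinger --tail=20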
Deploy Multus
Deploy Intel's multi-NIC CNI plugin Multus (it lets one pod have several network interfaces) and install the binary to /opt/cni/bin
# Deploy the Multus CNI daemonset
kubectl apply -f src/main/webapp/WEB-INF/cgi/yml/public/multus-daemonset-thick.yml
# Download the Multus CNI binary and install it to /opt/cni/bin
wget http://stu.jxit.net.cn:88/k8s/kube-ovn/multus-cni_4.0.2_linux_amd64.tar.gz
tar -zxf multus-cni_4.0.2_linux_amd64.tar.gz
cp multus-cni_4.0.2_linux_amd64/multus* /opt/cni/bin/
# Enable Kube-OVN's VPC NAT gateway support
kubectl apply -f src/main/webapp/WEB-INF/cgi/yml/public/enable-vpc-nat-gw.yml
Bridge Multus to the physical network. First find the physical NIC name and IP range; here it is ens33 on 10.16.0.0/16
root@k8s-node1:~/qdcloud# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:6d:5a:b6 brd ff:ff:ff:ff:ff:ff
altname enp2s1
inet 10.16.255.1/16 brd 10.16.255.255 scope global ens33
valid_lft forever preferred_lft forever
Edit src/main/webapp/WEB-INF/cgi/yml/public/external-network.yml, set the NIC name to ens33, and create the bridged network ovn-vpc-external-network, attached to the physical NIC (this is the network where each VPC router's WAN port lives)
# Bridge Multus to the physical network; be sure to adjust the physical NIC name and IP range inside
root@k8s-node1:~/qdcloud# vim src/main/webapp/WEB-INF/cgi/yml/public/external-network.yml
# cidrBlock: 10.16.0.0/16 # CIDR of the external network
# gateway: 10.16.255.254 # address of the external network's physical gateway
# ...
# "type": "macvlan",
"master": "ens33",
# "mode": "bridge",
# ...
root@k8s-node1:~/qdcloud# kubectl apply -f src/main/webapp/WEB-INF/cgi/yml/public/external-network.yml
subnet.kubeovn.io/ovn-vpc-external-network created
networkattachmentdefinition.k8s.cni.cncf.io/ovn-vpc-external-network created
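For reference, a sketch of roughly what external-network.yml contains, pieced together from the fragments above and the upstream Kube-OVN VPC-NAT-gateway example; the field values are this cluster's, but treat the exact schema as the upstream docs', not this project's verbatim file:
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: ovn-vpc-external-network
spec:
  protocol: IPv4
  provider: ovn-vpc-external-network.kube-system
  cidrBlock: 10.16.0.0/16
  gateway: 10.16.255.254
  excludeIps:
  - 10.16.0.0..10.16.200.0
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-vpc-external-network
  namespace: kube-system
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "ens33",
      "mode": "bridge",
      "ipam": {
        "type": "kube-ovn",
        "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
        "provider": "ovn-vpc-external-network.kube-system"
      }
    }'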
Inspect the new bridged network. EXCLUDEIPS lists addresses that will never be handed out, which is also where V4AVAILABLE comes from: a /16 holds 65536 addresses, and 65536 minus the 51201 excluded in 10.16.0.0..10.16.200.0, minus the gateway 10.16.255.254 and the broadcast address, leaves 14333.
root@k8s-node1:~/qdcloud# kubectl get subnet ovn-vpc-external-network
NAME PROVIDER VPC PROTOCOL CIDR PRIVATE NAT DEFAULT GATEWAYTYPE V4USED V4AVAILABLE V6USED V6AVAILABLE EXCLUDEIPS U2OINTERCONNECTIONIP
ovn-vpc-external-network ovn-vpc-external-network.kube-system IPv4 10.16.0.0/16 false false false distributed 0 14333 0 0 ["10.16.0.0..10.16.200.0","10.16.255.254"]
At this point the K3s + Kube-OVN deployment is complete.
Install cloud-hypervisor
cloud-hypervisor is a lightweight virtualization manager that still relies on KVM underneath; starting a VM actually means invoking the cloud-hypervisor command inside a container to create it
root@k8s-node1:~# wget http://stu.jxit.net.cn:88/qdcloud/cloud-hypervisor -O /usr/bin/cloud-hypervisor
2024-05-08 07:36:53 (17.3 MB/s) - ‘/usr/bin/cloud-hypervisor’ saved [4585784/4585784]
root@k8s-node1:~# chmod +x /usr/bin/cloud-hypervisor
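A quick smoke test that the downloaded binary actually runs (the flag is upstream cloud-hypervisor's; the version line printed will vary by build):
# print the version to confirm the binary executes
cloud-hypervisor --version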
Confirm the system supports virtualization; if it does not, virtualization needs to be enabled first
root@k8s-node1:~# cat /proc/cpuinfo | grep vmx
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities
vmx flags : vnmi invvpid ept_x_only ept_ad tsc_offset vtpr mtf ept vpid unrestricted_guest ple
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities
vmx flags : vnmi invvpid ept_x_only ept_ad tsc_offset vtpr mtf ept vpid unrestricted_guest ple
root@k8s-node1:~#
Under VMware, enable nested virtualization as shown in the figure below; on a physical machine, enable virtualization support in the BIOS
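Besides the vmx CPU flag, it is worth confirming KVM is actually exposed inside the node; a quick check, assuming a standard Linux setup:
# kvm and kvm_intel (or kvm_amd) should be loaded
lsmod | grep kvm
# the device node cloud-hypervisor will open
ls -l /dev/kvm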
Deploy the VPN server
The VPN server gives users access to the management UI and a VPN connection into their own lab environment; it consists of MySQL, a JDK, Tomcat, and OpenVPN
Clone the project code on the VPN server
[root@vpn-node1 qdcloud]# git clone https://github.com/laoyang103/qdcloud
[root@vpn-node1 qdcloud]# cd qdcloud/
[root@vpn-node1 qdcloud]# ls
doc Dockerfile init.sh install.sh lib mkimage.sh pom.xml README.md src
Install the JDK, Maven, MariaDB, and other packages
[root@vpn-node1 qdcloud]# yum -y install vim wget java-1.8.0-openjdk maven mariadb-server mariadb
Initialize the database with password 123456 and import the SQL script
[root@vpn-node1 qdcloud]# systemctl restart mariadb
[root@vpn-node1 qdcloud]# mysqladmin -uroot password 123456
[root@vpn-node1 qdcloud]# mysql -uroot -p123456 -e "create database jxcms"
[root@vpn-node1 qdcloud]# mysql -uroot -p123456 jxcms < doc/jxcms.sql
In the database, edit the availability-zone table lab_region and set the MAC address of zone 1 to the MAC of the VPN server's internet-facing NIC. This can be hard to follow: it belongs to the multi-zone deployment scheme, which will not be explained in detail here; we only carry out the deployment.
[root@vpn-node1 qdcloud]# mysql -uroot -p123456 jxcms -e "select *from lab_region"
+----+------------+-----------------+---------+---------+-------------------+
| id | name | domain | webport | vpnport | mac |
+----+------------+-----------------+---------+---------+-------------------+
| 1 | 可用区1 | localhost | 888 | 1194 | 00:00:34:23:22:46 |
| 2 | 可用区2 | qd2.jxit.net.cn | 888 | 1194 | f6:39:cc:d8:05:a5 |
+----+------------+-----------------+---------+---------+-------------------+
[root@vpn-node1 qdcloud]# ip r | grep default | awk '{print $5}'
ens33
[root@vpn-node1 qdcloud]# ip l show dev $(ip r | grep default | awk '{print $5}')
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:e2:b4:47 brd ff:ff:ff:ff:ff:ff
[root@vpn-node1 qdcloud]# nic=$(ip r | grep default | awk '{print $5}')
[root@vpn-node1 qdcloud]# echo $nic
ens33
[root@vpn-node1 qdcloud]# mac=$(ip l show dev $nic | grep link | awk '{print $2}')
[root@vpn-node1 qdcloud]# echo $mac
00:0c:29:e2:b4:47
[root@vpn-node1 qdcloud]# mysql -uroot -p123456 jxcms -e "update lab_region set mac='00:0c:29:e2:b4:47' where id=1"
[root@vpn-node1 qdcloud]# mysql -uroot -p123456 jxcms -e "select *from lab_region"
+----+------------+-----------------+---------+---------+-------------------+
| id | name | domain | webport | vpnport | mac |
+----+------------+-----------------+---------+---------+-------------------+
| 1 | 可用区1 | localhost | 888 | 1194 | 00:0c:29:e2:b4:47 |
| 2 | 可用区2 | qd2.jxit.net.cn | 888 | 1194 | f6:39:cc:d8:05:a5 |
+----+------------+-----------------+---------+---------+-------------------+
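Since $mac was already captured above, the same update can also be done without pasting the address by hand:
# reuse the captured variable instead of a hardcoded MAC
mysql -uroot -p123456 jxcms -e "update lab_region set mac='$mac' where id=1"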
Deploy Tomcat
Deploy Tomcat 8 to /opt on the VPN server and change its listening port to 888
# Install Tomcat to run the management system
cd /opt/
curl http://stu.jxit.net.cn:88/k8s/tomcat8-cgi.tar.gz -o tomcat8-cgi.tar.gz
tar -zxf tomcat8-cgi.tar.gz
rm -f tomcat8-cgi.tar.gz
# Make Tomcat listen on port 888, so nginx can proxy to it later
sed -i "s/port=\"80\"/port=\"888\"/g" /opt/tomcat8/conf/server.xml
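A quick check that the substitution actually landed on the Connector line (line numbers will vary with the tarball's server.xml):
grep -n 'port="888"' /opt/tomcat8/conf/server.xml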
Build and run
Edit the Maven config file /usr/share/maven/conf/settings.xml and make Aliyun the default mirror; otherwise the build will be very slow
<mirrors>
<mirror>
<id>nexus-aliyun</id>
<mirrorOf>central</mirrorOf>
<name>Nexus aliyun</name>
<url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>
</mirrors>
Back in the project source directory, build the WAR from source and copy it into the Tomcat directory (the code already points the database at 127.0.0.1:3306, user root, password 123456), then start Tomcat
# Build the management-system source and copy the WAR to Tomcat
cd /root/qdcloud/
mvn install:install-file -Dfile=lib/tangyuan-0.9.0.jar -DgroupId=org.xson -DartifactId=tangyuan -Dversion=0.9.0 -Dpackaging=jar
mvn install:install-file -Dfile=lib/rpc-util-1.0.jar -DgroupId=cn.gatherlife -DartifactId=rpc-util -Dversion=1.0 -Dpackaging=jar
mvn install:install-file -Dfile=lib/patchca-0.5.0-SNAPSHOT.jar -DgroupId=net.pusuo -DartifactId=patchca -Dversion=0.5.0-SNAPSHOT -Dpackaging=jar
mvn install:install-file -Dfile=lib/common-object-0.0.1-SNAPSHOT.jar -DgroupId=org.xson -DartifactId=common-object -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar
mvn package -Dmaven.test.skip=true
cp target/qdcloud.war /opt/tomcat8/webapps/ROOT.war
# Start Tomcat
/opt/tomcat8/bin/startup.sh
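After startup.sh returns, a sketch of how to verify Tomcat is up and serving the app (deploying the WAR can take a few seconds, so retry if the first curl fails):
# confirm the listener and fetch the response status line
ss -lntp | grep 888
curl -sI http://127.0.0.1:888/ | head -n 1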
Install OpenVPN
The project ships a ready-made OpenVPN server configuration; copy the config files over
cd /root/qdcloud/doc/conf/
# Copy the keys, config files, etc. into place
cp -r openvpn/ /etc/
# Create the per-client IP/gateway config directory: one file per client account, named after the account, holding that client's IP and gateway
mkdir -p /etc/openvpn/ccd
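For reference, a hypothetical ccd file for one account: OpenVPN's client-config-dir files use the ifconfig-push directive to pin a client's VPN IP and peer address (the file name and addresses below are made up for illustration):
# /etc/openvpn/ccd/jx00000001  (hypothetical example)
ifconfig-push 10.32.0.6 10.32.0.5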
Download a prebuilt OpenVPN binary to /usr/bin/openvpn
[root@vpn-node1 qdcloud]# wget http://stu.jxit.net.cn:88/qdcloud/openvpn -O /usr/bin/openvpn
Saving to: ‘/usr/bin/openvpn’
2024-05-08 03:14:47 (7.58 MB/s) - ‘/usr/bin/openvpn’ saved [4640784/4640784]
[root@vpn-node1 qdcloud]# chmod +x /usr/bin/openvpn
Start the OpenVPN server in the background and check the VPN virtual NIC
[root@vpn-node1 conf]# nohup openvpn --config /etc/openvpn/openvpn.conf --client-config-dir /etc/openvpn/ccd --crl-verify /etc/openvpn/crl.pem &
[1] 11349
[root@vpn-node1 conf]# nohup: ignoring input and appending output to ‘nohup.out’
[root@vpn-node1 conf]# ip a
5: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500
link/none
inet 10.32.0.1 peer 10.32.0.2/32 scope global tun0
valid_lft forever preferred_lft forever
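nohup will not survive a reboot; a minimal systemd unit sketch for the same command, assuming the paths used above (unit name is my own choice):
cat > /etc/systemd/system/openvpn-qdcloud.service <<'EOF'
[Unit]
Description=OpenVPN server for qdcloud
After=network-online.target

[Service]
ExecStart=/usr/bin/openvpn --config /etc/openvpn/openvpn.conf --client-config-dir /etc/openvpn/ccd --crl-verify /etc/openvpn/crl.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now openvpn-qdcloud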
Configure kubectl
Copy the K3s kubeconfig and the kubectl binary to the VPN server: the kubeconfig carries the API server address and credentials, and kubectl is the client program
[root@vpn-node1 conf]# mkdir /root/.kube
[root@vpn-node1 conf]# scp 10.16.255.1:/root/.kube/config /root/.kube/
root@10.16.255.1's password:
config 100% 2953 1.5MB/s 00:00
[root@vpn-node1 conf]# scp 10.16.255.1:/usr/local/bin/kubectl /usr/bin/
root@10.16.255.1's password:
kubectl 100% 62MB 38.5MB/s 00:01
[root@vpn-node1 conf]#
Change the API server address in the kubeconfig to 10.16.255.1:6443, then try fetching pod status with kubectl from the VPN server
[root@vpn-node1 conf]# vim /root/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CR.......................................
server: https://10.16.255.1:6443
[root@vpn-node1 conf]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system ovn-central-78595449dc-vphsq 1/1 Running 0 3h16m
kube-system ovs-ovn-m7zrq 1/1 Running 0 3h16m
.......................
Test drive
Browse to port 888 on the VPN server to reach the management UI; log in with account jx00000001, password 123456, choosing the Teacher role.
In the management UI, open Environment Management → VM list, where VMs and containers can be started and stopped. Only jx-ops-81 is a virtual machine; the rest are containers.
Start any container, say jx-nginx-11; this also brings up the router (the first entry, which cannot be operated directly). The UI may not show it as started right away; a refresh may be needed.
To check on the backend whether things are actually up, list all pods whose names start with jx: vpc-nat-gw-gateway-jx00000001-0 is the dual-IP router, and jx00000001-nginx-11 is the single-IP container behind it
[root@vpn-node1 jx00000001]# kubectl get pod -A -w | grep jx
kube-system vpc-nat-gw-gateway-jx00000001-0 1/1 Running 0 53s
ns-jx00000001 jx00000001-nginx-11 1/1 Running 0 41s
Connect to the VPN
First download and install the OpenVPN client from http://dl.jxit.net.cn/soft/openvpn-win10.exe (Windows 10 or later required). On the management page, click to download your VPN key and save it locally (named jx000xxx.ovpn). Edit the key file and change the server address to the VPN server's WAN IP.
Open the OpenVPN client and, from its tray icon in the lower right, right-click and import the VPN key
Enter your management-platform account and password (jx00000001 / 123456 above); seeing an assigned IP of 10.32.xx.xx means the connection succeeded
With the VPN up, you can reach VMs and routers on the 10.10.10.0/24 subnet directly from Xshell. Connect to 10.10.10.11 first: its memory matches the host's, so this one is a container.
[c:\~]$ ssh root@10.10.10.11
Connecting to 10.10.10.11:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.
System is booting up. See pam_nologin(8)
[root@jx00000001-nginx-11 ~]# free -g
total used free shared buff/cache available
Mem: 3 1 0 0 2 2
Swap: 3 0 3
[root@jx00000001-nginx-11 ~]#
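Comparing memory sizes is an indirect test; where systemd is installed, systemd-detect-virt answers directly. The exact names depend on the runtime, but a container typically reports something like container-other, while running the same command in the 81 guest below should report kvm:
# identify the virtualization type from inside the guest
systemd-detect-virt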
Now start 81 from the management UI. This one is much slower to create; watch the creation progress on the backend until it shows Running
[root@vpn-node1 jx00000001]# kubectl get pod -A -w | grep jx
kube-system vpc-nat-gw-gateway-jx00000001-0 1/1 Running 0 53s
ns-jx00000001 jx00000001-nginx-11 1/1 Running 0 41s
ns-jx00000001 jx00000001-ops-81 0/1 Pending 0 0s
ns-jx00000001 jx00000001-ops-81 0/1 Pending 0 0s
ns-jx00000001 jx00000001-ops-81 0/1 Pending 0 0s
ns-jx00000001 jx00000001-ops-81 0/1 Init:0/3 0 0s
ns-jx00000001 jx00000001-ops-81 0/1 Init:0/3 0 0s
ns-jx00000001 jx00000001-ops-81 0/1 Init:0/3 0 2s
ns-jx00000001 jx00000001-ops-81 0/1 Init:1/3 0 5s
ns-jx00000001 jx00000001-ops-81 0/1 Init:2/3 0 6s
ns-jx00000001 jx00000001-ops-81 0/1 PodInitializing 0 7s
ns-jx00000001 jx00000001-ops-81 1/1 Running 0 8s
Connect to 81 with Xshell and check the memory and kernel version: the memory is larger than the host's, so this one is a virtual machine.
root@ubuntu-container-rootfs:~# uname -r
5.15.12+
root@ubuntu-container-rootfs:~# free -g
total used free shared buff/cache available
Mem: 7 0 7 0 0 7
Swap: 0 0 0
root@ubuntu-container-rootfs:~#
Along with the VM, a PVC is created to hold its virtual disk (raw format), so the VM keeps its state. The containers, by contrast, currently use ephemeral storage only: their data is lost once they are restarted or deleted.
[root@vpn-node1 qdcloud]# kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ns-jx00000001 jx00000001-ops-81-pvc Bound pvc-43b885b4-77d8-4c68-b9ca-56d8221e4d4e 40Gi RWO local-path 36m
[root@vpn-node1 qdcloud]#
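The raw disk lives wherever local-path provisioned the volume; a sketch of how to locate it on the node, assuming the provisioner's usual hostPath-backed PVs (the on-disk layout is the provisioner's default and may differ per install):
# resolve the PVC to its PV, then print the host path backing it
pv=$(kubectl -n ns-jx00000001 get pvc jx00000001-ops-81-pvc -o jsonpath='{.spec.volumeName}')
kubectl get pv "$pv" -o jsonpath='{.spec.hostPath.path}'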
By isolating the networks of virtual machines and containers, Kube-OVN provides a secure and efficient network architecture for cloud-native environments. It not only streamlines resource management but also strengthens data security, a key step in modernizing cloud infrastructure. As the technology matures, Kube-OVN will keep bringing innovation and value across industries.