Find a server (your own PC is fine) and create two virtual machines from the Ubuntu 16.04.7 image. Name them openstack-controller and openstack-compute.
ubuntu-16.04.7-desktop-amd64.iso can be downloaded from the official site at https://releases.ubuntu.com/16.04.7/. Note that the Pike release only works on Ubuntu 16.04; other Ubuntu releases have compatibility problems.
VM sizing can be slightly generous; the controller node needs more memory and disk:
Controller node: CPU 1x4 cores, 8 GB RAM, 40 GB disk
Compute node: CPU 1x4 cores, 4 GB RAM, 30 GB disk
Each VM needs a management NIC connected to the Internet; it only has to reach the official apt mirror (cn.archive.ubuntu.com), so no proxy is required.
Use vSphere to create three networks on the server: vxlan-net, vlan-net, and flat-net. Once created, you should see the VM port groups attached to a physical adapter.
Open the VM settings and choose Add -> Network Adapter; add four network adapters in total, corresponding to the management, VXLAN, VLAN, and flat networks. These NICs are used later when installing the Neutron components.
The network names must be identical on both VMs: if the controller's is called vxlan-net, the compute node's must also be called vxlan-net, so that the two land on the same LAN.
Now set the IPs inside the VMs. Only the management network needs a gateway; the other NICs rely on directly connected routes and need none.
The IP choices are fairly arbitrary; just avoid conflicts within each subnet. For example:
| Network | controller | compute |
|---|---|---|
| management | xx.xx.xx.61 (Internet-facing IP) | xx.xx.xx.62 (Internet-facing IP) |
| vxlan | 10.0.1.61/24 | 10.0.1.62/24 |
| vlan | 10.0.0.61/24 | 10.0.0.62/24 |
| flat | 172.31.0.61/24 | 172.31.0.62/24 |
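On Ubuntu 16.04 the NICs are configured with ifupdown. A minimal sketch of /etc/network/interfaces for the controller, using `<mgmt-nic>`-style placeholders for the adapter names (check yours with `ip link`); note that only the management NIC gets a gateway, per the rule above. The fourth (flat) NIC is deliberately left out here, since it is rebuilt onto an OVS bridge in the Neutron section:

```text
# management network: the only NIC with a gateway (fill in your subnet)
auto <mgmt-nic>
iface <mgmt-nic> inet static
    address xx.xx.xx.61
    netmask 255.255.255.0
    gateway xx.xx.xx.1

# vxlan network: directly connected route, no gateway
auto <vxlan-nic>
iface <vxlan-nic> inet static
    address 10.0.1.61
    netmask 255.255.255.0

# vlan network: directly connected route, no gateway
auto <vlan-nic>
iface <vlan-nic> inet static
    address 10.0.0.61
    netmask 255.255.255.0
```

On the compute node the same stanzas apply with the .62 addresses.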
1. Install SSH: sudo apt install ssh
2. Install vim: sudo apt install vim
3. Install net-tools: sudo apt install net-tools
Install any other tools you like. The apt sources don't need changing; the official Ubuntu China mirror cn.archive.ubuntu.com is fine, though you can switch to another mirror for speed if you wish.
4. Set host aliases: sudo vim /etc/hosts, adding two lines:
10.0.0.61 controller
10.0.0.62 compute
Test it: run ping compute on the controller node and ping controller on the compute node.
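If you prefer scripting this step, the two entries can be appended idempotently. A minimal sketch (the add_host_entry helper name is my own, not from the guide):

```shell
#!/bin/sh
# Idempotently append "IP NAME" entries to a hosts-style file:
# the entry is only added if NAME is not already present.
add_host_entry() {
    hosts_file="$1" ip="$2" name="$3"
    grep -qw "$name" "$hosts_file" || printf '%s %s\n' "$ip" "$name" >> "$hosts_file"
}

# Demo against a temp file; point it at /etc/hosts (as root) for real use.
f=$(mktemp)
add_host_entry "$f" 10.0.0.61 controller
add_host_entry "$f" 10.0.0.62 compute
add_host_entry "$f" 10.0.0.61 controller   # duplicate call is a no-op
cat "$f"
```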
5. Allow root login over SSH: sudo vim /etc/ssh/sshd_config, changing the # Authentication section to:
# Authentication
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
6. If the VM can reach the Internet, time sync on the Ubuntu desktop edition works out of the box; check the VM's time with the date command.
sudo add-apt-repository cloud-archive:pike
After adding it, the file /etc/apt/sources.list.d/cloudarchive-pike.list is generated; just check that it exists.
sudo apt install python-openstackclient
$ openstack --version
openstack 3.12.0
MariaDB is a fork of MySQL; after Oracle acquired MySQL there were licensing concerns, so everyone uses MariaDB.
sudo apt install mariadb-server python-pymysql
After installation, create the file /etc/mysql/mariadb.conf.d/99-openstack.cnf and add the following:
[mysqld]
bind-address = 0.0.0.0
# bind-address = 10.0.0.61
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Binding to the 10.x address also works, as long as the compute node can reach it; 0.0.0.0 means bind to all addresses.
Restart the MySQL service (yes, the service is still called mysql):
sudo service mysql restart
mysql -u root
Try logging in. If you get the error Access denied for user 'root'@'localhost', switch to root with sudo su and try again.
If that still fails, the problem is not the password but the authentication plugin type.
Fix: add the line skip-grant-tables to /etc/mysql/mariadb.conf.d/99-openstack.cnf:
[mysqld]
bind-address = 0.0.0.0
# bind-address = 10.0.0.61
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
skip-grant-tables
$ sudo service mysql restart
$ mysql -u root
MariaDB [(none)]> use mysql;
MariaDB [mysql]> select user, plugin from user;
### Check the plugin type: if the plugin for root is auth_socket or unix_socket, that is the problem.
MariaDB [mysql]> update user set authentication_string=password("your_root_password"), plugin='mysql_native_password' where user='root';
Remember to substitute "your_root_password" with your own password.
Finally, remove skip-grant-tables from 99-openstack.cnf and restart MySQL:
$ sudo service mysql restart
$ mysql -u root -p
sudo apt install rabbitmq-server
Add an openstack user; the password here is admin, change it as needed:
rabbitmqctl add_user openstack admin
Grant the openstack user configure, write, and read access:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
sudo apt install memcached python-memcache
Edit the /etc/memcached.conf file: it contains a -l option; change it to the controller's IP, 10.0.0.61. This IP comes up again later.
$ sudo vim /etc/memcached.conf
-l 10.0.0.61
Log in to MySQL: if you set a password earlier, use mysql -u root -p; if not, switch to root with sudo su and simply run mysql.
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Replace KEYSTONE_DBPASS with a suitable password, e.g. admin, which is what this guide uses from here on.
$ sudo apt install keystone apache2 libapache2-mod-wsgi
$ sudo vim /etc/keystone/keystone.conf
[database]
# ...
connection = mysql+pymysql://keystone:admin@controller/keystone
[token]
# ...
provider = fernet
The password here has already been changed to admin; keep the config file in sync with whatever password you chose.
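Every service below stores its database credentials as a mysql+pymysql:// URI with the same shape. A throwaway sketch of how the URI is assembled (the build_db_uri helper name is my own, not part of OpenStack):

```shell
#!/bin/sh
# Assemble the SQLAlchemy connection URI used in each service's [database] section.
build_db_uri() {
    user="$1" pass="$2" host="$3" db="$4"
    printf 'mysql+pymysql://%s:%s@%s/%s\n' "$user" "$pass" "$host" "$db"
}

build_db_uri keystone admin controller keystone
# -> mysql+pymysql://keystone:admin@controller/keystone
```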
$ sudo su
$ /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet key repositories:
$ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
$ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service (again, the password has been set to admin):
keystone-manage bootstrap --bootstrap-password admin --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
Edit /etc/apache2/apache2.conf and set the server name:
ServerName controller
Then restart Apache: sudo service apache2 restart
$ openstack project create --domain default --description "Service Project" service
$ openstack project create --domain default --description "Demo Project" demo
$ openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
$ openstack role create user
$ openstack role add --project demo --user demo user
Verify: enter the admin password for the admin user and the demo password for the demo user.
$ unset OS_AUTH_URL OS_PASSWORD
$ openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
Password:
Create a script called admin-openrc:
sudo vim admin-openrc
### Add the environment variables below; change OS_PASSWORD to your own (here it is already set to admin)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Test it:
$ . admin-openrc
$ openstack token issue
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:44:35.659723Z |
| id | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
| | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
| | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Replace GLANCE_DBPASS with a suitable password; again assume admin here.
$ . admin-openrc
$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
$ openstack service create --name glance --description "OpenStack Image" image
### Create the API endpoints for the Glance service:
$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292
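Each OpenStack service registers the same URL three times, once per interface (public, internal, admin). Those repetitive commands can be generated with a small loop; this sketch only prints them (the make_endpoint_cmds helper is my own invention, not part of the OpenStack CLI):

```shell
#!/bin/sh
# Print the three "openstack endpoint create" commands for one service.
# Run the printed commands by hand, or pipe the output to sh.
make_endpoint_cmds() {
    svc="$1" url="$2"
    for iface in public internal admin; do
        printf 'openstack endpoint create --region RegionOne %s %s %s\n' "$svc" "$iface" "$url"
    done
}

make_endpoint_cmds image http://controller:9292
```

The same helper covers the nova, placement, and neutron endpoint registrations later on.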
Install Glance, then edit the configuration file /etc/glance/glance-api.conf:
$ sudo apt install glance
$ sudo vim /etc/glance/glance-api.conf
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_PASS@controller/glance
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit the configuration file /etc/glance/glance-registry.conf:
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_PASS@controller/glance
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
Populate the Glance database:
$ sudo su
$ /bin/sh -c "glance-manage db_sync" glance
Finally, restart the services:
# service glance-registry restart
# service glance-api restart
Test it:
$ . admin-openrc
$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
$ openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+
What if wget hangs? Try this URL: https://github.com/Areturn/openstack-install/blob/master/cirros-0.3.5-x86_64-disk.img, or find the image elsewhere online; it is small and widely mirrored.
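Wherever the file came from, you can sanity-check it before uploading: qcow2 images start with the magic bytes QFI\xfb. A small sketch (the is_qcow2 helper is mine, not a step from the guide):

```shell
#!/bin/sh
# Check a file for the qcow2 magic header (first three bytes "QFI").
is_qcow2() {
    [ -f "$1" ] && [ "$(head -c 3 "$1")" = "QFI" ]
}

if is_qcow2 cirros-0.3.5-x86_64-disk.img; then
    echo "looks like a qcow2 image"
fi
```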
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Replace NOVA_DBPASS with a suitable password; this guide again uses admin as the example.
$ . admin-openrc
$ openstack user create --domain default --password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova --description "OpenStack Compute" compute
$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
# Create a placement user; assume its password is also admin
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
$ openstack role add --project service --user placement admin
$ openstack service create --name placement --description "Placement API" placement
$ openstack endpoint create --region RegionOne placement public http://controller:8778
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
# apt install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api
Configure /etc/nova/nova.conf. There is quite a lot of configuration here; be careful to substitute the passwords.
[DEFAULT]
# comment out the log_dir line
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.61
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
If this feels tedious, just set every password to the same value.
$ sudo su
$ /bin/sh -c "nova-manage api_db sync" nova
$ /bin/sh -c "nova-manage cell_v2 map_cell0" nova
$ /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
$ /bin/sh -c "nova-manage db sync" nova
$ sudo nova-manage cell_v2 list_cells
$ service nova-api restart
$ service nova-consoleauth restart
$ service nova-scheduler restart
$ service nova-conductor restart
$ service nova-novncproxy restart
1. Install the nova-compute package
# apt install nova-compute
配置 /etc/nova/nova.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.62
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
$ egrep -c '(vmx|svm)' /proc/cpuinfo
4
If this prints >= 1, no change is needed. If it prints 0, edit /etc/nova/nova-compute.conf and change kvm to qemu:
[libvirt]
# virt_type = kvm
virt_type = qemu
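The decision above can be wrapped in a small helper that picks the right virt_type from the CPU flags (choose_virt_type is my name for it; it reads any cpuinfo-formatted file, defaulting to /proc/cpuinfo):

```shell
#!/bin/sh
# Decide nova-compute's virt_type from the CPU flags:
# kvm when hardware virtualization (vmx/svm) is present, qemu otherwise.
choose_virt_type() {
    cpuinfo="${1:-/proc/cpuinfo}"
    if grep -Eq '(vmx|svm)' "$cpuinfo"; then
        echo kvm
    else
        echo qemu
    fi
}

choose_virt_type   # prints kvm on a host with VT-x/AMD-V, qemu otherwise
```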
$ sudo service nova-compute restart
. admin-openrc
$ openstack compute service list --service nova-compute
+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host | Binary | Zone | State | Status | Updated At |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+
$ sudo su
$ /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
6. Enable automatic host discovery (optional)
sudo vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
$ mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network
$ openstack endpoint create --region RegionOne network public http://controller:9696
$ openstack endpoint create --region RegionOne network internal http://controller:9696
$ openstack endpoint create --region RegionOne network admin http://controller:9696
sudo apt install neutron-server neutron-plugin-ml2 neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
sudo vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
auth_strategy = keystone
transport_url = rabbit://openstack:RABBIT_PASS@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
sudo vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
flat_networks = provider
vni_ranges = 1:1000
enable_ipset = true
sudo vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = openvswitch
sudo vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
sudo vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
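METADATA_SECRET is an arbitrary shared string; the same value must also go into nova.conf's [neutron] section below. One way to generate it (using openssl as the generator is my suggestion, not from the guide; any hard-to-guess string works):

```shell
# Generate a random 32-character hex string to use as METADATA_SECRET.
openssl rand -hex 16
```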
sudo vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
$ sudo su
$ /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
$ service nova-api restart
$ service neutron-server restart
$ service neutron-openvswitch-agent restart
$ service neutron-dhcp-agent restart
$ service neutron-metadata-agent restart
$ service neutron-l3-agent restart
apt install neutron-linuxbridge-agent
sudo vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3. Configure Nova
sudo vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
4. Restart the service
$ sudo service nova-compute restart
$ sudo apt install openvswitch-switch
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex <original NIC>
As mentioned earlier, each VM has four NICs; pick the fourth one here, the one attached to flat-net.
Delete that NIC's original configuration and configure br-ex in its place. Replace <original NIC> with your NIC's name, typically something like ensXX.
auto br-ex
iface br-ex inet static
address 172.31.0.61
netmask 255.255.255.0
## External network interface
auto <original NIC>
iface <original NIC> inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
Bring the br-ex interface back up:
$ sudo ifdown br-ex
$ sudo ifup br-ex
sudo vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
tunnel_types = vxlan
l2_population = true
[ovs]
local_ip = 10.0.1.61
bridge_mappings = provider:br-ex
[securitygroup]
firewall_driver = openvswitch
Note: set local_ip per node; in this guide the controller uses 10.0.1.61 and the compute node 10.0.1.62.
$ sudo service neutron-l3-agent restart
$ sudo service neutron-openvswitch-agent restart
sudo service neutron-server restart
$ sudo ovs-vsctl show
ee4c4da7-6be5-4c98-9312-1de608132b4d
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
Port "ens35"
Interface "ens35"
ovs_version: "2.8.4"
If every is_connected shows true, the plugin installation succeeded.
1. Install openstack-dashboard and configure the Apache2 HTTP server
$ sudo apt install openstack-dashboard
If any of the settings below conflict with lines already in the file, delete the originals.
sudo vim /etc/openstack-dashboard/local_settings.py
OPENSTACK_HOST = "controller"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
sudo vim /etc/apache2/conf-available/openstack-dashboard.conf
## add this line
WSGIApplicationGroup %{GLOBAL}
$ sudo service apache2 reload
That completes the dashboard installation. Open it by visiting the first (external) NIC's IP: http://xxx.xxx.xxx.61/horizon. Log in with domain Default, user admin, password admin.
$ ip netns
qdhcp-b169637c-45ff-4753-ad01-434abce4aac0 (id: 2)
qrouter-04567b6a-df3b-428f-a341-027ee12deb0e (id: 1)
Enter the qdhcp-xxx namespace; there is a port named tap14b0644d-05 with IP 192.168.1.2:
sudo ip netns exec qdhcp-b169637c-45ff-4753-ad01-434abce4aac0 bash
# ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
tap14b0644d-05 Link encap:Ethernet HWaddr fa:16:3e:fe:7a:be
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fefe:7abe/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:483 errors:0 dropped:0 overruns:0 frame:0
TX packets:468 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:39560 (39.5 KB) TX bytes:45319 (45.3 KB)
Enter the qrouter-xxx namespace; there is a port named qr-a694c798-e2 with IP 192.168.1.1:
$ sudo ip netns exec qrouter-04567b6a-df3b-428f-a341-027ee12deb0e bash
# ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:235 errors:0 dropped:0 overruns:0 frame:0
TX packets:235 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:26152 (26.1 KB) TX bytes:26152 (26.1 KB)
qr-a694c798-e2 Link encap:Ethernet HWaddr fa:16:3e:0c:88:20
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe0c:8820/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:632 errors:0 dropped:0 overruns:0 frame:0
TX packets:607 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:55404 (55.4 KB) TX bytes:62657 (62.6 KB)
Looking at the OVS bridges, both ports sit on br-int (the integration bridge) and carry the same VLAN tag:
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "tap14b0644d-05"
tag: 2
Interface "tap14b0644d-05"
type: internal
Port "qr-a694c798-e2"
tag: 2
Interface "qr-a694c798-e2"
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
sudo ip netns exec qrouter-04567b6a-df3b-428f-a341-027ee12deb0e bash
# ping 192.168.1.17
PING 192.168.1.17 (192.168.1.17) 56(84) bytes of data.
64 bytes from 192.168.1.17: icmp_seq=1 ttl=64 time=7.98 ms
64 bytes from 192.168.1.17: icmp_seq=2 ttl=64 time=0.783 ms
64 bytes from 192.168.1.17: icmp_seq=3 ttl=64 time=0.516 ms
64 bytes from 192.168.1.17: icmp_seq=4 ttl=64 time=0.399 ms
64 bytes from 192.168.1.17: icmp_seq=5 ttl=64 time=0.414 ms
^C
--- 192.168.1.17 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4061ms
rtt min/avg/max/mdev = 0.399/2.019/7.985/2.986 ms
Now SSH into the instance; the default cirros password is cubswin:)
# ssh cirros@192.168.1.17
cirros@192.168.1.17's password:
$ ifconfig
eth0 Link encap:Ethernet HWaddr FA:16:3E:02:01:FD
inet addr:192.168.1.17 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe02:1fd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:180 errors:0 dropped:0 overruns:0 frame:0
TX packets:207 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:25044 (24.4 KiB) TX bytes:24391 (23.8 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.169.254 192.168.1.1 255.255.255.255 UGH 0 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
$
$ ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: seq=0 ttl=64 time=1.200 ms
64 bytes from 192.168.1.1: seq=1 ttl=64 time=0.782 ms
64 bytes from 192.168.1.1: seq=2 ttl=64 time=0.466 ms
^C
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.466/0.816/1.200 ms
watch -n 1 -d 'sudo ovs-ofctl dump-flows -O openflow13 br-int'
You can clearly see the OVS flow-table entries being hit, and the port IDs match those of the VM port and the DHCP port.
Original-work statement: this article is published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.
For infringement concerns, contact cloudcommunity@tencent.com for removal.