Deploying the cinder storage nodes
Installing cinder
The storage nodes are the Ceph nodes; the cinder volume service is usually installed on the nodes that also run the Ceph mon daemons.
# Install the cinder service on all storage nodes; compute01 is used as the example
[root@compute01 ~]# yum install -y openstack-cinder targetcli python-keystone
# Perform the following on all storage nodes; compute01 is used as the example;
# Note: adjust the "my_ip" parameter per node;
# Note: cinder.conf must be owned by root:cinder
[root@compute01 ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@compute01 ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf
[DEFAULT]
state_path = /var/lib/cinder
my_ip = <storage node IP>
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = ceph
transport_url=rabbit://openstack:123456@controller01:5672,controller02:5672
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:123456@controller01/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller01:11211,controller02:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = $state_path/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
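Note that enabled_backends = ceph points at a [ceph] backend section that is not shown in the file above; it is normally filled in once the Ceph pools, keyrings and libvirt secret UUID from the following steps exist. As a rough sketch only (the rbd_user and rbd_secret_uuid values simply reuse the user and UUID created later in this guide; adjust them to your environment), such a section typically looks like:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337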
Enable start on boot
## On all storage nodes
# Enable at boot
[root@compute01 ~]# systemctl enable openstack-cinder-volume.service target.service
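# The volume service itself is usually only (re)started once the Ceph backend settings and the keyrings from the steps below are in place, for example:
[root@compute01 ~]# systemctl restart openstack-cinder-volume.service target.service
[root@compute01 ~]# systemctl status openstack-cinder-volume.service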
Preparing the Ceph integration
Creating pools
# By default Ceph stores data in pools. A pool is a logical grouping of placement groups (PGs); the objects in a PG are mapped to different OSDs, so a pool is spread across the whole cluster.
# Different kinds of data could be stored in a single pool, but that makes it harder to separate and manage data per client, so a dedicated pool is usually created for each client service.
# Create three pools: volumes, images, vms
# With 90 OSDs and 2 replicas, the PG counts below follow the formula from the official docs (see the calculation after these commands)
[root@computer01 ceph]# ceph osd pool create volumes 2048
pool 'volumes' created
[root@computer01 ceph]# ceph osd pool create vms 1024
pool 'vms' created
[root@computer01 ceph]# ceph osd pool create images 256
pool 'images' created
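# A rough sketch of the calculation behind these numbers, assuming the commonly used target of about 100 PGs per OSD:
# (90 OSDs x 100 PGs per OSD) / 2 replicas = 4500 PGs for the whole cluster,
# split across the pools by expected data share and rounded to powers of two:
# 2048 (volumes) + 1024 (vms) + 256 (images) = 3328 PGs
[root@computer01 ceph]# echo $(( 90 * 100 / 2 ))
4500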
## Newly created pools must be initialized before use. Initialize them with the rbd tool:
rbd pool init volumes
rbd pool init images
rbd pool init vms
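# Optionally verify the new pools from the Ceph admin node:
[root@computer01 ceph]# ceph osd pool ls
[root@computer01 ceph]# ceph df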
Installing the Ceph clients
# Nodes running glance-api need python-rbd;
# Here glance-api runs on the 3 controller nodes; controller01 is used as the example
[root@controller01 ~]# yum install python-rbd -y
# Nodes running cinder-volume and nova-compute need ceph-common; nodes running cinder-backup need it as well;
[root@compute01 ~]# yum install ceph-common -y
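Besides the packages, the client nodes also need a copy of the cluster's /etc/ceph/ceph.conf; a common way is to push it from a Ceph admin node (the hostnames below are just this guide's examples):
[root@computer01 ceph]# scp /etc/ceph/ceph.conf root@controller01:/etc/ceph/
[root@computer01 ceph]# scp /etc/ceph/ceph.conf root@compute01:/etc/ceph/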
Authorization setup
Creating users
# Ceph enables cephx authentication by default (see ceph.conf), so new users must be created and authorized for the nova/cinder and glance clients;
# On a Ceph admin node, create the client.cinder and client.glance users for the nodes running cinder-volume and glance-api, respectively, and set their capabilities;
# Capabilities are granted per pool; the pool names match the pools created above
[root@computer01 ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[root@computer01 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
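# To double-check the capabilities, the new users can be inspected on the Ceph admin node:
[root@computer01 ~]# ceph auth get client.cinder
[root@computer01 ~]# ceph auth get client.glance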
Distributing the client.glance keyring
# Push the keyring generated for the client.glance user to the nodes running glance-api
[root@computer01 ceph]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[root@computer01 ceph]# ceph auth get-or-create client.glance | ssh root@controller01 tee /etc/ceph/ceph.client.glance.keyring
[root@computer01 ceph]# ceph auth get-or-create client.glance | ssh root@controller02 tee /etc/ceph/ceph.client.glance.keyring
# Also set the owner and group of the keyring file
[root@controller01 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@controller02 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
Distributing the client.cinder keyring
# Push the keyring generated for the client.cinder user to every node running cinder-volume (repeat for each such node)
[root@computer01 ceph]# ceph auth get-or-create client.cinder | ssh root@computer03 tee /etc/ceph/ceph.client.cinder.keyring
# Also set the owner and group of the keyring file
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
Distributing the client.cinder keyring (nova-compute)
ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
The libvirt secret
## Nodes running nova-compute need to store the client.cinder key in libvirt; when a Ceph-backed cinder volume is attached to an instance, libvirt uses this key to access the Ceph cluster;
[root@computer01 ceph]# ceph auth get-key client.cinder | ssh root@computer13 tee /etc/ceph/client.cinder.key
## Add the key to libvirt
# First generate a UUID; all compute nodes can share this UUID (the other nodes reuse it and do not need to repeat this step);
# The UUID is also used later when configuring nova.conf, so keep it consistent
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
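# The stored secret can be verified on the compute node:
sudo virsh secret-list
For reference, this UUID and the cinder user are what nova.conf points at later; a sketch of the corresponding [libvirt] options (based on the values created above, adjust to your environment):
[libvirt]
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337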