1. Ceph storage servers
Node IP Services
node1 192.168.80.21 mon mgr osd*12
node2 192.168.80.22 mon mgr osd*12
node3 192.168.80.23 mon mgr osd*12
2. Preparation
hostnamectl set-hostname node1   # repeat on the other nodes with node2/node3
Write the hostname/IP entries for all three nodes into /etc/hosts.
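The entries follow from the node table in section 1; a minimal sketch, run on every node:
cat >> /etc/hosts <<EOF
192.168.80.21 node1
192.168.80.22 node2
192.168.80.23 node3
EOF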
On node1, set up passwordless SSH so that all three nodes can reach each other:
ssh-keygen -t rsa
ssh-copy-id 192.168.80.22
ssh-copy-id 192.168.80.23
apt-get install ntp
In /etc/ntp.conf, set this host up as the NTP server and allow access from the 192.168.80.0/24 network, for example:
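A minimal sketch of the relevant lines on node1 (standard ntpd directives; the local-clock fallback below is an assumption, adjust to your environment):
restrict 192.168.80.0 mask 255.255.255.0 nomodify notrap
server 127.127.1.0            # local clock as a fallback time source
fudge 127.127.1.0 stratum 10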
On the other nodes, comment out the default pool lines in /etc/ntp.conf and add server 192.168.80.21, for example:
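On node2/node3 the change amounts to something like this (the exact shipped pool lines depend on the distribution):
# pool 0.ubuntu.pool.ntp.org iburst    <- comment out the default pool/server lines
server 192.168.80.21 iburst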
3. Ceph cluster deployment
Perform the following on node1.
1. uuidgen   # generate the cluster fsid (a UUID)
2. Configure /etc/ceph/ceph.conf as follows:
[global]
fsid = 9bf24809-220b-4910-b384-c1f06ea80728
mon_initial_members = node1,node2,node3
mon_host = 192.168.80.21,192.168.80.22,192.168.80.23
public_network = 192.168.80.0/24
cluster_network = 192.168.80.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3
osd_pool_default_min_size = 2
3. Create the cluster mon keyring
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
4. Create the client.admin keyring and import it into the mon keyring
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
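Note that the bootstrap-osd keyring copied to the other nodes in step 7 is not produced by the commands above; if it does not exist yet, it can be created and imported with the usual manual-deployment commands (a sketch, not part of the original steps):
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring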
5. Generate the mon map from the hostnames and IPs
monmaptool --create --add node1 192.168.80.21 --add node2 192.168.80.22 --add node3 192.168.80.23 --fsid 9bf24809-220b-4910-b384-c1f06ea80728 /tmp/monmap
6. Initialize and start the mon
mkdir -p /var/lib/ceph/mon/ceph-node1/
chown -R ceph:ceph /var/lib/ceph /tmp/monmap /tmp/ceph.mon.keyring /etc/ceph/
sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
systemctl start ceph-mon@node1.service
systemctl enable ceph-mon@node1.service
systemctl status ceph-mon@node1.service
7. Sync the keyrings and config files to node2/node3
scp -r /etc/ceph/* root@node2:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node2:/var/lib/ceph/bootstrap-osd/
scp -r /tmp/monmap /tmp/ceph.mon.keyring root@node2:/tmp/
Repeat the same scp commands for node3.
Then initialize and start the mon service on node2 and node3 the same way as in step 6, using the local hostname as the mon id.
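For reference, a sketch of step 6 replayed on node2 (identical commands, only the mon id changes; node3 is analogous):
mkdir -p /var/lib/ceph/mon/ceph-node2/
chown -R ceph:ceph /var/lib/ceph /tmp/monmap /tmp/ceph.mon.keyring /etc/ceph/
sudo -u ceph ceph-mon --mkfs -i node2 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
systemctl start ceph-mon@node2.service
systemctl enable ceph-mon@node2.service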
Check the cluster state with ceph -s; if it reports an msgr2 warning, run ceph mon enable-msgr2 on any node.
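The node table also lists a mgr per node, which the steps above do not create; a sketch of bringing one up manually on node1 (standard ceph-mgr bootstrap commands, adjust names to your setup):
mkdir -p /var/lib/ceph/mgr/ceph-node1
ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-node1/keyring
chown -R ceph:ceph /var/lib/ceph/mgr
systemctl start ceph-mgr@node1
systemctl enable ceph-mgr@node1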
4. Attaching the disks through bcache
Partition the NVMe SSD first; parted is recommended (a sketch follows).
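A sketch of the partitioning, assuming each NVMe is split into six journal partitions plus a seventh used later as the bcache cache; the sizes here are assumptions, and the same layout is repeated for /dev/nvme1n1:
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart journal1 1MiB 30GiB
parted -s /dev/nvme0n1 mkpart journal2 30GiB 60GiB
parted -s /dev/nvme0n1 mkpart journal3 60GiB 90GiB
parted -s /dev/nvme0n1 mkpart journal4 90GiB 120GiB
parted -s /dev/nvme0n1 mkpart journal5 120GiB 150GiB
parted -s /dev/nvme0n1 mkpart journal6 150GiB 180GiB
parted -s /dev/nvme0n1 mkpart cache 180GiB 100%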
The single-disk procedure is as follows:
bcache setup:
1. wipefs -a /dev/nvme0n1p7
2. wipefs -a /dev/sdb
3. make-bcache -C /dev/nvme0n1p7   # creates the cache set and prints its cset.uuid
4. echo /dev/nvme0n1p7 > /sys/fs/bcache/register
5. bcache-super-show /dev/nvme0n1p7 | grep cset.uuid | cut -d $'\t' -f 3   # extract the cset.uuid; it is written to /sys/block/... in step 8
6. make-bcache -B /dev/sdb --discard --writeback --wipe-bcache
7. echo /dev/sdb > /sys/fs/bcache/register   # the backing device appears as bcache0; sdc, sdd, ... become bcache1, bcache2, ... in order
8. echo <cset.uuid> > /sys/block/bcache0/bcache/attach   # attach the backing device to the cache set using the uuid from step 5
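A quick, optional sanity check after the attach (the sysfs path below is a standard bcache attribute):
lsblk /dev/sdb /dev/nvme0n1p7              # both should now show a bcache0 child
cat /sys/block/bcache0/bcache/cache_mode   # should report writeback as the active mode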
The scripted version for all 12 disks:
wipefs -a /dev/nvme0n1p7
make-bcache -C /dev/nvme0n1p7
echo /dev/nvme0n1p7 > /sys/fs/bcache/register
bcache-super-show /dev/nvme0n1p7 | grep cset.uuid | cut -d $'\t' -f 3   # extract the cset.uuid used in the attach loop below
for i in {b..m}; do wipefs -a /dev/sd$i; done
for i in {b..m}; do make-bcache -B /dev/sd$i --wipe-bcache; sleep 1; done
for i in {b..m}; do echo /dev/sd$i > /sys/fs/bcache/register; sleep 1; done
for i in {0..11}; do echo fc55d8f5-3b98-477c-94c6-0eaf917fb1fd > /sys/block/bcache$i/bcache/attach; done   # replace with your own cset.uuid
5. Create the CRUSH map
1. ceph osd getcrushmap -o map     # extract the current crush map
2. crushtool -d map -o map.txt     # decompile it; here the osd weights can be set to 1, so each host's 12 osds total 12 under root default (see the excerpt after this list)
3. crushtool -c map.txt -o newmap  # compile the edited crush map
4. ceph osd setcrushmap -i newmap  # inject the new map
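For orientation, a decompiled map.txt contains per-host buckets whose item lines carry the weights; a simplified excerpt (the ids and exact layout will differ in your map):
host node1 {
        id -3
        alg straw2
        hash 0
        item osd.0 weight 1.000
        item osd.1 weight 1.000
        # ... through osd.11
}
root default {
        id -1
        alg straw2
        hash 0
        item node1 weight 12.000
        item node2 weight 12.000
        item node3 weight 12.000
}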
6. Create the OSDs
for i in {b..m}; do parted /dev/sd$i mklabel gpt -s ;done
for i in {0..11};do ceph-volume lvm zap /dev/bcache$i;done
for i in {0..11};do ceph-volume lvm prepare --data /dev/bcache$i;done
ceph-volume lvm activate --all
The above uses the BlueStore storage engine.
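A quick way to confirm the OSDs came up before moving on (standard Ceph status commands):
ceph osd tree   # the newly prepared osds should be listed and up
ceph -s         # overall cluster health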
If FileStore is used instead, a journal device must be specified for each OSD; see below.
For example, bcache0 uses /dev/nvme0n1p1 as its journal. Generate the bcache:journal pairings into osd.txt:
for i in {0..11}; do
    j=$(( i % 6 + 1 ))   # partitions p1-p6 on each NVMe
    n=$(( i / 6 ))       # bcache0-5 -> nvme0n1, bcache6-11 -> nvme1n1
    echo "bcache${i}:nvme${n}n1p${j}" >> osd.txt
done
cat osd.txt | ./prepare-osd.sh   # creates the OSDs, e.g. ceph-volume lvm prepare --filestore --data /dev/bcache0 --journal /dev/nvme0n1p1
prepare-osd.sh
#!/bin/bash
# Read "bcache_device:journal_partition" pairs from stdin and prepare one FileStore OSD per pair.
while read line
do
    disk=$(echo $line | cut -d ':' -f 1)
    journal=$(echo $line | cut -d ':' -f 2)
    echo $disk $journal
    #ceph-volume lvm prepare --filestore --data /dev/${disk} --journal /dev/${journal}   # left commented out so the pairings echoed above can be reviewed first
done
cat osd.txt
bcache0:nvme0n1p1
bcache1:nvme0n1p2
bcache2:nvme0n1p3
bcache3:nvme0n1p4
bcache4:nvme0n1p5
bcache5:nvme0n1p6
bcache6:nvme1n1p1
bcache7:nvme1n1p2
bcache8:nvme1n1p3
bcache9:nvme1n1p4
bcache10:nvme1n1p5
bcache11:nvme1n1p6
cat osd.txt | ./prepare-osd.sh
bcache0 nvme0n1p1
bcache1 nvme0n1p2
bcache2 nvme0n1p3
bcache3 nvme0n1p4
bcache4 nvme0n1p5
bcache5 nvme0n1p6
bcache6 nvme1n1p1
bcache7 nvme1n1p2
bcache8 nvme1n1p3
bcache9 nvme1n1p4
bcache10 nvme1n1p5
bcache11 nvme1n1p6
7. Create the pool and RBD images, and map them
ceph osd pool create rbd_pool 1024 1024   # pg_num and pgp_num
./create_12_rbd.sh
The script is as follows:
#!/bin/bash
for i in {0..11}
do
rbd create rbd_pool/rbd${i} --size 1024G
rbd feature disable rbd_pool/rbd${i} object-map fast-diff deep-flatten
rbd info rbd_pool/rbd${i}
done
Map the RBD images on the client:
scp root@node1:/etc/ceph/* /etc/ceph
ceph osd application enable rbd_pool rbd
for i in {0..3}; do
    rbd map rbd_pool/rbd${i}
done
On the other client nodes, map the remaining images the same way:
for i in {4..7}; do rbd map rbd_pool/rbd${i}; done
for i in {8..11}; do rbd map rbd_pool/rbd${i}; done
rbd showmapped   # verify the mappings