Connecting One OpenStack to Two Ceph Clusters
Connecting OpenStack Pike to a single Ceph RBD cluster is a simple configuration; see the OpenStack or Ceph documentation:
1. OpenStack reference configuration:
https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/ceph-rbd-volume-driver.html
2. Ceph reference configuration:
https://docs.ceph.com/docs/master/install/install-ceph-deploy/
Due to changes in the physical environment and business requirements, the cloud environment now needs a single OpenStack deployment that talks to two backend Ceph RBD clusters of different versions.
The configuration below starts from the following environment, already up and running:
1) OpenStack Pike
2) Ceph Luminous 12.2.5
3) Ceph Nautilus 14.2.7
OpenStack is already configured against the Ceph Luminous cluster and running normally. On top of this existing OpenStack + Ceph environment, we add the Ceph Nautilus cluster so that OpenStack can consume storage from both clusters at the same time.
Configuration Steps
1. Copy the configuration files
# Copy the new cluster's config file and the cinder2 account key to the OpenStack cinder node
/etc/ceph/ceph2.conf
/etc/ceph/ceph.client.cinder2.keyring
# Since the cinder2 account is used here, only the cinder2 key needs to be copied
2. Create the storage pools
# On the new cluster, once the OSDs are in place, create the pools, set their pg/pgp counts, and enable the rbd application on each
ceph osd pool create volumes 512 512
ceph osd pool create backups 128 128
ceph osd pool create vms 512 512
ceph osd pool create images 128 128
ceph osd pool application enable volumes rbd
ceph osd pool application enable backups rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable images rbd
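The pg/pgp counts above (512 and 128) follow the common rule of thumb: total PGs per pool ≈ (number of OSDs × 100) / replica count, rounded to a power of two. A minimal sketch of that calculation, using a hypothetical 20-OSD cluster with 3 replicas (adjust both numbers to your environment):

```shell
# Rule of thumb: PGs ≈ (OSDs * 100) / replicas, rounded down to a power of two.
# osds=20 and replicas=3 are hypothetical values for illustration.
osds=20
replicas=3
target=$(( osds * 100 / replicas ))   # 666
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # → 512
```

With more pools sharing the same OSDs, the per-pool count is divided accordingly, which is why the smaller backups/images pools get 128 here.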
3. Create the cluster access accounts
ceph auth get-or-create client.cinder2 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.cinder2-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
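`ceph auth get-or-create` prints the new account's key; saved to the keyring file copied in step 1, it looks roughly like this (the key value below is a made-up example, not a real secret):

```ini
# /etc/ceph/ceph.client.cinder2.keyring
[client.cinder2]
    key = AQBExAMPLEkeyAAAxxxxxxxxxxxxxxxxxxxxxA==
```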
4. Check the service status
# List the current cinder service processes in OpenStack
source /root/keystonerc.admin
cinder service-list
5. Edit the configuration file
# Edit /etc/cinder/cinder.conf to declare both backends
[DEFAULT]
enabled_backends = ceph1,ceph2
[ceph1]
# Existing Luminous cluster -- keep the pool, conf path, and user already in use
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph1
rbd_pool = volumes1
rbd_ceph_conf = /etc/ceph1/ceph1.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder1
rbd_secret_uuid = **
[ceph2]
# New Nautilus cluster -- pool, conf path, and user match steps 1-3 above
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph2
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph2.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder2
rbd_secret_uuid = **
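The rbd_secret_uuid value must match the UUID of the libvirt secret defined later on the compute nodes (see the libvirt section). One way to generate the UUID up front and reuse it in both places, assuming python3 is available (`uuidgen` from util-linux works equally well):

```shell
# Generate one UUID and reuse it for both rbd_secret_uuid and the libvirt secret
uuid=$(python3 -c 'import uuid; print(uuid.uuid4())')
echo "$uuid"
```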
6. Restart the services
# Restart the cinder-volume and cinder-scheduler services
service openstack-cinder-volume restart
service openstack-cinder-scheduler restart
# On systemd hosts these redirect to:
# systemctl restart openstack-cinder-volume.service
# systemctl restart openstack-cinder-scheduler.service
7. Check the services again
cinder service-list
8. Bind the volume types
# Create one volume type per backend and bind it via volume_backend_name
cinder type-create ceph1
cinder type-key ceph1 set volume_backend_name=ceph1
cinder type-create ceph2
cinder type-key ceph2 set volume_backend_name=ceph2
9. Create test volumes to verify the binding
cinder create --volume-type ceph1 --display_name {volume-name} {volume-size-GB}
cinder create --volume-type ceph2 --display_name {volume-name} {volume-size-GB}
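To confirm each volume actually landed on the intended backend, check the volume's host attribute, which includes the backend name (output below is illustrative):

```
cinder show {volume-id} | grep os-vol-host-attr:host
# e.g. ...@ceph1#ceph1 for a type-ceph1 volume, ...@ceph2#ceph2 for type ceph2
```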
Configuring libvirt
1. Add the second cluster's key to libvirt on the nova-compute nodes
# For VMs to access volumes on the second Ceph cluster, the cinder2 user's key must be added to libvirt on every nova-compute node
ceph -c /etc/ceph/ceph2.conf -k /etc/ceph/ceph.client.cinder2.keyring auth get-key client.cinder2 | tee client.cinder2.key
# Use the same UUID as rbd_secret_uuid for the ceph2 backend in cinder.conf
cat > secret2.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>***</uuid>
  <usage type='ceph'>
    <name>client.cinder2 secret</name>
  </usage>
</secret>
EOF
# Paste the block above as-is, replacing only the uuid value
sudo virsh secret-define --file secret2.xml
sudo virsh secret-set-value --secret ***** --base64 $(cat client.cinder2.key)
rm client.cinder2.key secret2.xml
# Answer Y to the deletion prompt
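The steps above can be scripted so the UUID in secret2.xml is guaranteed to match what cinder.conf expects. A sketch, with the virsh calls left as comments since they need a libvirt host (the UUID here is freshly generated for illustration; in practice substitute the rbd_secret_uuid already set in cinder.conf):

```shell
# In practice: use the rbd_secret_uuid from the [ceph2] section of cinder.conf
uuid=$(python3 -c 'import uuid; print(uuid.uuid4())')
cat > secret2.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$uuid</uuid>
  <usage type='ceph'>
    <name>client.cinder2 secret</name>
  </usage>
</secret>
EOF
grep -q "$uuid" secret2.xml && echo "secret2.xml ready"
# Then, on the compute node:
# sudo virsh secret-define --file secret2.xml
# sudo virsh secret-set-value --secret "$uuid" --base64 "$(cat client.cinder2.key)"
```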
2. Verify the configuration
# Attach the two volumes created earlier (one per type) to an OpenStack VM to verify the setup
nova volume-attach {instance-id} {volume1-id}
nova volume-attach {instance-id} {volume2-id}
References:
"Ceph: Design Principles and Implementation" (《Ceph设计原理与实现》), by Xie Xingguo
Red Hat documentation:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_block_device_to_openstack_guide/installing_and_configuring_ceph_clients
Ceph documentation:
https://docs.ceph.com/docs/master/install/install-ceph-deploy/