Integrating Ceph with OpenStack

RBD provides storage for the following data:
(1) images: stores Glance images;
(2) volumes: stores Cinder volumes (used when "Create New Volume" is selected at instance creation);
(3) vms: stores instance disks (used when "Create New Volume" is not selected at instance creation);

Deployment

(1) The client nodes also need the cent (deploy) user.
(2) The OpenStack nodes that will use Ceph (e.g. compute-node and storage-node) need the downloaded Ceph packages installed.
(3) From the deploy node, install Ceph onto the OpenStack nodes (a sketch follows this list).
(4) Set up permissions.
(5) Create the pools; this only needs to be done on one Ceph node.
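A minimal sketch of steps (2)–(3), assuming a ceph-deploy admin node and that the OpenStack nodes are reachable as compute-node and storage-node (substitute your own hostnames):

#ceph-deploy install compute-node storage-node
#ceph-deploy admin compute-node storage-node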

On the Ceph cluster:
#ceph osd pool create images 128
#ceph osd pool create vms 128
#ceph osd pool create volumes 128

(Rule of thumb: with fewer than 5 OSDs, use pg_num = 128; with 5–10 OSDs, 512; with 10–50 OSDs, 4096.)
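To double-check the pools and their PG counts after creation:

#ceph osd lspools
#ceph osd pool get images pg_num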

(6) In the Ceph cluster, create the glance and cinder users; this only needs to be done on one Ceph node. (Nova reuses the cinder user, so no separate nova user is created.)

#ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
#ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
(The cinder user is granted more capabilities because it is also used when booting instances.)
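To review the capabilities that were actually granted:

#ceph auth get client.glance
#ceph auth get client.cinder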

(7) Export the keyrings; this only needs to be done on one Ceph node.

#cd /etc/ceph
#ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring
#ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring


Use scp to copy both keyrings to the other nodes: the Ceph cluster nodes and the OpenStack nodes that will use Ceph (e.g. compute-node and storage-node).

#scp ceph.client.glance.keyring ceph.client.cinder.keyring pikachu3:/etc/ceph/
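If there are several target nodes, a small loop saves typing (pikachu2 and pikachu3 are example hostnames; use your own):

#for node in pikachu2 pikachu3; do scp ceph.client.glance.keyring ceph.client.cinder.keyring $node:/etc/ceph/; done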


(8) Change the file ownership (run on every client node):

#chown glance:glance /etc/ceph/ceph.client.glance.keyring
#chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

If the users do not exist yet, create them first:

#useradd glance
#useradd cinder
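Then confirm the ownership took effect:

#ls -l /etc/ceph/ceph.client.*.keyring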

(9) Configure the libvirt secret (do this on every nova-compute node):

#uuidgen


Save the generated UUID so the same value can be reused in the steps below:

#vim uuid

c866eefd-0b0c-4f40-ae80-05808dce831c

Work in /etc/ceph/ (the directory itself makes no difference; /etc/ceph is just convenient for housekeeping):

#vim secret.xml

Put your own UUID into the file:
<secret ephemeral='no' private='no'>
  <uuid>c866eefd-0b0c-4f40-ae80-05808dce831c</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
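If you prefer to script the substitution instead of editing by hand (assuming your UUID was saved to the uuid file created above):

#sed -i "s/c866eefd-0b0c-4f40-ae80-05808dce831c/$(cat uuid)/" secret.xml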

Copy secret.xml to every compute node, then run:

#virsh secret-define --file secret.xml
#virsh secret-list


#ceph auth get-key client.cinder > ./client.cinder.key


Set the secret value, using the same UUID:
#virsh secret-set-value --secret c866eefd-0b0c-4f40-ae80-05808dce831c --base64 $(cat ./client.cinder.key)
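To confirm libvirt stored the key correctly (the output should match client.cinder.key):

#virsh secret-get-value c866eefd-0b0c-4f40-ae80-05808dce831c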

(10) Configure Glance; make the following changes on every controller node.
Delete any existing instances and the old images first (images in the previous backend are not migrated).

#vim /etc/glance/glance-api.conf

[DEFAULT]
default_store = rbd
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]

Then restart the Glance API service on every controller node:

#systemctl restart openstack-glance-api.service
#systemctl status openstack-glance-api.service

#ceph osd lspools
#rbd ls images

Upload an image:

#source openrc
#openstack image create "cirros" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --public
#rbd ls images
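The image should also show as active in Glance:

#openstack image list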

Configure Cinder on the storage node

#vim /etc/cinder/cinder.conf

[DEFAULT]
my_ip = 172.16.254.63
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = ceph
state_path = /var/lib/cinder
transport_url = rabbit://openstack:admin@controller
[backend]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = c866eefd-0b0c-4f40-ae80-05808dce831c
volume_backend_name=ceph
(Also comment out the entire [lvm] section.)

Restart the Cinder services:

On the controller node:
#systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
#systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
On the storage node:
#systemctl restart openstack-cinder-volume.service
#systemctl status openstack-cinder-volume.service
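A quick health check of the new backend (cinder-volume should be reported as up for the ceph backend):

#openstack volume service list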

Create a volume to verify:
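For example (the volume name and size are arbitrary):

#openstack volume create --size 1 test-vol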

#rbd ls volumes

Configure Nova on the compute nodes

#vim /etc/nova/nova.conf

[libvirt]
virt_type=qemu
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = c866eefd-0b0c-4f40-ae80-05808dce831c

Restart the Nova services:

On the controller node:
#systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-compute.service openstack-nova-cert.service
On the compute nodes:
#systemctl restart openstack-nova-compute.service
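Confirm the compute services came back up:

#openstack compute service list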

Create an instance to verify:
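For example, booting from the cirros image uploaded earlier (the flavor and network are placeholders; use values from your own environment):

#openstack server create --flavor m1.tiny --image cirros --nic net-id=<your-net-id> test-vm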

#rbd ls vms
#rbd ls volumes
#rbd ls images

Reposted from blog.csdn.net/PpikachuP/article/details/89407505