Integrating Ceph with an OpenStack environment
First, RBD provides storage for the following data:
1. images pool: stores Glance images
2. volumes pool: stores Cinder volumes; used when "Create New Volume" is selected while launching a virtual machine
3. vms pool: stores the virtual machine's disk when "Create New Volume" is not selected at launch
Second, implementation steps
1. The client node must have a user named cent:
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
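To confirm the passwordless sudo setup works, a quick check on the client node (a minimal sketch) could be:

su - cent -c 'sudo whoami'    # should print "root" without prompting for a password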
2. On the OpenStack nodes that use Ceph (such as compute-node and storage-node), download the installation packages and install them locally:
yum localinstall ./* -y
Or: install the client packages on each node that needs to access the Ceph cluster:
yum install python-rbd
yum install ceph-common
If the client was installed with the first method above, these two packages are already included in the downloaded rpm packages.
3. On the deployment node, install Ceph on the OpenStack node:
ceph-deploy install controller
ceph-deploy admin controller
4. On the client, execute:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
5. Create the pools (this only needs to be done on one Ceph node):
ceph osd pool create images 1024
ceph osd pool create vms 1024
ceph osd pool create volumes 1024
Display the pools:
ceph osd lspools
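The three pools created above should appear in the output. The placement group count can also be checked per pool, for example:

ceph osd pool get images pg_num     # should report pg_num: 1024
ceph osd pool get volumes pg_num
ceph osd pool get vms pg_num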
6. In the Ceph cluster, create the glance and cinder users (only needs to be done on one Ceph node):
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
Nova uses the cinder user, so a separate user is not created for it.
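To confirm the users and their capabilities were created as intended, the auth entries can be inspected on a Ceph node, for example:

ceph auth get client.glance    # shows the key and the mon/osd caps granted above
ceph auth get client.cinder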
7. Copy the keyrings (only needs to be done on one Ceph node):
ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring
Use scp to copy them to the other nodes (the Ceph cluster nodes and the OpenStack nodes that use Ceph, such as compute-node and storage-node; this integration uses an all-in-one environment, so they are copied to the controller node):
[root@yunwei ceph]# ls
ceph.client.admin.keyring  ceph.client.cinder.keyring  ceph.client.glance.keyring  ceph.conf  rbdmap  tmpR3uL7W
[root@yunwei ceph]# scp ceph.client.glance.keyring ceph.client.cinder.keyring controller:/etc/ceph/
8. Change the keyring file ownership (run on all client nodes):
chown glance:glance /etc/ceph/ceph.client.glance.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
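Ownership can be double-checked before restarting any services:

ls -l /etc/ceph/ceph.client.glance.keyring /etc/ceph/ceph.client.cinder.keyring
# expected owners: glance:glance and cinder:cinder respectively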
9. Grant libvirt access to Ceph (run on the nova-compute nodes; each compute node must do this). First generate a UUID:
uuidgen
940f0485-e206-4b49-b878-dcd0cb9c70a4
In the /etc/ceph/ directory (any directory works; /etc/ceph is used here for easier management):
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>940f0485-e206-4b49-b878-dcd0cb9c70a4</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
Copy secret.xml to all compute nodes, then execute:
virsh secret-define --file secret.xml
ceph auth get-key client.cinder > ./client.cinder.key
virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)
In the end, client.cinder.key and secret.xml must be identical on all compute nodes. Make a note of the UUID generated earlier: 940f0485-e206-4b49-b878-dcd0cb9c70a4
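To confirm the secret is registered and its value matches the cinder key, the following can be run on each compute node (using the UUID generated above):

virsh secret-list
virsh secret-get-value 940f0485-e206-4b49-b878-dcd0cb9c70a4
# the value printed should match the contents of ./client.cinder.key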
If you encounter the following error, an older secret for client.cinder is already defined and must be removed first:
[root@controller ceph]# virsh secret-define --file secret.xml
error: Failed to set attributes from secret.xml
error: internal error: a secret with UUID d448a6ee-60f3-42a3-b6fa-6ec69cab2378 is already defined for use with client.cinder secret
[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 d448a6ee-60f3-42a3-b6fa-6ec69cab2378  ceph client.cinder secret
[root@controller ~]# virsh secret-undefine d448a6ee-60f3-42a3-b6fa-6ec69cab2378
Secret d448a6ee-60f3-42a3-b6fa-6ec69cab2378 deleted
[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
[root@controller ceph]# virsh secret-define --file secret.xml
Secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 created
[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 940f0485-e206-4b49-b878-dcd0cb9c70a4  ceph client.cinder secret
[root@controller ~]# virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)
10. Configure Glance; make the following changes on all controller nodes:
vim /etc/glance/glance-api.conf
[DEFAULT]
default_store = rbd
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
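Before restarting, a quick sanity check (assuming the glance keyring copied earlier is in place on this node) confirms the glance user can reach the images pool:

rbd --id glance -p images ls    # should return without an authentication error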
After making the changes, restart the Glance API service on all controller nodes:
systemctl restart openstack-glance-api.service
systemctl status openstack-glance-api.service
Create an image to verify:
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --public
[root@controller ~]# rbd ls images
9ce5055e-4217-44b4-a237-e7b577a20dac
The image ID appearing in the images pool shows the upload succeeded.
11. Configure Cinder:
vim /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 192.168.11.3
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = lvm,ceph
state_path = /var/lib/cinder
transport_url = rabbit://openstack:admin@controller
[backend]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-vg
volumes_dir = $state_path/volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
iscsi_ip_address = 192.168.11.5
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
volume_backend_name=ceph
Restart the Cinder services:
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
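Because enabled_backends lists both lvm and ceph, a volume type tied to volume_backend_name=ceph is usually created so new volumes can be directed to the Ceph backend explicitly. A minimal sketch, assuming admin credentials are sourced (the type and volume names here are arbitrary):

openstack volume type create ceph
openstack volume type set --property volume_backend_name=ceph ceph
openstack volume create --type ceph --size 1 test-ceph-vol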
Create a volume to verify:
[root@controller gfs]# rbd ls volumes
volume-43b7c31d-a773-4604-8e4a-9ed78ec18996
12. Configure Nova:
vim /etc/nova/nova.conf
[DEFAULT]
my_ip=172.16.254.63
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:admin@controller
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type=qemu
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
novncproxy_base_url = http://172.16.254.63:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
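Before restarting, a quick check that the compute node can reach the vms pool with the cinder user configured in the [libvirt] section:

rbd --id cinder -p vms ls    # should return without an authentication error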
Restart the Nova services:
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service openstack-nova-cert.service
systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service openstack-nova-cert.service
Create a virtual machine to verify:
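A minimal sketch of the verification (the flavor and network names here are assumptions; adjust them to the local environment). If the integration works, the new instance's disk appears in the vms pool:

openstack server create --image cirros --flavor m1.tiny --network private test-vm
rbd -p vms ls    # should list an object such as <instance-uuid>_disk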