Integrating Ceph with an OpenStack environment

 

I. RBD provides storage for the following data:

(1) Image storage: the images kept by Glance;

(2) Volume storage: the volumes managed by Cinder; such a volume is created when "Create New Volume" is selected while launching a virtual machine;
 
 

II. Implementation steps:

 

(1) Each client node should have a cent user:

useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
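
As a quick check (my addition, not from the original post), the passwordless sudo setup can be verified with:

su - cent -c 'sudo whoami'    # should print "root" without prompting for a password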


(2) On each OpenStack node that will use Ceph (for example the compute node and the storage node), install the downloaded packages:

  yum localinstall ./* -y


Alternatively, install the client packages on each node that needs to access the Ceph cluster:

yum install python-rbd
yum install ceph-common
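
To confirm the client packages are usable, a simple smoke test (my addition) is to print the installed version:

ceph --version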

 

(3) From the deployment node, install Ceph on the OpenStack nodes and push the admin configuration to them:


     ceph-deploy install controller
     ceph-deploy admin controller


(4) On each client node, make the admin keyring readable:

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
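
After this, any user on the node can query the cluster; a quick sanity check (my addition):

ceph -s    # should reach the monitors and report cluster health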

 

(5) Create the storage pools, named images, vms, and volumes:

[root@controller ~]# ceph osd pool create images 128
pool 'images' created
[root@controller ~]# ceph osd pool create vms 128
pool 'vms' created
[root@controller ~]# ceph osd pool create volumes 128
pool 'volumes' created
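
A note not in the original post: on Ceph Luminous (12.x) and later, each new pool should also be tagged with the application that uses it, otherwise the cluster raises a health warning:

ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable volumes rbd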


(6) List the pools:

ceph osd lspools

0 rbd,1 images,2 vms,3 volumes,

 

(7) Create glance and cinder users for the Ceph cluster. Because this is an all-in-one environment, both users can be created on the deployment node:

  useradd glance

  useradd cinder

 

(8) Grant the two users the following access rights to the storage pools:

[root@controller ceph]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
    key = AQCZggNd3TrTDBAAFgWrEAXhXt7xv4xcnn0eWA==
[root@controller ceph]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
    key = AQCtggNdHrFuHhAAsI/rt4cVujt8QEYZOODRFw==
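
The granted capabilities can be inspected afterwards (my addition); this prints each user's key together with its mon/osd caps:

ceph auth get client.glance
ceph auth get client.cinder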

 

(9) Export the keyrings and send them to every node that needs to use these users; here we export them on the controller and distribute them to the compute and storage nodes:

[root@controller ceph]# ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring

[root@controller ceph]# ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring

 

[root@controller ceph]# scp ceph.client.glance.keyring ceph.client.cinder.keyring compute:/etc/ceph/
ceph.client.glance.keyring 100% 64 15.4KB/s 00:00
ceph.client.cinder.keyring 100% 64 31.4KB/s 00:00
[root@controller ceph]# scp ceph.client.glance.keyring ceph.client.cinder.keyring storage:/etc/ceph/
ceph.client.glance.keyring 100% 64 14.1KB/s 00:00
ceph.client.cinder.keyring 100% 64 28.0KB/s 00:00

 

(10) Change the owner and group of the keyring files as follows; otherwise the services will not have permission to access them.

[root@controller ceph]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@controller ceph]# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

 

(11) Configure the libvirt secret; this only needs to be done on the nova-compute node.

Generate a UUID and write it to /etc/ceph/uuid:

[root@compute ceph]# uuidgen
3e3314c9-bfb0-439e-8764-61896c621b7e

 [root@compute ceph]# vim uuid

3e3314c9-bfb0-439e-8764-61896c621b7e
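
As an aside (not in the original post), the two steps above can be collapsed into a single command:

uuidgen | tee /etc/ceph/uuid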

 

Create a secret.xml file in the /etc/ceph directory with the following content. The <uuid> element should hold the UUID generated above; the sample below was captured from a run that used a different UUID:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>940f0485-e206-4b49-b878-dcd0cb9c70a4</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

 

Send the secret file to the other compute nodes. On each of them, define the secret in libvirt (virsh secret-define --file secret.xml), then extract the cinder key and set it as the secret's value:


[root@compute ceph]# ceph auth get-key client.cinder > ./client.cinder.key
[root@compute ceph]# ls
ceph.client.admin.keyring   ceph.conf          secret.xml
ceph.client.cinder.keyring  client.cinder.key  tmpJKjseK
ceph.client.glance.keyring  rbdmap             uuid
[root@compute ceph]# virsh secret-set-value --secret 3e3314c9-bfb0-439e-8764-61896c621b7e --base64 $(cat ./client.cinder.key)
secret value is set
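
To confirm the secret took effect (my addition), read the value back; it should print the same base64-encoded key:

virsh secret-get-value 3e3314c9-bfb0-439e-8764-61896c621b7e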

 

 

If virsh secret-define fails with an error like the following:

[root@controller ceph]# virsh secret-define --file secret.xml
error: Failed to set attributes from secret.xml
error: internal error: a secret with UUID d448a6ee-60f3-42a3-b6fa-6ec69cab2378 already defined for use with client.cinder secret

then a secret with the same usage already exists. Undefine the duplicate secret, then define the secret file again:
 
[root@controller ~]# virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------
d448a6ee-60f3-42a3-b6fa-6ec69cab2378  ceph client.cinder secret
 
[root@controller ~]# virsh secret-undefine d448a6ee-60f3-42a3-b6fa-6ec69cab2378
Secret d448a6ee-60f3-42a3-b6fa-6ec69cab2378 deleted
 
[root@controller ~]# virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------
 
[root@controller ceph]# virsh secret-define --file secret.xml
Secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 created
 
[root@controller ~]# virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------
940f0485-e206-4b49-b878-dcd0cb9c70a4  ceph client.cinder secret
 
virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)
 
 

(15) Delete the existing images and instances from the Horizon page.

 

(16) Modify the glance-api.conf configuration file on the controller node, then restart the Glance API service:

[DEFAULT]
default_store = rbd

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
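
One setting worth adding that the original post omits: the upstream Ceph/Glance integration guide recommends exposing direct image URLs, which lets Cinder and Nova create copy-on-write clones of images instead of full copies:

[DEFAULT]
show_image_direct_url = True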

[root@controller ceph]# systemctl restart openstack-glance-api

 

(17) Verify by re-creating an image:

[root@controller ceph]# openstack image create "cirros"   --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --public
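
If Glance is now backed by RBD, the new image should appear as an object in the images pool (a check of my own, run on the controller with the admin keyring):

rbd ls images    # prints the new image's UUID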

 

(18) Modify the Cinder configuration file on the storage node, then restart the related services on the controller and storage nodes (a sketch follows below).
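
The original post does not show the actual edits. A minimal sketch for /etc/cinder/cinder.conf on the storage node, assuming the volumes pool, the client.cinder Ceph user, and the libvirt secret UUID generated in step (11):

[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 3e3314c9-bfb0-439e-8764-61896c621b7e

Then restart openstack-cinder-api and openstack-cinder-scheduler on the controller and openstack-cinder-volume on the storage node.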

(19) Verify by creating a volume from the Horizon interface (see the check below).
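
If the volume was created successfully, it shows up in the volumes pool (my addition); the Cinder RBD driver names each volume after its ID:

rbd ls volumes    # each volume appears as volume-<volume-id>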

(20) Modify the Nova configuration file on the nova-compute node, then restart the Nova-related services on the controller and compute nodes (a sketch follows below).
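
Again a sketch of my own, since the post omits the edits: the [libvirt] section of /etc/nova/nova.conf on the compute node, under the same assumptions as the Cinder sketch above:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 3e3314c9-bfb0-439e-8764-61896c621b7e

Then restart openstack-nova-compute on the compute node.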

(21) Verify by creating a virtual machine from the Horizon interface (see the check below).
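
If the instance's disk landed in Ceph, it appears in the vms pool (my addition); the Nova RBD backend names disks after the instance UUID:

rbd ls vms    # each instance disk appears as <instance-uuid>_disk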
