Building an OpenStack + Ceph Platform


References

Official documentation

Integrating Ceph with OpenStack

How to integrate Ceph with OpenStack

Deployment Steps

Ceph Configuration

Create the pools

# ceph osd pool create volumes 64

# ceph osd pool create images 64

# ceph osd pool create vms 64
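The pools above are each created with 64 placement groups. As an illustration (not part of the original post), a common sizing heuristic targets roughly 100 PGs per OSD, divided by the pool's replica count and rounded up to the next power of two:

```python
# Placement-group sizing heuristic (illustrative sketch, not from the
# original post): aim for about 100 PGs per OSD, divide by the replica
# count, and round up to the next power of two.
def suggest_pg_num(num_osds, replicas=3, pgs_per_osd=100):
    target = num_osds * pgs_per_osd / replicas
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

print(suggest_pg_num(1))   # 64  -- a tiny test cluster, matching the pools above
print(suggest_pg_num(10))  # 512
```

For larger clusters the pg_num argument to `ceph osd pool create` should be raised accordingly.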

OpenStack Configuration

Install the Ceph client packages

On the glance-api (controller) node:

yum install python-rbd -y

On the nova-compute (compute) and cinder-volume nodes:

yum install ceph-common -y

Copy the Ceph configuration file to the relevant OpenStack nodes

ssh controller sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf

ssh compute sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf

Create new users for Nova/Cinder and Glance

Only required if cephx authentication is enabled.

1. Create the keys with auth get-or-create

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

2. Add keyrings for client.cinder and client.glance, and set their owner/group

ceph auth get-or-create client.glance | ssh controller sudo tee /etc/ceph/ceph.client.glance.keyring

ssh controller sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring

ceph auth get-or-create client.cinder | ssh compute sudo tee /etc/ceph/ceph.client.cinder.keyring

ssh compute sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

3. Create a temporary key file on the nova-compute node

ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key

In this example:

ceph auth get-key client.cinder | ssh compute tee client.cinder.key

4. On every compute node (this example has only one), register the new key with libvirt.

libvirt needs access to the Ceph cluster when it creates disks, so it must hold the key.

uuidgen

536f43c1-d367-45e0-ae64-72d987417c91

cat > secret.xml <<EOF

# Paste the following, replacing the UUID with the one generated above.

<secret ephemeral='no' private='no'>

<uuid>536f43c1-d367-45e0-ae64-72d987417c91</uuid>

<usage type='ceph'>

<name>client.cinder secret</name>

</usage>

</secret>

EOF

virsh secret-define --file secret.xml

The key that follows --base64 is the contents of client.cinder.key in /root on the compute node, i.e. the temporary key file created earlier:

virsh secret-set-value --secret 536f43c1-d367-45e0-ae64-72d987417c91 --base64 AQCliYVYCAzsEhAAMSeU34p3XBLVcvc4r46SyA==

The literal key can also be replaced with

--base64 $(cat client.cinder.key)

Then delete the temporary files:

rm -f client.cinder.key secret.xml
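The secret.xml written by the heredoc above can also be generated programmatically, which avoids copy-paste mistakes with the UUID. A minimal sketch using only the Python standard library (the usage name mirrors the example above; a random UUID stands in for the uuidgen output):

```python
import uuid
import xml.etree.ElementTree as ET

def build_secret_xml(secret_uuid, usage_name="client.cinder secret"):
    # Mirrors the libvirt secret definition shown above:
    # a non-ephemeral, non-private secret of usage type "ceph".
    secret = ET.Element("secret", ephemeral="no", private="no")
    ET.SubElement(secret, "uuid").text = secret_uuid
    usage = ET.SubElement(secret, "usage", type="ceph")
    ET.SubElement(usage, "name").text = usage_name
    return ET.tostring(secret, encoding="unicode")

secret_uuid = str(uuid.uuid4())  # stands in for the uuidgen output
xml_doc = build_secret_xml(secret_uuid)
print(xml_doc)
```

The resulting string can be written to secret.xml and passed to virsh secret-define --file secret.xml exactly as above.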

5. Edit the configuration files

glance-api.conf

[DEFAULT]

default_store = rbd

show_image_direct_url = True

show_multiple_locations = True

[glance_store]

stores = rbd

default_store = rbd

rbd_store_pool = images

rbd_store_user = glance

rbd_store_ceph_conf = /etc/ceph/ceph.conf

rbd_store_chunk_size = 8

Disable Glance cache management: remove cachemanagement from the flavor.

[paste_deploy]

flavor = keystone
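Since these are plain INI files, the edit can be scripted. A sketch with Python's configparser that writes the [glance_store] settings above and reads one back (the output path is illustrative, not the real /etc location):

```python
import configparser

# Build the glance-api.conf fragments shown above.
cfg = configparser.ConfigParser()
cfg["DEFAULT"] = {
    "default_store": "rbd",
    "show_image_direct_url": "True",
    "show_multiple_locations": "True",
}
cfg["glance_store"] = {
    "stores": "rbd",
    "default_store": "rbd",
    "rbd_store_pool": "images",
    "rbd_store_user": "glance",
    "rbd_store_ceph_conf": "/etc/ceph/ceph.conf",
    "rbd_store_chunk_size": "8",
}
with open("glance-api.conf.sample", "w") as f:  # illustrative path
    cfg.write(f)

# Read the file back and verify a setting survived the round trip.
check = configparser.ConfigParser()
check.read("glance-api.conf.sample")
print(check["glance_store"]["rbd_store_pool"])  # images
```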

cinder.conf on the cinder-volume node

[DEFAULT]

(keep the existing options)

enabled_backends = ceph

#glance_api_version = 2

[ceph]

volume_driver = cinder.volume.drivers.rbd.RBDDriver

rbd_pool = volumes

rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot = false

rbd_max_clone_depth = 5

rbd_store_chunk_size = 4

rados_connect_timeout = -1

glance_api_version = 2

rbd_user = cinder

volume_backend_name = ceph

rbd_secret_uuid=536f43c1-d367-45e0-ae64-72d987417c91

Note: the UUID does not have to be identical on every compute node, but for platform consistency it is best to use the same one.

Note: if multiple cinder back ends are configured, glance_api_version = 2 must be set in [DEFAULT]. In this example it is commented out there.

nova.conf on the compute node

[libvirt]

virt_type = qemu

hw_disk_discard = unmap

images_type = rbd

images_rbd_pool = vms

images_rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_user = cinder

rbd_secret_uuid = 536f43c1-d367-45e0-ae64-72d987417c91

disk_cachemodes="network=writeback"

libvirt_inject_password = false

libvirt_inject_key = false

libvirt_inject_partition = -2

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
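The rbd_user and rbd_secret_uuid in nova.conf's [libvirt] section must match the values in cinder.conf's [ceph] section (and the libvirt secret defined earlier), or volume attach will fail. A small illustrative sketch of that cross-check, with the file contents mirroring the snippets above:

```python
import configparser

# Illustrative consistency check: cinder.conf and nova.conf must agree
# on the Ceph user and the libvirt secret UUID.
cinder = configparser.ConfigParser()
cinder.read_string("""
[ceph]
rbd_user = cinder
rbd_secret_uuid = 536f43c1-d367-45e0-ae64-72d987417c91
""")

nova = configparser.ConfigParser()
nova.read_string("""
[libvirt]
rbd_user = cinder
rbd_secret_uuid = 536f43c1-d367-45e0-ae64-72d987417c91
""")

assert cinder["ceph"]["rbd_user"] == nova["libvirt"]["rbd_user"]
assert cinder["ceph"]["rbd_secret_uuid"] == nova["libvirt"]["rbd_secret_uuid"]
print("cinder and nova agree on rbd_user and rbd_secret_uuid")
```

In a real deployment the two read_string calls would be configparser reads of the actual files on each node.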

6. Restart the OpenStack services

systemctl restart openstack-glance-api.service

systemctl restart openstack-nova-compute.service openstack-cinder-volume.service

Verification

Glance verification

1. Download the Cirros image and add it to Glance.

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

2. Convert the image from QCOW2 to RAW. With Ceph, images must be in RAW format.

qemu-img convert cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64-disk.raw

3. Add the image to Glance

glance image-create --name "Cirros 0.3.4" --disk-format raw --container-format bare --visibility public --file cirros-0.3.4-x86_64-disk.raw

Cinder verification

1. Create a Cinder volume

cinder create --display-name="test" 1

2. List the Cinder volume in Ceph.

$ sudo rbd ls volumes

volume-d251bb74-5c5c-4c40-a15b-2a4a17bbed8b

$ sudo rbd info volumes/volume-d251bb74-5c5c-4c40-a15b-2a4a17bbed8b

Nova verification

1. Boot an ephemeral VM instance from the Cirros image added in the Glance step

nova boot --flavor m1.small --nic net-id=4683d03d-30fc-4dd1-9b5f-eccd87340e70 --image='Cirros 0.3.4' cephvm

2. Wait for the VM to become active

nova list

3. List the images in the Ceph vms pool. The instance's disk should now be stored in Ceph.

sudo rbd -p vms ls


Reposted from blog.csdn.net/heavyfish/article/details/80927178