Integrating OpenStack with Ceph

Environment: Ubuntu 16.04, Ceph 10.2.3 (Jewel), OpenStack 14.0.1 (Newton)

I. Create the pools in the Ceph cluster

    ceph osd pool create volumes 128

    ceph osd pool create images 128

    ceph osd pool create backups 128

    ceph osd pool create vms 128
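
    A quick sanity check: list the pools on a Ceph node and confirm all four exist:

        # should list volumes, images, backups and vms
        ceph osd lspools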

II. Install the Ceph client on the OpenStack nodes and copy ceph.conf to every OpenStack node

    1. On the glance-api node:

        apt-get install python-rbd

    2. On the nova-compute, cinder-backup and cinder-volume nodes:

        apt-get install ceph-common

    3. Create /etc/ceph on each OpenStack node and copy ceph.conf to it:

        ssh {openstack-node-ip} mkdir -p /etc/ceph/
        scp /etc/ceph/ceph.conf {openstack-node-ip}:/etc/ceph
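
        If there are several OpenStack nodes, a small shell loop saves repetition (the hostnames below are placeholders; substitute your own glance/cinder/nova hosts):

        # replace these placeholder hostnames with your actual nodes
        for node in glance-api-node cinder-volume-node cinder-backup-node nova-compute-node; do
            ssh ${node} mkdir -p /etc/ceph/
            scp /etc/ceph/ceph.conf ${node}:/etc/ceph/
        done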

III. Create the Ceph cluster users (run as root from the /etc/ceph/ directory; adjust to your own environment)

    1. Create the user keys

     ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

    ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'

    2. Copy the keyrings to the corresponding nodes

ceph auth get-or-create client.glance | ssh {glance-api-node} sudo tee /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {cinder-volume-node} sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {cinder-backup-node} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder | ssh {nova-compute-node} sudo tee /etc/ceph/ceph.client.cinder.keyring
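
    The services must be able to read their keyrings, so set the ownership on each node (glance and cinder are the default Ubuntu service users; adjust if yours differ):

        ssh {glance-api-node} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
        ssh {cinder-volume-node} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
        ssh {cinder-backup-node} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring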

    3. Get a temporary key file from the Ceph cluster for use with libvirt

ceph auth get-key client.cinder | ssh {nova-compute-node} tee /root/client.cinder.key

    4. On the nova-compute node, add the key to libvirt

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat /root/client.cinder.key)
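
You can confirm the secret was stored and then remove the temporary files (the file names are the ones created above):

    # the UUID should appear with usage "client.cinder secret"
    sudo virsh secret-list
    rm /root/client.cinder.key secret.xml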

IV. Configure Glance

    1. On the glance node, edit /etc/glance/glance-api.conf and add the following under the [glance_store] section:

  stores = glance.store.rbd.Store
  default_store = rbd
  rbd_store_pool = images
  rbd_store_user = glance
  rbd_store_ceph_conf = /etc/ceph/ceph.conf
  rbd_store_chunk_size = 8

    Note: comment out any other stores / default_store settings.

    2. Enable copy-on-write cloning of images:

Add show_image_direct_url = True under the [DEFAULT] section of /etc/glance/glance-api.conf.

    3. Disable the Glance cache manager so images are not cached under /var/lib/glance/image-cache/. If your configuration has flavor = keystone+cachemanagement, change it to:

    [paste_deploy]

    flavor = keystone

    4. Restart the Glance service:

    systemctl restart glance-api

    5. Verify:

   a. wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

   b. qemu-img convert -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64-disk.raw

   c. openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.raw --disk-format raw --container-format bare --public

   d. openstack image list

   e. On a Ceph node run: rbd ls images . If the image is listed, the configuration works.
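
   For a little more detail, the long listing also shows each image's size and format:

        rbd -p images ls -l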

V. Configure Cinder

    1. On the nodes that have /etc/cinder/cinder.conf, edit the file and add the following:

[DEFAULT]
...
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2

    If the Ceph cluster has cephx authentication enabled, also add the following to the [ceph] section:

[ceph]
...
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

    (rbd_secret_uuid is the uuidgen output from the libvirt secret step in section III)

       

    2. Install cinder-backup and configure the backup options in /etc/cinder/cinder.conf, adding the following under the [DEFAULT] section:

backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
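
After saving cinder.conf, restart the Cinder services so the new backend is picked up (the service names below assume the stock Ubuntu 16.04 packages):

    systemctl restart cinder-volume cinder-backup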

    3. Verify:

    Create a volume in OpenStack, then on a Ceph cluster node run: rbd ls volumes . If the new volume is listed, the configuration works.
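
    For example (the volume name and size are arbitrary):

        openstack volume create --size 1 test-volume
        # on a Ceph node the new volume appears as volume-<uuid>
        rbd ls volumes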

VI. Configure Nova:

    1. On the nova-compute nodes, edit ceph.conf to enable RBD caching, as follows:

[client]
    rbd cache = true
    rbd cache writethrough until flush = true
    admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/qemu/qemu-guest-$pid.log
    rbd concurrent management ops = 20

    Create the required directories:

mkdir -p /var/run/ceph/guests/ /var/log/qemu/
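
    The qemu processes need write access to these directories; on Ubuntu the user is typically libvirt-qemu (this owner/group is an assumption, adjust to your environment):

        chown libvirt-qemu:kvm /var/run/ceph/guests/ /var/log/qemu/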

    2. On the nodes that have /etc/nova/nova.conf, edit the file:

    Add the following under the [libvirt] section:

images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes="network=writeback"

    Disable file injection:

inject_password = false
inject_key = false
inject_partition = -2

    To ensure live migration works properly, also set the following flags:

live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"

    3. Restart the nova-compute service:

systemctl restart nova-compute

4. Verify:

    In OpenStack, boot an instance from an image (without creating a volume).

    On a Ceph node run: rbd ls -p vms

    If the instance's disk is listed, the configuration works.
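
    For example (image, flavor and network ID are placeholders for your own environment):

        openstack server create --image cirros --flavor m1.tiny --nic net-id={network-id} test-vm
        # on a Ceph node the instance disk appears as <instance-uuid>_disk
        rbd ls -p vms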

VII. Configure Keystone to provide object storage through radosgw:

    1. On the radosgw node, edit ceph.conf and add the following under the [client.rgw.node3] section (my radosgw instance is named rgw.node3; adjust to your own):

    rgw keystone api version = 3

    # keystone endpoint address
    rgw keystone url = http://10.33.0.150:5000

    # keystone admin credentials
    rgw keystone admin user = admin
    rgw keystone admin password = 000000
    rgw keystone admin domain = default
    rgw keystone admin project = admin
    rgw keystone accepted roles = SwiftOperator, admin, _member_, project_admin, member2
    rgw keystone token cache size = 500
    rgw keystone revocation interval = 60
    rgw keystone implicit tenants = true
    rgw s3 auth use keystone = true

    # set to false when keystone is not using SSL
    rgw keystone verify ssl = false

    2. Restart the radosgw service:

        systemctl restart ceph-radosgw@rgw.node3

    3. Register the Swift service in Keystone:

    a.

openstack service create --name=swift \
                         --description="Swift Service" \
                         object-store

    b.

openstack endpoint create --region RegionOne swift public "http://{radosgw-node-ip/fqdn}:{port}/swift/v1"

    c.

openstack endpoint create --region RegionOne swift admin "http://{radosgw-node-ip/fqdn}:{port}/swift/v1"

    d.

openstack endpoint create --region RegionOne swift internal "http://{radosgw-node-ip/fqdn}:{port}/swift/v1"

    e. Check the endpoints that were created:

       openstack endpoint list | grep object-store

    f. Test Swift functionality:

    swift list

    swift stat
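
    A quick end-to-end check (container and file names are arbitrary examples):

        swift post test-container
        echo "hello swift" > /tmp/hello.txt
        swift upload test-container /tmp/hello.txt
        swift list test-container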

References:

    http://docs.ceph.com/docs/master/radosgw/keystone/
    http://docs.ceph.com/docs/master/rbd/rbd-openstack/
    http://superuser.openstack.org/articles/ceph-as-storage-for-openstack/
    http://www.bubuko.com/infodetail-1854698.html

    

       
