1. Ceph Cluster Setup
First, build a Ceph cluster environment; see:
https://blog.csdn.net/ck784101777/article/details/102744203
2. Create a disk image
- [root@node1 ~]# rbd create vm1-image --image-feature layering --size 10G
- [root@node1 ~]# rbd list
- vm1-image
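The create-then-list step above can be wrapped so it is safe to re-run. A minimal sketch, as a dry run: it prints the rbd command instead of executing it, and takes the current output of rbd list as a parameter, so it runs without a cluster (the function name ensure_image is my own, not an rbd command):

```shell
#!/bin/sh
# Print the `rbd create` command for an image unless it already exists.
# Usage: ensure_image NAME SIZE EXISTING  (EXISTING = output of `rbd list`)
ensure_image() {
    name=$1; size=$2; existing=$3
    # -x: the whole line must match the image name exactly
    if printf '%s\n' "$existing" | grep -qx "$name"; then
        echo "image $name already exists"
    else
        echo "rbd create $name --image-feature layering --size $size"
    fi
}

ensure_image vm1-image 10G "vm1-image"   # already present -> skip
ensure_image vm2-image 10G "vm1-image"   # missing -> print the create command
```

On a real node you would replace the final echo with the actual rbd invocation.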
3. Ceph authentication account (view only in this case)
Ceph enables user authentication by default, so a client needs an account before it can access the cluster. The default account name is client.admin, and key is that account's key.
New accounts can be added with ceph auth (this walkthrough uses the default account).
The key is everything after "key =", including the trailing ==.
- [root@node1 ~]# cat /etc/ceph/ceph.client.admin.keyring  # account file
- [client.admin]
- key = AQBTsdRapUxBKRAANXtteNUyoEmQHveb75bISg==
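When scripting the later steps, the key can be pulled out of the keyring file directly. A minimal sketch: it writes a sample keyring (copied from the listing above) to /tmp so it runs anywhere; on a real node, point sed at /etc/ceph/ceph.client.admin.keyring instead:

```shell
#!/bin/sh
# Extract the base64 key for client.admin from a keyring file.
# Sample data copied from the keyring shown above.
cat > /tmp/sample.keyring <<'EOF'
[client.admin]
    key = AQBTsdRapUxBKRAANXtteNUyoEmQHveb75bISg==
EOF

# sed strips everything up to "key = " and keeps the rest of the line,
# so the trailing == is preserved (splitting on '=' would lose it).
key=$(sed -n 's/^[[:space:]]*key[[:space:]]*=[[:space:]]*//p' /tmp/sample.keyring)
echo "$key"
```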
4. Configure the KVM secret (virtual key)
Write the account information into a libvirt secret so that KVM knows the Ceph account.
Follow the steps as given; you do not need to understand the internals.
- [root@room9pc01 ~]# vim secret.xml  # create a temporary file with the following content
- <secret ephemeral='no' private='no'>
- <usage type='ceph'>
- <name>client.admin secret</name>
- </usage>
- </secret>
- [root@room9pc01 ~]# virsh secret-define secret.xml  # create a secret from the XML file
- # The command generates a random UUID; the account information will be attached to this UUID
- [root@room9pc01 ~]# virsh secret-list  # view secret information
- UUID                                  Usage
--------------------------------------------------------------------------------
- b0271913-79f9-4c73-be41-884dcc2383d1  ceph client.admin secret
- # Bind the admin account's key to the secret; the key comes from the ceph.client.admin.keyring file.
- [root@room9pc01 ~]# virsh secret-set-value --secret b0271913-79f9-4c73-be41-884dcc2383d1 \
- --base64 AQBTsdRapUxBKRAANXtteNUyoEmQHveb75bISg==
- # --secret takes the UUID created by the secret-define step above
- # --base64 takes the key of the client.admin account
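The binding step above can be scripted with a sanity check before handing the key to virsh. A minimal sketch, as a dry run: it verifies the key is valid base64 (this key decodes to 28 bytes) and prints the virsh command instead of executing it, so it runs off-host; the UUID and key are the ones used in this walkthrough:

```shell
#!/bin/sh
# Bind the client.admin key to an existing libvirt secret (dry run).
UUID='b0271913-79f9-4c73-be41-884dcc2383d1'     # from `virsh secret-list`
KEY='AQBTsdRapUxBKRAANXtteNUyoEmQHveb75bISg=='  # from ceph.client.admin.keyring

# Sanity check: the key must decode cleanly as base64.
nbytes=$(printf %s "$KEY" | base64 -d | wc -c | tr -d ' ')
echo "decoded key length: $nbytes bytes"

# On the KVM host the next line would be executed instead of echoed:
echo "virsh secret-set-value --secret $UUID --base64 $KEY"
```

A truncated key (for example, one missing the trailing ==) makes base64 -d fail, which catches the mistake before libvirt stores a broken secret.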
5. Modify the VM configuration file so the image is attached automatically at boot
Adjust the marked parts of the configuration below as needed.
- [student@room9pc01 ~]$ virsh list --all  # view virtual machines
 Id    Name      State
----------------------------------------------------
 1     ceph1     running
 2     ceph2     running
 3     ceph3     running
 4     client    running
 9     ansible   running
- [root@room9pc01 ~]# virsh destroy client  # the virtual machine must be stopped first
- [root@room9pc01 ~]# virsh edit client  # client is the VM name; add the following <disk> element right after the existing <disk> element
- <disk type='network' device='disk'>
- <driver name='qemu' type='raw'/>
- <auth username='admin'>
- # The UUID is the one shown by virsh secret-list
- <secret type='ceph' uuid='b0271913-79f9-4c73-be41-884dcc2383d1'/>
- </auth>
- # The host IP can be any Ceph node; vm1-image is the image created earlier in the rbd pool (see rbd list)
- <source protocol='rbd' name='rbd/vm1-image'>
- <host name='192.168.4.11' port='6789'/>
- </source>
- # Special note: if the configuration file already has more than one <disk>, make sure the dev name below is not already in use
- <target dev='vdb' bus='virtio'/>
- </disk>
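The disk element above can be templated so the UUID, monitor address, image name, and device name are filled in from variables. A minimal sketch: it renders the fragment to /tmp (the values are the ones used in this walkthrough); the result could be pasted into virsh edit, or, as an alternative to editing, fed to virsh attach-device:

```shell
#!/bin/sh
# Render the RBD <disk> element for the client VM from variables.
UUID='b0271913-79f9-4c73-be41-884dcc2383d1'  # from `virsh secret-list`
MON='192.168.4.11'                           # any Ceph monitor node
IMAGE='rbd/vm1-image'                        # pool/image, see `rbd list`
DEV='vdb'                                    # must not clash with existing disks

cat > /tmp/rbd-disk.xml <<EOF
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='admin'>
    <secret type='ceph' uuid='$UUID'/>
  </auth>
  <source protocol='rbd' name='$IMAGE'>
    <host name='$MON' port='6789'/>
  </source>
  <target dev='$DEV' bus='virtio'/>
</disk>
EOF

cat /tmp/rbd-disk.xml
```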
6. Start the VM and verify
- [root@room9pc01 ~]# virsh start client
- [root@room9pc01 ~]# virsh console client
- [root@client ~]# lsblk
- NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- vda    253:0    0  30G  0 disk
- └─vda1 253:1    0  30G  0 part /
- vdb    253:16   0  10G  0 disk
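Once the new disk shows up in lsblk, it still needs a filesystem and a mount point inside the guest. A minimal sketch, as a dry run: it prints the guest-side commands instead of executing them, so it runs anywhere; the mount point /mnt/rbd is my own choice, not from the walkthrough:

```shell
#!/bin/sh
# Dry run: guest-side commands to format and mount the attached RBD disk.
DEV=/dev/vdb   # the target dev configured in step 5

for cmd in "mkfs.xfs $DEV" "mkdir -p /mnt/rbd" "mount $DEV /mnt/rbd"; do
    echo "$cmd"
done
```

For a mount that survives reboots, an fstab entry for the device would be added as well.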