KVM live migration of virtual machines --- Live Migration


  • Server virtualization is a current hot spot in IT, and live migration (Live Migration) technology preserves the complete running state of a virtual machine so that it can be quickly restored on the original hardware platform or even on a different one. After recovery the virtual machine still runs smoothly, and users notice no difference.

Types of migration:

  1. P2P: migration from one physical machine to another
  2. V2P: migration from a virtual machine to a physical machine
  3. P2V: migration from a physical machine to a virtual machine
  4. V2V: migration from one virtual machine to another

 

Advantages of Live Migration

  • The first is strong scalability: IT managers can, at a suitable off-peak time, reduce the workload of business-critical servers in order to patch the operating system and applications, and then run large elastic workloads during peak periods. The migration process is completely transparent to users and has almost no effect on normal use.
  • Second, data centers now pursue green, energy-efficient operation, and heavily loaded application servers inevitably increase power consumption. With live migration technology, when the load on a single physical server is too high, the system administrator can live-migrate the virtual machines on it to another server, effectively reducing the overall power consumption of the data center's servers and letting the cooling system keep the data center temperature at a normal level.

 

Limitations of Live Migration

Live migration of virtual machines is subject to many restrictions. For example:

  Before performing a VMotion migration, the x86 management software checks that the target server is compatible with the source server, including its storage devices and processors. The virtual machines must reside on shared storage, and the CPUs must be of the same type: one host cannot be Intel and the other AMD, and even CPUs from different product lines of the same manufacturer, such as Intel's Xeon and Pentium, will not work.

 

KVM live migration comes with a few precautions and recommendations:

  • The source and destination hosts should use a shared network storage system, such as NFS, iSCSI, or GlusterFS, to store the guest disk images.
  • To improve the success rate of live migration, migrate between hosts whose CPUs are of the same type whenever possible. Although KVM supports live migration from an AMD platform to an Intel platform, it is not recommended for security and stability reasons (a quick CPU check is sketched after this list).
  • A 64-bit guest can only be migrated between 64-bit hosts; a 32-bit guest can be migrated between 32-bit and 64-bit hosts.
  • During live migration, the name of the migrated guest must be unique on the destination host; no guest with the same name as the source guest may already exist there.
  • The destination and source hosts should run the same virtualization software as far as possible, i.e. both VMware, both KVM, both Xen, and so on.
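
  Before attempting a migration, a quick way to compare the CPUs of the two hosts, as mentioned above, is to run the same query on both and compare the output (a minimal sketch using standard tools):

[root@localhost ~]# lscpu | grep -E 'Vendor ID|Model name'     # run on both hosts; vendor and model should match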

 

V2V migration

  • Deploy an NFS server for file sharing between the two Linux hosts; NFS usually runs on port 2049.

  Before the NFS file-sharing service can be used, the RPC (Remote Procedure Call) service is needed to tell clients the NFS server's IP address and port numbers. Therefore, before starting the NFS service, you also need to restart and enable the rpcbind service.
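
  Once both services are running (they are started in the server-side steps below), their registration can be checked from the server (a minimal sketch using standard tools):

[root@localhost ~]# rpcinfo -p localhost | grep nfs     # NFS should be registered with rpcbind on port 2049
[root@localhost ~]# ss -tln | grep 2049                 # confirm the NFS port is listening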

   

  Server-side configuration:

  •   Install nfs-utils and rpcbind

[root@localhost ~]# yum install nfs-utils rpcbind -y

 

  •   Edit the NFS configuration file

  

[root@localhost ~]# mkdir /nfsdate
[root@localhost ~]# vim /etc/exports
/nfsdate 192.168.127.133/24(rw)
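
  Each line in /etc/exports names an exported directory, the clients allowed to mount it, and options in parentheses (rw grants read-write access here). After editing the file, the export list can also be refreshed without a full service restart (a sketch using the standard exportfs tool):

[root@localhost ~]# exportfs -rav     # re-export everything in /etc/exports and report what changed
[root@localhost ~]# exportfs -v       # show the currently active exports and their options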

 

  •    Restart nfs and rpcbind

[root@localhost ~]# systemctl restart rpcbind nfs
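
  If the hosts run firewalld and the services should also survive a reboot, two follow-up steps are worth adding (a sketch; unit and service names as on CentOS 7):

[root@localhost ~]# systemctl enable rpcbind nfs-server                           # start both at boot
[root@localhost ~]# firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd}  # open the NFS-related services
[root@localhost ~]# firewall-cmd --reload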

 

  

  Client Configuration:

  •   Install nfs-utils

[root@localhost ~]# yum install nfs-utils -y

 

  •   View the server's shared exports

  

[root@localhost ~]# showmount -e 192.168.127.130
Export list for 192.168.127.130:
/nfsdate 192.168.127.133/24

 

  •   Mount the NFS share locally

  

[root@localhost ~]# mkdir /nfsdate
[root@localhost ~]# mount -t nfs 192.168.127.130:/nfsdate /nfsdate
[root@localhost ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/cl-root         17G  8.3G  8.8G  49% /
devtmpfs                   478M     0  478M   0% /dev
tmpfs                      489M     0  489M   0% /dev/shm
tmpfs                      489M  7.1M  482M   2% /run
tmpfs                      489M     0  489M   0% /sys/fs/cgroup
/dev/sda1                 1014M  141M  874M  14% /boot
tmpfs                       98M     0   98M   0% /run/user/0
192.168.127.130:/nfsdate    17G   10G  7.1G  59% /nfsdate
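
  The mount above does not survive a reboot. To make it persistent, an /etc/fstab entry can be added on the client (a sketch; _netdev delays the mount until the network is up):

[root@localhost ~]# echo '192.168.127.130:/nfsdate  /nfsdate  nfs  defaults,_netdev  0 0' >> /etc/fstab
[root@localhost ~]# mount -a     # verify the new entry mounts cleanly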

 

  •   On the source server, back up the virtual machine's configuration (startup) file from /etc/libvirt/qemu/ to /root, and move the disk file from /var/lib/libvirt/images/ to the shared folder /nfsdate you just created.

 

  •    Move vm1's disk file to the shared directory

  

[root@localhost ~]# cd /var/lib/libvirt/images/
[root@localhost images]# ls
centos7.0.qcow2  CentOS-7-x86_64-DVD-1611.iso  test.qcow2  vm1.qcom2  vm2.qcow2
[root@localhost images]# mv vm1.qcom2 /nfsdate

 

  •    Back up vm1's configuration file to /root

  

[root@localhost ~]# cd /etc/libvirt/qemu/
[root@localhost qemu]# ls
centos7.0.xml  networks  test.xml  vm1.xml  vm2.xml
[root@localhost qemu]# cp vm1.xml /root
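
  An alternative that does not rely on reading /etc/libvirt/qemu directly is to dump the definition with virsh, which produces the same XML for a defined domain (a sketch):

[root@localhost qemu]# virsh dumpxml vm1 > /root/vm1.xml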

  

  •   Undefine the virtual machine domain vm1

  

[root@localhost qemu]# ls
centos7.0.xml  networks  test.xml  vm1.xml  vm2.xml
[root@localhost qemu]# virsh undefine vm1
Domain vm1 has been undefined

[root@localhost qemu]# ls
centos7.0.xml  networks  test.xml  vm2.xml

 

  •   Go to /root and edit vm1's configuration file, changing its disk file path

  

[root@localhost ~]# vim vm1.xml
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/nfsdate/vm1.qcom2'/>       # change the source path to /nfsdate
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
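
  Before redefining the domain, the edited XML can be checked against libvirt's schema (a sketch; the virt-xml-validate tool ships with libvirt):

[root@localhost ~]# virt-xml-validate /root/vm1.xml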

 

  •   Redefine vm1

  

[root@localhost ~]# virsh define vm1.xml
Domain vm1 defined from vm1.xml

[root@localhost ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     centos7.0                      shut off
 -     test                           shut off
 -     vm1                            shut off
 -     vm2                            shut off
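
  A quick way to confirm that the redefined domain now points at the disk on shared storage (a sketch using a standard virsh query):

[root@localhost ~]# virsh domblklist vm1     # the vda source should be /nfsdate/vm1.qcom2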

 

  •    Both the server and the client need host name resolution

  

[root@localhost ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.127.130 yun1
192.168.127.133 yun2
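
  Name resolution can be confirmed from each host before continuing (a sketch; run the mirror-image check on the other host):

[root@localhost ~]# ping -c 2 yun2     # from yun1; likewise ping yun1 from yun2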

 

  •    View the shared files on the client

  

[root@localhost ~]# cd /nfsdate
[root@localhost nfsdate]# ls
vm1.qcom2

 

 

Verification on the hosts

  • First, create a connection in the host's virtualization manager, so that the two hosts can see each other's virtual machines.

  

 

  • It is recommended to change the host names of the two machines so they are not identical, to prevent unnecessary errors.

  

hostnamectl set-hostname yun1    # on the first host; use yun2 on the second
exit                             # log out and back in so the new name takes effect

  

 

  •  Click the connection, then enter yes and the account password when prompted; the connection is created.
  • Because this is live migration, the virtual machine being migrated must be powered on; then click Migrate (a command-line equivalent is sketched below).
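
  The same migration can also be driven from the command line instead of the GUI (a sketch, assuming the destination is reachable as yun2 over SSH; flags such as --persistent or --undefinesource can be added depending on where the domain definition should end up):

[root@localhost ~]# virsh start vm1                                             # live migration needs a running domain
[root@localhost ~]# virsh migrate --live --verbose vm1 qemu+ssh://yun2/system
[root@localhost ~]# virsh list --all                                            # vm1 is now left shut off here and runs on yun2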

  

 
