QEMU-KVM

 

I. Virtualization Introduction

On the x86 platform, the new software layer introduced by virtualization is usually called the Virtual Machine Monitor (VMM), also known as the hypervisor. The VMM must be able to intercept a virtual machine's direct access to physical resources and redirect it to a virtual resource pool. Depending on whether this "intercept and redirect" is implemented purely in software or by mechanisms provided in the hardware, virtualization is divided into software virtualization and hardware virtualization.

II. Software Virtualization and Hardware Virtualization

Software virtualization: QEMU, VMware

QEMU uses dynamic binary translation: guest instructions can no longer run directly on the physical machine; the VMM must first translate them into instructions the physical machine can execute. This usually incurs significant overhead, so a pure-QEMU virtual machine is quite slow. QEMU's advantage, however, is platform independence: it can emulate virtual machines of different architectures on the same host. Such a useful capability should not be held back by poor performance, so QEMU was later combined with KVM: QEMU provides device emulation and I/O, while KVM provides accelerated access to the hardware.
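As a sketch of the contrast, the hypothetical command lines below show the same x86 host launching a native guest with KVM acceleration versus a foreign-architecture (AArch64) guest under pure TCG binary translation. The image file names are made up, and the commands are only printed here rather than executed.

```shell
# Native x86 guest, hardware-accelerated by KVM (near-native speed):
kvm_cmd='qemu-system-x86_64 -enable-kvm -m 2G -drive file=x86.img'

# AArch64 guest on the same x86 host via TCG binary translation
# (much slower, but independent of the host architecture):
tcg_cmd='qemu-system-aarch64 -M virt -cpu cortex-a57 -m 2G -drive file=arm.img'

# Printed rather than executed so the sketch is safe on any machine:
printf '%s\n%s\n' "$kvm_cmd" "$tcg_cmd"
```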

VMware combines binary translation with direct execution: guest user-space code runs directly on the CPU, while guest kernel code is binary-translated. Since a program spends most of its time in user space, this greatly reduces the share of time spent on translation, so performance is noticeably better than QEMU's, at the cost of losing cross-platform emulation.

 

Hardware virtualization:

Hardware virtualization means the hardware itself provides mechanisms to intercept and redirect special instructions, which improves guest performance.

III. Paravirtualization and Full Virtualization

In a paravirtualization scheme, the guest operating system is modified so that it knows it is running in a virtualized environment and can cooperate with the VMM. In essence, paravirtualization replaces the VMM's passive interception of special instructions with proactive notification by the guest operating system. The precondition for this proactive notification is that the guest operating system's source code must be modified.

 

Full virtualization:

Full virtualization presents the guest with a complete virtual x86 platform, including processor, memory and peripherals; the guest believes it is running on real hardware. Compared with paravirtualization, its performance is relatively lower.

As CPU vendors' virtualization support keeps improving (Intel introduced Intel VT, AMD introduced AMD-V), the performance of hardware-assisted full virtualization keeps getting better, and full virtualization is becoming the core of virtualization technology.

 

 

IV. KVM

KVM (Kernel-based Virtual Machine) is a kernel-based virtual machine. Starting with Linux 2.6.20, KVM is integrated into the kernel and can be built as loadable kernel modules. KVM can be summarized as follows:

 

1. KVM modules:

kvm-intel.ko  # for Intel CPUs
kvm-amd.ko    # for AMD CPUs
kvm.ko        # main module

When kvm.ko and the module matching the CPU vendor are loaded, the character device /dev/kvm appears; it is responsible for communication between QEMU and KVM.
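A minimal check, assuming a Linux host, that the modules are loaded and the device node exists:

```shell
# kvm plus kvm_intel (or kvm_amd) should appear here on a KVM-capable host:
lsmod | grep '^kvm' || echo "no kvm modules loaded"

# /dev/kvm is a character device; QEMU opens it to communicate with KVM.
if [ -c /dev/kvm ]; then kvm_dev=present; else kvm_dev=missing; fi
echo "/dev/kvm is $kvm_dev"
```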

 

2. In the KVM architecture, each virtual CPU appears as a normal Linux process, scheduled by the Linux scheduler and enjoying all the facilities of the Linux kernel.

 

3. KVM itself performs no device emulation in the kernel; it provides CPU and memory virtualization, as well as interception of guest I/O. Intercepted guest I/O is handed over by KVM to the QEMU process. The KVM-modified QEMU runs in user space, provides hardware I/O emulation, and interacts with KVM through ioctl calls on the /dev/kvm character device.

When the KVM module is loaded:

1) It first initializes its internal data structures;

2) Once ready, the KVM module detects the current CPU, enables the virtualization mode switch in the CR4 control register, and puts the host operating system into virtualization root mode by executing the VMXON instruction;

3) Finally, the KVM module creates the special device file /dev/kvm and waits for commands from user space.

 

4. KVM feature list:

1) CPU and memory overcommit

2) virtio devices

3) hot plugging (CPU, block devices, network devices, etc.)

4) SMP (Symmetric Multi-Processing)

5) live migration

6) PCI device passthrough and Single Root I/O Virtualization (SR-IOV)

7) Kernel Samepage Merging (KSM)

8) Non-Uniform Memory Access (NUMA)
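Taking KSM as an example, its state can be inspected on the host through sysfs (these paths are standard on Linux kernels built with KSM; the values vary by system):

```shell
if [ -d /sys/kernel/mm/ksm ]; then
    echo "run:          $(cat /sys/kernel/mm/ksm/run)"           # 1 = merging enabled
    echo "pages_shared: $(cat /sys/kernel/mm/ksm/pages_shared)"  # pages deduplicated
    ksm=available
else
    ksm=unavailable
fi
echo "KSM is $ksm"
```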

 

 

5. KVM tool set

1) libvirt: virtualization API for managing KVM virtual machines

2) virsh: command-line tool based on libvirt

3) virt-manager: GUI tool based on libvirt

4) virt-v2v: virtual machine migration tool

5) virt-*: other virt utilities

6) sVirt: security extension for virtualization
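A sketch of day-to-day virsh usage; the domain name vm2 is hypothetical, so everything except the harmless list command is shown commented out:

```shell
if command -v virsh >/dev/null 2>&1; then
    virsh list --all || true    # all defined domains, running or shut off
    # virsh start vm2           # boot a defined domain
    # virsh shutdown vm2        # graceful ACPI shutdown
    # virsh destroy vm2         # hard power-off (does not delete the domain)
    virsh_found=yes
else
    echo "virsh not installed"
    virsh_found=no
fi
```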

 

 

V. Simple Installation and Use of QEMU-KVM

1) Check that the CPU supports virtualization:

cat /proc/cpuinfo | egrep "vmx|svm"

2) Modern Linux kernels ship the KVM modules; make sure the kvm module is loaded correctly:

ls /dev/kvm

3) Install QEMU:

yum install qemu*

Or git clone https://git.qemu.org/git/qemu.git to get the latest QEMU sources

Or download the desired release from https://download.qemu.org/

 

4) Create an image file. There are generally two ways:

[a]. dd if=/dev/zero of=rhel7u4.img bs=1M count=8192

[b]. qemu-img create -f qcow2 rhel7u4.img 8G; this is the recommended way. The resulting img file is a sparse file: it does not occupy 8G at creation time, but grows as the guest writes data, up to the 8G virtual size.
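The sparseness is easy to verify: a freshly created qcow2 image occupies only a few hundred kilobytes on disk despite its 8G virtual size. A sketch, assuming qemu-img is installed (it skips gracefully otherwise):

```shell
if command -v qemu-img >/dev/null 2>&1; then
    dir=$(mktemp -d)
    qemu-img create -f qcow2 "$dir/demo.qcow2" 8G
    du -h "$dir/demo.qcow2"          # actual on-disk usage: far less than 8G
    qemu-img info "$dir/demo.qcow2"  # reports virtual size 8 GiB
    rm -rf "$dir"
    sparse_demo=done
else
    echo "qemu-img not installed"
    sparse_demo=skipped
fi
```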

 

-f: disk image format, generally raw or qcow2. qcow2 is the more common choice: its performance is close to raw, but it has the advantages of being sparse and of supporting encryption, compression, snapshots and other features.

 

Format conversion: qemu-img convert -f raw -O qcow2 input.img output.qcow2
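A round-trip sketch of the conversion using throwaway files (again assuming qemu-img is installed):

```shell
if command -v qemu-img >/dev/null 2>&1; then
    dir=$(mktemp -d)
    qemu-img create -f raw "$dir/input.img" 64M
    qemu-img convert -f raw -O qcow2 "$dir/input.img" "$dir/output.qcow2"
    qemu-img info "$dir/output.qcow2" | grep 'file format'  # now qcow2
    rm -rf "$dir"
    converted=yes
else
    echo "qemu-img not installed"
    converted=no
fi
```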

 

5) Install the guest OS into the img file; the img can be regarded as the hard disk of the QEMU guest.

taskset -c 0-4 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp cores=16,sockets=1 -m 77G -drive file=redhat.img -vnc :12 -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device e1000,netdev=ipvm1,id=net0,mac=00:00:02:98:AC:62

 

taskset -c 0-4: bind the QEMU process to host cores, meaning cores 0-4 of the host run this guest

-name vm2: the guest's name

-enable-kvm: enable KVM acceleration

-cpu host: the guest's CPU model (here, a copy of the host's CPU)

-smp: the guest's CPU topology

-m 77G: the guest's memory size

-drive file=/home/redhat.img: configure the drive; this can also be written as -boot order=cd -hda redhat.img -cdrom redhat.iso

-vnc :12: open VNC on port 5912 (5900 + 12), through which clients can connect. A listen IP can be specified, e.g. -vnc 10.10.10.10:12; the default is 127.0.0.1.

-netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup: this is the newer style of network configuration. It creates a tap-type network device; id=ipvm1 is an identifier that can be chosen freely, ifname=tap3 is the name of the tap device, and script=/etc/qemu-ifup means the qemu-ifup script is executed first when the virtual machine is created. There is also a downscript=/etc/qemu-ifdown, which is called automatically at shutdown and does not need to be specified explicitly.

-device e1000,netdev=ipvm1,id=net0,mac=00:00:02:98:AC:62: the -netdev option created the host-side tap NIC with id ipvm1; -device e1000 creates a gigabit (Intel e1000) NIC in the guest, whose netdev=ipvm1 pairs it with the corresponding host-side tap device; id=net0 identifies the guest-side NIC, and mac specifies the guest NIC's MAC address.
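For reference, a typical /etc/qemu-ifup is only a few lines: it brings the tap device up and attaches it to a bridge. The bridge name br0 is an assumption; the sketch writes the script to a temporary path and syntax-checks it rather than installing it:

```shell
cat > /tmp/qemu-ifup.example <<'EOF'
#!/bin/sh
# QEMU passes the tap interface name (e.g. tap3) as $1.
ip link set "$1" up            # bring the tap device up
ip link set "$1" master br0    # attach it to an existing bridge br0
EOF
sh -n /tmp/qemu-ifup.example && echo "qemu-ifup sketch: syntax OK"
```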

 

 

In addition, the guest's SMP topology can be bound to specified NUMA nodes, e.g. -smp cores=4,threads=2,sockets=2 -numa node,mem=1G,cpus=0-8,nodeid=0 -numa node,mem=1G,cpus=9-15,nodeid=1

A simpler way is to write -smp 16, i.e. the guest gets 16 CPUs.
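Before binding guest nodes, inspect the host's own topology; numactl is assumed installed, with sysfs as a fallback:

```shell
if command -v numactl >/dev/null 2>&1; then
    numactl --hardware    # host nodes with their CPUs and memory sizes
else
    # Fallback: each node's CPU list straight from sysfs
    cat /sys/devices/system/node/node*/cpulist 2>/dev/null \
        || echo "no NUMA information available"
fi
numa_checked=yes
```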

 

6) Start the guest, press Ctrl+Alt+2 to switch to the (qemu) monitor command line, and type info cpus to see which host thread ID each guest CPU corresponds to.

 

Or, on the host, ps -efL | grep qemu shows the same information.

 

22288 is the process ID of the QEMU guest; 22290-22297 are the threads it spawned, which run as the guest's vCPUs.

 

7) Processor affinity and vCPU binding

a. Check which core the process is running on:

[root@xid]# taskset -p 3963

pid 3963's current affinity mask: 4

The 4 is hexadecimal; in binary it is 100, i.e. the process is allowed to run on core 2 (cores are numbered from 0)

or

[root@xid]# taskset -pc 3963

pid 3963's current affinity list: 2

With the -c parameter, taskset reports the core number directly, in decimal
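The mask-to-core mapping can be illustrated in pure shell: bit N of the mask corresponds to core N (counting from 0), so for example mask 0x8 allows only core 3:

```shell
mask=0x8
m=$((mask))   # numeric value of the hex mask
core=0
# Find the lowest set bit: that is the lowest core the process may run on.
while [ $((m & 1)) -eq 0 ]; do
    m=$((m >> 1))
    core=$((core + 1))
done
echo "mask $mask -> lowest allowed core $core"
```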

b. Move the guest's QEMU process to core 3:

[root@xid]# taskset -p 0x8 3963

pid 3963's current affinity mask: 4

pid 3963's new affinity mask: 8

or

[root@xid]# taskset -pc 3 3963

pid 3963's current affinity list: 2

pid 3963's new affinity list: 3
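Putting the pieces together, each vCPU thread of a running guest can be pinned to its own core. The PID 22288 is the hypothetical one from the ps output above; substitute the real QEMU PID on your host. The loop does nothing if the PID does not exist:

```shell
qemu_pid=22288   # hypothetical; take the real PID from ps -efL | grep qemu
core=0
# ps -L -o lwp= prints the process's thread IDs (LWPs), one per line.
for tid in $(ps -L -o lwp= -p "$qemu_pid" 2>/dev/null); do
    taskset -pc "$core" "$tid" || true   # pin thread $tid to core $core
    core=$((core + 1))
done
echo "pinned $core threads"
```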

 


Origin www.cnblogs.com/xia-dong/p/11470390.html