KubeVirt technology introduction and experimental deployment

Introduction to Virtualization

In the development of cloud computing, two types of virtualization platform have emerged:

  • OpenStack (IaaS): focuses on resource utilization, virtual machine compute, network, and storage
  • Kubernetes (PaaS): focuses on container scheduling, automated deployment, and release management

Classification of hypervisors (VMMs, virtual machine monitors)

1. Type-1, native or bare-metal hypervisors: hardware virtualization

These hypervisors are installed and run directly on the host's hardware, controlling and managing hardware resources without an underlying operating system.

for example:

  • Microsoft Hyper-V
  • VMware ESXi
  • KVM

2. Type-2 or hosted hypervisors:

These hypervisors run as ordinary programs on top of the host's operating system.

  • QEMU
  • VirtualBox
  • VMware Player
  • VMware WorkStation

3. Virtualization mainly covers the CPU, memory (MEM), and I/O devices

Among them, Intel VT-x/AMD-V implements CPU virtualization.

Intel EPT/AMD NPT implements memory virtualization.

4. The QEMU-KVM combination:

KVM can only virtualize the CPU and memory, while QEMU can emulate peripheral hardware such as sound cards, USB interfaces, and so on. QEMU and KVM are therefore usually combined into a single solution: QEMU-KVM.

Libvirt

It is a software collection for managing virtualization platforms. It provides a unified API, a daemon (libvirtd), and a default command-line management tool: virsh.

In principle we could drive qemu-kvm from its own command line, but it takes far too many parameters to be convenient.

So we usually use the libvirt solution to manage virtual machines.

libvirt is a management layer for hypervisors: we use its command-line tools to drive the hypervisor, and the hypervisor in turn manages the virtual machines.

Introduction to KubeVirt

Reference documentation: https://github.com/kubevirt/kubevirt

Official document: https://kubernetes.io/blog/2018/05/22/getting-to-know-kubevirt/

Introduction to volumes and disks: http://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/

KubeVirt is an open source project for running virtual machines in containers on top of Kubernetes. Specifically, it uses Kubernetes CRDs (custom resources) to add resource types for creating and managing virtual machines, chiefly VM and VMI. In other words, CRDs add virtual-machine resource types to the cluster, and operations such as creating a virtual machine are then expressed as YAML.

There are two main resource types: VM, VMI

  • VM (VirtualMachine) resource: provides management functions for a VirtualMachineInstance, such as starting/stopping/restarting the virtual machine and ensuring the desired run state of the instance. It has a 1:1 relationship with its instance, similar to a StatefulSet with spec.replicas: 1.
  • VMI (VirtualMachineInstance) resource: similar to a Kubernetes Pod, it is the minimum unit for managing virtual machines. A VirtualMachineInstance object represents a running virtual machine instance and includes all the configuration the virtual machine needs.

Through the above resources, virtual machines can be managed on Kubernetes.

Architecture of Kubevirt

Look at the architecture diagram first:


KubeVirt plugs the VM management interface into Kubernetes in the form of CRDs. Each VM is managed through libvirt inside a Pod, giving a one-to-one correspondence between Pods and VMs. Virtual machines are thus managed like containers and get the same resource management and scheduling as containers.

The difference is that an ordinary Pod is just a Pod, while a Pod containing a VM is called a virt-launcher Pod.

Let's introduce the components of the Master node:

  • virt-api: KubeVirt manages virt-launcher Pods in the form of CRDs, and virt-api is the entry point for all virtual machine management operations. It receives instructions such as CRD create, update, and delete, as well as start, stop, delete, console, and similar operations on virtual machines.
  • virt-controller: manages and monitors VMI objects and their associated Pods and updates their status. virt-controller creates the corresponding VMI from the VM manifest you wrote, then creates the matching virt-launcher Pod. It communicates with the Kubernetes api-server to watch and maintain the creation and deletion of VMI resources.

Let's introduce the components of the node node:

  • virt-handler: this component runs as a DaemonSet Pod, i.e. a daemon deployed on every node. It monitors and operates each virtual machine instance on its node: once it detects a state change in a virtual machine instance, it responds and ensures the instance reaches the state declared in the VMI CRD.
  • virt-launcher: each virt-launcher Pod corresponds to one VMI. Kubernetes is only responsible for the virt-launcher Pod's run state and does not itself handle the VMI, so virt-handler notifies virt-launcher to use the local libvirtd to manage the virtual machine's life cycle according to the manifest's configuration. When the Pod's life cycle ends, virt-launcher also tells the VMI to terminate.
  • virtctl: a command-line tool shipped with KubeVirt, similar in nature to kubectl; unlike kubectl, virtctl talks to virt-api to create, update, and delete virtual machines.

VM (Virtual Machine) startup process

Reference documentation: https://github.com/kubevirt/kubevirt


Introduction to resources related to virtual machines

  • virtualmachines (VM): Provides management functions for VirtualMachineInstance in the cluster, such as starting/shutting down/restarting virtual machines, to ensure the startup status of virtual machine instances.
  • virtualmachineinstances (VMI): Similar to a Kubernetes Pod, it is the minimum resource for managing virtual machines. A VirtualMachineInstance object represents a running virtual machine instance and includes all the configuration the virtual machine needs.
  • VirtualMachineInstanceReplicaSet: Similar to Kubernetes' ReplicaSet, it can start a specified number of VirtualMachineInstances, and ensure that the specified number of VirtualMachineInstances are running, and HPA can be configured.
  • VirtualMachineInstanceMigrations: Provides the ability to migrate virtual machines.
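As a rough sketch of the ReplicaSet analogy (the name and labels here are illustrative, not from this deployment; field names follow the kubevirt.io/v1 API), a VirtualMachineInstanceReplicaSet wraps an ordinary VMI spec in a ReplicaSet-style template:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: testreplicaset          # illustrative name
spec:
  replicas: 3                   # keep three identical VMIs running
  selector:
    matchLabels:
      myvmi: myvmi
  template:                     # an ordinary VMI spec used as the template
    metadata:
      labels:
        myvmi: myvmi
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64M
      volumes:
      - name: containerdisk
        containerDisk:
          image: quay.io/kubevirt/cirros-container-disk-demo
```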

Virtual machine images, disks, and volumes

Introduction to volumes and disks: http://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/

A virtual machine needs an image to boot from, and that image in turn needs to be kept somewhere, so storage is required.

There is a sub-project in KubeVirt: CDI.

CDI is a plug-in for Kubernetes persistent storage management. The CDI project provides the function of using PVC as a KubeVirt VM disk.

Multiple types of volumes can be specified under spec.volumes:

  • cloudInitNoCloud: cloud-init configuration used to modify or initialize settings inside the virtual machine
  • containerDisk: a docker image containing a qcow2 or raw disk; data is lost when the vm restarts
  • dataVolume: dynamically creates a PVC and fills it with the specified disk image; data survives vm restarts
  • emptyDisk: allocates a fixed amount of space from the host and maps it as a disk in the vm; its life cycle matches the vm's, and data is lost on restart
  • ephemeral: a temporary volume created when the virtual machine starts and destroyed when it shuts down, useful wherever disk persistence is not required
  • hostDisk: creates an img file on the host and attaches it to the virtual machine; data survives vm restarts
  • persistentVolumeClaim: uses a specified PVC as a block device; data survives vm restarts
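As an example of the dataVolume type above, here is a minimal sketch of a CDI DataVolume (the name, URL, and size are placeholders; field names follow the cdi.kubevirt.io/v1beta1 API). It imports a disk image over http into a freshly provisioned PVC, which a VM volume can then reference by name:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-dv               # illustrative name
spec:
  source:
    http:
      url: "https://example.com/images/fedora.qcow2"   # placeholder URL
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```

A VM's volumes list can then use `dataVolume: {name: fedora-dv}`; the data survives restarts because it lives in the PVC.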

KubeVirt-Network

Prerequisites

A Kubernetes platform is required; it can be deployed with KubeKey, for example.

Installation (v0.48.1)

Official website address: https://github.com/kubevirt/kubevirt

Preparation

1. Install libvirt and qemu software packages first

[root@master ~]# yum install -y qemu-kvm libvirt virt-install bridge-utils

2. Check whether the node supports kvm hardware virtualization

[root@master ~]# virt-host-validate qemu
# The following output shows a host where hardware virtualization FAILED
QEMU: Checking for hardware virtualization                                 : FAIL (Only emulated CPUs are available, performance will be significantly limited)
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
WARN (Unknown if this platform has IOMMU support)

3. If hardware virtualization is not supported, configure KubeVirt to fall back to software emulation:

kubectl create namespace kubevirt
kubectl create configmap -n kubevirt kubevirt-config \
 --from-literal debug.useEmulation=true

Install KubeVirt

Deploy version 0.48.1

export VERSION=v0.48.1
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml

Test results:

[root@master ~]# kubectl get pods -n kubevirt
NAME                               READY   STATUS    RESTARTS   AGE
virt-api-67699974c9-46h4p          1/1     Running   0          11m
virt-api-67699974c9-rxvzn          1/1     Running   0          11m
virt-controller-575dbb5b66-q2kps   1/1     Running   0          9m24s
virt-controller-575dbb5b66-xhv89   1/1     Running   0          9m24s
virt-handler-f9jlk                 1/1     Running   0          9m24s
virt-handler-gtstv                 1/1     Running   0          9m24s
virt-operator-574d595ffb-tlbcs     1/1     Running   0          12m
virt-operator-574d595ffb-w9s88     1/1     Running   0          12m

Deploy CDI (v1.47.1)

The Containerized Data Importer (CDI) project provides the functionality to use a PVC as a KubeVirt VM disk. It is recommended to deploy CDI as well:

[root@master ~]# export VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
[root@master ~]# kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
[root@master ~]# kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml

Test results:

[root@master ~]# kubectl get pods -n cdi
NAME                               READY   STATUS    RESTARTS   AGE
cdi-apiserver-67944db5c9-6bmpz     1/1     Running   0          2m53s
cdi-deployment-6fcc76f596-pfjqq    1/1     Running   0          2m45s
cdi-operator-5f57676b77-vq6rj      1/1     Running   0          5m39s
cdi-uploadproxy-66bd867c6f-5fl2r   1/1     Running   0          2m41s

Install the virtctl client tool

KubeVirt provides a command line tool: virtctl, which we download and use directly.

[root@master ~]# wget https://github.com/kubevirt/kubevirt/releases/download/v0.48.1/virtctl-v0.48.1-linux-amd64
[root@master ~]# mv virtctl-v0.48.1-linux-amd64 /usr/local/bin/virtctl
[root@master ~]# chmod +x /usr/local/bin/virtctl 
[root@master ~]# virtctl -h     # show help

Download the official vm.yaml file (to create the demo virtual machine)

1. Download the official vm.yaml file. It declares all the configuration a virtual machine needs, such as network, disks, and image.

[root@master ~]# wget https://kubevirt.io/labs/manifests/vm.yaml --no-check-certificate
# The file contents are as follows:
apiVersion: kubevirt.io/v1
kind: VirtualMachine           # VM resource
metadata:
  name: testvm
spec:
  running: false            # when set to true, the VMI resource is created and the VM starts
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:            # devices to attach, e.g. disks, network interfaces, ...
          disks:            # disk settings; two disks are defined here
            - name: containerdisk
              disk:              # attach the volume to the VMI (virtual machine instance) as a disk
                bus: virtio      # type of disk device to emulate: sata, scsi, virtio, ...
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:            # select a network defined in the "networks" field below; here the "default" network
          - name: default
            masquerade: {}       # enable masquerade: network address translation (NAT) connects the VM to the Pod network backend through a Linux bridge
        resources:
          requests:
            memory: 64M
      networks:          # network configuration
      - name: default              # define a network named "default", using the default Kubernetes CNI
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo      # the virtual machine is created from this image
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

2. Use this template file to create a virtual machine:

[root@master ~]# kubectl apply -f vm.yaml 
virtualmachine.kubevirt.io/testvm created
# Check the VM status. As mentioned in the concepts above, the VM's "running" field defaults to "false"; we start the VM with the virtctl command line, virt-controller flips "running" from "false" to "true", and only then are the VMI resource and the virtual machine itself created.
[root@master ~]# kubectl get vm
NAME     AGE   STATUS    READY
testvm   5s    Stopped   False

3. Start the virtual machine and verify that it comes up

# Start the testvm virtual machine
[root@master ~]# virtctl start testvm
VM testvm was scheduled to start
# Check the Pod; this Pod hosts the virtual machine
[root@master ~]# kubectl get pods
virt-launcher-testvm-c5gfc   2/2     Running   0          25s
# Then check the VMI resource
[root@master ~]# kubectl get vmi
NAME     AGE   PHASE     IP            NODENAME   READY
testvm   59s   Running   10.244.1.26   node1      True

4. Enter the virtual machine console

virtctl console testvm
# press Ctrl+] to leave the console
Successfully connected to testvm console. The escape sequence is ^]

OK
GROWROOT: NOCHANGE: partition 1 is size 71647. it cannot be grown
/dev/root resized successfully [took 0.17s]
/run/cirros/datasource/data/user-data was not '#!' or executable
=== system information ===
Platform: KubeVirt None/RHEL-AV
Container: none
Arch: x86_64
CPU(s): 1 @ 2199.541 MHz
Cores/Sockets/Threads: 1/1/1
Virt-type: AMD-V
RAM Size: 43MB
Disks:
NAME  MAJ:MIN     SIZE LABEL         MOUNTPOINT
vda   253:0   46137344               
vda1  253:1   36683264 cirros-rootfs /
vda15 253:15   8388608               
vdb   253:16   1048576 cidata        
=== sshd host keys ===
-----BEGIN SSH HOST KEY KEYS-----
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCh4nXQS4nzbGRBMHw92aSBrSG1OPxbfp99vC2NHrYLtA6rMbi8sZ7H7Ys7A4RVC0vH1dcbVN/NFXRBfANXcD0rmr17HHX6nhXvFzWGBEEZEY2OYWErjxGAtAI/m+6OwOoYwYkHVIZTyAMejcN/JgW+yYqPAc8Md7zAfZ5c9xTqVnRASFxTpWwxGaf5p1pWq1RH2QHzKvEcbSqt0OIRyneqo25xn7we3rh2KcjCj16f4E0iL7qkum/ftv6bzgZ9mPKgRb2ja6W1LEek1GlHIwmvUuflL8Y6a4sk0RFTvyEUnNq8SdNBTqTUGZ9O8SSQx1bj733vr2WGLljDZkFB6Ver root@testvm
ssh-dss AAAAB3NzaC1kc3MAAACBAIlxV2kDXdGe3qapWXZ2qKoI3KCK9c6w80zSgfr0loLFwUaCWZax6NedlJDIXzoigjcgT0YWQT40aLxUrYQXlBUISnz8CkfKjfByzQK1WMP4OpZ0xjCWuuOdp0kqtGr76J8teq14RTRHdApUey0JHQCEkEU05AUqj3V3nmY8HcoRAAAAFQCmziUZA8vzBKHzc19KOpxnCsndUwAAAIBCPympZPf8EuY7miagP+vt6qFSW2Yv1X/xP0vTqd89BYCYmgoGHYKlU3B7gCq7EEF5kphzZ0CagjAPiHt50X3aL9vviqM9gJx721Dz+y5xvnicRs0OKfYMSDo7gg5bcsKM/BtKTR80gRq51IBWm+kO5NcIcCK75HIQX5cu5UK2DwAAAIBZPygbYSM7fetwf0qEvXInhbsvDtjFGXsHAh2M3n6DkbmDgTjwcnDBb2WPzkMzmnGz/mCsClMR/mZRjViZ7A5i3OKk2tpqBQbfP0drKPg4WaMuvtpkZ5drr8y6PHWlweekBmcuiK0mHlgRFCl0aoJ0KWXU0AH3llDxdZlVwl1U1Q== root@testvm
-----END SSH HOST KEY KEYS-----
=== network info ===
if-info: lo,up,127.0.0.1,8,,
if-info: eth0,up,10.0.2.2,24,fe80::5054:ff:feae:bce3/64,
ip-route:default via 10.0.2.1 dev eth0 
ip-route:10.0.2.0/24 dev eth0  src 10.0.2.2 
ip-route6:fe80::/64 dev eth0  metric 256 
ip-route6:unreachable default dev lo  metric -1  error -101
ip-route6:ff00::/8 dev eth0  metric 256 
ip-route6:unreachable default dev lo  metric -1  error -101
=== datasource: nocloud local ===
instance-id: testvm.default
name: N/A
availability-zone: N/A
local-hostname: testvm
launch-index: N/A
=== cirros: current=0.4.0 uptime=18.75 ===
  ____               ____  ____
 / __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \ 
\___//_//_/  /_/   \____/___/ 
   http://cirros-cloud.net


login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
# log in with the user/password shown above
testvm login: 

Ctrl+] to exit this virtual machine

5. Related commands

virtctl stop vm
virtctl start vm
virtctl restart vm
virtctl console vm

Create a virtual machine with Win10 operating system

1. Download the Win10 image

Download address: https://tb.rg-adguard.net/public.php


Download to Win and transfer to Linux, or directly in Linux:

wget https://tb.rg-adguard.net/dl.php?go=165b1d29

Create default storage (StorageClass)

Note: since we will upload the image later, we need a PV and a PVC. The PVC will be created for us automatically, but the PV has to come from somewhere.

There are two ways to create PVs and PVCs: static and dynamic.

Static: write the PV and PVC manifests by hand and bind them together.

Dynamic: create a StorageClass; after that we only write PVC manifests, and a PV is created and bound to each PVC automatically.

We use the dynamic method here:

1. Deploy NFS service

# On every machine:
yum install -y nfs-utils


# On the master, run:
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports


# Create the shared directory (the nfs service is started next):
mkdir -p /nfs/data


# On the master:
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Apply the exports configuration
exportfs -r


# Check that the configuration took effect
exportfs

2. Configure storage class

The file is as follows:

Only two places need modification: change the NFS server address (the NFS_SERVER environment variable and the nfs.server field of the volume) to your own.

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.4 ## address of your own NFS server
            - name: NFS_PATH  
              value: /nfs/data  ## directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.4
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

3. Apply and view StorageClass (sc)

kubectl apply -f sc.yaml 
[root@master ~]# kubectl get sc
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  11m
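With the default StorageClass above in place, "dynamic" really does mean PVC-only: a claim like this sketch (the name and size are illustrative) gets a PV provisioned and bound automatically:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim              # illustrative name
spec:
  storageClassName: nfs-storage # the StorageClass created above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Because nfs-storage is marked as the default StorageClass, the storageClassName line could even be omitted.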

Upload the image

KubeVirt can use a PVC as the backend disk. With a filesystem-type PVC, the image /disk.img inside the PVC is used by default. Users can upload an image to a PVC and reference that PVC when creating a VMI. Note the following when using this method:

A PVC may hold only one image and be used by only one VMI at a time; to create several VMIs, the image must be uploaded several times.

The /disk.img file must be in raw format.

CDI provides a solution for using a PVC as a virtual machine disk. Before the virtual machine starts, the PVC can be filled in any of the following ways:

  • import a virtual machine image into the PVC from a URL (http or s3 links)
  • clone an existing PVC
  • import a virtual machine disk from a container registry (used together with containerDisk)
  • upload a local image into the PVC through the client

Using the virtctl command line together with the CDI project, a local image can be uploaded into a PVC. Supported image formats:

  • .img
  • .qcow2
  • .iso
  • any of the above compressed into .tar, .gz, or .xz

Our goal is to install a Windows 10 virtual machine, so we need to upload the Windows image downloaded above to PVC:

Do as follows:

virtctl image-upload  \
 --image-path='Win10_20H2_Chinese(Simplified)_x64.iso' \
 --pvc-name=iso-win10 \
 --pvc-size=7G \
 --uploadproxy-url=https://10.96.237.3 \
 --insecure --wait-secs=240


# Parameter explanation:
--image-path='Win10_20H2_Chinese(Simplified)_x64.iso'    # path to the Windows image; ours is in /root, so the bare file name is enough
--pvc-name=iso-win10         # name of the PVC
--pvc-size=7G                # size of the PVC; base it on the image size, usually about one GB larger
--uploadproxy-url=https://10.96.237.3         # Service IP of cdi-uploadproxy; find it with: kubectl -n cdi get svc -l cdi.kubevirt.io=cdi-uploadproxy

Add hostDisk support

KubeVirt does not enable hostDisk support by default; it needs to be turned on manually. Here we directly edit KubeVirt's ConfigMap:

[root@master ~]# kubectl get cm -n kubevirt
NAME                              DATA   AGE
kube-root-ca.crt                  1      3h47m
kubevirt-ca                       1      3h40m
kubevirt-config                   2      3h47m          # edit this ConfigMap
kubevirt-install-strategy-r6gd8   1      3h40m

[root@master ~]# kubectl edit cm kubevirt-config -n kubevirt
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  debug.useEmulation: "true"
  feature-gates: HostDisk              # add this line
kind: ConfigMap
metadata:
  creationTimestamp: "2022-04-17T01:19:05Z"
  name: kubevirt-config
  namespace: kubevirt
  resourceVersion: "46074"
  uid: 56ff5d51-1ce5-4c0e-bef2-7c2d24525306

Create the virtual machine

1. Similarly, we need to write a vm.yaml file ourselves

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: win10
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: win10
    spec:
      domain:
        cpu:
          cores: 4
        devices:
          disks:
          - bootOrder: 1
            cdrom:
              bus: sata
            name: cdromiso
          - disk:
              bus: virtio
            name: harddrive
          - cdrom:
              bus: sata
            name: virtiocontainerdisk
          interfaces:
          - masquerade: {}
            model: e1000
            name: default
        machine:
          type: q35
        resources:
          requests:
            memory: 16G
      networks:
      - name: default
        pod: {}
      volumes:
      - name: cdromiso
        persistentVolumeClaim:
          claimName: iso-win10
      - name: harddrive
        hostDisk:
          capacity: 50Gi
          path: /data/disk.img
          type: DiskOrCreate
      - containerDisk:
          image: kubevirt/virtio-container-disk
        name: virtiocontainerdisk

Three Volumes are used here:

cdromiso: provides the operating-system installation image, i.e. the PVC iso-win10 produced by the upload above.

harddrive: the disk the operating system will be installed on. hostDisk is chosen here so the disk sits directly on the host for better performance; with slow distributed storage the installation experience can be poor.

containerDisk: Windows does not recognize raw-format disks out of the box, so the virtio driver must be installed. containerDisk mounts a container image packaged with the virtio drivers into the virtual machine.

Regarding the network part, spec.template.spec.networks defines a network called default, which means using the default CNI of Kubernetes. spec.template.spec.domain.devices.interfaces Select the defined network default and enable masquerade to use Network Address Translation (NAT) to connect the virtual machine to the Pod network backend through the Linux bridge.

2. Create a virtual machine using a template file:

kubectl apply -f win10.yaml 
virtctl start win10
# Check the status of the VM, VMI, and Pod
[root@master ~]# kubectl get vm
NAME     AGE     STATUS    READY
testvm   3h30m   Stopped   False
win10    4m7s    Running   True

[root@master ~]# kubectl get vmi
NAME    AGE    PHASE     IP            NODENAME   READY
win10   4m7s   Running   10.244.2.12   node1      True

[root@master ~]# kubectl get pods
virt-launcher-win10-hwhcw                 2/2     Running   0          4m16s

Origin blog.csdn.net/m0_57776598/article/details/130948061