『OpenStack』 Cloud Computing Platform: 『Nova』 Compute Service Study Guide

Preface

This article explains Nova, the compute service component of the OpenStack platform, combining abstract concepts with simple, easy-to-follow hands-on operations to help you better understand the role the Nova compute service plays in OpenStack.

System configuration: Host Ubuntu 20.04 (WSL2)

Introduction

OpenStack

Official website: Open Source Cloud Computing Infrastructure - OpenStack

OpenStack is an open-source cloud computing platform used to build and manage public and private cloud infrastructure. It provides a modular set of tools and services that enable users to create and manage virtual machines, storage, networking, authentication, images, and other cloud infrastructure resources.

OpenStack is a cloud operating system at the IaaS layer. It provides and manages three major types of resources for virtual machines: compute, network, and storage.

OpenStack's three major resource types

Service components

Official service component introduction: Open Source Cloud Computing Platform Software - OpenStack

OpenStack currently lists as many as thirty services on its official site, but we generally do not use all of them. Below we introduce only the core services and main services.

OpenStack architecture diagram

  • Nova: manages computing resources (core)
  • Neutron: manages network resources (core)
  • Glance: provides OS images for VMs, belonging to the storage category (core)
  • Cinder: block storage service (core)
  • Swift: object storage service
  • Keystone: identity authentication service (core)
  • Ceilometer: monitoring and metering (telemetry) service
  • Horizon: web user interface (core)
  • Quantum: network management service (the former name of Neutron)

Node composition

OpenStack is a distributed system consisting of several nodes with different functions:

  1. Controller Node
    Manages OpenStack. The services running on it include Keystone, Glance, Horizon, and the management components of Nova and Neutron. The controller node also runs supporting services such as a SQL database (usually MySQL), a message queue (usually RabbitMQ), and the network time service NTP.
  2. Network Node
    Runs the Neutron service and provides L2 and L3 networking for OpenStack, including virtual machine networking, DHCP, routing, and NAT.
  3. Storage Node
    Provides block storage (Cinder) or object storage (Swift) services.
  4. Compute Node
    Runs the Hypervisor (KVM by default), as well as the Neutron agents that provide network support for the virtual machines.

To keep the topology simple while retaining full functionality, the following two types of virtual machine nodes can be deployed:

  1. devstack-controller: control node + network node + block storage node + compute node
  2. devstack-compute: compute node

Virtualization

Note: This chapter is preparatory knowledge and can be skipped as appropriate.

Types

Virtualization is the foundation of cloud computing. It allows multiple virtual machines to run on one physical server. The virtual machines share the CPU, memory, and I/O hardware resources of the physical machine, but are logically isolated from each other. The physical machine is generally called the host (Host), and a virtual machine on the host is called a guest (Guest).

The host's hardware virtualization is mainly implemented through a Hypervisor. Depending on how and where the Hypervisor is implemented, virtualization is divided into two types: type 1 and type 2.

Type 1 virtualization

The Hypervisor is installed directly on the physical machine, and multiple virtual machines run on top of it. The Hypervisor is generally implemented as a specially customized Linux system.
Xen and VMware's ESXi both belong to this type.

Type 1 virtualization

Type 2 virtualization

A conventional operating system, such as RedHat, Ubuntu, or Windows, is first installed on the physical machine. The Hypervisor then runs as a program module on top of that OS and manages the virtual machines.
KVM, VirtualBox, and VMware Workstation all belong to this type.

Type 2 virtualization

Comparison

Type 1 virtualization is specially optimized for hardware virtualization, so its performance is higher than that of type 2.

Type 2 virtualization is based on a general-purpose operating system, so it is more flexible; for example, it supports nested virtualization, which means you can run KVM inside a KVM virtual machine.

KVM

Official website: KVM (linux-kvm.org)

This time we will use **KVM (Kernel-Based Virtual Machine)** as the virtualization tool. As the name suggests, KVM is implemented based on the Linux kernel.

The KVM kernel module is called kvm.ko and is only used to manage virtual CPUs and memory. The I/O part of virtualization (such as storage and network devices) is implemented by the Linux kernel together with QEMU.

Note: below, the blogger uses a WSL2-based virtual machine as the host. If you use a virtual machine created with VMware Workstation as the host, correctness is not guaranteed; after all, WSL2 and VMware Workstation use different virtualization approaches.

Libvirt

Official website: libvirt: The virtualization API

Note: Libvirt is only briefly introduced here; its system architecture is explained in detail later.

Libvirt is currently the most widely used tool for managing KVM virtual machines. Besides KVM, it can also manage other hypervisors such as Xen and VirtualBox. OpenStack uses Libvirt as well.

It mainly consists of three parts: the background daemon libvirtd, an API library, and the command-line tool virsh.

  1. libvirtd is the service program that receives and processes API requests.
  2. The API library allows others to develop advanced tools on top of Libvirt, such as virt-manager, a graphical KVM management tool introduced later.
  3. virsh is the KVM command-line tool we use frequently; usage examples appear later.
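
Once libvirt is installed (see the Installation section below), a quick way to see these pieces working together is to query the daemon through virsh:

# Ask libvirtd for client and daemon version information
virsh version
# Show basic information about the local hypervisor node
virsh nodeinfo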

Note: If you are using a virtualization setup that does not natively support a GUI, you need VcXsrv. The blogger has written a corresponding article before; just search for "GUI Interface Transfer" on the homepage and install and configure it.

Installation

Verify that the CPU supports hardware virtualization

Note: If the printed number is greater than 0, the CPU supports hardware virtualization; otherwise it does not.

grep -Eoc '(vmx|svm)' /proc/cpuinfo

Check if VT is enabled in BIOS

# Install
apt-get install cpu-checker -y
# Run
kvm-ok

If the following content is output after execution, the check succeeded:

INFO: /dev/kvm exists
KVM acceleration can be used

Execute the following command to install the KVM-related packages:

sudo apt-get install qemu-kvm qemu-system libvirt-daemon-system libvirt-clients virt-manager bridge-utils vlan -y
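
Depending on your setup, you may also need to add your user to the libvirt group so virsh can talk to libvirtd without root (an assumption; the group name varies by distribution):

# Add the current user to the libvirt group, then log out and back in
sudo usermod -aG libvirt $(whoami)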

The following libvirtd commands start the service and set it to start automatically at boot:

systemctl start libvirtd
systemctl enable libvirtd
# Check the service and its autostart status
systemctl list-unit-files | grep libvirtd.service

Verify whether libvirtd is running; the output "active" indicates that it is:

systemctl is-active libvirtd

Verify the KVM kernel modules; output containing kvm_intel (or kvm_amd) and kvm indicates a successful installation. (The blogger got no output here, which has had no impact so far.)

lsmod | grep kvm

Create a virtual machine

Here we use CirrOS, a small virtual machine image that is well suited for testing and learning.

Image download homepage: Index of / (cirros-cloud.net)

Image download link: cirros-0.3.3-x86_64-disk.img (cirros-cloud.net)

We then need to place the image in the /var/lib/libvirt/images/ directory, which is where KVM looks for image files by default.
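
For example (a sketch; the download URL is an assumption based on the CirrOS release layout):

# Download the CirrOS image and move it to the default libvirt image directory
wget http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
sudo mv cirros-0.3.3-x86_64-disk.img /var/lib/libvirt/images/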

Start the virtual machine management graphical interface

virt-manager

Then follow the steps below to create a virtual machine. Since we are using an .img image file, select the fourth option; if you were using an ISO image, you would select the first option.

Accept the defaults for the remaining steps.

Returning to the homepage, we can see that the vm1 virtual machine has been created successfully.

We can use virsh to manage virtual machines. For example, use the following command to list the virtual machines on the host:

# Run
virsh list
# Output
Id   Name   State
----------------------
 1    vm1    running
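
A few other frequently used virsh subcommands (assuming the virtual machine is named vm1):

virsh start vm1      # boot a defined but stopped VM
virsh shutdown vm1   # graceful shutdown via ACPI
virsh destroy vm1    # hard power-off (does not delete the VM definition)
virsh console vm1    # attach to the serial console (exit with Ctrl+])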

Troubleshooting

If you are not on the latest WSL2, systemd may not be started by default. Use the following steps to enable it.

git clone https://github.com/DamionGans/ubuntu-wsl2-systemd-script.git
cd ubuntu-wsl2-systemd-script/
bash ubuntu-wsl2-systemd-script.sh
# Enter your password and wait until the script has finished

Note that after entering the directory, you first need to manually modify two parameters in the script.

If your WSL version is 0.67.6 or later (check with wsl --version), you can enable systemd with the steps below.

Official reference: Advanced settings configuration in WSL | Microsoft Learn

To enable systemd, open /etc/wsl.conf in a text editor with sudo and add the following lines:

[boot]
systemd=true

Then shut down the WSL distribution with wsl.exe --shutdown from PowerShell to restart the WSL instance. After the distribution restarts, systemd should be running; you can confirm this with the systemctl list-unit-files --type=service command, which displays the status of each service.
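
Concretely (the first command runs in Windows PowerShell, the second inside the restarted distribution):

# From PowerShell: stop all running WSL instances
wsl.exe --shutdown
# Inside the restarted distribution: confirm systemd is managing services
systemctl list-unit-files --type=service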

DevStack

Introduction

DevStack (Develop OpenStack) is a rapid deployment tool provided by the OpenStack community, designed specifically for OpenStack development.

DevStack does not depend on any automated deployment tools; it is implemented purely in Bash scripts, so there is no need to spend time preparing a deployment toolchain. You only need to edit a configuration file and then run the script to get a one-click deployment of the OpenStack environment. Essentially all OpenStack components can be deployed with DevStack, but not every developer needs every service. For example, Nova developers may only need to deploy the core components; services such as Swift, Heat, and Sahara are not actually required. DevStack takes this into account and was designed to be extensible from the start: apart from the core components, other components are provided as plugins, which developers enable and configure according to their own needs, as sketched below.
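
A plugin is enabled with a single line in local.conf; a sketch (the plugin URL and branch here are illustrative and depend on your release):

# Syntax: enable_plugin <plugin-name> <git-url> [branch]
enable_plugin heat https://opendev.org/openstack/heat stable/ussuri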

Deployment

Below we explain in detail how to deploy DevStack on a single machine.

First we need to add a user.

DevStack should be executed as a non-root user with sudo permissions, so manually add a stack user:

# Add the stack user
sudo useradd -s /bin/bash -d /opt/stack -m stack

# Grant sudo privileges
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack

# Log in as the stack user
sudo su - stack

Then configure pip

# Create the folder
cd && mkdir .pip && cd .pip

# Create and edit the config file
sudo vim pip.conf

# Add the following configuration
[global]
timeout = 6000
index-url = http://mirrors.aliyun.com/pypi/simple/
trusted-host = mirrors.aliyun.com

Download DevStack (configure your own GitHub SSH key before executing the following command):

Command to generate a key: ssh-keygen -t rsa -C "stack"

git clone https://opendev.org/openstack/devstack -b stable/ussuri

Then enter the devstack folder, create the configuration file local.conf, and add the following content.

Parameter explanations:

  • ADMIN_PASSWORD: password for the OpenStack users admin and demo
  • DATABASE_PASSWORD: MySQL administrator password
  • RABBIT_PASSWORD: RabbitMQ password
  • SERVICE_PASSWORD: password used by service components to interact with Keystone
  • GIT_BASE: source code hosting server
  • HOST_IP: bound IP address

[[local|localrc]]
HOST_IP=172.22.124.174
GIT_BASE=https://opendev.org

ADMIN_PASSWORD=111111
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

Execute the scripts in the devstack directory:

# Install
./stack.sh
# Stop
./unstack.sh
# Stop and clean up the configuration
./clean.sh

If the following error occurs during installation, you can run FORCE=yes ./stack.sh instead:

[ERROR] ./stack.sh:227 If you wish to run this script anyway run with FORCE=yes
/opt/stack/devstack/functions-common: line 241: /opt/stack/logs/error.log: No such file or directory
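
After stack.sh completes, you can sanity-check the deployment with the OpenStack CLI. A minimal sketch, assuming the default DevStack layout and the local.conf above:

# Load admin credentials provided by DevStack
source ~/devstack/openrc admin admin
# List registered Nova services; nova-compute and nova-scheduler should be "up"
openstack compute service list
# List available images (DevStack uploads a CirrOS image by default)
openstack image list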

Nova

Official repository: openstack/nova: OpenStack Compute (Nova). Mirror of code maintained at opendev.org. (github.com)

Introduction

Now we come to the core topic of this article: Nova, the compute service that manages virtual machine instances.

Nova handles all activities in the life cycle of instances in the OpenStack cloud. It is responsible for managing compute resources, networking, authorization, and the scalability needs of the platform.

However, Nova itself has no virtualization capabilities; it uses the Libvirt API to interact with a supported Hypervisor.

Components

1. API Server (Nova-Api)

The API Server provides the external interfaces for interacting with the cloud infrastructure and is the only externally-facing component for managing the infrastructure.

2. Message Queue (RabbitMQ Server)

Communication between OpenStack nodes is done through message queues using AMQP (Advanced Message Queuing Protocol).

3. Compute Worker (Nova-Compute)

The Compute Worker manages the instance life cycle: it receives instance life cycle management requests through the Message Queue and carries out the operations.

4. Network Controller (Nova-Network)

The Network Controller handles the host's network configuration, including assigning IP addresses, configuring VLANs for projects, implementing security groups, and configuring compute node networks.

5. Volume Worker (Nova-Volume)

The Volume Worker manages LVM (Logical Volume Manager)-based instance volumes. It handles volume-related functions such as creating new volumes, deleting volumes, attaching volumes to instances, and detaching volumes from instances.

6. Scheduler (Nova-Scheduler)

The Scheduler maps Nova-API calls to the appropriate OpenStack components. It runs as the nova-scheduler daemon and picks suitable compute resources from the available resource pool through scheduling algorithms.

AMQP

AMQP (Advanced Message Queuing Protocol) is an open-standard application-layer protocol designed for message-oriented middleware.

A queue is mainly used to store and forward messages sent by an exchange. Queues also have flexible life-cycle attributes: a queue can be durable, temporarily resident, or automatically deleted.

Basic workflow:

A publisher publishes messages to an exchange.

The exchange distributes received messages, according to routing rules, to the queues bound to it.

Finally, the AMQP broker delivers the message to consumers subscribed to the queue, or consumers fetch messages from the queue on demand.

The overall flow is: Publisher → Exchange → Queue → Consumer.

Exchanges

An exchange is the AMQP entity to which messages are sent.

After an exchange receives a message, it routes it to zero or more queues.

The routing algorithm used is determined by the exchange type and the binding rules.

The AMQP 0-9-1 broker provides four exchange types.

Fanout exchange (broadcast exchange)

This type of exchange does not examine the routing key of received messages; by default, a message is forwarded to all queues bound to the exchange. The fanout exchange is the simplest type and has the highest forwarding efficiency, but it is less secure: consumer applications may receive messages that were not meant for them.

Direct exchange

A direct exchange has high forwarding efficiency and good security, but is less flexible and requires a large amount of configuration.
Compared with the fanout exchange, however, a direct exchange does give us more flexibility.

The routing algorithm of a direct exchange is very simple: if a message's routing_key exactly matches a queue's binding_key, the message is routed to that queue.
The binding key ties a queue to an exchange. When a message's routing_key matches multiple binding keys, the message may be delivered to multiple queues.

Topic exchange

With a **topic exchange**, queues are bound to the exchange through routing patterns. The exchange then routes a message to one or more bound queues based on the message's routing key.

Similarities and differences between fanout and topic exchanges:

  • For a fanout exchange, the routing key is meaningless: every message is sent to all queues bound to it.
  • For a topic exchange, routing is determined by the routing key: a message is routed to a queue only when it matches that queue's binding pattern.
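
As an illustration of topic routing patterns (following RabbitMQ's conventions, where '*' matches exactly one word and '#' matches zero or more words):

# binding pattern       matching routing keys
# "kern.*"        ->    "kern.info", "kern.err"          (exactly one extra word)
# "*.critical"    ->    "kern.critical", "app.critical"
# "app.#"         ->    "app.db", "app.db.error"         ('#' matches zero or more words)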

Headers exchange

Similar to the topic exchange, but a headers exchange uses multiple message header attributes instead of the routing key to establish routing rules; a rule is matched by comparing the values in the message headers with those specified in the binding.
This exchange type has one important argument: "x-match".

  • When "x-match" is "any", a single matching header value satisfies the condition.
  • When "x-match" is "all", every header value specified in the binding must match.

Message Queue

We know that Nova contains many sub-services, and these sub-services need to coordinate and communicate with each other. To decouple them, Nova uses the Message Queue as an information relay station between sub-services. That is why the architecture diagram shows no direct connections between sub-services: they all communicate through the Message Queue.

Note: OpenStack uses RabbitMQ as Message Queue by default

RPC calls

Nova implements two kinds of RPC calls on top of RabbitMQ.

Nova modules are roughly divided into two groups:

Invoker modules: mainly send system request messages to the message queue, e.g. Nova-API and Nova-Scheduler.

Worker modules: fetch the system request messages sent by Invoker modules from the message queue and reply with system response messages, e.g. Nova-Compute, Nova-Volume, and Nova-Network.

RPC.CALL (request/response based)

  1. The Invoker initializes a Topic Publisher to send the message request to the queuing system; immediately before publishing, it creates a Direct Consumer to wait for the response message.
  2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer identified by the routing key (such as 'topic.host') and passed to the Worker responsible for the task.
  3. Once the task is completed, a Direct Publisher is allocated to send the reply message back to the queuing system.
  4. Once the reply is dispatched by the exchange, it is fetched by the Direct Consumer identified by the routing key (such as 'msg_id') and passed to the Invoker.

RPC.CALL message flow

RPC.CAST (one-way request only)

  1. A Topic Publisher is initialized to send the message request to the queuing system.
  2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer identified by the routing key (such as 'topic') and passed to the Worker responsible for the task.

RPC.CAST message flow

Libvirt

Motivation

Each virtualization technology provides its own basic management tools for things like starting, stopping, configuring, and connecting to the console of a virtual machine. This creates two problems when building cloud management:

  • If you adopt a mix of virtualization technologies, the upper layer needs different management tools for each of them, which is very troublesome.
  • A new virtualization technology better suited to the current application scenario may appear and require migrating to it, which would mean significant changes to the management platform.

The main goal of Libvirt is to provide a convenient and reliable set of programming interfaces over the various virtualization tools, so that many different hypervisors can be managed in a single, uniform way.

Architecture

Without Libvirt, the virtual machine management structure is as follows:

With the Libvirt API, the virtual machine management structure is as follows:

Libvirt supports two control modes:

The management application and the domain are on the same node: the management application works through Libvirt to control the local domains.

The management application and the domain are on different nodes: this mode uses a special daemon called libvirtd that runs on the remote node. libvirtd starts automatically when Libvirt is installed on a new node, can automatically determine the local hypervisor, and loads the appropriate drivers for it.

The management application connects from the local Libvirt to the remote libvirtd through a common protocol.
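
In practice, virsh exposes both modes through connection URIs. A minimal sketch (the host name remote-host and SSH access to it are assumptions):

# Manage a remote libvirtd over SSH
virsh -c qemu+ssh://root@remote-host/system list --all
# The same URI scheme addresses the local daemon
virsh -c qemu:///system list --all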
