OpenStack Computing Service (Nova)

1. The basic concept of Nova

Nova is responsible for managing the creation, deletion, starting, and stopping of cloud host instances in OpenStack. Nova sits at the center of the OpenStack architecture, with other services and components (such as Glance, Placement, Cinder, and Neutron) providing support for it. Nova itself has no virtualization capability; it uses a hypervisor to create and manage cloud hosts, providing a unified interface to the various virtualization programs (such as KVM, Xen, VMware ESX, and QEMU).

1. Nova's component architecture - Nova's module composition

Nova is a powerful and complex component consisting of various modules, which are organized into several cells. Each cell is a collection of several compute nodes.

| **Module** | **Function** |
| --- | --- |
| nova-scheduler | Provides the virtual machine scheduling service; together with Placement, it selects a host from the compute cluster on which to create the virtual machine |
| nova-api | Receives and responds to external requests; the only external entry point for managing Nova |


2. Nova's component architecture - Nova's cell management mode

Compute nodes in OpenStack are divided into several cells for management. Except for the top-level management cell "cell0", each cell has its own message queue and database; "cell0" has only a database. The cell "cell0" contains the interface module (nova-api) and the scheduling module (nova-scheduler), while the remaining cells, such as "cell1" and "cell2", are responsible for creating and managing the actual cloud host instances.
Three databases serve Nova's cells: "nova_api", "nova_cell0", and "nova". The top-level management cell "cell0" uses the "nova_api" and "nova_cell0" databases. The "nova_api" database stores global information, such as cell records and instance flavor (cloud host template) information. The "nova_cell0" database holds the records of cloud hosts that failed to be scheduled: such a host belongs to no cell, so its data can only be stored in "nova_cell0" for centralized management. The "nova" database serves all the other cells and stores the information about the cloud hosts in them.
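As a hedged sketch of where records end up, the following SQL (run from the MariaDB client on the controller; the table names come from the standard Nova schema, not from this article) inspects each of the three databases:

```sql
-- Cell records are kept in the global "nova_api" database.
SELECT name, database_connection FROM nova_api.cell_mappings;

-- Cloud hosts that failed scheduling land in "nova_cell0".
SELECT uuid, vm_state FROM nova_cell0.instances;

-- Cloud hosts that were scheduled successfully live in "nova".
SELECT uuid, host, vm_state FROM nova.instances;
```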

3. The basic workflow of Nova


  • Step 1: nova-api receives the cloud host creation request initiated by the user through the management interface or the command line, and places it on the message queue.
  • Step 2: nova-conductor takes the request from the message queue, reads related information such as the cell records from the database, and then puts the request together with the retrieved data back on the message queue.
  • Step 3: nova-scheduler takes the request and data from the message queue and, working with the Placement component, selects the physical machine on which the cloud host will be created. Once the selection is complete, the request goes back to the message queue to wait for nova-compute.
  • Step 4: nova-compute takes the request from the message queue and interacts with Glance, Neutron, and Cinder to obtain the image, network, and cloud storage resources. Once all resources are ready, nova-compute calls the specific virtualization program through the hypervisor interface, such as KVM, QEMU, or Xen, to create the virtual machine.
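To tie the four steps to a single operation: one `openstack server create` call exercises the whole chain. The flavor, image, and network names below are hypothetical, and the command is guarded so it degrades to a message on a machine without the CLI:

```shell
# Hypothetical names: flavor "m1.small", image "cirros", network "net1".
# nova-api accepts the request; nova-conductor, nova-scheduler/Placement,
# and finally nova-compute then carry it through the message queue
# exactly as described in the four steps above.
if command -v openstack >/dev/null 2>&1; then
  result=$(openstack server create --flavor m1.small --image cirros \
           --network net1 demo-vm)
else
  result="openstack CLI not found; run this on the control node"
fi
echo "$result"
```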

2. Project implementation

1. Install and configure Nova on the control node
Install the Nova package
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy
  • "openstack-nova-api": Nova's external interface module, which receives and responds to API requests.
  • "openstack-nova-conductor": Nova's conductor service module, which provides database access for the other modules.
  • "openstack-nova-scheduler": Nova's scheduling service module, used to select a host on which to create a cloud host.
  • "openstack-nova-novncproxy": Nova's VNC (Virtual Network Computing) proxy module, which lets users access cloud hosts through VNC.
Create a Nova database and authorize it
# Step 1: log in to the MariaDB database server.
mysql -uroot -p123456

# Step 2: create the "nova_api", "nova_cell0", and "nova" databases.
CREATE DATABASE nova_api;
CREATE DATABASE nova_cell0;
CREATE DATABASE nova;

# Step 3: grant the "nova" user privileges on each new database
# (two statements per database, six statements in total).
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';

# Exit the database server.
quit;
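After creating the grants, a hedged sanity check (the exact output shape depends on the MariaDB version) is to ask the server to list them:

```sql
-- Should list ALL PRIVILEGES on nova_api.*, nova_cell0.*, and nova.*
SHOW GRANTS FOR 'nova'@'%';
```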
Modify the Nova configuration file

Remove comments and blank lines from the configuration file
Step 1, back up the configuration file.

cp /etc/nova/nova.conf /etc/nova/nova.bak

Step 2, remove all comments and blank lines, and generate a new configuration file.

grep -Ev '^$|#' /etc/nova/nova.bak >/etc/nova/nova.conf
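The same `grep -Ev` filter can be tried on a throwaway file first. This sketch builds a small sample config (the option names are just examples) and shows that only the active lines survive:

```shell
# Build a sample config containing comments and a blank line.
cat > /tmp/sample.conf <<'EOF'
# a comment

[DEFAULT]
enabled_apis = osapi_compute,metadata
#another comment
EOF

# Keep only lines that are neither empty nor contain a "#".
grep -Ev '^$|#' /tmp/sample.conf > /tmp/sample.clean
cat /tmp/sample.clean
```

Only `[DEFAULT]` and the `enabled_apis` line remain. Note the pattern drops any line containing a "#", so avoid inline comments in the original file.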

Editing a new configuration file
Step 1, Open the configuration file for editing.

vi /etc/nova/nova.conf

Step 2, modify the "[api_database]" and "[database]" parts to realize the connection with the database "nova_api" and "nova".

[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api
[database]
connection = mysql+pymysql://nova:123456@controller/nova

Step 3, modify the "[api]" and "[keystone_authtoken]" parts to realize the interaction with Keystone.

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 123456

Step 4, modify the "[placement]" part to realize the interaction with Placement.

[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 123456
region_name = RegionOne

Step 5, modify the "[glance]" part to interact with Glance.

[glance]
api_servers = http://controller:9292

Step 6, modify "[oslo_concurrency]" and configure the lock path.

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Step 7, modify the "[DEFAULT]" section to configure the message queue, firewall, and other global options.

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:123456@controller:5672
my_ip = 192.168.10.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

Step 8, modify the "[vnc]" section to configure the VNC connection mode.

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
Initialize Nova's database

Step 1, initialize the "nova_api" database.

su nova -s /bin/sh -c "nova-manage api_db sync"

Step 2, create the "cell1" cell, which will use the "nova" database.

su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1"

Step 3, map the "cell0" cell to the "nova_cell0" database, so that the table structure of "cell0" stays consistent with that of the "nova" database.

su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"

Step 4, initialize the "nova" database; because of the mapping created above, the same tables are created in "cell0" at the same time.

su nova -s /bin/sh -c "nova-manage db sync"
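As a hedged sanity check after the sync, both the "nova" and "nova_cell0" databases should now contain the same tables, for example:

```sql
-- Both queries should return a row once "nova-manage db sync" succeeds.
SHOW TABLES FROM nova LIKE 'instances';
SHOW TABLES FROM nova_cell0 LIKE 'instances';
```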
2. Verify the cells on the control node

Check the cell registration status

nova-manage cell_v2 list_cells
3. Nova component initialization
Create Nova users and assign roles

Step 1, import environment variables to simulate login.

. admin-login 

Step 2: Create user "nova" on the OpenStack cloud computing platform.

openstack user create --domain default --password 123456 nova 

Step 3, assign the "admin" role to user "nova".

openstack role add --project project --user nova admin
Create Nova service and endpoint

(1) Create a service
Create a service named "nova" and type "compute".

openstack service create --name nova compute 

(2) Create Computing Service Endpoints
There are three types of service endpoints for OpenStack components, corresponding to the addresses of Admin users (admin), internal components (internal), and public users (public).

Step 1, create a service endpoint accessed by public users.

openstack endpoint create --region RegionOne nova public http://controller:8774/v2.1 

Step 2, create service endpoints accessed by internal components.

openstack endpoint create --region RegionOne nova internal http://controller:8774/v2.1 

Step 3, create the service endpoint accessed by the admin user.

openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Start the Nova service

Set the services to start at boot

systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy 

Start the services immediately

systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
Detect the Nova service of the control node

Check that ports 8774 and 8775 are listening

netstat -nutpl|grep 8774
netstat -nutpl|grep 8775

View list of computing services

openstack compute service list
4. Install and configure Nova on the compute node

Only Nova's compute module "nova-compute" needs to be installed on the compute node. Install it as follows.

Install the Nova package
yum -y install openstack-nova-compute
Modify the Nova configuration file

(1) Remove comments and blank lines from the configuration file
Step 1, back up the configuration file.

cp /etc/nova/nova.conf /etc/nova/nova.bak 

Step 2, remove all comments and blank lines, and generate a new configuration file.

grep -Ev '^$|#' /etc/nova/nova.bak >/etc/nova/nova.conf 

(2) Edit a new configuration file
Step 1, open the configuration file for editing.

vi /etc/nova/nova.conf 

Step 2, modify the "[api]" and "[keystone_authtoken]" parts to realize the interaction with Keystone.

[api] 
auth_strategy = keystone 

[keystone_authtoken] 
auth_url = http://controller:5000 
memcached_servers = controller:11211 
auth_type = password 
project_domain_name = Default 
user_domain_name = Default 
project_name = project 
username = nova 
password = 123456 

Step 3, modify the "[placement]" part to realize interaction with Placement.

[placement]
auth_url = http://controller:5000
auth_type = password 
project_domain_name = Default 
user_domain_name = Default 
project_name = project 
username = placement 
password = 123456 
region_name = RegionOne 

Step 4, modify the "[glance]" part to interact with Glance.

[glance]
api_servers = http://controller:9292

Step 5, modify "[oslo_concurrency]" and configure the lock path.

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Step 6, modify the "[DEFAULT]" section, and configure information such as message queues and firewalls.

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:123456@controller:5672 
my_ip = 192.168.10.20 
use_neutron = true 
firewall_driver = nova.virt.firewall.NoopFirewallDriver 

Step 7, modify the "[vnc]" section to configure the VNC connection mode.

[vnc] 
enabled = true 
server_listen = 0.0.0.0 
server_proxyclient_address = $my_ip 
novncproxy_base_url = http://192.168.10.10:6080/vnc_auto.html 

Step 8, configure the "[libvirt]" section, and set the virtualization type to QEMU.

[libvirt] 
virt_type = qemu
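QEMU (pure software emulation) is the safe default inside a nested or virtualized lab. A common hedged check for whether the host could instead use `virt_type = kvm` is to count the hardware virtualization flags in `/proc/cpuinfo` (vmx for Intel VT-x, svm for AMD-V):

```shell
# Count CPU cores advertising hardware virtualization support.
# 0 means stick with virt_type = qemu; >0 means virt_type = kvm will work.
vcount=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null)
vcount=${vcount:-0}
echo "CPU virtualization flags found: $vcount"
```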
Start the compute node Nova service

First, set the services to start at boot.

systemctl enable libvirtd openstack-nova-compute 

Start the service now.

systemctl start libvirtd openstack-nova-compute
Discover compute nodes and verify services

Control node check
(1) Import environment variables and simulate login

. admin-login 

(2) Discover new computing nodes

su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" 

Set up automatic discovery
Step 1, open the configuration file, modify the "[scheduler]" section, and set automatic discovery to run every 60 seconds.

vi /etc/nova/nova.conf 
[scheduler] 
discover_hosts_in_cells_interval = 60 

Step 2, restart the "nova-api" service to make the modified configuration file take effect.

systemctl restart openstack-nova-api
Verify Nova service (control node)

(1) View the list of computing services

openstack compute service list 

(2) View a list of all OpenStack services and endpoints

openstack catalog list 

(3) Check with the Nova status detection tool

nova-status upgrade check


Origin blog.csdn.net/xiaoyu070321/article/details/131384217