Chapter 9: Nova component installation (3)

Prerequisite: the physical machine must support Intel VT-x or AMD-V.
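This prerequisite can be checked from the shell before installing anything. A minimal sketch; the `vmx`/`svm` flag names are the standard Linux /proc/cpuinfo markers for Intel VT-x and AMD-V:

```shell
# Check that the CPU exposes hardware-virtualization extensions.
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means KVM cannot run.
flags=$(egrep -c '(vmx|svm)' /proc/cpuinfo || true)
if [ "${flags:-0}" -gt 0 ]; then
    echo "hardware virtualization supported on ${flags} logical CPUs"
else
    echo "no VT-x/AMD-V flags found; enable virtualization in the BIOS/UEFI" >&2
fi
```

If the count is 0 on hardware that should support virtualization, the feature is usually disabled in the BIOS/UEFI.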

1. Install KVM and the other virtualization components on the compute node

apt -y install qemu-kvm libvirt-bin virtinst bridge-utils libosinfo-bin libguestfs-tools virt-top
Aside:
After the packages above are installed, libvirt creates a local bridge called virbr0. Once the libvirt service is enabled and started, the host acts as a libvirt server and creates a virtual network switch (virbr0) to which all virtual machines on the host connect. By default virbr0 runs in NAT mode (using IP masquerade), so in that case the virtual machines reach the outside world through the host. In our environment, however, the virtual machines use a Linux bridge connected directly to the LAN, so virbr0 is not needed. (Note: do not confuse the two — that bridge and virbr0 are unrelated.) To keep the network layout clear and less confusing, we delete virbr0 here. (This paragraph was learned from others' experience, discovered while troubleshooting.)
virsh net-list                                    # list networks; a "default" network exists
virsh net-destroy default                         # remove the virbr0 bridge, terminate the dnsmasq process, remove the iptables rules
#virsh net-undefine default
virsh net-autostart --network default --disable   # do not start the network automatically after a reboot
systemctl restart libvirtd                        # remember to restart the service
brctl show                                        # confirm the bridge is gone
virsh net-list                                    # confirm the deletion (either command works as a check)

2. Enable vhost-net on the compute node

modprobe vhost_net
lsmod | grep vhost

echo vhost_net >> /etc/modules
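After loading the module and persisting it in /etc/modules, it can be verified with a short check; a sketch — QEMU opens /dev/vhost-net for in-kernel virtio-net packet handling, so both the module and the device node should be present:

```shell
# Confirm the vhost_net module is loaded and its character device exists;
# QEMU opens /dev/vhost-net for in-kernel virtio-net acceleration.
if lsmod | grep -q '^vhost_net'; then
    echo "vhost_net module loaded"
fi
if [ -c /dev/vhost-net ]; then
    echo "/dev/vhost-net device present"
fi
```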

3. Install the Nova component on the compute node

apt -y install nova-compute nova-compute-kvm
mv /etc/nova/nova.conf /etc/nova/nova.conf.org    # back up the distribution config
# config file
vi /etc/nova/nova.conf
# create new
[DEFAULT]
# allow resize to same host
allow_resize_to_same_host = True
# block allocate time
block_device_allocate_retries = 600
block_device_allocate_retries_interval = 6
max_concurrent_live_migrations = 10
use_neutron = True
linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver    # LinuxBridge networking
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vif_plugging_is_fatal = True
vif_plugging_timeout = 300
debug = True                                    # enable debug logging
# define own IP address
my_ip = 192.168.222.27                          # API IP
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
log_dir = /var/log/nova
# RabbitMQ connection info
transport_url = rabbit://openstack:password@192.168.222.29    # "password" is a placeholder; the original credential was redacted in the source
[api]
auth_strategy = keystone

# Glance connection info
[glance]
api_servers = http://192.168.220.29:9292    # the storage-network IP is configured here; the API-network IP also works

# VNC settings
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.222.29:6080/vnc_auto.html

[oslo_concurrency]
lock_path = $state_path/tmp

# Keystone auth info
[keystone_authtoken]
www_authenticate_uri = http://192.168.222.29:5000
auth_url = http://192.168.222.29:5000
memcached_servers = 192.168.222.29:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = servicepassword

[placement]
auth_url = http://192.168.222.29:5000
os_region_name = RegionOne
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = servicepassword

[wsgi]
api_paste_config = /etc/nova/api-paste.ini

[neutron]
auth_url = http://192.168.222.29:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = servicepassword
service_metadata_proxy = True
metadata_proxy_shared_secret = metadata_secret

[cinder]
os_region_name = RegionOne

[libvirt]
virt_type = kvm
# live migration: requires a passwordless login for the nova account between all compute nodes
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED

# fix file permissions
chmod 640 /etc/nova/nova.conf
chgrp nova /etc/nova/nova.conf
# restart the service
systemctl restart nova-compute    # the service is already enabled at boot
Aside:
This is where live migration of virtual machines is enabled. As the end of the configuration file above indicates, a passwordless login for the nova account must be set up across all compute nodes, together with the configuration below.
# set up passwordless login for the nova account
cat /etc/passwd | grep nova
usermod -s /bin/bash nova
cat /etc/passwd | grep nova                         # confirm
passwd nova                                         # enter admin123 as the password
# complete the account steps above on every node first, then continue
su - nova
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa >/dev/null 2>&1
ssh-copy-id -i .ssh/id_rsa.pub nova@192.168.220.27  # copy the key to every node, in both directions
ssh 192.168.220.27                                  # verify passwordless login to all nodes (this IP is on the storage network, used for migration)
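With more than two compute nodes, the key distribution is easier to script. A sketch, assuming the storage-network IPs below stand in for your real node list:

```shell
# Hypothetical node list -- replace with your compute nodes' storage-network IPs
NODES="192.168.220.27 192.168.220.28"
for node in $NODES; do
    # push the nova user's public key, then verify passwordless login works
    ssh-copy-id -i ~/.ssh/id_rsa.pub "nova@${node}"
    ssh -o BatchMode=yes "nova@${node}" hostname
done
```

Run this as the nova user on every node so each node can reach every other node without a password.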

# config file
vi /etc/default/libvirtd
# modify as follows
libvirtd_opts="-l"

# config file
vi /etc/libvirt/libvirtd.conf
# modify as follows
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
listen_addr = "192.168.220.28"                      # the storage-network IP of each compute node
auth_tcp = "none"
host_uuid = "75f51c73-fa22-4401-906e-c42b05f966d4"  # a unique UUID per compute node, generated with uuidgen

systemctl restart libvirtd             # the service is already enabled at boot
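Whether libvirtd is actually accepting TCP connections can be checked right after the restart. A sketch, assuming this guide's example storage-network IP:

```shell
# confirm libvirtd listens on the configured TCP port
ss -tln | grep 16509
# connect over qemu+tcp the same way live migration will
virsh -c qemu+tcp://192.168.220.28/system list --all
```

If the virsh connection fails, recheck listen_addr in libvirtd.conf and the "-l" flag in /etc/default/libvirtd.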

4. Discover the compute nodes from the controller node (verify this step)

# discover compute nodes and sync the database; Nova uses Python's ORM (object-relational mapping), so the cell mapping must be initialized to generate the database records
su -s /bin/bash nova -c "nova-manage cell_v2 discover_hosts"
# check and verify
openstack compute service list
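If the new node does not show up in the service list, the cell mapping can be inspected directly; a sketch using the standard nova-manage cell_v2 subcommands:

```shell
# list the cells and the hosts mapped into each cell
su -s /bin/bash nova -c "nova-manage cell_v2 list_cells"
su -s /bin/bash nova -c "nova-manage cell_v2 list_hosts"
# the nova-compute service on each node should report state "up"
openstack compute service list --service nova-compute
```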

 

Origin www.cnblogs.com/shihongkuan/p/11399245.html