This is just a personal cloud note.
For the deployment process, refer to the official OpenStack documentation: https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-queens
Also see the article series by Fuai at https://blog.csdn.net/LL_JCB/article/details/80146328 ; no local package source is used.
Both servers run CentOS 7.2 with 16 GB of memory.
Controller node IP: 192.168.239.128/24, hostname controller.node
Compute node IP: 192.168.239.129/24, hostname computer.node
System settings:
On both nodes:
1. Modify the hosts file
vi /etc/hosts
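Using the addresses above, /etc/hosts on both nodes would look roughly like this (a sketch; only the two node entries come from these notes):

```text
127.0.0.1         localhost
192.168.239.128   controller.node
192.168.239.129   computer.node
```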
2. Disable SELinux
vi /etc/selinux/config
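The relevant change in /etc/selinux/config is a single line:

```ini
SELINUX=disabled
```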
3. Turn off the firewall
firewall-cmd --state // check the state (running / not running)
systemctl stop firewalld.service // stop firewalld
systemctl disable firewalld.service // disable start at boot
4. Set up clock synchronization [NTP]
date shows the system time; hwclock shows the hardware clock
yum install ntpdate -y
timedatectl list-timezones|grep Asia
timedatectl set-timezone Asia/Shanghai
systemctl start ntpdate
yum upgrade
5. Optionally install the vim editor
yum install vim -y
Install the basic environment
1. Install the OpenStack client, etc. [required on both nodes]
yum install centos-release-openstack-queens -y
yum install python-openstackclient -y
yum install openstack-selinux -y
yum upgrade -y
2. Controller node: install MariaDB [other databases can also be used]
yum install mariadb mariadb-server python2-PyMySQL -y
Create and edit the configuration file
vim /etc/my.cnf.d/openstack.cnf
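Typical contents for /etc/my.cnf.d/openstack.cnf, following the official install guide (bind-address is this deployment's controller management IP):

```ini
[mysqld]
bind-address = 192.168.239.128
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```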
Enable the service at boot and start it
systemctl enable mariadb.service
systemctl start mariadb.service
Secure the database with mysql_secure_installation: answer y to each prompt; you will be asked to set the root password partway through
3. Controller node: install the message queue
yum install rabbitmq-server -y
Add the openstack user and grant it permission to use the message queue
rabbitmqctl add_user openstack openstackmqpwd
Creating user "openstack" ...
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
4. Install memcached service
yum install memcached python-memcached -y
Modify the /etc/sysconfig/memcached file
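The usual change in /etc/sysconfig/memcached is to make the service listen on the controller's name in addition to localhost (a sketch per the official guide):

```ini
OPTIONS="-l 127.0.0.1,::1,controller.node"
```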
5. Controller node: install the etcd service
yum install etcd -y
Modify the configuration file /etc/etcd/etcd.conf
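A minimal /etc/etcd/etcd.conf for a single-node cluster, substituting the controller IP from above (a sketch following the official guide; the notes do not show the exact file):

```ini
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.239.128:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.239.128:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.239.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.239.128:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.239.128:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
```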
Enable and start the service
systemctl enable etcd
systemctl start etcd
Install the keystone component on the control node
1. Database settings
mysql -u root -pmysqlpwd
CREATE DATABASE keystone;
Grant access permissions on the keystone database:
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'mysqlpwd';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'mysqlpwd';
2. Install and configure keystone
yum install openstack-keystone httpd mod_wsgi -y
Edit [database] section
Edit [token] section
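Concretely, the two edits in /etc/keystone/keystone.conf look like this (keystone's database password is mysqlpwd here, matching the GRANT statements above):

```ini
[database]
connection = mysql+pymysql://keystone:[email protected]/keystone

[token]
provider = fernet
```

Note: before the db_sync and bootstrap steps, the official guide also initializes the Fernet key repositories with keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone and keystone-manage credential_setup --keystone-user keystone --keystone-group keystone.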
Import the keystone database table structure
su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize keystone
keystone-manage bootstrap --bootstrap-password mysqlpwd --bootstrap-admin-url http://controller.node:35357/v3/ --bootstrap-internal-url http://controller.node:5000/v3/ --bootstrap-public-url http://controller.node:5000/v3/ --bootstrap-region-id RegionOne
3. Configure apache service
Modify the configuration file /etc/httpd/conf/httpd.conf
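The only change needed in /etc/httpd/conf/httpd.conf is the ServerName directive, per the official guide:

```apache
ServerName controller.node
```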
Establish connection, enable service
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service
4. Configure related domains, projects, roles, and users
Import administrator environment variables
export OS_USERNAME=admin
export OS_PASSWORD=mysqlpwd
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller.node:35357/v3
export OS_IDENTITY_API_VERSION=3
openstack project create --domain default --description "Demo Project" demo // create the demo project
openstack domain create --description "An Example Domain" example // create a domain
openstack user create --domain default --password-prompt demo
Create the demo user; enter its password [here: demopwd]
openstack role create user
openstack role add --project demo --user demo user
5. Verify operation
unset OS_AUTH_URL OS_PASSWORD // clear the environment variables
openstack --os-auth-url http://controller.node:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue //请求身份验证令牌
The next command uses the demo user's password and API port 5000, which only allows regular (non-administrator) access to the Identity service API.
openstack --os-auth-url http://controller.node:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
Create client environment scripts for projects and users:
. /home/admin-opensc // run the script to load the variables
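The /home/admin-opensc script sourced here is presumably just the admin exports from step 4 collected into a file (a sketch; the notes never show its contents):

```shell
# /home/admin-opensc -- admin credentials (sketch based on the
# variables exported earlier; OS_USERNAME corrected from the
# garbled OS_PA line, values are this deployment's)
export OS_USERNAME=admin
export OS_PASSWORD=mysqlpwd
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller.node:35357/v3
export OS_IDENTITY_API_VERSION=3
```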
Control node Glance component installation
1. Database settings
mysql -u root -pmysqlpwd
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glancepwd';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glancepwd';
. /home/admin-opensc
openstack user create --domain default --password-prompt glance
Create the glance user and set its password [here: glancepwd]
2. Set roles and service items
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
3. Create the image service API endpoints
openstack endpoint create --region RegionOne image public http://controller.node:9292
openstack endpoint create --region RegionOne image internal http://controller.node:9292
openstack endpoint create --region RegionOne image admin http://controller.node:9292
4. Install and configure components
yum install openstack-glance -y
Modify the configuration file /etc/glance/glance-api.conf
Line 1924, [database] section: connection=mysql+pymysql://glance:[email protected]/glance
Line 3472, [keystone_authtoken] section (around line 3501):
auth_uri = http://controller.node:5000
auth_url = http://controller.node:5000
Line 3551: memcached_servers = controller.node:11211
Add after line 3658:
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glancepwd
Line 4509, [paste_deploy] section: flavor = keystone
Modify the configuration file /etc/glance/glance-registry.conf
Line 1170, [database] section: connection = mysql+pymysql://glance:[email protected]/glance
Line 1285, [keystone_authtoken] section:
auth_uri = http://controller.node:5000
auth_url = http://controller.node:5000
memcached_servers = controller.node:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glancepwd
Line 2298: flavor = keystone
Populate the image database, then enable and start the services
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
5. Verify operation. The OpenStack project also provides an official test image, which is smaller.
Fetch an image: wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1708.qcow2
Upload the image: glance image-create --name "centos7.4" --file CentOS-7-x86_64-GenericCloud-1708.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress
List images: openstack image list
NOVA component installation
Control node:
1. Create the databases, service credentials, and API endpoints
mysql -u root -pmysqlpwd
CREATE DATABASE nova_cell0;
CREATE DATABASE nova;
CREATE DATABASE nova_api;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'novapwd';
2. Create a user and set a password [novapwd here]
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin // add the nova user to the admin role
3. Create service entity and api
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller.node:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller.node:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller.node:8774/v2.1
4. Create the placement user and set a password
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
Create the Placement service entity and API endpoints
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller.node:8778
openstack endpoint create --region RegionOne placement internal http://controller.node:8778
openstack endpoint create --region RegionOne placement admin http://controller.node:8778
5. Install the software package
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
6. Modify the configuration file /etc/nova/nova.conf
Line 1291, [DEFAULT]: my_ip=192.168.239.128
Line 1755, [DEFAULT]: use_neutron=true
Line 2417, [DEFAULT]: firewall_driver=nova.virt.firewall.NoopFirewallDriver
Line 2756, [DEFAULT]: enabled_apis=osapi_compute,metadata
Line 3155, [DEFAULT]: transport_url=rabbit://openstack:[email protected]
Line 3220, [api]: auth_strategy=keystone
Line 3512, [api_database]: connection=mysql+pymysql://nova:[email protected]/nova_api
Line 4635, [database]: connection=mysql+pymysql://nova:[email protected]/nova
Line 5340, [glance]: api_servers=http://controller.node:9292
Line 6117, [keystone_authtoken]:
6118 auth_url = http://controller.node:5000/v3
6119 memcached_servers = controller.node:11211
6120 auth_type = password
6121 project_domain_name = default
6122 user_domain_name = default
6123 project_name = service
6124 username = nova
6125 password = novapwd
Line 7918, [oslo_concurrency]: lock_path=/var/lib/nova/tmp
Line 8800, [placement]:
8800 os_region_name = RegionOne
8801 project_domain_name = Default
8802 project_name = service
8803 auth_type = password
8804 user_domain_name = Default
8805 auth_url = http://controller.node:5000/v3
8806 username = placement
8807 password = novapwd
Line 10290, [vnc]: enabled=true
Line 10314, [vnc]: server_listen=192.168.239.128
Line 10327, [vnc]: server_proxyclient_address=192.168.239.128
Modify the configuration file /etc/httpd/conf.d/00-nova-placement-api.conf
Add the following:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
7. Restart the service and populate the database
systemctl restart httpd
su -s /bin/sh -c "nova-manage api_db sync" nova
8. Register the cell0 database, create cell1, and populate the nova database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
Verify that nova's cell0 and cell1 are registered correctly: nova-manage cell_v2 list_cells
9. Enable nova service
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Compute node:
1. Install the software package
yum install openstack-nova-compute
2. Modify the configuration file /etc/nova/nova.conf
Line 1294, [DEFAULT]: my_ip=192.168.239.129
Line 1755, [DEFAULT]: use_neutron=true
Line 2417, [DEFAULT]: firewall_driver=nova.virt.firewall.NoopFirewallDriver
Line 2756, [DEFAULT]: enabled_apis=osapi_compute,metadata
Line 3156, [DEFAULT]: transport_url=rabbit://openstack:[email protected]
Line 3221, [api]: auth_strategy=keystone
Line 5340, [glance]: api_servers=http://controller.node:9292
Line 6119, [keystone_authtoken]:
6120 auth_url = http://controller.node:5000/v3
6121 memcached_servers = controller.node:11211
6122 auth_type = password
6123 project_domain_name = default
6124 user_domain_name = default
6125 project_name = service
6126 username = nova
6127 password = novapwd
Line 7921, [oslo_concurrency]: lock_path=/var/lib/nova/tmp
Line 8800, [placement]:
8800 os_region_name = RegionOne
8801 project_domain_name = Default
8802 project_name = service
8803 auth_type = password
8804 user_domain_name = Default
8805 auth_url = http://controller.node:5000/v3
8806 username = placement
8807 password = novapwd
Line 10290, [vnc]: enabled=true
Line 10317, [vnc]: server_listen=0.0.0.0
Line 10330, [vnc]: server_proxyclient_address=192.168.239.129
Line 10348, [vnc]: novncproxy_base_url=http://controller.node:6080/vnc_auto.html
3. Determine whether the computing node supports hardware acceleration of virtual machines
egrep -c '(vmx|svm)' /proc/cpuinfo
If this returns 1 or greater, no further configuration is needed.
If it returns 0, the node does not support hardware acceleration; KVM cannot be used, so libvirt must be configured to use QEMU instead.
Edit the configuration file: vim /etc/nova/nova.conf
[libvirt] section: virt_type=qemu
4. Enable computing services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Return to control node
1. Check whether there is a compute node host in the database
openstack compute service list --service nova-compute
2. Discover computing host
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
When adding new compute nodes, run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, set a discovery interval in /etc/nova/nova.conf, in the [scheduler] section: discover_hosts_in_cells_interval=<seconds>
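For automatic discovery, the nova.conf fragment would look like this (300 seconds is the interval the official guide uses as an example):

```ini
[scheduler]
discover_hosts_in_cells_interval = 300
```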
3. Operation verification
openstack compute service list // list the service components to verify that each process started and registered successfully
openstack catalog list // list the API endpoints
Verify that the cells and the Placement API are working: nova-status upgrade check
Neutron component installation
Control node
1. Configure the database
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutronpwd';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutronpwd';
2. Create openstack users and add them to roles
openstack user create --domain default --password-prompt neutron
Enter the password [here: neutron]
openstack role add --project service --user neutron admin
3. Create service entities and APIs
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller.node:9696
openstack endpoint create --region RegionOne network internal http://controller.node:9696
openstack endpoint create --region RegionOne network admin http://controller.node:9696
4. Install components [option 2 is used here; for the difference between options 1 and 2, see the official OpenStack documentation]
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
5. Modify the configuration file /etc/neutron/neutron.conf
Line 27, [DEFAULT]: auth_strategy = keystone
Line 30, [DEFAULT]: core_plugin = ml2
Line 33, [DEFAULT]: service_plugins = router
Line 85: allow_overlapping_ips = True
Line 98, [DEFAULT]: notify_nova_on_port_status_changes = true
Line 102, [DEFAULT]: notify_nova_on_port_data_changes = true
Line 570, [DEFAULT]: transport_url = rabbit://openstack:[email protected]
Line 729, [database]: connection = mysql+pymysql://neutron:[email protected]/neutron
Line 817, [keystone_authtoken]:
818 auth_uri = http://controller.node:5000
819 auth_url = http://controller.node:35357
820 memcached_servers = controller.node:11211
821 auth_type = password
822 project_domain_name = default
823 user_domain_name = default
824 project_name = service
825 username = neutron
826 password = neutron
1065 [nova]
1066 auth_url = http://controller.node:35357
1067 auth_type = password
1068 project_domain_name = default
1069 user_domain_name = default
1070 region_name = RegionOne
1071 project_name = service
1072 username = nova
1073 password = novapwd
Line 1191, [oslo_concurrency]: lock_path = /var/lib/neutron/tmp
6. Configure the ML2 plug-in, which uses the Linux bridge mechanism to build the layer-2 (bridging and switching) virtual network infrastructure for instances: /etc/neutron/plugins/ml2/ml2_conf.ini
Line 128, [ml2]:
136 type_drivers = flat,vlan,vxlan
141 tenant_network_types = vxlan
145 mechanism_drivers = linuxbridge,l2population
150 extension_drivers = port_security
Line 177, [ml2_type_flat]:
186 flat_networks = provider
Line 231, [ml2_type_vxlan]:
239 vni_ranges = 1:1000
Line 247, [securitygroup]:
263 enable_ipset = true
7. Configure the Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Line 146, [linux_bridge]:
157 physical_interface_mappings = provider:ens33
Line 181, [securitygroup]:
188 firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
193 enable_security_group = true
Line 200, [vxlan]:
208 enable_vxlan = true
234 local_ip = 192.168.239.128
258 l2_population = true
Configure the layer-3 (L3) agent: /etc/neutron/l3_agent.ini
16 interface_driver = linuxbridge
Configure DHCP agent /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent /etc/neutron/metadata_agent.ini
22 nova_metadata_host = controller.node
34 metadata_proxy_shared_secret = METADATA_SECRET
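METADATA_SECRET here is a placeholder, not a literal value: choose a random string and use the identical string in the [neutron] section of nova.conf. One common way to generate one (an assumption; the notes do not say how the secret was chosen):

```shell
# generate a 20-character hex secret for the metadata proxy
SECRET=$(openssl rand -hex 10)
echo "metadata_proxy_shared_secret = $SECRET"
```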
Configure computing service to use network service /etc/nova/nova.conf
7588 [neutron]
7589 url = http://controller.node:9696
7590 auth_url = http://controller.node:35357
7591 auth_type = password
7592 project_domain_name = default
7593 user_domain_name = default
7594 region_name = RegionOne
7595 project_name = service
7596 username = neutron
7597 password = neutron
7598 service_metadata_proxy = true
7599 metadata_proxy_shared_secret = METADATA_SECRET
Create the symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the compute API service, then enable and start the network services and the L3 service
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
Compute node:
1. Install components
yum install openstack-neutron-linuxbridge ebtables ipset
2. Modify the configuration file /etc/neutron/neutron.conf
[DEFAULT]
27 auth_strategy = keystone
570 transport_url = rabbit://openstack:[email protected]
[keystone_authtoken]
818 auth_uri = http://controller.node:5000
819 auth_url = http://controller.node:35357
820 memcached_servers = controller.node:11211
821 auth_type = password
822 project_domain_name = default
823 user_domain_name = default
824 project_name = service
825 username = neutron
826 password = neutron
[oslo_concurrency]
1183 lock_path = /var/lib/neutron/tmp
3. Configure the linux bridge agent /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
157 physical_interface_mappings = provider:ens33
[vxlan]
208 enable_vxlan = true
234 local_ip = 192.168.239.129
258 l2_population = true
[securitygroup]
188 firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
193 enable_security_group = true
4. Verify the sysctl values [both should return 1]
sysctl net.bridge.bridge-nf-call-ip6tables
sysctl net.bridge.bridge-nf-call-iptables
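If these sysctl keys do not exist, the br_netfilter kernel module is probably not loaded; load it with modprobe br_netfilter and make the settings persistent in /etc/sysctl.conf (an assumption about the kernel setup, not from the original notes):

```ini
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```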
5. Configure nova to use the network service /etc/nova/nova.conf
7591 [neutron]
7593 url = http://controller.node:9696
7594 auth_url = http://controller.node:35357
7595 auth_type = password
7596 project_domain_name = default
7597 user_domain_name = default
7598 region_name = RegionOne
7599 project_name = service
7600 username = neutron
7601 password = neutron
6. Restart the computing service and enable the bridge agent
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verify operation at the control node
openstack network agent list
There should be three control-node agents and one compute-node agent.
At this point all required components are installed; you can now create and manage virtual machines and internal/external networks from the command line. Installing the dashboard (Horizon) and block storage (Cinder) components is recommended as well.
Next I will write up the installation of the dashboard component, Horizon...