OpenStack (Pike) manual deployment record

This deployment follows the official documentation and uses the Pike release.
Environment: VMware Workstation
	two CentOS 7 virtual machines (controller and computer1)
	two NICs on each VM

Although there are many automated deployment tools for OpenStack, such as DevStack, Fuel, kolla-ansible and so on, if you want to understand OpenStack's architecture and components you have to start with a manual deployment. Even if you are unwilling to learn manual deployment, after finishing one of the automated deployments above you still only know OpenStack at the interface level, which is far from being able to use it flexibly or track down problems.
Network topology diagram: (screenshot omitted)
Two machines are configured here, and each machine has two network cards. The CentOS setup is shown in the installer screenshots (omitted): choose the minimal installation, and set the time zone to Shanghai, otherwise it has to be changed manually later.

After the installation finishes, taking a snapshot is strongly recommended!

Each NIC serves two purposes:
NIC 1: ① internet access for downloading packages; ② the cloud hosts (i.e. the virtual machines running on OpenStack) reach the external network through it.
NIC 2: ① internal traffic between the two nodes; ② carrying the VXLAN overlay network.
NIC 1 uses bridged mode and NIC 2 uses host-only mode; both virtual machines need both NICs.
In the Windows adapter settings, the physical NIC's connection is shared with VMnet1.
For the VM NIC setup, refer to the three VMware virtual NIC modes. You may run into problems here; the answers can all be found online, so be patient and work through them slowly.
The following NIC configuration is for reference only; prepare your own according to your environment.
Also, using the campus network is strongly discouraged: some mirrors will be unreachable.

#Go to the NIC configuration directory
[root@controller ~]# cd /etc/sysconfig/network-scripts
#List the files in the directory; at this point you can see the two NICs, ifcfg-ens33 and ifcfg-ens34
[root@controller network-scripts]# ls
ifcfg-ens33  ifdown-ippp    ifdown-sit       ifup-bnep  ifup-plusb   ifup-TeamPort
ifcfg-ens34  ifdown-ipv6    ifdown-Team      ifup-eth   ifup-post    ifup-tunnel
ifcfg-lo     ifdown-isdn    ifdown-TeamPort  ifup-ippp  ifup-ppp     ifup-wireless
ifdown       ifdown-post    ifdown-tunnel    ifup-ipv6  ifup-routes  init.ipv6-global
ifdown-bnep  ifdown-ppp     ifup             ifup-isdn  ifup-sit     network-functions
ifdown-eth   ifdown-routes  ifup-aliases     ifup-plip  ifup-Team    network-functions-ipv6
#Edit the first NIC
[root@controller network-scripts]# vi ifcfg-ens33

(screenshot of the edited ifcfg-ens33 omitted)

#After editing, type :wq to save and quit (beginners, mind the colon): w means write, q means quit
#Edit the second NIC the same way
[root@controller network-scripts]# vi ifcfg-ens34

(screenshot of the edited ifcfg-ens34 omitted; a sketch is given below)
The exact IPs depend on your own network; because I was on the campus network, my NIC configuration looks a bit odd...
Both hosts must be configured, and the process is the same.
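For reference only, the host-only NIC (ens34) would end up looking roughly like the sketch below; the values are my assumptions based on the management IPs used throughout this guide (10.0.0.11 on controller, 10.0.0.31 on computer1), so adjust them to your environment.

# /etc/sysconfig/network-scripts/ifcfg-ens34 (sketch)
TYPE=Ethernet
BOOTPROTO=static         # static address instead of DHCP on the host-only network
NAME=ens34
DEVICE=ens34
ONBOOT=yes               # bring the interface up at boot
IPADDR=10.0.0.11         # use 10.0.0.31 on computer1
NETMASK=255.255.255.0    # no gateway: this NIC only carries internal traffic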
After the configuration is complete, download Xshell (it is very handy and easy to find online),
then log in to both machines over SSH.
The next step is the environment configuration.
Open the official OpenStack documentation.

First, some services need to be disabled, following the documents below (NetworkManager can be left running).
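As a sketch of what "disabling some services" usually means on CentOS 7 (the exact list is in the referenced documents; these commands are my assumption, not copied from them):

# run on both nodes
systemctl stop firewalld && systemctl disable firewalld                  # stop the firewall and keep it off after reboot
setenforce 0                                                             # switch SELinux to permissive immediately
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    # keep it permissive across reboots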

Reference documents

When deploying on a physical machine I hit a problem: downloads were far too slow.
The cause is that the default repository is hosted abroad, so it needs to be changed.
Refer to the following document to fix it.
[bjtu@controller etc]$ vim /etc/yum.repos.d/CentOS-OpenStack-pike.repo 
#Change the baseurl as follows
baseurl=http://mirrors.163.com/centos/7.6.1810/cloud/x86_64/openstack-pike/

##This step is optional##
If you do turn off NetworkManager, a few services need to be configured manually afterwards. See the reference document below.

Slow Internet Speed
Reference Document

First, start with NTP time synchronization.
The official site is in English; if you are not used to that, a browser with automatic translation (e.g. 360 Speed Browser) can help.

#First change the hostnames; run the matching command on each host
hostnamectl set-hostname controller
hostnamectl set-hostname computer1
#Reboot afterwards and you will see the new hostname take effect


#Set up the time synchronization service, starting with the controller node
[root@controller ~]# yum install chrony
[root@controller ~]# vim /etc/chrony.conf 
#After editing, the server lines look roughly as sketched below
###The original entry errored out later, so I changed it to: server ntp1.aliyun.com iburst
#(the screenshot was not updated for this change, and is omitted here anyway)
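Since the screenshot is not reproduced, here is a sketch of the changed part of /etc/chrony.conf on the controller; the allow line is my assumption of the usual setup that lets the other node use the controller as its time source over the management network:

# /etc/chrony.conf on controller (sketch)
server ntp1.aliyun.com iburst     # replaces the default CentOS pool servers
allow 10.0.0.0/24                 # assumption: permit computer1 to sync from this node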

#Enable and start the service
[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# systemctl start chronyd.service
#Time synchronization on the computer1 node
[root@computer1 ~]# yum install chrony
[root@computer1 ~]# vim /etc/chrony.conf 
#After editing it looks roughly as sketched below (screenshot omitted)
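A sketch of the corresponding change on computer1, pointing it at the controller; this matches the chronyc output further down, where computer1 syncs from 10.0.0.11:

# /etc/chrony.conf on computer1 (sketch)
server controller iburst          # use the controller node as the only NTP source
# comment out or remove the default pool/server lines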

#After editing, run:
[root@computer1 ~]# systemctl enable chronyd.service
[root@computer1 ~]# systemctl start chronyd.service
#Verify synchronization; output like the following means it worked
[root@controller ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 120.25.115.20                 2   6    17     3  -2085us[-4245us] +/-   40ms

Note the ^* marker: ^* means synchronized, ^? means not connected.

[root@computer1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 10.0.0.11                     3   6    17     9    -25us[  -94us] +/- 7253ms

#Before installing, set up name resolution on both nodes
[root@controller ~]# vim /etc/hosts
#Add the entries below (same operation on computer1). Afterwards you can test with ping controller and ping computer1
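The screenshot with the entries is not reproduced; with the management IPs used throughout this guide they would be:

10.0.0.11   controller
10.0.0.31   computer1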


#The following installation is for CentOS; run it on both nodes
[root@controller ~]# yum install centos-release-openstack-pike -y
[root@controller ~]# yum upgrade -y
##!!!! A new kernel gets installed, so reboot to activate it

#During this step I hit an error: 3:mariadb-libs-10.1.20-2.el7.x86_64: [Errno 256] No more mirrors to try. #The solution is in the link below

#Install the OpenStack client:
[root@controller ~]# yum install python-openstackclient -y
#RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage the security policies for OpenStack services:
[root@controller ~]# yum install openstack-selinux -y

Solution
Next is the SQL database, which the services use to store their data; it is usually installed on the controller node.

[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL -y
#Two errors came up here: one was the missing-mirror problem, solved the same way as above; the other one is shown below.

Transaction check error: file /etc/my.cnf from install of mysql-libs

#Create and edit /etc/my.cnf.d/openstack.cnf
[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf
##File contents:
[mysqld]
bind-address = 10.0.0.11

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Save and quit.
Start the database service and configure it to start at boot:
[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service
You can check the service status at this point.

Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
[root@controller ~]# mysql_secure_installation
#Press Enter at the first prompt, then set a new root password
#The password is set to 123456 here
#Then answer the remaining prompts as follows
Remove anonymous users? [Y/n] n
 ... skipping.

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] n
 ... skipping.

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

At this point, the MariaDB installation is complete.
Next, install the message queue (RabbitMQ).

The message queue runs on the controller node.

Install and configure the components
Install the package:
# yum install rabbitmq-server -y

Start the message queue service and configure it to start at boot:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

Add the openstack user:
# rabbitmqctl add_user openstack 123456
Replace RABBIT_PASS (123456 here) with a suitable password.

Permit configuration, write, and read access for the openstack user:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
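Optionally, you can double-check the user and its permissions (this check is not in the original steps):
# rabbitmqctl list_users            # should list the openstack user
# rabbitmqctl list_permissions      # should show ".*" ".*" ".*" for openstack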

Install the token cache (memcached), which can be understood as an optimization for token performance.

Install and configure the components
Install the package:
# yum install memcached python-memcached -y

Edit the /etc/sysconfig/memcached file and complete the following:
# vim  /etc/sysconfig/memcached
Configure the service to use the management IP address of the controller node, so other nodes can reach it over the management network:
Change this line as follows:
OPTIONS="-l 127.0.0.1,::1,controller"

Finalize the installation
Start the Memcached service and configure it to start at boot:
# systemctl enable memcached.service
# systemctl start memcached.service
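A quick way to confirm memcached is running and listening on port 11211 (an optional check, not in the original post):
# systemctl status memcached.service      # should be active (running)
# ss -tnlp | grep 11211                   # should show memcached listening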

At this point, the base environment setup is complete.
##PS: When I came back to the lab the next morning, the virtual machines could no longer ping the Internet... very discouraging. I kept tweaking things and could not find the cause; I can only blame the mysteries of the campus network, so I decided to switch the bridged NIC to NAT mode with DHCP. Apart from the IP subnet, there should be no other impact. #
###It turned out the disconnection happened because the connection had to be shared to VMnet1 again. When re-checking the NICs, be patient and look for the actual error instead of blindly repeating steps.

Create the Keystone service; this part is very important.
This section describes how to install and configure the OpenStack Identity service, code-named keystone, on the controller node. For scalability, this configuration deploys Fernet tokens and the Apache HTTP server to handle requests.

【Prerequisites】
Before you install and configure the Identity service, you must create its database.
1. Use the database client to connect to the database server as root:
$ mysql -u root -p
2. Create the keystone database (the following are run in the database shell):
MariaDB [(none)]> CREATE DATABASE keystone;
3. Grant proper access to the keystone database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY '123456';
4. Leave the shell with quit or exit.

【Install and configure components】
1. Install the packages:
# yum install openstack-keystone httpd mod_wsgi -y
2. Edit the /etc/keystone/keystone.conf file and complete the following:
Because the stock conf file is very long, first move it aside and start from an empty file:
#mv /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
#vi /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
provider = fernet

#Replace KEYSTONE_DBPASS with the password you chose for the database (123456 here).

3. Populate the Identity service database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone

4. Initialize the Fernet key repositories:
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

5. Bootstrap the Identity service (the trailing \ means the command continues on the next line):
# keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Replace ADMIN_PASS (123456 here) with a suitable password for the admin user.

【Configure the Apache HTTP server】
1. Edit the /etc/httpd/conf/httpd.conf file and set the ServerName option to reference the controller node:
#vi /etc/httpd/conf/httpd.conf
Find and change the following line:
ServerName controller
2. Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

【Finalize the installation】
1. Start the Apache HTTP service and configure it to start at boot:
# systemctl enable httpd.service
# systemctl start httpd.service
2. Configure the administrative account:
$ export OS_USERNAME=admin
$ export OS_PASSWORD=123456   # note: use the password you set
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3

【Create domains, projects, users, and roles】
The Identity service provides authentication for every OpenStack service, using a combination of domains, projects, users, and roles.
1. The service project contains a unique user for each service that you add to the environment. Create the service project:
$ openstack project create --domain default --description "Service Project" service

2. Regular (non-admin) tasks should use an unprivileged project and user. For example, this guide creates the demo project and user.
Create the demo project:
$ openstack project create --domain default --description "Demo Project" demo

Do not repeat this step when creating additional users for this project.
Create the demo user:
$ openstack user create --domain default --password-prompt demo

Create the user role:
$ openstack role create user

Add the user role to the demo project and user:
$ openstack role add --project demo --user demo user

【Verify operation】
Verify operation of the Identity service before installing other services.
Run these commands on the controller node.
1. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
$  unset OS_AUTH_URL OS_PASSWORD
2. As the admin user, request an authentication token:
$ openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue

3. As the demo user, request an authentication token:
$ openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue

【Create the client environment scripts (in root's ~ directory)】
1. Create admin-openrc:
vi admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default 
export OS_USER_DOMAIN_NAME=Default 
export OS_PROJECT_NAME=admin 
export OS_USERNAME=admin 
export OS_PASSWORD=123456 
export OS_AUTH_URL=http://controller:35357/v3 
export OS_IDENTITY_API_VERSION=3 
export OS_IMAGE_API_VERSION=2 

2. Create demo-openrc:
vi demo-openrc

export OS_PROJECT_DOMAIN_NAME=Default 
export OS_USER_DOMAIN_NAME=Default 
export OS_PROJECT_NAME=demo 
export OS_USERNAME=demo 
export OS_PASSWORD=123456 
export OS_AUTH_URL=http://controller:5000/v3 
export OS_IDENTITY_API_VERSION=3 
export OS_IMAGE_API_VERSION=2 

3. Verify as admin
Load the credentials:
# . admin-openrc

Verify:
openstack token issue

At this point the Keystone service installation is complete!

##PS: an error occurred while creating the Keystone service

How to check the logs:
# cd /var/log/keystone
# cat keystone.log
I tried changing all the passwords back to 123456, but it still failed and I could not find the reason. After reading a few other people's deployment logs I spotted the problem: one more GRANT had to be given for the controller host, and after adding it the step succeeded.
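The extra authorization was the GRANT for the 'controller' host (already included in step 3 of the prerequisites above); in the database shell it looks like this:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY '123456';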
It took nearly four or five days to resolve this error, but along the way I also found many problems that I had originally overlooked. To summarize:
1. When configuring the NICs, be patient and check things instead of blindly restarting; figure out why a restart causes problems (the case where restarting one of the NICs broke connectivity is a good example).
2. The mirror-not-found problem came from the campus network. Keep this lesson in mind. Also, packages from mixed repositories can conflict, as happened with mariadb.
3. Pay attention to the basic configuration, such as turning off the firewall and other miscellaneous services, and remember to reboot so a newly installed kernel takes effect.
4. Read more, think more, and experiment less blindly; learn to take snapshots to save time.
5. When you hit an error and have no clue, don't just sit in the lab burning time on your phone; it won't get solved that way. It is better to take a break and come back to it. Don't count on others; only you know your own setup. Learn to analyze the logs and solve problems yourself.

GLANCE service installation (controller node only)

【Database】

1. Log in:
mysql -uroot -p123456
2. Create the glance database:
CREATE DATABASE glance;
3. Grant access privileges:
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
4. Exit with exit.

【Create the glance user and service】
1. Load the admin credentials:
. admin-openrc
2. Create the glance user:
openstack user create --domain default --password-prompt glance

3. Add the admin role to the glance user:
openstack role add --project service --user glance admin
4. Create the glance service entity:
openstack service create --name glance --description "OpenStack Image" image


5. Create the Image service API endpoints:
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292


【Install and configure】
1. Install:
yum install openstack-glance -y
2. Configure glance-api:
mv /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
vi /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

3. Configure glance-registry:
mv /etc/glance/glance-registry.conf  /etc/glance/glance-registry.conf.bak
vi /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone

4. Populate the Image service database:
su -s /bin/sh -c "glance-manage db_sync" glance

At this point you can go into the database and check whether the Glance tables were actually created; if the sync failed, run it again a few times.
A rough sketch of the check is below:
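This is only a sketch (not the original output); the password is the one granted to the glance user above:
mysql -uglance -p123456 -e "USE glance; SHOW TABLES;"
# a non-empty list of tables (images, image_tags, ...) means db_sync worked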



5. Enable and start the services:
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

【Verify】
1. Load the admin credentials:
. admin-openrc
2. Download the test image:
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
3. Upload it to the Image service:
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

4. List images to confirm the upload:
openstack image list

Next, install Nova.
Nova is the core of OpenStack and the part responsible for managing virtual machines.
【Database】(controller node)

1. Log in:
mysql -uroot -p123456
2. Create the databases:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
3. Grant privileges:
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
4. Exit with exit.
【Create the nova user and service】
1. Load the admin credentials:
. admin-openrc
2. Create the nova user:
openstack user create --domain default --password-prompt nova

3. Add the admin role to the nova user:
openstack role add --project service --user nova admin
4. Create the nova service entity:
openstack service create --name nova --description "OpenStack Compute" compute

5. Create the Compute API endpoints:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

【Create the placement user and service】
1. Load the admin credentials:
. admin-openrc
2. Create the placement user:
openstack user create --domain default --password-prompt placement

3. Add the admin role to the placement user:
openstack role add --project service --user placement admin
4. Create the Placement service entity and API endpoints:
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

【Install and configure】
1. Install:
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api   
2. Configure /etc/nova/nova.conf:
mv  /etc/nova/nova.conf  /etc/nova/nova.conf.bak
vi /etc/nova/nova.conf
#
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 10.0.0.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
connection = mysql+pymysql://nova:123456@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456
#
3. Configure the Placement API in Apache:
vi /etc/httpd/conf.d/00-nova-placement-api.conf
#Append at the end:
<Directory /usr/bin>
  <IfVersion >= 2.4>
     Require all granted
  </IfVersion>
  <IfVersion < 2.4>
     Order allow,deny
     Allow from all
  </IfVersion>
</Directory>

4. Restart httpd:
systemctl restart httpd
5. Populate the databases:
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
6. Verify that cell0 and cell1 are registered correctly:
nova-manage cell_v2 list_cells

7. Enable and start the services:
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute service installation (compute node)

【Install and configure】
1. Install:
yum install openstack-nova-compute
2. Configure /etc/nova/nova.conf:
mv /etc/nova/nova.conf /etc/nova/nova.conf.bak
vi /etc/nova/nova.conf
#
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 10.0.0.31
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456
#
3. Enable and start the services:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

Continue the Compute service setup (controller node)

【Add the compute node to the cell database】
1. Load the admin credentials:
. admin-openrc
2. Confirm the compute host shows up:
openstack compute service list --service nova-compute

3. Manually register the compute node in the cell database (required every time a new compute node is added):
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
4. Or have compute nodes registered automatically (only needs to be set once):
vi /etc/nova/nova.conf
#Append at the end:
[scheduler]
discover_hosts_in_cells_interval = 300

【Verify】
1. Load the admin credentials:
. admin-openrc
2. List the compute service components:
openstack compute service list

3. List the API endpoints in the service catalog:
openstack catalog list

4. List images:
openstack image list

5. Check that the cells and the Placement API are working properly:
nova-status upgrade check

The Nova installation went very smoothly.
Pay attention to which of these steps run on the compute node.

Next, install the Neutron component.
The networking component is the most involved one; there is a lot of networking knowledge behind it, which has to be picked up gradually.
【Controller node】

【Database】
1. Log in:
mysql -uroot -p123456
2. Create the neutron database and grant privileges:
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
3. Log out with exit.

【Create the neutron user and service】
1. Load the admin credentials:
. admin-openrc
2. Create the neutron user:
openstack user create --domain default --password-prompt neutron

3. Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
4. Create the neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
5. Create the Networking API endpoints:
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

【Install and configure (self-service networks option)】
1. Install:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
2. Configure /etc/neutron/neutron.conf:
mv  /etc/neutron/neutron.conf  /etc/neutron/neutron.conf.bak
vi /etc/neutron/neutron.conf
######
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:123456@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
######

3. Configure the ML2 plug-in:
mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
vi /etc/neutron/plugins/ml2/ml2_conf.ini
######
[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
#######
4. Configure the Linux bridge agent:
mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini  /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#######
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.11
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
######
5. Configure the DHCP agent:
mv /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
vi /etc/neutron/dhcp_agent.ini
######
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
#######
【Configure and start】
1. Configure the metadata agent:
mv /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
vi /etc/neutron/metadata_agent.ini
######
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456
####
2. Configure Compute (nova.conf) to use Networking:
vi /etc/nova/nova.conf
#####Append:
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
#######
3. Create the plug-in symlink:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
4. Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
5. Restart the Compute API service:
systemctl restart openstack-nova-api.service
6. Enable and start the Networking services:
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Networking service (compute node)

【Install and configure】
1. Install:
yum install openstack-neutron-linuxbridge ebtables ipset
2. Configure /etc/neutron/neutron.conf:
mv  /etc/neutron/neutron.conf  /etc/neutron/neutron.conf.bak
vi /etc/neutron/neutron.conf
######
[DEFAULT]
transport_url = rabbit://openstack:123456@10.0.0.11
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
auth_url = http://10.0.0.11:35357
memcached_servers = 10.0.0.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
#######

【Configure the Linux bridge agent】
mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
#####
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.31
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
####

【Configure and start】
1. Configure Compute (nova.conf) to use Networking:
vi /etc/nova/nova.conf
####Append:
[neutron]
url = http://10.0.0.11:9696
auth_url = http://10.0.0.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
####
2. Restart Compute and start the Linux bridge agent:
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

【Verify (controller node)】!!!
1. Load the admin credentials:
. admin-openrc
2. List the network agents:
openstack network agent list

On the first verification only the controller node's agents showed up; the compute node's agent was missing. Checking the logs and searching around showed it was a time synchronization problem, so I fixed the time sync on the nodes following a Jianshu tutorial (see the sketch below).
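Roughly what the fix amounted to (my own summary, run on the node whose clock is off, not the exact commands from that tutorial):
systemctl restart chronyd.service      # restart the time sync service
chronyc sources                        # the source line should start with ^*
chronyc -a makestep                    # force-step the clock (the same trick is used again for the cinder issue later)
systemctl restart neutron-linuxbridge-agent.service    # assumption: restart the agent so it re-registers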

【Dashboard installation (controller node)】

1. Install:
yum install openstack-dashboard
2. Configure:
vim /etc/openstack-dashboard/local_settings
###Find and set the following options one by one, in the order below
ALLOWED_HOSTS = ['*']
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
#Add:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
Restart the services:
systemctl restart httpd.service memcached.service

Also edit the httpd dashboard config:
vi /etc/httpd/conf.d/openstack-dashboard.conf
#Add:
WSGIApplicationGroup %{GLOBAL}

Restart:
systemctl restart httpd.service memcached.service
Test by opening:
http://10.0.0.11/dashboard
On the login page:
Domain: Default
Username: admin
Password: 123456

Next, install the Cinder service.

【Controller node first】
1. Create the database:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
2. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
3. Create the service credentials, service entities, and API endpoints:
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
【Install and configure components】
yum install openstack-cinder
vi /etc/cinder/cinder.conf
####
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder

[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 10.0.0.11

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
####

 su -s /bin/sh -c "cinder-manage db sync" cinder
vi  /etc/nova/nova.conf
Add:
[cinder]
os_region_name = RegionOne

systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
 systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
【Compute node】
Prerequisites
1. Install the supporting utility packages:
yum install lvm2 device-mapper-persistent-data
 systemctl enable lvm2-lvmetad.service
 systemctl start lvm2-lvmetad.service
2. Create the LVM physical volume /dev/sdb (a second disk needs to be added to the VM beforehand):
 pvcreate /dev/sdb
3. Create the LVM volume group cinder-volumes:
 vgcreate cinder-volumes /dev/sdb
4. Configure the LVM filter:
vi  /etc/lvm/lvm.conf
#Change the filter line to:
filter = [ "a/sda/", "a/sdb/", "r/.*/" ]

Install and configure the components
 yum install openstack-cinder targetcli python-keystone

vi /etc/cinder/cinder.conf
###
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder

[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 10.0.0.31
enabled_backends = lvm
glance_api_servers = http://controller:9292

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
####
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

【Verify Cinder operation】
. admin-openrc
openstack volume service list

Here I hit a problem: the state of cinder-volume was down.
From searching online I guessed the cause was time synchronization: chrony was syncing,
but the clocks had never been forced to step to the same initial time.
After running chronyc -a makestep the sync completed and the volume service came up!

Node cloning: adding a second compute/cinder node

Shut down computer1.
Clone it (choose a full clone).
Then boot only the clone and modify its configuration:
hostnamectl set-hostname computer2
Edit the NICs: only the IPs on the two network cards need to change;
the second (host-only) NIC becomes 10.0.0.32.
vi /etc/hosts
Add: 10.0.0.32 computer2
Make the same /etc/hosts change on the controller node as well.
Then reboot all the machines.

【computer2 node】
Modify the configuration files.
Block storage:
vi /etc/cinder/cinder.conf
Change my_ip = 10.0.0.32
systemctl restart openstack-cinder-volume.service target.service

Networking:
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Change local_ip = 10.0.0.32

Compute:
vi /etc/nova/nova.conf
Change my_ip = 10.0.0.32

【controller node】
Compute:
vi /etc/nova/nova.conf
Make sure the following is present (it was added earlier):
[scheduler]
discover_hosts_in_cells_interval = 300

After finishing all of this, restart everything.

Checking with timedatectl showed that the time was out of sync again,
so I uninstalled chrony and reinstalled the synchronization service.
After that everything was fine; remember to also restart the Linux bridge agent.
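Roughly what that amounted to (a sketch, not the exact commands I typed):
yum remove chrony -y && yum install chrony -y        # reinstall the time sync service
systemctl enable chronyd.service && systemctl start chronyd.service
chronyc sources                                      # confirm the node syncs again
systemctl restart neutron-linuxbridge-agent.service  # restart the Linux bridge agent as noted above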


Origin blog.csdn.net/weixin_44747789/article/details/100041825