Deploy OpenStack (Part 1: environment setup)

One, virtual machine resource information

1. Control node CT

CPU: dual-core, dual-thread, with CPU virtualization enabled
Memory: 8 GB    Hard disk: 300 GB
Two network cards: VMnet1 (local area network) and NAT (external network)
Operating system: CentOS 7.6 (build 1810 or later), minimal installation

2. Compute node C1

CPU: dual-core, dual-thread, with CPU virtualization enabled
Memory: 8 GB    Hard disk: 300 GB
Two network cards: VMnet1 (local area network) and NAT (external network)
Operating system: CentOS 7.6 (build 1810 or later), minimal installation

3. Compute node C2

CPU: dual-core, dual-thread, with CPU virtualization enabled
Memory: 8 GB    Hard disk: 300 GB
Two network cards: VMnet1 (local area network) and NAT (external network)
Operating system: CentOS 7.6 (build 1810 or later), minimal installation

Note: If memory is limited, you can reduce the memory size accordingly.
On the "Install CentOS 7" boot menu, press the Tab key and append the following two kernel parameters: net.ifnames=0 biosdevname=0 (this makes the network interfaces be named eth0, eth1, ... when the system is installed).
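
For reference, the installer boot line after pressing Tab typically looks similar to the sketch below (the exact inst.stage2 label depends on your ISO); the two parameters are simply appended at the end:

vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=CentOS\x207\x20x86_64 quiet net.ifnames=0 biosdevname=0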

Set the root password. A simple password is acceptable; if the installer warns that it is too weak, just click Done twice to confirm and the setup will complete.

Two, deployment plan

1. Configure the operating system + OpenStack operating environment
2. Configure the basic services of the OpenStack platform (RabbitMQ, MariaDB, memcached, Apache)
3. Configure the OpenStack Keystone component
4. Configure the OpenStack Glance component
5. Configure the OpenStack Placement service
6. Configure the OpenStack Nova component
7. Configure the OpenStack Neutron component
8. Configure the OpenStack Dashboard component
9. Configure the OpenStack Cinder component
10. Common cloud host operations

Three, environment configuration

Host name   RAM   Hard disk   Network cards                            System
CT          8G    300G        VM: 192.168.100.10   NAT: 192.168.2.10   CentOS 7.6
C1          8G    300G        VM: 192.168.100.20   NAT: 192.168.2.20   CentOS 7.6
C2          8G    300G        VM: 192.168.100.30   NAT: 192.168.2.30   CentOS 7.6

Four, basic environment configuration

1. Modify and add NAT network card configuration

CT  eth1 (inside): 192.168.100.10   eth0 (outside): 192.168.2.10
C1  eth1 (inside): 192.168.100.20   eth0 (outside): 192.168.2.20
C2  eth1 (inside): 192.168.100.30   eth0 (outside): 192.168.2.30
This applies to all nodes; only CT is demonstrated here, since C1 and C2 are configured the same way.
Once the modification is complete, you can work from a secure terminal such as Xshell, which supports copy and paste.

cd /etc/sysconfig/network-scripts/
ls
vi ifcfg-eth0

BOOTPROTO=static			#change the NIC configuration to a static IP
IPV4_ROUTE_METRIC=90		#NIC priority; the default is 100, and a lower value means higher priority. This prevents the two NICs from competing for priority and breaking connectivity

ONBOOT=yes					#bring the device up at boot
IPADDR=192.168.2.10  		#added field: the static IP (customize as needed)
NETMASK=255.255.255.0		#added field: subnet mask
GATEWAY=192.168.2.2		#added field: gateway
#DNS1=192.168.2.2   #optional field: DNS server; some environments need a DNS server configured, otherwise yum cannot be used

systemctl restart network
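
After the restart, a quick check (my addition, not part of the original steps) confirms the static address and the lower route metric configured above took effect:

ip addr show eth0         #the static 192.168.2.10 address should be present
ip route show default     #the default route via 192.168.2.2 should carry the metric 90 set above
ping -c 2 192.168.2.2     #the gateway should be reachable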


Connect Xshell
Then choose Save and Connect to connect with Xshell. Note that the IP must be in the same network segment as the IP of the VMnet8 virtual network adapter.
View IP

ip a


2. Turn off the firewall and the SELinux security mechanism, and modify the hostname

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
#disable SELinux permanently
vi /etc/sysconfig/selinux
SELINUX=disabled
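
If you prefer not to edit the file interactively, the same permanent change can be made with a one-line sed (a sketch; on CentOS 7, /etc/sysconfig/selinux is a symlink to /etc/selinux/config):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config     #should now print SELINUX=disabled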

You can use either of the following commands to check the status of SELinux:

/usr/sbin/sestatus
# or
getenforce
hostnamectl set-hostname ct
su


3. Install the basic environment dependency packages

yum -y install \
net-tools \
bash-completion \
vim \
gcc gcc-c++ \
make \
pcre  pcre-devel \
expat-devel \
cmake  \
bzip2 \
lrzsz

#----------------------- package descriptions ------------------------
net-tools           provides the ifconfig command-line tool
bash-completion     command auto-completion helper
vim                 the vim editor
gcc gcc-c++         compilation environment
make                build tool
pcre pcre-devel     PCRE, a Perl-compatible regular expression library
expat-devel         the Expat library, a stream-oriented XML parser
cmake               CMake, a cross-platform build tool, mainly used as a layer above make to generate portable makefiles
lrzsz               provides the rz and sz commands to upload and download files

Install the OpenStack Train release repository package, together with the OpenStack client, openstack-selinux, and openstack-utils packages:

yum -y install \
centos-release-openstack-train \
python-openstackclient \
openstack-selinux \
openstack-utils

#---------------- package descriptions ------------------------------
centos-release-openstack-train  the Train release repository package
python-openstackclient          the OpenStack command-line client
openstack-selinux               automatic SELinux management for OpenStack (SELinux is disabled here)
openstack-utils                 makes it possible to modify configuration files directly from the command line later on
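
As a quick sanity check (my addition, not part of the original steps), confirm the client and helper packages installed before moving on:

openstack --version                         #prints the python-openstackclient version
rpm -q openstack-utils openstack-selinux    #both packages should be reported as installed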


4. Modify the internal network card configuration

The external network card (eth0) of each node has already been configured, so only the VMnet1 network card, i.e. the eth1 configuration on each node, needs to be modified here.

vim /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO=static
IPV4_ROUTE_METRIC=100  #the default value is 100; it is set explicitly here to be safe
ONBOOT=yes
IPADDR=192.168.100.10
NETMASK=255.255.255.0

systemctl restart network
ip a


5. Configure the hostname mapping

Configure /etc/hosts to map each hostname to its LAN IP.

echo '192.168.100.10 ct' >> /etc/hosts
echo '192.168.100.20 c1' >> /etc/hosts
echo '192.168.100.30 c2' >> /etc/hosts
cat /etc/hosts


6. Passwordless SSH

Set up key-based (asymmetric) authentication so that the three nodes can SSH to one another without prompting for a password.

ssh-keygen -t rsa
ssh-copy-id ct
ssh-copy-id c1
ssh-copy-id c2
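
To confirm the passwordless login works, each node should now be able to run a remote command without a password prompt (a simple check, assuming the hostnames from /etc/hosts above):

ssh c1 hostname      #should print c1 without asking for a password
ssh c2 hostname      #should print c2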


7. Time synchronization + periodic scheduled tasks

Note:
ct -> Synchronize Alibaba Cloud clock server
c1, c2 -> Synchronize ct

vi /etc/resolv.conf
nameserver 114.114.114.114

yum install chrony -y

Note: the chrony configuration on ct differs from that on the two compute nodes; the node names and IPs concerned are listed before each block.
CT eth1 (inside): 192.168.100.10 eth0 (outside): 192.168.2.10

vim /etc/chrony.conf
server 0.centos.pool.ntp.org iburst   #comment out
server 1.centos.pool.ntp.org iburst   #comment out
server 2.centos.pool.ntp.org iburst	  #comment out
server 3.centos.pool.ntp.org iburst	  #comment out
server ntp6.aliyun.com iburst         #add the Alibaba Cloud NTP server as the time source
allow 192.168.100.0/24     #allow hosts in the 192.168.100.0/24 network to synchronize from this server

systemctl enable chronyd
systemctl restart chronyd

#check time synchronization with chronyc sources
chronyc sources
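
Besides chronyc sources, the following optional checks (my addition) show whether the clock is actually being disciplined:

chronyc tracking        #the Reference ID should point at the Aliyun server and leap status should be Normal
timedatectl             #should show "NTP synchronized: yes" once the first sync has completed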

C1 eth1 (inside): 192.168.100.20 eth0 (outside): 192.168.2.20
C2 eth1 (inside): 192.168.100.30 eth0 (outside): 192.168.2.30

vim /etc/chrony.conf 
server 0.centos.pool.ntp.org iburst	  #comment out
server 1.centos.pool.ntp.org iburst	  #comment out
server 2.centos.pool.ntp.org iburst   #comment out
server 3.centos.pool.ntp.org iburst   #comment out
server ct iburst			          #point time synchronization at the control node ct

systemctl enable chronyd.service
systemctl restart chronyd.service

chronyc sources

Configure a scheduled task that runs every 2 minutes.

#scheduled task: run every 2 minutes
crontab -e
*/2 * * * * /usr/bin/chronyc sources >>/var/log/chronyc.log

#view the configured periodic tasks
crontab -l


Five, system environment configuration

1. Install and configure MariaDB

CT eth1 (inside): 192.168.100.10 eth0 (outside): 192.168.2.10

yum -y install mariadb mariadb-server python2-PyMySQL libibverbs

#--------- package descriptions ---------------------------------
mariadb:            a fork of MySQL, fully compatible with MySQL and open source
mariadb-server:     the database server
python2-PyMySQL:    the module the OpenStack control node needs to connect to MySQL; without it the database cannot be reached; this package is installed on the control node only
libibverbs:         library for remote direct memory access (RDMA)

Then add the MySQL sub-configuration file on the ct control node, as follows

vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.100.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

#--------- configuration notes ---------------------------------------------
bind-address = 192.168.100.10			#LAN address of the control node
default-storage-engine = innodb 		#default storage engine
innodb_file_per_table = on 				#a separate tablespace file for each table
max_connections = 4096 				    #maximum number of connections
collation-server = utf8_general_ci 		#case-insensitive server collation
character-set-server = utf8             #default character set
#---------------------------------------------------------------

systemctl enable mariadb
systemctl start mariadb
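
Before running the security script, a quick check (my addition) confirms MariaDB is only listening on the control node's LAN address configured above:

systemctl status mariadb      #the service should be active (running)
netstat -natp | grep 3306     #mysqld should be listening on 192.168.100.10:3306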

Run the MariaDB security configuration script:

mysql_secure_installation
#--------- output -------------------------------------------
Enter current password for root (enter for none): 	#current root password: just press Enter, none is set yet
OK, successfully used password, moving on...
Set root password? [Y/n] Y							#change the root password? Y to confirm
Remove anonymous users? [Y/n] Y						#remove anonymous users? Y to confirm
 ... Success!
Disallow root login remotely? [Y/n] n				#disallow remote root login? enter n to keep remote root login allowed
 ... skipping.
Remove test database and access to it? [Y/n] Y 		#remove the test database? Y to confirm
Reload privilege tables now? [Y/n] Y 				#reload the privilege tables? Y to confirm
#------------------------------------------------------------
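
Afterwards you can log in with the new root password to verify the hardened settings, for example (a hedged check, not from the original write-up):

mysql -u root -p -e "SHOW DATABASES;"                           #the test database should be gone
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"    #should report the configured 4096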


2. Install and configure RabbitMQ

CT eth1 (inside): 192.168.100.10 eth0 (outside): 192.168.2.10
All instructions for creating virtual machines are sent to RabbitMQ from the ct control node, and the compute nodes listen on RabbitMQ.

yum -y install rabbitmq-server

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service


1) Create the message queue user, used by the controller and the compute nodes to authenticate with RabbitMQ
rabbitmqctl add_user openstack RABBIT_PASS
#----------- output -------------
Creating user "openstack"
#----------------------------------

2) Grant the openstack user its operating permissions (regular expressions for the configure, write and read permissions)
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
#------------ output ---------------------------------------
Setting permissions for user "openstack" in vhost "/"

#ports 25672 and 5672 can now be checked (5672 is the default RabbitMQ port, 25672 is used by RabbitMQ's CLI tooling)
netstat -natp | grep 5672

3) List the RabbitMQ plugins
rabbitmq-plugins list


4) Enable the RabbitMQ web management plugin, which listens on port 15672
rabbitmq-plugins enable rabbitmq_management

5) Check the ports (25672, 5672, 15672)
ss -natp | grep 5672
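
You can also verify the user and its permissions directly with rabbitmqctl (optional checks, my addition):

rabbitmqctl list_users                 #should list openstack alongside guest
rabbitmqctl list_permissions -p /      #openstack should have ".*" ".*" ".*" on the / vhost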

Now you can browse to 192.168.2.10:15672 to log in to the RabbitMQ management interface. The default account and password are both guest.
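
If you only want to check the management plugin from the command line on the node itself, its HTTP API answers the default guest account on localhost (a hedged example; newer RabbitMQ releases restrict guest to local connections):

curl -s -u guest:guest http://127.0.0.1:15672/api/overview | head -c 200   #returns a JSON summary if the plugin is up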

3. Install memcached

Purpose: memcached is used to store session information. The identity service (Keystone) uses memcached to cache tokens; when you log in to the OpenStack dashboard, session information is generated and stored in memcached.
CT eth1 (inside): 192.168.100.10 eth0 (outside): 192.168.2.10

yum install -y memcached python-memcached
#python-memcached is the Python module that the OpenStack services use to connect to memcached

Modify the Memcached configuration file

vim /etc/sysconfig/memcached
PORT="11211"                      #memcached port 11211
USER="memcached"                  #run as the memcached user
MAXCONN="1024"                    #maximum of 1024 connections
CACHESIZE="64"                    #cache size of 64 MB
OPTIONS="-l 127.0.0.1,::1,ct"     #listen addresses: 127.0.0.1 is the local IPv4 address, ::1 is the IPv6 loopback, ct resolves to the local VMnet1 address

systemctl enable memcached
systemctl start memcached
netstat -nautp | grep 11211
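
Since python-memcached was installed above, a tiny throwaway script can confirm memcached answers (a sketch with a disposable test key, assuming the default python2 interpreter on CentOS 7):

python - <<'EOF'
import memcache                              # module provided by python-memcached
mc = memcache.Client(['127.0.0.1:11211'])    # connect to the local memcached instance
mc.set('smoke_test_key', 'ok')               # throwaway key, purely for this check
print(mc.get('smoke_test_key'))              # should print "ok"
EOF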


4. Install etcd and modify its configuration file

etcd is a distributed and reliable key-value storage database

#install etcd
yum -y install etcd

#modify its configuration file
vim /etc/etcd/etcd.conf 
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.100.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.10:2379"
ETCD_NAME="ct"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.10:2379"
ETCD_INITIAL_CLUSTER="ct=http://192.168.100.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
#------------------- configuration notes ---------------------------------------
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"  #data directory
ETCD_LISTEN_PEER_URLS="http://192.168.100.10:2380" #URL for listening to other etcd members (port 2380, communication between cluster members; domain names are not valid here)
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.10:2379" #address for serving client requests (port 2379, client communication inside the cluster)
ETCD_NAME="ct" #name identifying this node in the cluster
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.10:2380" #peer URL advertised for this member; port 2380 is used for communication between cluster members
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.10:2379" #client URL advertised for this member
ETCD_INITIAL_CLUSTER="ct=http://192.168.100.10:2380"	#initial cluster membership
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"	#unique token identifying the cluster
ETCD_INITIAL_CLUSTER_STATE="new"  #initial cluster state: "new" creates a new (static) cluster; "existing" means this etcd service will try to join an already running cluster
#-----------------------------------------------------------------------

systemctl enable etcd.service
systemctl start etcd.service
netstat -anutp |grep 2379
netstat -anutp |grep 2380
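
A simple read/write round-trip (my addition, assuming the etcd 3.x package shipped in the CentOS extras repository) confirms the key-value store is reachable on the client URL configured above:

export ETCDCTL_API=3
etcdctl --endpoints=http://192.168.100.10:2379 put smoke_test ok     #throwaway key for this check
etcdctl --endpoints=http://192.168.100.10:2379 get smoke_test        #should print smoke_test and ok
etcdctl --endpoints=http://192.168.100.10:2379 del smoke_test        #clean up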

At this point, the OpenStack base environment has been set up.

Summary of environment construction

1. Basic environment configuration

Firewall, SELinux, hostname
Basic environment dependency packages
Modify the network card configuration
Configure the hostname mapping
Passwordless SSH
Time synchronization + periodic scheduled tasks

2. Install openstack basic services

#check that the MariaDB database is running
systemctl status mariadb

#check that the RabbitMQ message queue is running
#check the ports (25672, 5672, 15672)
ss -natp | grep 5672
netstat -natp | grep 5672

#check that the memcached cache service is running
ss -nautp | grep 11211
netstat -nautp | grep 11211

#check that etcd, the distributed and reliable key-value store, is running
netstat -anutp |grep 2379
netstat -anutp |grep 2380
