Article Directory
One, environment configuration

1. Control node ct1

| Item | Configuration |
|---|---|
| CPU | Dual-core, dual-thread; CPU virtualization enabled |
| RAM / Disk | 8 GB RAM; 300 GB + 1024 GB disk (Ceph block storage) |
| Dual NICs | VM1 (LAN): 192.168.100.110; NAT: 192.168.162.120 |
| Operating system | CentOS 7.6 (1810), minimal installation |

2. Compute node c1

| Item | Configuration |
|---|---|
| CPU | Dual-core, dual-thread; CPU virtualization enabled |
| RAM / Disk | 8 GB RAM; 300 GB + 1024 GB disk (Ceph block storage) |
| Dual NICs | VM1 (LAN): 192.168.100.120; NAT: 192.168.162.120 |
| Operating system | CentOS 7.6 (1810), minimal installation |

3. Compute node c2

| Item | Configuration |
|---|---|
| CPU | Dual-core, dual-thread; CPU virtualization enabled |
| RAM / Disk | 8 GB RAM; 300 GB + 1024 GB disk (Ceph block storage) |
| Dual NICs | VM1 (LAN): 192.168.100.130; NAT: 192.168.162.130 |
| Operating system | CentOS 7.6 (1810), minimal installation |
Basic environment configuration

Configuration items (all nodes):

1. Host name
2. Firewall and SELinux

systemctl stop firewalld
setenforce 0

3. Passwordless (non-interactive) SSH
4. Base dependency packages

yum -y install net-tools bash-completion vim gcc gcc-c++ make pcre pcre-devel expat-devel cmake bzip2 lrzsz
# expat-devel: the Expat C development library
yum -y install centos-release-openstack-train python-openstackclient openstack-selinux openstack-utils
# Installs the OpenStack Train release repository, plus the OpenStack client and the openstack-selinux package

5. Time synchronization + periodic scheduled tasks
[root@ct ~]# hostnamectl set-hostname ct
[root@ct ~]# bash
● Control node configuration (ct)
vi /etc/sysconfig/network-scripts/ifcfg-eth0
● Modify and confirm the parameters
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.11
NETMASK=255.255.255.0
#GATEWAY=192.168.100.2
vi /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO=static
IPV4_ROUTE_METRIC=90   # route metric (priority): the NAT NIC is preferred
ONBOOT=yes
IPADDR=192.168.226.150
NETMASK=255.255.255.0
GATEWAY=192.168.226.2
systemctl restart network   # restart networking
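After restarting the network, it is worth confirming that both NICs came up with the intended addresses before moving on. A minimal check, assuming the NIC names eth0/eth1 and the gateway value configured above:

```shell
# Verify the addresses assigned to both NICs (eth0 = LAN, eth1 = NAT)
ip addr show eth0 | grep 'inet '
ip addr show eth1 | grep 'inet '

# GW mirrors the GATEWAY value set on eth1 above
GW=192.168.226.2
ping -c 2 -W 1 "$GW"   # outbound traffic should leave through the NAT NIC
```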
Configure Hosts (all nodes)
[root@ct ~]# vi /etc/hosts
192.168.100.11 ct
192.168.100.12 c1
192.168.100.13 c2
PS: the IPs above are the LAN (VM1) addresses
[root@ct ~]# systemctl stop firewalld
[root@ct ~]# systemctl disable firewalld
[root@ct ~]# setenforce 0
[root@ct ~]# vim /etc/sysconfig/selinux
SELINUX=disabled
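The same SELinux change can be made non-interactively; a sketch equivalent to the vim edit above (on CentOS 7, /etc/sysconfig/selinux is a symlink to /etc/selinux/config):

```shell
# Put SELinux into permissive mode for the running system
setenforce 0
# Persist across reboots by rewriting the SELINUX= line in place
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
grep '^SELINUX=' /etc/sysconfig/selinux   # should print SELINUX=disabled
```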
Passwordless (non-interactive) SSH between the three nodes
● Generate an RSA key pair and copy it to every node
[root@ct ~]# ssh-keygen -t rsa
[root@ct ~]# ssh-copy-id ct
[root@ct ~]# ssh-copy-id c1
[root@ct ~]# ssh-copy-id c2
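The key-distribution steps above can be wrapped in a small loop. This is a sketch; it assumes the hostnames ct, c1, c2 resolve via the /etc/hosts entries configured earlier:

```shell
# Generate the key pair once, with no passphrase prompts
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

NODES="ct c1 c2"
for node in $NODES; do
    # Appends the public key to each node's authorized_keys
    ssh-copy-id -o StrictHostKeyChecking=no "$node"
done

# Verify: BatchMode forbids password prompts, so this succeeds only key-based
for node in $NODES; do
    ssh -o BatchMode=yes "$node" hostname
done
```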
Configure DNS (all nodes)
[root@ct ~]# vim /etc/resolv.conf
nameserver 114.114.114.114
[Control node ct time synchronization configuration]
ct -> syncs to the Aliyun NTP server
c1, c2 -> sync to ct
[root@ct ~]# yum install chrony -y
[root@ct ~]# vim /etc/chrony.conf
[root@ct ~]# systemctl enable chronyd
[root@ct ~]# systemctl restart chronyd
● [Control node ct time synchronization configuration]
[root@ct ~]# vi /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst   # comment out the default pool servers
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp6.aliyun.com iburst          # use the Aliyun NTP server as the source
allow 192.168.100.0/24                 # allow hosts on 192.168.100.0/24 to sync from this server
● Use the chronyc sources command to check time synchronization status
[root@ct ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88 2 6 17 0 +3559us[+3429us] +/- 25ms
[Compute nodes c1 and c2 time synchronization configuration]
[root@c1 ~]# vi /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst   # comment out the default pool servers
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ct iburst                       # use the control node ct as the time source
[root@c1 ~]# systemctl enable chronyd.service    # enable the time sync service at boot
[root@c1 ~]# systemctl restart chronyd.service   # restart the time sync service
[root@c2 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
==============================================================================
^* ct
[root@c1 ~]# crontab -e   # scheduled task: log source state every 2 minutes
*/2 * * * * /usr/bin/chronyc sources >>/var/log/chronyc.log
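Before relying on the scheduled task, it is worth confirming that chrony has actually locked onto its source. A minimal check (run on any node):

```shell
# '^*' in the sources output marks the source chrony is currently locked to
chronyc sources
# tracking shows the reference ID, stratum, and current system-clock offset
chronyc tracking

# SCHEDULE mirrors the crontab entry above: run every 2 minutes
SCHEDULE='*/2 * * * *'
echo "$SCHEDULE /usr/bin/chronyc sources >> /var/log/chronyc.log"
```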
System environment configuration
Configure services (control node ct):

1. Install and configure MariaDB

[root@ct ~]# yum -y install mariadb mariadb-server python2-PyMySQL
# mariadb: a fork of MySQL; open-source and fully MySQL-compatible
# mariadb-server: the database server
# python2-PyMySQL: the module the OpenStack control node needs to connect to MySQL;
# without it the services cannot reach the database. It is installed on the control node only.
[root@ct ~]# yum -y install libibverbs
Add a MySQL sub-configuration file with the following content:

[root@ct ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.100.11          # control node LAN address
default-storage-engine = innodb        # default storage engine
innodb_file_per_table = on             # one tablespace file per table
max_connections = 4096                 # maximum number of connections
collation-server = utf8_general_ci     # default collation
character-set-server = utf8            # default character set
Auto-start at boot, start service
[root@ct my.cnf.d]# systemctl enable mariadb
[root@ct my.cnf.d]# systemctl start mariadb
Execute MariaDB security configuration script
[root@ct my.cnf.d]# mysql_secure_installation
Enter current password for root (enter for none):   # press Enter
OK, successfully used password, moving on...
Set root password? [Y/n] Y
Remove anonymous users? [Y/n] Y
... Success!
Disallow root login remotely? [Y/n] N   # whether to forbid remote root login
... skipping.
Remove test database and access to it? [Y/n] Y   # remove the test database
Reload privilege tables now? [Y/n] Y
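A quick way to confirm the sub-configuration file and the hardening steps took effect is to query the live server. DB_ROOT_PW is a placeholder for whatever root password was chosen above:

```shell
DB_ROOT_PW='YourRootPassword'   # placeholder -- use the password set in mysql_secure_installation
# The values should match /etc/my.cnf.d/openstack.cnf
mysql -u root -p"$DB_ROOT_PW" -e "SHOW VARIABLES WHERE Variable_name IN
    ('max_connections','character_set_server','innodb_file_per_table');"
# Anonymous users should be gone after the security script
mysql -u root -p"$DB_ROOT_PW" -e "SELECT User, Host FROM mysql.user;"
```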
Two, install RabbitMQ
All instructions for creating virtual machines are sent from the control node to RabbitMQ, and the compute nodes listen on RabbitMQ for them.
[root@ct ~]# yum -y install rabbitmq-server
== Configure the service, start the RabbitMQ service, and set it to start at boot. ==
[root@ct ~]# systemctl enable rabbitmq-server.service
[root@ct ~]# systemctl start rabbitmq-server.service
Create a message queue user, used to authenticate the RabbitMQ connections between the control node and the compute nodes:
[root@ct ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack"
Grant the openstack user its operating permissions (the three patterns are configure, write, and read):
[root@ct ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
# Ports 25672 and 5672 should now be listening (5672 is RabbitMQ's default AMQP port; 25672 is used by RabbitMQ's CLI tools)
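A sketch to double-check the user and its grants before continuing:

```shell
MQ_USER=openstack
rabbitmqctl list_users                        # should include openstack alongside guest
rabbitmqctl list_user_permissions "$MQ_USER"  # expect ".*  .*  .*" on vhost /
ss -natp | grep -E ':(5672|25672)'            # both ports listening
```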
Optional configuration:
● List the available RabbitMQ plug-ins
[root@ct ~]# rabbitmq-plugins list
Enable the RabbitMQ web management plug-in (it listens on port 15672):
[root@ct ~]# rabbitmq-plugins enable rabbitmq_management
Check the ports (25672, 5672, 15672):
[root@ct my.cnf.d]# ss -natp | grep 5672
You can now browse to 192.168.162.100:15672; the default account and password are both guest. Click Login to enter the management interface.
Three, install memcached

Function: memcached stores session information. The identity service (Keystone) uses memcached to cache authentication tokens; when logging in to the OpenStack dashboard, session data is generated and stored in memcached.

Install Memcached:

[root@ct ~]# yum install -y memcached python-memcached
# python-memcached is the module OpenStack services use to connect to memcached
Modify Memcached configuration file
[root@ct ~]# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,ct"
[root@ct ~]# systemctl enable memcached
[root@ct ~]# systemctl start memcached
[root@ct ~]# netstat -nautp | grep 11211
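A quick protocol-level sanity check, assuming an nc binary is available (yum -y install nmap-ncat if it is not):

```shell
KEY=demo
VAL=hello
# The <bytes> field of the set command must equal the value's length (${#VAL})
printf 'set %s 0 0 %s\r\n%s\r\nget %s\r\nquit\r\n' "$KEY" "${#VAL}" "$VAL" "$KEY" \
    | nc -w 2 127.0.0.1 11211
# A healthy server replies STORED, then VALUE demo 0 5 / hello / END
```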
Four, install etcd
[root@ct ~]# yum -y install etcd
Modify etcd configuration file
[root@ct ~]# cd /etc/etcd/
[root@ct etcd]# ls
etcd.conf
[root@ct etcd]# vim etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"                     # data directory
ETCD_LISTEN_PEER_URLS="http://192.168.10.10:2380"              # URL other etcd members connect to (port 2380, cluster-internal traffic; domain names are not valid here)
ETCD_LISTEN_CLIENT_URLS="http://192.168.10.10:2379"            # URL clients connect to (port 2379)
ETCD_NAME="ct"                                                 # this member's name in the cluster
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.10:2380"   # peer URL this member advertises to the cluster
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.10:2379"         # client URL this member advertises
ETCD_INITIAL_CLUSTER="ct=http://192.168.10.10:2380"            # initial cluster membership
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"                   # cluster initialization token
ETCD_INITIAL_CLUSTER_STATE="new"                               # "new" bootstraps a fresh cluster; "existing" joins one that is already running
Enable at boot, start the service, and check the ports:
systemctl enable etcd.service
systemctl start etcd.service
netstat -anutp |grep 2379
netstat -anutp |grep 2380
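A sanity check with etcdctl (the CentOS 7 etcd package ships etcdctl defaulting to the v2 API; the endpoint mirrors ETCD_ADVERTISE_CLIENT_URLS above):

```shell
EP=http://192.168.10.10:2379
etcdctl --endpoints "$EP" cluster-health     # the member should report "healthy"
etcdctl --endpoints "$EP" set /sanity ok     # write a test key
etcdctl --endpoints "$EP" get /sanity        # should print: ok
```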