One-click multi-node OpenStack deployment (ultra-detailed)


Foreword

OpenStack is an open-source cloud computing platform project that manages three kinds of distributed resources: compute, networking, and storage. A cloud platform built on such a system can provide us with the IaaS (Infrastructure as a Service) model of cloud service. This article does not cover the core theory; for the concepts of cloud computing and an overall introduction to OpenStack, you may refer to the following three articles:

Talking about cloud computing

OpenStack concepts and core components Overview

OpenStack deployment node types and architecture

This article aims to give a detailed walkthrough of a one-click, multi-node deployment of the OpenStack Rocky release in an experimental environment, deployed locally (using a local yum source). The author proceeds from four aspects: the experimental environment and required resources, system resources, node deployment planning, and the specific deployment process, followed by a brief summary of the practice.

First, the experimental environment and resources required

1.1 System Environment

A Windows 10 host running VMware Workstation 15 (if you need to download it, this version is preferred for this experiment) is used to install the operating system (CentOS 7.5);

1.2 Resource Kit

The CentOS 7.5 image file and the OpenStack Rocky package source; the resource links are as follows:

Links: https://pan.baidu.com/s/1hFENGyrRTz3lOLUAordTGg
extraction code: mu5x

Second, the system resources

System resources here mainly means the author's host hardware. The main consideration is that the OpenStack project is still very resource-intensive, so insufficient resources can cause unexpected failures during the deployment experiment. Of course, the system resources listed here are just the case of the author's notebook; the specific hardware actually required still takes a few experimental attempts to determine.

The hardware resources used in the author's experiment are as follows:

CPU: 9th-generation i7 (an i7 is enough; what mainly matters is the number of cores and threads); Memory: 32 GB (regard this as standard; preferably no less than 24 GB); Disk: 1 TB SSD (preferably more than 200 GB of free disk space; the author gives each node 300 GB in the deployment below). These are the three main hardware resources.

The following explains the author's node deployment plan for the experiments; the node types were introduced in the article linked above and will not be repeated here.

Third, node deployment planning

Considering the hardware configuration of the test environment, it is impossible to deploy as many nodes as a typical production environment, so the overall plan is three nodes: one control node and two compute nodes. Familiarize yourself with this architecture diagram once more:

(Figure: the three-node deployment architecture diagram)

With limited resources, the network services in this test deployment can only be placed on the control node; a production environment would not be deployed this way! A pilot deployment deepens the theoretical understanding on the one hand, and on the other hand makes it easy to become familiar with the deployment process, the command operations, and some troubleshooting ideas.

Now, about production deployments, here is a rough example:

Assume an OpenStack platform deployed on 300 servers. The plan could look roughly like this:

30 control nodes; 30 network nodes; 100 compute nodes; the remainder for storage.

When it comes to storage, we know OpenStack has Cinder block storage and Swift object storage, but production environments generally use another big project: Ceph distributed storage. Storage is usually consolidated onto dedicated OpenStack storage nodes, and in production Ceph runs as a highly available cluster to ensure reliable, highly available data storage. Interested readers can look into Ceph on their own.

The specific resource allocation is as follows:

Control node: 2 processors with 2 cores each (4 cores in total); 8 GB of memory; two disks: 300 GB and 1024 GB (the latter reserved for the Ceph storage experiment later); dual NICs, one host-only (eth0, planned IP 192.168.100.20) and one NAT (eth1, planned IP 20.0.0.20);

Compute nodes: the resource allocation of the two compute nodes is identical: 2 processors with 2 cores each; 8 GB of memory; two disks: 300 GB and 1024 GB; a single host-only NIC each (eth0; planned IP addresses 192.168.100.21 and 192.168.100.22);
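
In summary (the hostnames ct, c1 and c2 are set during the configuration below):

Node       Hostname  Host-only IP     NAT IP     CPU          Memory  Disks
Control    ct        192.168.100.20   20.0.0.20  2 x 2 cores  8G      300G + 1024G
Compute 1  c1        192.168.100.21   -          2 x 2 cores  8G      300G + 1024G
Compute 2  c2        192.168.100.22   -          2 x 2 cores  8G      300G + 1024G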

The figure also shows the components to be installed on each node, but the author has simplified some of them for convenience in the experiments, so some components are optional. Let's appreciate the charm of OpenStack through the specific deployment process.

Fourth, the specific deployment process

The one-click deployment of the OpenStack Rocky version is divided into the following steps. During deployment the probability of failures and other surprises is quite high, so some troubleshooting ideas are given in the summary at the end of the article for your reference:

1. Install the operating system
2. Configure the system environment
3. One-click deploy OpenStack

Each step is broken down and presented below. For the network-related configuration you can define your own subnet and IP addresses:

4.1 Installing the operating system

As mentioned above, the experimental environment deploys one control node and two compute nodes, so three virtual machines need to be installed. The following is the specific installation process.

1. Modify the local VMnet8 network adapter

The following is the sequence of operations

(Screenshot: the sequence of VMnet8 adapter settings)

Here are the results after the change:

(Screenshot: VMnet8 settings after the change)

2. Create the new virtual machines (do not power them on yet)

The specific process of installing a CentOS 7 Linux system was described in detail in one of the author's previous articles, so mainly the places that differ are illustrated below. Reference link: CentOS 7 operating system installation

The control node virtual machine settings are as follows:

(Screenshot: control node VM settings)

The compute node virtual machine settings are as follows (the two nodes are identical):

(Screenshot: compute node VM settings)

3. After the settings above, power on one virtual machine to install and configure it (the installation and setup process should preferably be consistent across all three nodes; it is described for one node here)

The screens after powering on are explained in the figures below:

(Screenshot: installation boot screen)

4. For the installation, only a minimal install needs to be selected; partition the disk following the plan shown in the figure.

(Screenshot: software selection and disk planning)

When the disk allocation dialog box appears, click through the disk allocation:

(Screenshot: disk allocation dialog)
The following dialog box appears; after clicking Done, continue the configuration:

(Screenshots: the remaining installation configuration dialogs)

The steps not shown in the screenshots above are consistent with the corresponding steps in the linked system-installation article; complete the settings there, and the rest proceeds like a normal system install. Once the installation finishes and you can log in normally, shut the VM down (to avoid resource consumption causing the installation of the other node VMs to fail, considering everyone's hardware constraints).

That is the whole of our first step. It may seem like a lot, but once you are very familiar with installing a Linux operating system on VMware you will find it actually very simple; the most critical thing is not to forget the two commands before the installation.

When the installations complete without problems, we can power on the three virtual machines (preferably one by one) and start the second step;

4.2 System environment configuration

First, a list of the main tasks the system environment configuration needs to accomplish:

1. Configure the hostname and NICs of each node, and restart the network
2. Stop the firewall, SELinux core protection, and NetworkManager, and forbid them from starting at boot
3. Upload the openstack-rocky archive (the package source) and extract it
4. Configure the local yum repo file
5. Set up passwordless SSH among the three nodes and verify it
6. Configure time synchronization

Now begin the configuration.

1. Configure the hostname and NICs of each node and restart the network. (The local network is configured partly to connect Xshell and other remote tools, simulating the production environment as much as possible, and partly to make the command demonstrations easier.) First look at the NIC settings.

Control node configuration:

[root@localhost ~]# hostnamectl set-hostname ct
[root@localhost ~]# su
[root@ct ~]# cd /etc/sysconfig/network-scripts/
# Configure the host-only NIC eth0 and the NAT NIC eth1
[root@ct network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=6dc229bf-8b5b-4170-ac0d-6577b4084fc0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1 

[root@ct network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth1
UUID=37e4a752-3820-4d15-89ab-6f3ad7037e84
DEVICE=eth1
ONBOOT=yes
IPADDR=20.0.0.20
NETMASK=255.255.255.0
GATEWAY=20.0.0.2
# Configure /etc/resolv.conf so the node can reach the Internet
[root@ct network-scripts]# cat /etc/resolv.conf
nameserver 8.8.8.8
# Restart the network, then test external connectivity
[root@ct ~]# systemctl restart network
[root@ct ~]# ping www.baidu.com
PING www.wshifen.com (104.193.88.123) 56(84) bytes of data.
64 bytes from 104.193.88.123 (104.193.88.123): icmp_seq=1 ttl=128 time=182 ms
64 bytes from 104.193.88.123 (104.193.88.123): icmp_seq=2 ttl=128 time=182 ms
^C
--- www.wshifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 182.853/182.863/182.874/0.427 ms

Apart from the IP address in the NIC configuration, the compute nodes are configured the same way:

[root@localhost ~]# hostnamectl set-hostname c1
[root@localhost ~]# su
[root@c1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=d8f1837b-ce71-4465-8d6f-97668c343c6a
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.100.21
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
# On compute node 2, configure the IP address as 192.168.100.22

Configure the /etc/hosts file on all three nodes:

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.20 ct
192.168.100.21 c1
192.168.100.22 c2
# Test that the nodes can ping each other by hostname
[root@ct ~]# ping c1
PING c1 (192.168.100.21) 56(84) bytes of data.
64 bytes from c1 (192.168.100.21): icmp_seq=1 ttl=64 time=0.800 ms
64 bytes from c1 (192.168.100.21): icmp_seq=2 ttl=64 time=0.353 ms
^C
--- c1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.353/0.576/0.800/0.224 ms
[root@ct ~]# ping c2
PING c2 (192.168.100.22) 56(84) bytes of data.
64 bytes from c2 (192.168.100.22): icmp_seq=1 ttl=64 time=0.766 ms
64 bytes from c2 (192.168.100.22): icmp_seq=2 ttl=64 time=0.316 ms
^C
--- c2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.316/0.541/0.766/0.225 ms

[root@c1 ~]# ping c2
PING c2 (192.168.100.22) 56(84) bytes of data.
64 bytes from c2 (192.168.100.22): icmp_seq=1 ttl=64 time=1.25 ms
64 bytes from c2 (192.168.100.22): icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from c2 (192.168.100.22): icmp_seq=3 ttl=64 time=0.231 ms
^C
--- c2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 0.231/0.846/1.255/0.442 ms

2. Stop the firewall, SELinux core protection, and NetworkManager, and forbid them from starting at boot (run the following commands on all three nodes; these services will be checked again before the OpenStack experiments begin)

systemctl stop firewalld
systemctl disable firewalld
setenforce 0                      # takes effect immediately for the current boot
vi /etc/sysconfig/selinux
SELINUX=disabled                  # set this line so the change persists across reboots
systemctl stop NetworkManager
systemctl disable NetworkManager

3. Upload the openstack-rocky archive (the package source) and extract it

The author used the Xftp tool to upload the archive to all three nodes, then extracted it into the /opt directory, as follows:

[root@ct ~]# ls
anaconda-ks.cfg  openstack_rocky.tar.gz
[root@ct ~]# tar -zxf openstack_rocky.tar.gz -C /opt/
[root@ct ~]# cd /opt/
[root@ct opt]# ls
openstack_rocky
[root@ct opt]# du -h
2.4M    ./openstack_rocky/repodata
306M    ./openstack_rocky
306M    .

4. Configure the local yum repo files (note that the virtual machine's ISO image must be in the connected state; check in the VM settings, or check whether the drive icon at the bottom right shows a green dot — it is usually connected by default). The control node is demonstrated here; the same operations apply on the remaining nodes.

4.1 Mount the system image

[root@ct opt]# cat /etc/fstab   # the /dev/sr0 line below was appended so the ISO mounts to /mnt

#
# /etc/fstab
# Created by anaconda on Fri Mar  6 05:02:52 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=0d4b2a40-756a-4c83-a520-83289e8d50ca /                       xfs     defaults        0 0
UUID=bd59f052-d9bc-47e8-a0fb-55b701b5dd28 /boot                   xfs     defaults        0 0
UUID=8ad9f9e7-92db-4aa2-a93d-1fe93b63bd89 swap                    swap    defaults        0 0
/dev/sr0    /mnt    iso9660 defaults    0 0
[root@ct opt]# mount -a
mount: /dev/sr0 is write-protected, mounting read-only
[root@ct opt]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs       291G  1.6G  290G   1% /
devtmpfs       devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs          tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs          tmpfs     3.9G   12M  3.8G   1% /run
tmpfs          tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1      xfs      1014M  134M  881M  14% /boot
tmpfs          tmpfs     781M     0  781M   0% /run/user/0
/dev/sr0       iso9660   4.2G  4.2G     0 100% /mnt

4.2 Back up the existing yum repo files and write a new repo file

[root@ct opt]# cd /etc/yum.repos.d/
[root@ct yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo    CentOS-Vault.repo
CentOS-CR.repo    CentOS-fasttrack.repo  CentOS-Sources.repo
[root@ct yum.repos.d]# mkdir backup
[root@ct yum.repos.d]# mv C* backup/
[root@ct yum.repos.d]# vi local.repo
[root@ct yum.repos.d]# cat local.repo 
[openstack]
name=openstack
baseurl=file:///opt/openstack_rocky  # the path where the package source was extracted
gpgcheck=0
enabled=1

[centos]
name=centos
baseurl=file:///mnt
gpgcheck=0
enabled=1
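
A quick sanity check at this point (the repo IDs openstack and centos should both be listed; the package counts will depend on your media):

[root@ct yum.repos.d]# yum repolist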

4.3 Modify /etc/yum.conf, setting keepcache to 1 so that downloaded packages are cached

[root@ct yum.repos.d]# head -10 /etc/yum.conf 
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=1 # the only parameter that needs to be changed
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5

[root@ct yum.repos.d]# yum clean all  # clear all cached packages
Loaded plugins: fastestmirror
Cleaning repos: centos openstack
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[root@ct yum.repos.d]# yum makecache # build the local package cache
Loaded plugins: fastestmirror
Determining fastest mirrors
centos                                                                                             | 3.6 kB  00:00:00     
openstack                                                                                          | 2.9 kB  00:00:00     
(1/7): centos/group_gz                                                                             | 166 kB  00:00:00     
(2/7): centos/filelists_db                                                                         | 3.1 MB  00:00:01     
(3/7): centos/primary_db                                                                           | 3.1 MB  00:00:01     
(4/7): centos/other_db                                                                             | 1.3 MB  00:00:00     
(5/7): openstack/primary_db                                                                        | 505 kB  00:00:00     
(6/7): openstack/filelists_db                                                                      | 634 kB  00:00:00     
(7/7): openstack/other_db                                                                          | 270 kB  00:00:00     
Metadata Cache Created

5. Set up passwordless SSH among the three nodes and verify it

# Run the following on each of the three nodes; when prompted below, type yes and then the root password of the target VM
ssh-keygen -t rsa   # press Enter at every prompt to accept the defaults
ssh-copy-id ct 
ssh-copy-id c1 
ssh-copy-id c2

To be safe, take a snapshot before the next verification, then reboot the virtual machines to verify the configuration above (the following verification should be carried out on every node; the control node is used as the example here):

[root@ct ~]# ls
anaconda-ks.cfg  openstack_rocky.tar.gz
[root@ct ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@ct ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:NetworkManager(8)
[root@ct ~]# setenforce ?
setenforce: SELinux is disabled
# Confirm once more that passwordless SSH works
[root@ct ~]# ssh c1
Last login: Sun Mar  8 13:11:32 2020 from c2
[root@c1 ~]# exit
logout
Connection to c1 closed.
[root@ct ~]# ssh c2
Last login: Sun Mar  8 13:14:18 2020 from gateway
[root@c2 ~]# 

6. Configure time synchronization

This step is critical, especially in a production environment: imagine if the time on the servers could not be synchronized; many services and operations could not proceed, and it could even lead to a major accident.

In this experimental environment the Alibaba Cloud time server is used as the upstream clock as an example: the control node synchronizes with Alibaba Cloud, and the two compute nodes synchronize with the control node through the ntpd service.

Control node configuration:

[root@ct ~]# yum -y install ntpdate 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ntpdate.x86_64 0:4.2.6p5-28.el7.centos will be installed
--> Finished Dependency Resolution

... (part of the output omitted)
Installed:
  ntpdate.x86_64 0:4.2.6p5-28.el7.centos                                                                                  

Complete!
# Synchronize with the Alibaba Cloud time server
[root@ct ~]# ntpdate ntp.aliyun.com
 8 Mar 05:20:32 ntpdate[9596]: adjust time server 203.107.6.88 offset 0.017557 sec
[root@ct ~]# date
Sun Mar  8 05:20:40 EDT 2020
[root@ct ~]# yum -y install ntp
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ntp.x86_64 0:4.2.6p5-28.el7.centos will be installed
--> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-28.el7.centos.x86_64
--> Running transaction check
---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                        Arch                  Version                                 Repository             Size
==========================================================================================================================
Installing:
 ntp                            x86_64                4.2.6p5-28.el7.centos                   centos                549 k
Installing for dependencies:
 autogen-libopts                x86_64                5.18-5.el7                              centos                 66 k

Transaction Summary
==========================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 615 k
Installed size: 1.5 M
Downloading packages:
--------------------------------------------------------------------------------------------------------------------------
Total                                                                                     121 MB/s | 615 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : autogen-libopts-5.18-5.el7.x86_64                                                                      1/2 
  Installing : ntp-4.2.6p5-28.el7.centos.x86_64                                                                       2/2 
  Verifying  : autogen-libopts-5.18-5.el7.x86_64                                                                      1/2 
  Verifying  : ntp-4.2.6p5-28.el7.centos.x86_64                                                                       2/2 

Installed:
  ntp.x86_64 0:4.2.6p5-28.el7.centos                                                                                      

Dependency Installed:
  autogen-libopts.x86_64 0:5.18-5.el7                                                                               
Complete!

Modify the NTP master configuration file /etc/ntp.conf:

(Screenshot: edits to /etc/ntp.conf on the control node)
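
The screenshot of the edit has not survived; the following is a minimal sketch of what such an edit typically looks like on the control node (an assumption, not necessarily the author's exact configuration): allow the host-only segment to query this server and point the upstream at the Alibaba Cloud time server.

# /etc/ntp.conf on the control node (sketch)
# allow the 192.168.100.0/24 segment to query this NTP server
restrict 192.168.100.0 mask 255.255.255.0 nomodify notrap
# replace the default CentOS pool servers with the Alibaba Cloud time server
server ntp.aliyun.com iburst
# fall back to the local clock if the upstream is unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10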

After saving the file, disable the chronyd.service and then restart and enable ntpd:

[root@ct ~]# systemctl disable chronyd.service
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.
[root@ct ~]# systemctl restart ntpd
[root@ct ~]# systemctl enable ntpd
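
Optionally, confirm that ntpd is running and has selected an upstream peer (the peer list shown will vary with your environment):

[root@ct ~]# ntpq -p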

Configuration on the two compute nodes:

[root@c1 ~]# yum -y install ntpdate 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package ntpdate.x86_64 0:4.2.6p5-28.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================
 Package                  Arch                    Version                                   Repository               Size
==========================================================================================================================
Installing:
 ntpdate                  x86_64                  4.2.6p5-28.el7.centos                     centos                   86 k

Transaction Summary
==========================================================================================================================
Install  1 Package

Total download size: 86 k
Installed size: 121 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ntpdate-4.2.6p5-28.el7.centos.x86_64                                                                   1/1 
  Verifying  : ntpdate-4.2.6p5-28.el7.centos.x86_64                                                                   1/1 

Installed:
  ntpdate.x86_64 0:4.2.6p5-28.el7.centos                                                                                  

Complete!
[root@c1 ~]# ntpdate ct
 8 Mar 05:36:26 ntpdate[9562]: step time server 192.168.100.20 offset -28798.160949 sec
[root@c1 ~]# crontab -e 
# Write a periodic scheduled task, then save and exit, for example: */30 * * * * /usr/sbin/ntpdate ct >> /var/log/ntpdate.log
no crontab for root - using an empty one
crontab: installing new crontab
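
To confirm the job was installed (do the same on both compute nodes):

[root@c1 ~]# crontab -l
*/30 * * * * /usr/sbin/ntpdate ct >> /var/log/ntpdate.log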

4.3 One-click OpenStack deployment

Operations on the control node:

# Install the openstack-packstack tool, used to generate the OpenStack answer file (plain-text format)
[root@ct ~]# yum install -y openstack-packstack
[root@ct ~]# packstack --gen-answer-file=openstack.txt
[root@ct ~]# ls
anaconda-ks.cfg  openstack_rocky.tar.gz  openstack.txt

The focus is on how to modify the answer file. The details are not described here; the next article describes the specific configuration parameters of the answer file.

The lines whose contents need to change are given below; modify them carefully:

Line 41: y -> n
Line 50: y -> n
Line 97: 192.168.100.21,192.168.100.22
Line 557: 20G
Line 817: physnet1
Line 862: physnet1:br-ex
Line 873: br-ex:eth1
Line 1185: y -> n
# Some subnets also need modifying, as do the passwords; sed regular expressions are used here to change them globally
[root@ct ~]# sed -i -r 's/(.+_PW)=.+/\1=sf144069/' openstack.txt

[root@ct ~]# sed -i -r 's/20.0.0.20/192.168.100.20/g' openstack.txt
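
The author identifies the edits only by line number. For reference, in a freshly generated Rocky answer file those lines most likely correspond to the following keys; the key names below are the standard packstack parameters for those values, but verify them against your own openstack.txt before editing, since line numbers can shift between versions:

# likely mapping of the edits above (verify in your own file)
CONFIG_COMPUTE_HOSTS=192.168.100.21,192.168.100.22   # line 97: the two compute nodes
CONFIG_CINDER_VOLUMES_SIZE=20G                       # line 557: size of the Cinder volume group
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=physnet1            # line 817: flat provider network name
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex    # line 862: map physnet1 to the br-ex bridge
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth1          # line 873: attach br-ex to the eth1 NIC
# lines 41, 50 and 1185 are y -> n switches that disable optional components
# and the demo provisioning (e.g. CONFIG_SWIFT_INSTALL, CONFIG_PROVISION_DEMO);
# check the keys at those lines in your generated file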

Run the one-click deployment command:

[root@ct ~]# packstack --answer-file=openstack.txt
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20200308-055746-HD3Zl3/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries                        [ DONE ]
Setting up CACERT                                    [ DONE ]
Preparing AMQP entries                               [ DONE ]
Preparing MariaDB entries                            [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Preparing Keystone entries                           [ DONE ]
... (part of the output omitted)

In a terminal on each node (open another terminal connection to each node in Xshell), use the following command to watch the log messages during deployment:

tail -f /var/log/messages
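
You can also follow packstack's own log, whose path is printed at the start of the run (the timestamped directory will differ for each run):

tail -f /var/tmp/packstack/20200308-055746-HD3Zl3/openstack-setup.log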

When what the figure shows appears, there is no problem so far; the next step is to wait patiently.

(Screenshot: deployment log output in /var/log/messages)

The appearance of the output in the figure below indicates a successful deployment:

(Screenshot: packstack success message)

We can use a browser (Chrome) to log in to the dashboard as a final test; refer to the following article:

OpenStack introduction, theory (B): OpenStack node types and architecture (including a sample dashboard login screen)
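
As a pointer not shown in the original: a minimal login sketch, assuming default packstack behavior, where packstack writes the admin credentials to /root/keystonerc_admin on the control node after a successful run:

# credentials generated by packstack on the control node
[root@ct ~]# cat /root/keystonerc_admin
# then browse to the dashboard on the control node's host-only address:
#   http://192.168.100.20/dashboard
# and log in with the OS_USERNAME / OS_PASSWORD values from the file above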

Fifth, summary and deployment troubleshooting ideas

The author also encountered some unexpected errors and problems during deployment, but basically solved them all; most were minor issues such as mistyped commands or wrong edits. Still, here are some suggestions and troubleshooting ideas for everyone:

First, for a larger pilot project like this, clarify your ideas and the deployment order before starting;

Second, since this is an experimental process, back up habitually (with virtual machines we can take snapshots). When a problem occurs later that we cannot solve, we can roll back to a snapshot taken at a successfully deployed stage; this snapshot mechanism saves us a lot of time;

Next, troubleshooting itself: first understand the meaning of the ERROR; if the problem is obvious at a glance, fix it directly. If there is no idea for a solution, check whether the environment is at fault: are the required services running, and are the services that should be stopped actually shut down? If the environment is fine, check whether your configuration files are correct, for example whether the relevant parameters were modified properly. If it still cannot be resolved, look for information: search Baidu, or read the official documentation, to see whether previous engineers ran into similar problems and how they solved them.

Specific troubleshooting still takes time and accumulated experience. The main purpose of this article is to demonstrate the whole process of deploying a multi-node OpenStack Rocky platform locally, making it easy for beginners to experiment. The theoretical articles will continue to be updated; the author hopes readers will keep following. Thank you!


Origin blog.51cto.com/14557673/2476431