User Manual for the Cloud Computing Infrastructure-as-a-Service (IaaS) Platform

Copyright notice: @抛物线 https://blog.csdn.net/qq_28513801/article/details/83512134


Version: XianDian IaaS V2.2
Release date: November 1, 2017

Nanjing No. 55 Institute Technology Development Co., Ltd.

Revision history

Revision	Date	Notes
Xiandian-iaas-v2.0	2016-10-28	Cloud computing IaaS platform user manual 2.0
Xiandian-iaas-v2.1	2017-04-20	Fixed known errors from the previous version; revised and tuned several configuration files; updated the database connections; added the Trove component; added a system uninstall script
Xiandian-iaas-v2.2	2017-11-01	Added the LBaaS and FWaaS components; added an nginx template example

Table of Contents

1 Basic Environment Configuration
1.1 Installing CentOS 7
1.2 Configuring the Network and Hostnames
1.3 Configuring the Yum Repositories
1.4 Editing the Environment Variables
1.5 Installing Services via Script
1.6 Installing the OpenStack Packages
1.7 Configuring Name Resolution
1.8 Configuring the Firewall and SELinux
1.9 Installing the NTP Service
1.10 Installing Services via Script
1.11 Installing the MySQL Database Service
1.12 Installing the MongoDB Database Service
1.13 Installing the RabbitMQ Service
1.14 Installing Memcached
2 Installing the Keystone Identity Service
2.1 Installing Keystone via Script
2.2 Installing the Keystone Packages
2.3 Creating the Keystone Database
2.4 Configuring the Database Connection
2.5 Populating the Keystone Database Tables
2.6 Creating the Admin Token
2.7 Creating Signing Keys and Certificates
2.8 Defining Users, Tenants, and Roles
2.9 Creating admin-openrc.sh
3 Installing the Glance Image Service
3.1 Installing Glance via Script
3.2 Installing the Glance Packages
3.3 Creating the Glance Database
3.4 Configuring the Database Connection
3.5 Populating the Image Service Database Tables
3.6 Creating the User
3.7 Configuring the Image Service
3.8 Creating the Service Entry and API Endpoints
3.9 Starting the Services
3.10 Uploading an Image
4 Installing the Nova Compute Service
4.1 Installing Nova via Script
4.2 Installing the Nova Packages
4.3 Creating the Nova Databases
4.4 Populating the Compute Service Tables
4.5 Creating the User
4.6 Configuring the Compute Service
4.7 Creating the Service Entry and API Endpoints
4.8 Starting the Services
4.9 Verifying Nova
4.10 Installing the Nova Compute Packages
4.11 Configuring the Nova Service
4.12 Checking for Hardware Virtualization Support
4.13 Starting the Services
4.14 Flushing the Firewall
5 Installing the Neutron Networking Service
5.1 Installing Neutron via Script
5.2 Creating Neutron Networks via Script
5.3 Creating the Neutron Database
5.4 Creating the User
5.5 Creating the Service Entry and API Endpoints
5.6 Installing the Neutron Packages
5.7 Configuring the Neutron Service
5.8 Editing Kernel Parameters
5.9 Populating the Database
5.10 Starting Services and Creating the Bridge
5.11 Installing the Packages
5.12 Configuring the Neutron Service
5.13 Editing Kernel Parameters
5.14 Starting Services and Creating the Bridge
5.15 Choosing a Neutron Network Mode
5.15.1 Flat
5.15.2 GRE
5.15.3 VLAN
5.16 Advanced Networking
5.16.1 Load Balancing (LBaaS)
5.16.2 Firewall (FWaaS)
6 Installing the Dashboard Service
6.1 Installing the Dashboard via Script
6.2 Installing the Dashboard Packages
6.3 Configuration
6.4 Starting the Service
6.5 Access
6.6 Creating an Instance (GRE/VLAN)
7 Installing the Cinder Block Storage Service
7.1 Installing Cinder via Script
7.2 Installing the Cinder Packages
7.3 Creating the Database
7.4 Creating the User
7.5 Creating the Service Entry and API Endpoints
7.6 Configuring the Cinder Service
7.7 Populating the Database
7.8 Starting the Services
7.9 Installing the Block Storage Packages
7.10 Creating the LVM Physical and Logical Volumes
7.11 Modifying the Cinder Configuration File
7.12 Restarting the Services
7.13 Verification
8 Installing the Swift Object Storage Service
8.1 Installing Swift via Script
8.2 Creating the User
8.3 Creating the Service Entry and API Endpoints
8.4 Editing /etc/swift/proxy-server.conf
8.5 Creating the Account, Container, and Object Rings
8.6 Editing /etc/swift/swift.conf
8.7 Starting the Services and Setting Permissions
8.8 Installing the Packages
8.9 Configuring rsync
8.10 Configuring the Account, Container, and Object Services
8.11 Modifying the Swift Configuration File
8.12 Restarting the Services and Setting Permissions
9 Installing the Trove Service
9.1 Installing via Script
9.2 Installing the Trove Database Service Packages
9.3 Creating the Database
9.4 Creating the User
9.5 Creating the Service Entry and API Endpoints
9.6 Configuring trove.conf
9.7 Configuring trove-taskmanager.conf
9.8 Configuring trove-conductor.conf
9.9 Configuring trove-guestagent.conf
9.10 Synchronizing the Database
9.11 Starting the Services
9.12 Uploading an Image
9.13 Creating the Datastore
9.14 Updating the Datastore with the Uploaded Image
10 Installing the Heat Orchestration Service
10.1 Installing Heat via Script
10.2 Installing the Heat Packages
10.3 Creating the Database
10.4 Creating the User
10.5 Creating the Service Entry and API Endpoints
10.6 Configuring the Heat Service
10.7 Populating the Database
10.8 Starting the Services
10.9 The nginx Template
11 Installing the Ceilometer Telemetry Service
11.1 Installing Ceilometer via Script
11.2 Installing the Ceilometer Packages
11.3 Creating the Database
11.4 Creating the User
11.5 Creating the Service Entry and API Endpoints
11.6 Configuring Ceilometer
11.7 Starting the Services
11.8 Telemetry Agents
11.9 Installing the Packages
11.10 Configuring Ceilometer
12 Installing the Aodh Alarm Service
12.1 Installing the Alarm Service via Script
12.2 Creating the Database
12.3 Creating the Keystone User
12.4 Creating the Service Entry and API Endpoints
12.5 Installing the Packages
12.6 Configuring Aodh
12.7 Synchronizing the Database
12.8 Starting the Services
13 Adding Controller Node Resources to the Cloud Platform
13.1 Modifying openrc.sh
13.2 Running iaas-install-nova-compute.sh
14 System Uninstallation
15 XianDian-IaaS 2.2 Upgrade Notes

1 Basic Environment Configuration
The topology and IP address plan of the cloud platform are shown in Figure 1.

Figure 1. Cloud platform topology

This deployment uses two nodes: a controller node and a compute node. enp8s0 is the external network and enp9s0 is the internal management network. When installing the operating system on the storage node, reserve two blank partitions (sda and sdb in this example) to serve as the Cinder and Swift storage disks, and set up an FTP server to act as the yum repository for building the platform. Passwords in the configuration files must be set according to your actual environment.
1.1 Installing CentOS 7
【CentOS 7 version】
Use the 1511 release of CentOS 7: CentOS-7-x86_64-DVD-1511.iso
【Creating the blank partitions】
Installing CentOS 7 differs noticeably from installing CentOS 6.5: the CentOS 7 installer requires a mount point for every partition it creates, which makes it impossible to leave two blank partitions for the Cinder and Swift storage disks during installation.
Instead, leave enough unallocated space during installation; once the system is installed, create the new partitions with parted and format them with mkfs.xfs. For example:
[root@compute ~]# parted /dev/md126
(parted) mkpart swift 702G 803G    # create the swift partition spanning 702 GB to 803 GB
[root@compute ~]# mkfs.xfs /dev/md126p5
1.2 Configuring the Network and Hostnames
Edit or create the /etc/sysconfig/network-scripts/ifcfg-enp* file for each interface.
(1) controller node
Network configuration:
enp8s0: 192.168.100.10
DEVICE=enp8s0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.100.10
PREFIX=24
GATEWAY=192.168.100.1

enp9s0: 192.168.200.10
DEVICE=enp9s0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.200.10
PREFIX=24
Set the hostname:

hostnamectl set-hostname controller

Press Ctrl+D to log out, then log back in.
(2) compute node
Network configuration:
enp8s0: 192.168.100.20
DEVICE=enp8s0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.100.20
PREFIX=24
GATEWAY=192.168.100.1

enp9s0: 192.168.200.20
DEVICE=enp9s0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.200.20
PREFIX=24

Set the hostname:

hostnamectl set-hostname compute

Press Ctrl+D to log out, then log back in.
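The ifcfg files above differ only in interface name, address, and gateway, so they can be generated from parameters instead of typed by hand. A minimal sketch, assuming nothing beyond what the manual shows; the helper function write_ifcfg is hypothetical, and it writes to the current directory rather than /etc/sysconfig/network-scripts so it can be tried safely:

```shell
#!/bin/sh
# Hypothetical helper: render an ifcfg file for a statically addressed
# interface. The gateway argument is optional (the management network
# in this manual has no GATEWAY line).
write_ifcfg() {
    dev="$1"; ip="$2"; gw="$3"
    out="ifcfg-${dev}"    # on a real node: /etc/sysconfig/network-scripts/ifcfg-${dev}
    {
        echo "DEVICE=${dev}"
        echo "TYPE=Ethernet"
        echo "ONBOOT=yes"
        echo "NM_CONTROLLED=no"
        echo "BOOTPROTO=static"
        echo "IPADDR=${ip}"
        echo "PREFIX=24"
        if [ -n "$gw" ]; then
            echo "GATEWAY=${gw}"
        fi
    } > "$out"
}

# The compute node's two interfaces from section 1.2:
write_ifcfg enp8s0 192.168.100.20 192.168.100.1
write_ifcfg enp9s0 192.168.200.20 ""
```

The same two calls with the .10 addresses reproduce the controller's files.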

1.3 Configuring the Yum Repositories
#Controller and compute nodes
(1) Back up the default repo files
#mv /etc/yum.repos.d/* /opt/
(2) Create the repo files
【controller】
Create a centos.repo file in /etc/yum.repos.d:
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=file:///opt/iaas-repo
gpgcheck=0
enabled=1

【compute】
Create a centos.repo file in /etc/yum.repos.d:
[centos]
name=centos
baseurl=ftp://192.168.100.10/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=ftp://192.168.100.10/iaas-repo
gpgcheck=0
enabled=1

(3) Mount the ISO files
【Mount CentOS-7-x86_64-DVD-1511.iso】
[root@controller ~]# mount -o loop CentOS-7-x86_64-DVD-1511.iso /mnt/
[root@controller ~]# mkdir /opt/centos
[root@controller ~]# cp -rvf /mnt/* /opt/centos/
[root@controller ~]# umount /mnt/

【Mount XianDian-IaaS-v2.2.iso】
[root@controller ~]# mount -o loop XianDian-IaaS-v2.2.iso /mnt/
[root@controller ~]# cp -rvf /mnt/* /opt/
[root@controller ~]# umount /mnt/

(4) Set up the FTP server and enable it at boot
[root@controller ~]# yum install vsftpd -y
[root@controller ~]# vi /etc/vsftpd/vsftpd.conf
Add the line anon_root=/opt/
Save and exit

[root@controller ~]# systemctl start vsftpd
[root@controller ~]# systemctl enable vsftpd

(5) Stop the firewall and disable it at boot
【controller/compute】
systemctl stop firewalld
systemctl disable firewalld

(6) Clean the cache and verify the repositories
【controller/compute】

yum clean all

yum list
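The compute node's repo file differs from the controller's only in the baseurl host, so it can be generated rather than typed. A minimal sketch, assuming the controller's FTP address 192.168.100.10 from section 1.2; it writes to the current directory, while on a real node the target is /etc/yum.repos.d/centos.repo:

```shell
#!/bin/sh
# Sketch: generate the compute node's repo file from the controller's
# FTP address, matching the two sections shown in section 1.3.
FTP_SERVER=192.168.100.10
REPO=centos.repo    # on a real node: /etc/yum.repos.d/centos.repo
cat > "$REPO" <<EOF
[centos]
name=centos
baseurl=ftp://${FTP_SERVER}/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=ftp://${FTP_SERVER}/iaas-repo
gpgcheck=0
enabled=1
EOF
```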

1.4 Editing the Environment Variables

controller and compute nodes

yum install iaas-xiandian -y

Edit /etc/xiandian/openrc.sh. This file holds every parameter used during installation; set each item according to the comment above it and the actual values in your environment.
HOST_IP=192.168.100.10
HOST_NAME=controller
HOST_IP_NODE=192.168.100.20
HOST_NAME_NODE=compute
RABBIT_USER=openstack
RABBIT_PASS=000000
DB_PASS=000000
DOMAIN_NAME=demo (user-defined)
ADMIN_PASS=000000
DEMO_PASS=000000
KEYSTONE_DBPASS=000000
GLANCE_DBPASS=000000
GLANCE_PASS=000000
NOVA_DBPASS=000000
NOVA_PASS=000000
NEUTRON_DBPASS=000000
NEUTRON_PASS=000000
METADATA_SECRET=000000
INTERFACE_NAME=enp9s0 (external network interface name)
CINDER_DBPASS=000000
CINDER_PASS=000000
TROVE_DBPASS=000000
TROVE_PASS=000000
BLOCK_DISK=md126p4 (blank partition name)
SWIFT_PASS=000000
OBJECT_DISK=md126p5 (blank partition name)
STORAGE_LOCAL_NET_IP=192.168.100.20
HEAT_DBPASS=000000
HEAT_PASS=000000
CEILOMETER_DBPASS=000000
CEILOMETER_PASS=000000
AODH_DBPASS=000000
AODH_PASS=000000
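Every install script sources this file, so one empty value surfaces much later as a confusing failure. A small hedged check, not part of the XianDian scripts: the openrc.sh written here is a local stand-in for /etc/xiandian/openrc.sh, and the variable list is abbreviated for the demonstration:

```shell
#!/bin/sh
# Sketch: verify that every required variable has a non-empty value in
# openrc.sh before running the install scripts.
cat > openrc.sh <<'EOF'
HOST_IP=192.168.100.10
HOST_NAME=controller
RABBIT_USER=openstack
RABBIT_PASS=000000
DB_PASS=000000
ADMIN_PASS=000000
EOF

missing=0
for var in HOST_IP HOST_NAME RABBIT_USER RABBIT_PASS DB_PASS ADMIN_PASS; do
    # Pull the value part of "VAR=value"; empty means unset.
    val=$(sed -n "s/^${var}=//p" openrc.sh)
    if [ -z "$val" ]; then
        echo "unset: $var"
        missing=$((missing + 1))
    fi
done
echo "missing=$missing"
```

On a real node, extend the variable list to all of the parameters listed above.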

1.5 Installing Services via Script
The basic configuration steps in sections 1.6-1.9 are packaged into a shell script for one-step installation, as follows:

Controller and compute nodes

Run the script iaas-pre-host.sh to install.

After the installation completes, reboot both nodes at the same time:

[root@controller ~]# reboot

1.6 Installing the OpenStack Packages

controller and compute nodes

yum -y install openstack-utils openstack-selinux python-openstackclient

yum upgrade

1.7 Configuring Name Resolution
Add the following entries to /etc/hosts
(1) controller node
20.0.0.10 controller
20.0.0.20 compute
(2) compute node
20.0.0.10 controller
20.0.0.20 compute
1.8 Configuring the Firewall and SELinux
Edit the SELinux configuration file

vi /etc/selinux/config

SELINUX=permissive
Stop the firewall and disable it at boot

systemctl stop firewalld.service

systemctl disable firewalld.service

yum remove -y NetworkManager firewalld

yum -y install iptables-services

systemctl enable iptables

systemctl restart iptables

iptables -F

iptables -X

iptables -Z

service iptables save

1.9 Installing the NTP Service
(1) controller and compute nodes

yum -y install ntp

(2) Configure the controller node
Edit /etc/ntp.conf
Add the following lines (and delete the default server entries)
server 127.127.1.0
fudge 127.127.1.0 stratum 10
Start the NTP service

service ntpd start

chkconfig ntpd on

(3) Configure the compute node

ntpdate controller

chkconfig ntpdate on

1.10 Installing Services via Script
The basic-service steps in sections 1.11-1.14 are packaged into a shell script for one-step installation, as follows:

Controller node

Run the script iaas-install-mysql.sh to install.

1.11 Installing the MySQL Database Service

yum install mysql mysql-server MySQL-python

In /etc/my.cnf, add the following under the [mysqld] section:
max_connections=10000
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
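The same options as a paste-ready fragment. Note the straight ASCII quotes around 'SET NAMES utf8': curly quotes copied from a formatted document will break MySQL's option parsing. The sketch writes a local my.cnf so it can be tried safely; on a real node, merge these lines into /etc/my.cnf instead:

```shell
#!/bin/sh
# Sketch: write the [mysqld] tuning fragment from section 1.11.
cat > my.cnf <<'EOF'
[mysqld]
max_connections=10000
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
EOF
```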
Start the service
#systemctl enable mariadb.service
#systemctl start mariadb.service
Configure MySQL
#mysql_secure_installation
Edit /usr/lib/systemd/system/mariadb.service
[Service]
Add the following two lines:
LimitNOFILE=10000
LimitNPROC=10000
Reload the systemd units and restart the mariadb service

systemctl daemon-reload

service mariadb restart

Press Enter to confirm, then set the database root password
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] n
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y
(2) compute node
#yum -y install MySQL-python
1.12 Installing the MongoDB Database Service
#yum install -y mongodb-server mongodb
Edit /etc/mongod.conf:
Delete the bind_ip line
Set smallfiles = true
#systemctl enable mongod.service
#systemctl start mongod.service
1.13 Installing the RabbitMQ Service

yum install -y rabbitmq-server

systemctl enable rabbitmq-server.service
systemctl restart rabbitmq-server.service
rabbitmqctl add_user openstack 000000
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
1.14 Installing Memcached
#yum install memcached python-memcached
systemctl enable memcached.service
systemctl restart memcached.service
2 Installing the Keystone Identity Service
#Controller
2.1 Installing Keystone via Script
The identity-service steps in sections 2.2-2.9 are packaged into a shell script for one-step installation, as follows:

Controller node

Run the script iaas-install-keystone.sh to install.

2.2 Installing the Keystone Packages
yum install -y openstack-keystone httpd mod_wsgi
2.3 Creating the Keystone Database

mysql -u root -p    (use the database password set during the MySQL installation)
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> exit

2.4 Configuring the Database Connection
#openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
2.5 Populating the Keystone Database Tables
#su -s /bin/sh -c "keystone-manage db_sync" keystone
2.6 Creating the Admin Token
#ADMIN_TOKEN=$(openssl rand -hex 10)
#openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
#openstack-config --set /etc/keystone/keystone.conf token provider fernet
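As a sketch of the token step above: `openssl rand -hex 10` emits 10 random bytes encoded as 20 hexadecimal characters, and the value only serves to bootstrap Keystone before real accounts exist:

```shell
#!/bin/sh
# Generate the bootstrap token exactly as in section 2.6 and show its
# shape: 20 lowercase hex characters.
ADMIN_TOKEN=$(openssl rand -hex 10)
echo "$ADMIN_TOKEN"
echo "length=${#ADMIN_TOKEN}"    # prints length=20
```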
2.7 Creating Signing Keys and Certificates
#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
In /etc/httpd/conf/httpd.conf, replace ServerName www.example.com:80 with ServerName controller.
Create /etc/httpd/conf.d/wsgi-keystone.conf with the following content:
Listen 5000
Listen 35357

<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
    Require all granted
</Directory>
</VirtualHost>

<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
    Require all granted
</Directory>
</VirtualHost>
#systemctl enable httpd.service
#systemctl start httpd.service
2.8 Defining Users, Tenants, and Roles
(1) Set the environment variables
export OS_TOKEN=$ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
(2) Create the Keystone service, endpoints, domain, projects, users, and roles
openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --region RegionOne identity public http://controller:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password 000000 admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password 000000 demo
openstack role create user
openstack role add --project demo --user demo user
(3) Unset the environment variables
#unset OS_TOKEN OS_URL
2.9 Creating admin-openrc.sh
Create the admin environment file admin-openrc.sh:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Load the environment variables:
#source admin-openrc.sh
3 Installing the Glance Image Service
#Controller
3.1 Installing Glance via Script
The image-service steps in sections 3.2-3.9 are packaged into a shell script for one-step installation, as follows:
#Controller node
Run the script iaas-install-glance.sh to install.

3.2 Installing the Glance Packages

yum install -y openstack-glance

3.3 Creating the Glance Database
#mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
3.4 Configuring the Database Connection

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

#openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
3.5 Populating the Image Service Database Tables

su -s /bin/sh -c "glance-manage db_sync" glance

3.6 Creating the User
openstack user create --domain default --password 000000 glance
openstack role add --project service --user glance admin
3.7 Configuring the Image Service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password 000000
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf paste_deploy config_file /usr/share/glance/glance-api-dist-paste.ini
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password 000000
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf paste_deploy config_file /usr/share/glance/glance-registry-dist-paste.ini
3.8 Creating the Service Entry and API Endpoints
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
3.9 Starting the Services
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl restart openstack-glance-api.service openstack-glance-registry.service
3.10 Uploading an Image
First download the provided system image to the local node; this example uploads a CentOS image.
Install wget if needed and download the image from the FTP server.

source /etc/keystone/admin-openrc.sh

glance image-create --name "CentOS7.0" --disk-format qcow2 --container-format bare --progress < /opt/iaas/images/centos_7-x86_64_xiandian.qcow2
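Before uploading, it can be worth checking that the file really is a qcow2 image: qcow2 files begin with the 4-byte magic "QFI" followed by byte 0xfb. The sketch below fabricates a file with that header purely to demonstrate the check; on a real node, point IMG at the downloaded image instead:

```shell
#!/bin/sh
# Sketch: sanity-check the qcow2 magic before `glance image-create`.
IMG=sample.qcow2
printf 'QFI\373' > "$IMG"    # fabricate a header for the demonstration

# Compare the first three printable magic bytes.
magic=$(head -c 3 "$IMG")
if [ "$magic" = "QFI" ]; then
    echo "looks like qcow2"
else
    echo "not a qcow2 image"
fi
```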

4 Installing the Nova Compute Service
#Controller
4.1 Installing Nova via Script
The compute-service steps in sections 4.2-4.14 are packaged into shell scripts for one-step installation, as follows:
#Controller node
Run the script iaas-install-nova-controller.sh to install.
#Compute node
Run the script iaas-install-nova-compute.sh to install.

4.2 Installing the Nova Packages

yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler

4.3 Creating the Nova Databases

mysql -u root -p

mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
mysql> CREATE DATABASE IF NOT EXISTS nova_api;
mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Configure the database connections:
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
4.4 Populating the Compute Service Tables
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage api_db sync" nova
4.5 Creating the User
openstack user create --domain default --password 000000 nova
openstack role add --project service --user nova admin
4.6 Configuring the Compute Service
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 20.0.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 8775
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password 000000
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 20.0.0.10
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 20.0.0.10
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
4.7 Creating the Service Entry and API Endpoints
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%(tenant_id)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%(tenant_id)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%(tenant_id)s
4.8 Starting the Services
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
4.9 Verifying Nova
#nova image-list

#Compute
4.10 Installing the Nova Compute Packages
yum install lvm2 -y
yum install openstack-nova-compute -y
4.11 Configuring the Nova Service
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password 000000
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 20.0.0.20
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 20.0.0.20
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://20.0.0.10:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf libvirt inject_key True
4.12 Checking for Hardware Virtualization Support
Run the command
#egrep -c '(vmx|svm)' /proc/cpuinfo
(1) If it returns 1 or more, the system supports hardware acceleration and usually needs no extra configuration.
(2) If it returns 0, the system does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM:

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
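The decision in section 4.12 can be sketched as a small script that inspects /proc/cpuinfo and picks the virt_type value to feed to the openstack-config command above:

```shell
#!/bin/sh
# Sketch: count vmx/svm flags and choose the libvirt virt_type
# accordingly (kvm with hardware acceleration, qemu without).
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
if [ "$count" -ge 1 ]; then
    virt_type=kvm
else
    virt_type=qemu
fi
echo "virt_type=$virt_type"
# On a real compute node, follow up with:
#   openstack-config --set /etc/nova/nova.conf libvirt virt_type $virt_type
```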

4.13 Starting the Services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
4.14 Flushing the Firewall
controller and compute nodes
iptables -F
iptables -X
iptables -Z
/usr/libexec/iptables/iptables.init save
5 Installing the Neutron Networking Service
#Controller node
5.1 Installing Neutron via Script
The networking steps in sections 5.3-5.14 are packaged into shell scripts for one-step installation, as follows:
#Controller node
Run the script iaas-install-neutron-controller.sh to install.
#Compute node
Run the script iaas-install-neutron-compute.sh to install.

5.2 Creating Neutron Networks via Script
The network-mode steps in section 5.15 are packaged into shell scripts for one-step installation, as follows:

Create a flat network
#Controller node
Run the script iaas-install-neutron-controller-flat.sh to install.
#Compute node
Run the script iaas-install-neutron-compute-flat.sh to install.

Create a GRE network
#Controller node
Run the script iaas-install-neutron-controller-gre.sh to install.
#Compute node
Run the script iaas-install-neutron-compute-gre.sh to install.

Create a VLAN network
#Controller node
Run the script iaas-install-neutron-controller-vlan.sh to install.
#Compute node
Run the script iaas-install-neutron-compute-vlan.sh to install.

5.3 Creating the Neutron Database
#mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
5.4 Creating the User
openstack user create --domain default --password 000000 neutron
openstack role add --project service --user neutron admin
5.5 Creating the Service Entry and API Endpoints
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
5.6 Installing the Neutron Packages

yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables openstack-neutron-openvswitch openstack-neutron-lbaas python-neutron-lbaas haproxy openstack-neutron-fwaas

5.7 Configuring the Neutron Service
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:000000@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password 000000
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password 000000
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,gre,vxlan,local
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 20.0.0.10
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret 000000
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_port 8775
openstack-config --set /etc/nova/nova.conf DEFAULT auto_assign_floating_ip True
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 8775
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters 'AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter'
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver libvirt.LibvirtDriver
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password 000000
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret 000000
5.8 Editing Kernel Parameters
Edit /etc/sysctl.conf and add:
net.ipv4.ip_forward=1
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
Apply the settings:
sysctl -p
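The same three settings as a paste-ready fragment. The sketch writes a local sysctl.conf so it can be tried safely; on a real node, append the lines to /etc/sysctl.conf and run `sysctl -p`:

```shell
#!/bin/sh
# Sketch: write the kernel parameters from section 5.8.
cat > sysctl.conf <<'EOF'
net.ipv4.ip_forward=1
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
EOF
```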
5.9 Populating the Database
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
5.10 Starting Services and Creating the Bridge
systemctl restart openvswitch
systemctl enable openvswitch
ovs-vsctl add-br br-int
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service neutron-openvswitch-agent neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service
#Compute node
5.11 Installing the Packages
yum install openstack-neutron-linuxbridge ebtables ipset openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y
5.12 Configuring the Neutron Service
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password 000000
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,gre,vxlan,local
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types <type>
(The tenant_network_types value is set in section 5.15 to flat, gre, or vlan according to the network mode you choose.)
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password 000000
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal True
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 300
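Each `openstack-config --set <file> <section> <key> <value>` call above simply writes `key = value` under `[section]` in the target INI file. As a sketch (not part of the manual's scripts, written to a temp file instead of the real nova.conf), the [neutron] section those calls assemble is equivalent to:

```shell
# Sketch: the nova.conf [neutron] section that the openstack-config
# calls above produce, written to a temporary file for illustration.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
EOF
grep -c ' = ' "$conf"   # 9 options written
```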
5.13 Edit kernel parameters
Edit the file /etc/sysctl.conf:
net.ipv4.ip_forward=1
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
Apply the configuration:
sysctl -p
5.14 Start services and create the bridge
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
systemctl restart openvswitch
systemctl enable openvswitch
ovs-vsctl add-br br-int
systemctl restart openstack-nova-compute.service
systemctl restart openstack-nova-compute neutron-metadata-agent
systemctl restart neutron-openvswitch-agent
systemctl enable neutron-openvswitch-agent neutron-metadata-agent
5.15 Select a Neutron network mode
Choose any one of the following modes to install.
5.15.1 Flat
#Controller node
# source /etc/xiandian/openrc.sh

source /etc/keystone/admin-openrc.sh

ovs-vsctl add-br br-ex

Modify /etc/sysconfig/network-scripts/ifcfg-enp9s0 as follows:
DEVICE=enp9s0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
After the changes, run the following commands:

ovs-vsctl add-port br-ex enp9s0

systemctl restart network

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks physnet1

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types flat

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings physnet1:br-ex

systemctl restart neutron-openvswitch-agent

Configure the LBaaS service
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router,lbaas
openstack-config --set /etc/neutron/neutron_lbaas.conf service_providers service_provider LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
openstack-config --set /etc/neutron/lbaas_agent.ini DEFAULT device_driver neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
openstack-config --set /etc/neutron/lbaas_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/lbaas_agent.ini haproxy user_group haproxy
Create the database tables:
neutron-db-manage --service lbaas upgrade head
Restart the services:
systemctl restart neutron-server neutron-lbaas-agent
systemctl enable neutron-server neutron-lbaas-agent
#Compute node

ovs-vsctl add-br br-ex

Modify /etc/sysconfig/network-scripts/ifcfg-enp9s0 as follows:
DEVICE=enp9s0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
After the changes, run the following commands:

ovs-vsctl add-port br-ex enp9s0

systemctl restart network

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks physnet1

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types flat

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings physnet1:br-ex

systemctl restart neutron-openvswitch-agent

Create the FLAT network

Controller node

tenantID=$(openstack project list | grep service | awk '{print $2}')
neutron net-create --tenant-id $tenantID sharednet1 --shared --provider:network_type flat --provider:physical_network physnet1
5.15.2 Gre
#Controller node
(The Gre-mode command listing, and part of the Vlan controller-node listing below, were lost to rendering errors in this copy; only the recoverable fragments are kept.)
5.15.3 Vlan
#Controller node
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges physnet1:$minvlan:$maxvlan (minimum VLAN ID:maximum VLAN ID)
systemctl restart neutron-l3-agent
#Compute node
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges physnet1:$minvlan:$maxvlan (minimum VLAN ID:maximum VLAN ID)
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge br-ex
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings physnet1:br-ex
ovs-vsctl add-br br-ex
Modify /etc/sysconfig/network-scripts/ifcfg-enp9s0 as follows:
DEVICE=enp9s0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
After the changes, run the following commands:
ovs-vsctl add-port br-ex enp9s0
systemctl restart network
systemctl restart neutron-openvswitch-agent
Configure the LBaaS service
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router,lbaas,firewall
openstack-config --set /etc/neutron/neutron_lbaas.conf service_providers service_provider LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
openstack-config --set /etc/neutron/lbaas_agent.ini DEFAULT device_driver neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
openstack-config --set /etc/neutron/lbaas_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/lbaas_agent.ini haproxy user_group haproxy
Configure the FWaaS service
openstack-config --set /etc/neutron/neutron.conf service_providers service_provider FIREWALL:Iptables:neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver:default
openstack-config --set /etc/neutron/fwaas_driver.ini fwaas driver neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
openstack-config --set /etc/neutron/fwaas_driver.ini fwaas enabled True
Create the database tables:
neutron-db-manage --service lbaas upgrade head
neutron-db-manage --subproject neutron-fwaas upgrade head
Restart the services:
systemctl restart neutron-server neutron-lbaas-agent
systemctl restart neutron-l3-agent
systemctl enable neutron-lbaas-agent
Create the Vlan network

Controller node

neutron net-create ext-net --router:external True --provider:physical_network physnet1 --provider:network_type flat
neutron net-create demo-net --tenant-id $(openstack project list |grep -w admin |awk '{print $2}') --provider:network_type vlan
5.16 Advanced network features
5.16.1 Load balancing operations
(1) Create a load-balancing pool.

(2) Add instance members to the pool (web UI): select the pool and the member instances, and set the protocol port.

(3) Add a VIP to the pool; the VIP subnet must be able to communicate with the member instances' subnet.

(4) Create a monitor and set up monitoring rules for the member instances.

Associate the monitor with the pool.

Select the monitoring rule.

(5) Bind a floating IP to the load balancer. (In the gre network mode a floating IP must be bound. The flat mode needs no floating IP; the pool VIP can be accessed directly. In vlan mode, if the VIP is an externally reachable address it can also be accessed directly; otherwise a floating IP must be bound.)

Allocate the floating IP address for access.

Access the load balancer IP address: http://172.30.11.7/

5.16.2 Firewall operations
The GRE and VLAN network modes can use the FWaaS firewall service; the Flat mode cannot use the firewall feature.

(1) Create a firewall rule

(2) Create a firewall policy

Add the firewall rule to the firewall policy.

(3) Create a firewall

Select the router the firewall applies to.

(4) web-2 is an nginx server; access http://172.30.11.4 through a browser

6 Install the Dashboard service
6.1 Install the dashboard service via script
The commands in sections 6.2-6.4 have been written into a shell script for one-step installation, as follows:
#Controller
Run the script iaas-install-dashboard.sh to install.

6.2 Install the Dashboard package

yum install openstack-dashboard -y

6.3 Configuration
Modify /etc/openstack-dashboard/local_settings as follows:
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
TEMPLATE_DEBUG = DEBUG
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*','localhost']
OPENSTACK_API_VERSIONS = {
"identity": 3,
"volume": 2,
"compute": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
LOCAL_PATH = '/tmp'
SECRET_KEY='e2cf67af77d15971b311'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
},
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_BACKEND = {
'name': 'native',
'can_edit_user': True,
'can_edit_group': True,
'can_edit_project': True,
'can_edit_domain': True,
'can_edit_role': True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': False,
'can_set_password': False,
'requires_keypair': False,
}
OPENSTACK_CINDER_FEATURES = {
'enable_backup': False,
}
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': True,
'enable_firewall': True,
'enable_vpn': True,
'enable_fip_topology_check': True,
'default_ipv4_subnet_pool_label': None,
'default_ipv6_subnet_pool_label': None,
'profile_support': None,
'supported_provider_types': ['*'],
'supported_vnic_types': ['*'],
}
OPENSTACK_HEAT_STACK = {
'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
"architecture": _("Architecture"),
"kernel_id": _("Kernel ID"),
"ramdisk_id": _("Ramdisk ID"),
"image_state": _("Euca2ools state"),
"project_id": _("Project ID"),
"image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "UTC"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {

}
SECURITY_GROUP_RULES = {

}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
'LAUNCH_INSTANCE_DEFAULTS']
6.4 Start services

systemctl restart httpd.service memcached.service

6.5 Access
Open a browser and access the Dashboard:
http://controller (or the host's internal IP)/dashboard
Note: check the firewall rules to make sure the ports used by the http service are open, or disable the firewall.

6.6 Create an instance (gre/vlan)
(1) Admin → Networks → Create network (internal and external) → Create subnet (for the external network, fill in the server's external network segment)

(2) Project → Network → Routers → Create router → Add the gateway and an internal interface

(3) Project → Compute → Access & Security → Manage Rules → Add rules (ICMP, TCP, UDP)

(4) Project → Compute → Instances → Launch instance → Associate a floating IP

7 Install the Cinder block storage service
7.1 Install the Cinder service via script
The commands in sections 7.2-7.13 have been written into a shell script for one-step installation, as follows:
#Controller
Run the script iaas-install-cinder-controller.sh to install.
#Compute node
Run the script iaas-install-cinder-compute.sh to install.

7.2 Install the Cinder package

yum install openstack-cinder

Modify the permission configuration file /etc/cinder/policy.json, setting the consistencygroup permissions to empty, as follows:
"consistencygroup:create" : "",
"consistencygroup:delete": "",
"consistencygroup:update": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",
7.3 Create the database

mysql -u root -p

mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';
7.4 Create the user
openstack user create --domain default --password 000000 cinder
openstack role add --project service --user cinder admin
7.5 Create the Endpoint and API endpoints
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%(tenant_id)s
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%(tenant_id)s
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%(tenant_id)s
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%(tenant_id)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%(tenant_id)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%(tenant_id)s
7.6 Configure the Cinder service
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:000000@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password 000000
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 20.0.0.10
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
7.7 Create the database tables

su -s /bin/sh -c "cinder-manage db sync" cinder

7.8 Start services

systemctl restart openstack-nova-api.service

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

7.9 Install the block storage packages
yum install lvm2 targetcli python-keystone openstack-cinder -y
7.10 Create the LVM physical volume and volume group
Using the disk /dev/sda as an example:

pvcreate /dev/sda

vgcreate cinder-volumes /dev/sda

7.11 Modify the Cinder configuration file
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:000000@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password 000000
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 20.0.0.20
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
7.12 Restart services
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
7.13 Verify
#Controller
Create a new volume with the cinder create command:

cinder create --display-name myVolume 1

Check that it was created correctly with the cinder list command:

cinder list

8 Install the Swift object storage service
#Controller node

source admin-openrc.sh

8.1 Install the Swift service via script
The commands in sections 8.2-8.12 have been written into a shell script for one-step installation, as follows:
#Controller
Run the script iaas-install-swift-controller.sh to install.
#Compute node
Run the script iaas-install-swift-compute.sh to install.
During execution you will need to confirm the login to the controller node and enter the controller node's root password.
8.2 Create the user
openstack user create --domain default --password 000000 swift
openstack role add --project service --user swift admin
8.3 Create the Endpoint and API endpoints
openstack service create --name swift --description "OpenStack Object Storage" object-store
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%(tenant_id)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%(tenant_id)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
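In these URLs, %(tenant_id)s is a template that the service substitutes with the requesting project's ID at run time. A quick sketch of that substitution (the project ID below is made up for illustration):

```shell
# Simulate the %(tenant_id)s substitution done by the object-store endpoint.
endpoint='http://controller:8080/v1/AUTH_%(tenant_id)s'
tenant_id='demoprojectid123'   # hypothetical project ID
url=$(printf '%s\n' "$endpoint" | sed "s/%(tenant_id)s/$tenant_id/")
echo "$url"   # http://controller:8080/v1/AUTH_demoprojectid123
```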
8.4 Edit /etc/swift/proxy-server.conf
Edit the configuration file as follows
[DEFAULT]
bind_port = 8080
swift_dir = /etc/swift
user = swift
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = swift
password = 000000
delay_auth_decision = True
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
memcache_servers = controller:11211
use = egg:swift#memcache
[filter:ratelimit]
use = egg:swift#ratelimit
[filter:domain_remap]
use = egg:swift#domain_remap
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:cname_lookup]
use = egg:swift#cname_lookup
[filter:staticweb]
use = egg:swift#staticweb
[filter:tempurl]
use = egg:swift#tempurl
[filter:formpost]
use = egg:swift#formpost
[filter:name_check]
use = egg:swift#name_check
[filter:list-endpoints]
use = egg:swift#list_endpoints
[filter:proxy-logging]
use = egg:swift#proxy_logging
[filter:bulk]
use = egg:swift#bulk
[filter:slo]
use = egg:swift#slo
[filter:dlo]
use = egg:swift#dlo
[filter:container-quotas]
use = egg:swift#container_quotas
[filter:account-quotas]
use = egg:swift#account_quotas
[filter:gatekeeper]
use = egg:swift#gatekeeper
[filter:container_sync]
use = egg:swift#container_sync
[filter:xprofile]
use = egg:swift#xprofile
[filter:versioned_writes]
use = egg:swift#versioned_writes
8.5 Create the account, container, and object rings
Using sdb as the example storage disk on the storage node:
swift-ring-builder account.builder create 18 1 1
swift-ring-builder account.builder add --region 1 --zone 1 --ip 20.0.0.20 --port 6002 --device sdb --weight 100
swift-ring-builder account.builder
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder create 10 1 1
swift-ring-builder container.builder add --region 1 --zone 1 --ip 20.0.0.20 --port 6001 --device sdb --weight 100
swift-ring-builder container.builder
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder create 10 1 1
swift-ring-builder object.builder add --region 1 --zone 1 --ip 20.0.0.20 --port 6000 --device sdb --weight 100
swift-ring-builder object.builder
swift-ring-builder object.builder rebalance
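The three numbers passed to `swift-ring-builder ... create` are the partition power, the replica count, and min_part_hours; a ring's partition count is 2 raised to the partition power, so the listings above give the account ring far more partitions than the container and object rings:

```shell
# Partition counts implied by the "create" arguments above.
echo $((2**18))   # account ring: create 18 1 1
echo $((2**10))   # container and object rings: create 10 1 1
```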
8.6 Edit the /etc/swift/swift.conf file
Edit as follows
[swift-hash]
swift_hash_path_suffix = changeme
swift_hash_path_prefix = changeme
[storage-policy:0]
name = Policy-0
default = yes
aliases = yellow, orange
[swift-constraints]
8.7 Start services and set permissions
chown -R root:swift /etc/swift
systemctl enable openstack-swift-proxy.service memcached.service
systemctl restart openstack-swift-proxy.service memcached.service
#Compute node
8.8 Install packages
Using sdb as the example storage disk on the storage node:

yum install xfsprogs rsync openstack-swift-account openstack-swift-container openstack-swift-object -y

mkfs.xfs -i size=1024 -f /dev/sdb

echo "/dev/sdb /swift/node xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab

mkdir -p /swift/node

mount /dev/sdb /swift/node

scp controller:/etc/swift/*.ring.gz /etc/swift/
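The /etc/fstab entry added above follows the standard six-field format: device, mount point, filesystem type, mount options, dump flag, and fsck pass. A quick field-count sanity check:

```shell
# Count the whitespace-separated fields of the fstab line added above.
line='/dev/sdb /swift/node xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0'
set -- $line
echo "$# fields"   # 6 fields
```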

8.9 Configure rsync
(1) Edit the /etc/rsyncd.conf file as follows
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
uid = swift
gid = swift
address = 127.0.0.1
[account]
path = /swift/node
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 25
lock file = /var/lock/account.lock
[container]
path = /swift/node
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 25
lock file = /var/lock/container.lock
[object]
path = /swift/node
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 25
lock file = /var/lock/object.lock
[swift_server]
path = /etc/swift
read only = true
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 5
lock file = /var/lock/swift_server.lock
(2) Start the service
systemctl enable rsyncd.service
systemctl restart rsyncd.service
8.10 Configure the account, container, and object servers
(1) Modify the /etc/swift/account-server.conf configuration file
[DEFAULT]
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /swift/node
mount_check = false
[pipeline:main]
pipeline = healthcheck recon account-server
[app:account-server]
use = egg:swift#account
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[account-replicator]
[account-auditor]
[account-reaper]
[filter:xprofile]
use = egg:swift#xprofile
(2) Modify the /etc/swift/container-server.conf configuration file
[DEFAULT]
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /swift/node
mount_check = false
[pipeline:main]
pipeline = healthcheck recon container-server
[app:container-server]
use = egg:swift#container
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[container-replicator]
[container-updater]
[container-auditor]
[container-sync]
[filter:xprofile]
use = egg:swift#xprofile
(3) Modify the /etc/swift/object-server.conf configuration file
[DEFAULT]
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /swift/node
mount_check = false
[pipeline:main]
pipeline = healthcheck recon object-server
[app:object-server]
use = egg:swift#object
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
[object-replicator]
[object-reconstructor]
[object-updater]
[object-auditor]
[filter:xprofile]
use = egg:swift#xprofile
8.11 Modify the Swift configuration file
Modify /etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix = changeme
swift_hash_path_prefix = changeme
[storage-policy:0]
name = Policy-0
default = yes
aliases = yellow, orange
[swift-constraints]
8.12 Restart services and set permissions
chown -R swift:swift /swift/node
mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift
chown -R root:swift /etc/swift
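The `chmod -R 775` above grants read/write/execute to the owner and group and read/execute to others, so members of the swift group can write the recon cache while root keeps ownership. A quick demonstration on a temp directory (assuming GNU coreutils `stat`):

```shell
# 775 = rwx (7) for owner, rwx (7) for group, r-x (5) for others.
d=$(mktemp -d)
chmod 775 "$d"
stat -c '%a' "$d"   # 775
```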

systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl restart openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl restart openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl restart openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
9 Install the Trove service
9.1 Install via script
The commands in sections 9.2-9.11 have been written into a shell script for one-step installation, as follows:
#Controller node
Run the script iaas-install-trove.sh to install.
Note: before installing the Trove service, the network (flat or gre) must be configured and a subnet created, and the swift and cinder services must already be installed, otherwise the installation will fail.
9.2 Install the Trove database service packages

yum install openstack-trove-guestagent openstack-trove python-troveclient openstack-trove-ui -y

9.3 Create the database

mysql -u root -p

mysql> CREATE DATABASE trove;
mysql> GRANT ALL PRIVILEGES ON trove.* TO trove@'localhost' IDENTIFIED BY '000000';
mysql> GRANT ALL PRIVILEGES ON trove.* TO trove@'%' IDENTIFIED BY '000000';
9.4 Create the user

openstack user create --domain $DOMAIN_NAME --password 000000 trove

openstack role add --project service --user trove admin

openstack service create --name trove --description "Database" database

9.5 Create the Endpoint and API endpoints

openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%(tenant_id)s

openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%(tenant_id)s

openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%(tenant_id)s

9.6 Configure the trove.conf file
openstack-config --set /etc/trove/trove.conf DEFAULT log_dir /var/log/trove
openstack-config --set /etc/trove/trove.conf DEFAULT log_file trove-api.log
openstack-config --set /etc/trove/trove.conf DEFAULT trove_auth_url http://controller:35357/v2.0
openstack-config --set /etc/trove/trove.conf DEFAULT notifier_queue_hostname controller
openstack-config --set /etc/trove/trove.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/trove/trove.conf DEFAULT nova_proxy_admin_user admin
openstack-config --set /etc/trove/trove.conf DEFAULT nova_proxy_admin_pass 000000
openstack-config --set /etc/trove/trove.conf DEFAULT nova_proxy_admin_tenant_name admin
openstack-config --set /etc/trove/trove.conf DEFAULT nova_compute_service_type compute
openstack-config --set /etc/trove/trove.conf DEFAULT cinder_service_type volumev2
openstack-config --set /etc/trove/trove.conf DEFAULT network_driver trove.network.neutron.NeutronDriver
openstack-config --set /etc/trove/trove.conf DEFAULT default_neutron_networks (network ID)
openstack-config --set /etc/trove/trove.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/trove/trove.conf DEFAULT add_addresses True
openstack-config --set /etc/trove/trove.conf DEFAULT network_label_regex .*
openstack-config --set /etc/trove/trove.conf DEFAULT api_paste_config api-paste.ini
openstack-config --set /etc/trove/trove.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/trove/trove.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/trove/trove.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/trove/trove.conf database connection mysql://trove:000000@controller/trove
openstack-config --set /etc/trove/trove.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/trove/trove.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/trove/trove.conf keystone_authtoken auth_type password
openstack-config --set /etc/trove/trove.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/trove/trove.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/trove/trove.conf keystone_authtoken project_name service
openstack-config --set /etc/trove/trove.conf keystone_authtoken username trove
openstack-config --set /etc/trove/trove.conf keystone_authtoken password 000000
9.7 Configure trove-taskmanager.conf
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT log_dir /var/log/trove
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT log_file trove-taskmanager.log
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT trove_auth_url http://controller:5000/v2.0
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT nova_compute_url http://controller:8774/v2.1
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT notifier_queue_hostname controller
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT nova_proxy_admin_user admin
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT nova_proxy_admin_pass 000000
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT nova_proxy_admin_tenant_id $(openstack project list |grep -w 'admin' |awk '{print $2}')
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT taskmanager_manager trove.taskmanager.manager.Manager
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT notification_driver messagingv2
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT network_driver trove.network.neutron.NeutronDriver
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT default_neutron_networks (network ID)
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT network_label_regex .*
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT guest_config /etc/trove/trove-guestagent.conf
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT guest_info guest_info.conf
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT injected_config_location /etc/trove/conf.d
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT cloudinit_location /etc/trove/cloudinit
sed -i '/exists_notification/s//#/' /etc/trove/trove-taskmanager.conf
openstack-config --set /etc/trove/trove-taskmanager.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/trove/trove-taskmanager.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/trove/trove-taskmanager.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/trove/trove-taskmanager.conf database connection mysql://trove:000000@controller/trove
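The `nova_proxy_admin_tenant_id` setting in this file is filled in by piping `openstack project list` through `grep` and `awk`. A standalone sketch of that extraction, using made-up sample output (the IDs below are illustrative, and the quotes must be plain ASCII for the shell):

```shell
# Sample `openstack project list` output (IDs are made up for illustration)
sample='+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 0c4e9a1bafc44f22a7b0c5f9d0e7a111 | admin   |
| 1d5f0b2cbfd55033b8c1d6f0e1f8b222 | service |
+----------------------------------+---------+'
# -w matches the whole word "admin"; $2 is the ID column between the pipes
admin_id=$(printf '%s\n' "$sample" | grep -w 'admin' | awk '{print $2}')
echo "$admin_id"
```

On a live controller, substitute the real `openstack project list` output for the sample text.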
9.8 Configure trove-conductor.conf
openstack-config --set /etc/trove/trove-conductor.conf DEFAULT log_dir /var/log/trove
openstack-config --set /etc/trove/trove-conductor.conf DEFAULT log_file trove-conductor.log
openstack-config --set /etc/trove/trove-conductor.conf DEFAULT trove_auth_url http://controller:5000/v2.0
openstack-config --set /etc/trove/trove-conductor.conf DEFAULT notifier_queue_hostname controller
openstack-config --set /etc/trove/trove-conductor.conf DEFAULT nova_proxy_admin_user admin
openstack-config --set /etc/trove/trove-conductor.conf DEFAULT nova_proxy_admin_pass 000000
openstack-config --set /etc/trove/trove-conductor.conf DEFAULT nova_proxy_admin_tenant_name admin
openstack-config --set /etc/trove/trove-conductor.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/trove/trove-conductor.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/trove/trove-conductor.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/trove/trove-conductor.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/trove/trove-conductor.conf database connection mysql://trove:000000@controller/trove
9.9 Configure trove-guestagent.conf
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT nova_proxy_admin_user admin
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT nova_proxy_admin_pass 000000
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT nova_proxy_admin_tenant_id $(openstack project list | grep -w 'admin' | awk '{print $2}')
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT trove_auth_url http://192.168.100.10:35357/v2.0
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT swift_url http://192.168.100.10:8080/v1/AUTH_
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT os_region_name RegionOne
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT swift_service_type object-store
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT log_file trove-guestagent.log
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT rabbit_password 000000
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT rabbit_host 192.168.100.10
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT rabbit_userid openstack
openstack-config --set /etc/trove/trove-guestagent.conf DEFAULT rabbit_port 5672
openstack-config --set /etc/trove/trove-guestagent.conf oslo_messaging_rabbit rabbit_host 192.168.100.10
openstack-config --set /etc/trove/trove-guestagent.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/trove/trove-guestagent.conf oslo_messaging_rabbit rabbit_password 000000
9.10 Sync the database

su -s /bin/sh -c "trove-manage db_sync" trove

9.11 Start services

service httpd restart

systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service openstack-trove-conductor.service

systemctl restart openstack-trove-api.service openstack-trove-taskmanager.service openstack-trove-conductor.service

9.12 Upload the image
Upload the provided MySQL_5.6_xiandian.qcow2 image to the system.

glance image-create --name "mysql-5.6" --disk-format qcow2 --container-format bare --progress < MySQL_5.6_XD.qcow2

9.13 Create the datastore

trove-manage datastore_update mysql ''

9.14 Update the datastore with the uploaded image

glance_id=$(glance image-list | awk '/ mysql-5.6 / { print $2 }')

trove-manage datastore_version_update mysql mysql-5.6 mysql ${glance_id} '' 1

9.15 Launch a database instance
Launch an instance named mysql-1 with the m1.small flavor, a volume size of 5 GB, a database named myDB, and a user named user with password r00tme. Creation usually takes 3-4 minutes.

FLAVOR_ID=$(openstack flavor list | awk ‘/ m1.small / { print $2 }’)

trove create mysql-1 ${FLAVOR_ID} --size 5 --databases myDB --users user:r00tme --datastore_version mysql-5.6 --datastore mysql

After creation completes, query the trove list:
[root@controller ~]# trove list
+--------------------------------------+---------+-----------+-------------------+--------+-----------+------+
| ID                                   | Name    | Datastore | Datastore Version | Status | Flavor ID | Size |
+--------------------------------------+---------+-----------+-------------------+--------+-----------+------+
| b9c9208d-5bca-434a-b258-127ff8496c5e | mysql-1 | mysql     | mysql-5.6         | ACTIVE | 2         | 5    |
+--------------------------------------+---------+-----------+-------------------+--------+-----------+------+
The database backup module in the Trove web pages may raise errors; the related operations can be completed from the command line.
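For example, a database backup can be driven entirely from the command line. The sketch below pulls the instance ID out of the `trove list` row shown above; the backup name `backup-1` is hypothetical, and the live `trove` calls are left commented because they need a running controller:

```shell
# One data row from `trove list` (as shown above); on a live controller use:
#   trove list
row='| b9c9208d-5bca-434a-b258-127ff8496c5e | mysql-1 | mysql | mysql-5.6 | ACTIVE | 2 | 5 |'
# Match the row for mysql-1 and take the ID column between the pipes
instance_id=$(printf '%s\n' "$row" | awk '/ mysql-1 / {print $2}')
echo "$instance_id"
# Create and list a backup (name "backup-1" is hypothetical):
# trove backup-create "$instance_id" backup-1
# trove backup-list
```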
10 Install the Heat Orchestration Service

Controller node

10.1 Install the heat service via script
The operation commands of 10.2-10.8 for the orchestration service have been written into a shell script for one-step installation, as follows:
#Controller node
Run the script iaas-install-heat.sh to install

10.2 Install the Heat orchestration service packages

yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine -y

10.3 Create the database

mysql -u root -p

mysql> CREATE DATABASE heat;
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY '000000';
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY '000000';
10.4 Create users
openstack user create --domain default --password 000000 heat
openstack role add --project service --user heat admin
openstack domain create --description "Stack projects and users" heat
openstack user create --domain heat --password 000000 heat_domain_admin
openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
openstack role create heat_stack_owner
openstack role add --project demo --user demo heat_stack_owner
openstack role create heat_stack_user
10.5 Create the Endpoint and API endpoints
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration" cloudformation
openstack endpoint create --region RegionOne orchestration public 'http://controller:8004/v1/%(tenant_id)s'
openstack endpoint create --region RegionOne orchestration internal 'http://controller:8004/v1/%(tenant_id)s'
openstack endpoint create --region RegionOne orchestration admin 'http://controller:8004/v1/%(tenant_id)s'
openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
10.6 Configure the Heat service
openstack-config --set /etc/heat/heat.conf database connection mysql+pymysql://heat:000000@controller/heat
openstack-config --set /etc/heat/heat.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/heat/heat.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/heat/heat.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/heat/heat.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/heat/heat.conf keystone_authtoken auth_type password
openstack-config --set /etc/heat/heat.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/heat/heat.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/heat/heat.conf keystone_authtoken project_name service
openstack-config --set /etc/heat/heat.conf keystone_authtoken username heat
openstack-config --set /etc/heat/heat.conf keystone_authtoken password 000000
openstack-config --set /etc/heat/heat.conf trustee auth_plugin password
openstack-config --set /etc/heat/heat.conf trustee auth_url http://controller:35357
openstack-config --set /etc/heat/heat.conf trustee username heat
openstack-config --set /etc/heat/heat.conf trustee password 000000
openstack-config --set /etc/heat/heat.conf trustee user_domain_name default
openstack-config --set /etc/heat/heat.conf clients_keystone auth_uri http://controller:35357
openstack-config --set /etc/heat/heat.conf ec2authtoken auth_uri http://controller:5000
openstack-config --set /etc/heat/heat.conf DEFAULT heat_metadata_server_url http://controller:8000
openstack-config --set /etc/heat/heat.conf DEFAULT heat_waitcondition_server_url http://controller:8000/v1/waitcondition
openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin heat_domain_admin
openstack-config --set /etc/heat/heat.conf DEFAULT stack_domain_admin_password 000000
openstack-config --set /etc/heat/heat.conf DEFAULT stack_user_domain_name heat
10.7 Create the database tables
su -s /bin/sh -c "heat-manage db_sync" heat
10.8 Start services
systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl restart openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
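With the Heat services running, a minimal HOT template can serve as a smoke test. The template and stack names below are our own illustrative choices, not part of the original setup; the stack commands are commented since they require a live controller with admin credentials sourced:

```shell
# Write a minimal HOT template with no resources and one static output
cat > /tmp/hello.yaml << 'EOF'
heat_template_version: 2016-04-08
description: minimal smoke-test template
resources: {}
outputs:
  hello:
    value: ok
EOF
echo "template written to /tmp/hello.yaml"
# On the controller, with admin credentials sourced:
# openstack stack create -t /tmp/hello.yaml hello-stack
# openstack stack list
```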
10.9 nginx template
The nginx template files are stored in the /etc/xiandian/ directory. Before using the template, the ceilometer monitoring service and the alarm monitoring service must already be installed successfully.

Build an HTTP server and upload the lb-server.yaml template file to it.

Configure the environment parameters (screenshot omitted).

Creation complete (screenshot omitted).

View the stack information (screenshot omitted).

View the load balancer created by the nginx stack (screenshot omitted).

Access the page: http://172.30.11.8

11 Install the Ceilometer Monitoring Service
11.1 Install the Ceilometer service via script
The operation commands of 11.2-11.10 for the Ceilometer monitoring service have been written into a shell script for one-step installation, as follows:
#Controller node
Run the script iaas-install-ceilometer-controller.sh to install
#Compute node
Run the script iaas-install-ceilometer-compute.sh to install

11.2 Install the Ceilometer monitoring service packages

yum install -y openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central python-ceilometerclient python-ceilometermiddleware

11.3 Create the database
Wait a few seconds after the database service starts before creating the database, otherwise errors will occur.

mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "000000", roles: [ "readWrite", "dbAdmin" ]})'

11.4 Create users

openstack user create --domain default --password 000000 ceilometer

openstack role add --project service --user ceilometer admin

openstack role create ResellerAdmin

openstack role add --project service --user ceilometer ResellerAdmin

11.5 Create the Endpoint and API endpoints

openstack service create --name ceilometer --description "Telemetry" metering

openstack endpoint create --region RegionOne metering public http://controller:8777

openstack endpoint create --region RegionOne metering internal http://controller:8777

openstack endpoint create --region RegionOne metering admin http://controller:8777

11.6 Configure Ceilometer

openstack-config --set /etc/ceilometer/ceilometer.conf database connection mongodb://ceilometer:000000@controller:27017/ceilometer

openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_host controller

openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_password 000000

openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_uri http://controller:5000

openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_url http://controller:35357

openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken memcached_servers controller:11211

openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_type password

openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken project_name service

openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken username ceilometer

openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken password 000000

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials auth_type password

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials auth_url http://controller:5000/v3

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials project_domain_name default

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials user_domain_name default

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials project_name service

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials username ceilometer

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials password 000000

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials interface internalURL

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials region_name RegionOne

11.7 Start services
systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service
systemctl restart openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service
11.8 Monitor other components
openstack-config --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_notifications driver messagingv2
openstack-config --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password 000000
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_notifications driver messagingv2
systemctl restart openstack-glance-api.service openstack-glance-registry.service
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_notifications driver messagingv2
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
openstack-config --set /etc/swift/proxy-server.conf filter:keystoneauth operator_roles "admin, user, ResellerAdmin"
openstack-config --set /etc/swift/proxy-server.conf pipeline:main pipeline "ceilometer catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server"
openstack-config --set /etc/swift/proxy-server.conf filter:ceilometer paste.filter_factory ceilometermiddleware.swift:filter_factory
openstack-config --set /etc/swift/proxy-server.conf filter:ceilometer url rabbit://openstack:000000@controller:5672/
openstack-config --set /etc/swift/proxy-server.conf filter:ceilometer driver messagingv2
openstack-config --set /etc/swift/proxy-server.conf filter:ceilometer topic notifications
openstack-config --set /etc/swift/proxy-server.conf filter:ceilometer log_level WARN
systemctl restart openstack-swift-proxy.service

Compute node

11.9 Install packages
yum install openstack-ceilometer-compute python-ceilometerclient python-pecan -y
11.10 Configure Ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_password 000000

openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_type password
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken project_name service
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken username ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken password 000000

openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials auth_type password
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials auth_url http://controller:5000/v3
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials project_domain_name default
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials user_domain_name default
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials project_name service
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials username ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials password 000000
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials interface internalURL
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials region_name RegionOne

openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit True
openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour
openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state
openstack-config --set /etc/nova/nova.conf DEFAULT notification_driver messagingv2
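The nova.conf and ceilometer.conf changes above only edit the files; the compute-node agents have to be restarted to pick them up (a step not listed here; service names as shipped in the Mitaka packages):

```shell
# Restart the compute-node agents so the new telemetry settings take effect
systemctl enable openstack-ceilometer-compute.service
systemctl restart openstack-ceilometer-compute.service openstack-nova-compute.service
```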
12 Install the Alarm Monitoring Service
12.1 Install the alarm service via script
The operation commands of 12.2-12.8 for the Alarm monitoring service have been written into a shell script for one-step installation, as follows:
#Controller node
Run the script iaas-install-alarm.sh to install

12.2 Create the database

mysql -u root -p

mysql> CREATE DATABASE aodh;
mysql> GRANT ALL PRIVILEGES ON aodh.* TO aodh@'localhost' IDENTIFIED BY '000000';
mysql> GRANT ALL PRIVILEGES ON aodh.* TO aodh@'%' IDENTIFIED BY '000000';
12.3 Create the keystone user
openstack user create --domain default --password 000000 aodh
openstack role add --project service --user aodh admin
12.4 Create the Endpoint and API
openstack service create --name aodh --description "Telemetry" alarming
openstack endpoint create --region RegionOne alarming public http://controller:8042
openstack endpoint create --region RegionOne alarming internal http://controller:8042
openstack endpoint create --region RegionOne alarming admin http://controller:8042
12.5 Install packages
yum install -y openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python-ceilometerclient
12.6 Configure aodh
openstack-config --set /etc/aodh/aodh.conf database connection mysql+pymysql://aodh:000000@controller/aodh
openstack-config --set /etc/aodh/aodh.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/aodh/aodh.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/aodh/aodh.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/aodh/aodh.conf oslo_messaging_rabbit rabbit_password 000000

openstack-config --set /etc/aodh/aodh.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/aodh/aodh.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/aodh/aodh.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/aodh/aodh.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/aodh/aodh.conf keystone_authtoken auth_type password
openstack-config --set /etc/aodh/aodh.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/aodh/aodh.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/aodh/aodh.conf keystone_authtoken project_name service
openstack-config --set /etc/aodh/aodh.conf keystone_authtoken username aodh
openstack-config --set /etc/aodh/aodh.conf keystone_authtoken password 000000
openstack-config --set /etc/aodh/aodh.conf service_credentials auth_type password
openstack-config --set /etc/aodh/aodh.conf service_credentials auth_url http://controller:5000/v3
openstack-config --set /etc/aodh/aodh.conf service_credentials project_domain_name default
openstack-config --set /etc/aodh/aodh.conf service_credentials user_domain_name default
openstack-config --set /etc/aodh/aodh.conf service_credentials project_name service
openstack-config --set /etc/aodh/aodh.conf service_credentials username aodh
openstack-config --set /etc/aodh/aodh.conf service_credentials password 000000
openstack-config --set /etc/aodh/aodh.conf service_credentials interface internalURL
openstack-config --set /etc/aodh/aodh.conf service_credentials region_name RegionOne
openstack-config --set /etc/aodh/api_paste.ini filter:authtoken oslo_config_project aodh
12.7 Sync the database
su -s /bin/sh -c "aodh-dbsync" aodh
12.8 Start services
systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
systemctl restart openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service

13 Add Controller Node Resources to the Cloud Platform
13.1 Modify openrc.sh
Change the compute node's IP and hostname in openrc.sh to the controller node's IP and hostname.

13.2 Run iaas-install-nova-compute.sh
Run iaas-install-nova-compute.sh on the controller node.
14 System Uninstall
The system provides a one-step uninstall script: run iaas-uninstall-all.sh on a node to remove all cloud-platform data from that node. If the node is the controller, all data is cleared (the whole cloud platform is wiped); if the node is a compute node, only that compute node's data is removed.
15 Xiandian-IaaS-2.2 Upgrade Notes:
Xiandian-IaaS-2.0 is developed on the OpenStack Mitaka release, whereas Xiandian-IaaS-1.4 was developed on the OpenStack Icehouse release. The main differences:
1) The operating system was upgraded from CentOS 6.5 to CentOS 7, and the kernel moved from the 2.x series to 3.x or later.
2) OpenStack was upgraded from the I release (Icehouse) to the M release (Mitaka). Compared with Icehouse, Mitaka focuses on improving day-to-day usability for cloud deployers and administrators: the client commands were consolidated under the unified `openstack` command, forming a single coherent toolset that is easier for engineers to understand and remember for operations work.
3) Xiandian-IaaS-2.0 switched the messaging service from Qpid to the lightweight RabbitMQ, reducing system load.
4) Xiandian-IaaS-2.0 added the Alarm monitoring service, providing a digital monitoring platform and reducing security risks.
5) In the network installation part, the Dashboard interface has changed: it is more structured, some functions have changed, and it moves further toward the idea of software-defined networking.
6) Xiandian-IaaS-2.2 added the LBaaS load-balancing service, strengthening network data handling and improving network flexibility and availability.
7) Xiandian-IaaS-2.2 added the FWaaS firewall service, providing firewall policies and improving the security of data in transit.

16 Support
Name | Contact
Online technical support | QQ group 215050294
Email | [email protected]
Hotline | 400-025-9955
