OpenStack Pike single-host cluster deployment and configuration

Copyright notice: this is the author's original article; reproduction without permission is prohibited. https://blog.csdn.net/u010948569/article/details/82979758

Building OpenStack Pike by hand


OpenStack overview: the OpenStack project is an open-source cloud computing platform that provides an Infrastructure-as-a-Service (IaaS) solution.

  • 1. Environment preparation
  • 2. OpenStack local repository
  • 3. Time synchronization server
  • 4. MariaDB database installation and configuration
  • 5. RabbitMQ message queue
  • 6. Keystone identity service
  • 7. Glance image service
  • 8. Nova compute service
  • 9. Neutron networking service
  • 10. Dashboard WEB service
  • 11. Nova and Neutron deployment on the compute node
  • 12. Cinder block storage service
  • 13. Creating the vxlan network, flavors, and routers
  • 14. Cinder block storage service
  • 15. Using Ceph as backend storage

OpenStack architecture diagram (image)


一. Environment preparation

  1. Hardware: desktop PC; CPU: i7 (4 threads); memory: 16 GB; system disk: 2 TB SSD; NICs: three gigabit copper ports

    Software: VMware Workstation 14 Pro, version 14.1.2

  2. OS version: CentOS-7-x86_64-Minimal-1708

  3. Host names and install partitioning

Name        boot    swap     remainder
controller  500M    4096M    rest to /
computer    500M    4096M    rest to /   #attach two 600G disks and one 100G disk
cinder      500M    4096M    rest to /   #attach two 600G disks and one 100G disk
  4. Edit /etc/hosts
   echo '
    125.39.187.70  yum-chrony
    192.168.0.8    openstack.pike.com 
    172.10.10.8    openstack.pike.com 
    172.10.10.40   controller
    172.10.10.41   computer
    172.10.10.42   cinder
    '>>/etc/hosts
  5. Disable the firewall
    systemctl stop firewalld.service
    systemctl disable firewalld.service
    firewall-cmd --state
  6. Disable SELinux
sed -i '/^SELINUX=.*/c SELINUX=disabled' /etc/selinux/config
# SELINUXTYPE should remain "targeted"; "disabled" is not a valid SELINUXTYPE value
grep --color=auto '^SELINUX' /etc/selinux/config
setenforce 0
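The effect of that sed edit can be rehearsed on a scratch copy before touching the real /etc/selinux/config (the /tmp path is illustrative):

```shell
# Scratch copy mimicking /etc/selinux/config
cat > /tmp/selinux-config <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Same substitution as above: replace the whole SELINUX= line
sed -i '/^SELINUX=.*/c SELINUX=disabled' /tmp/selinux-config

grep '^SELINUX=' /tmp/selinux-config   # prints: SELINUX=disabled
```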
  7. Passwordless SSH (run on the controller node; the other nodes need nothing)
ssh-keygen    
ssh-copy-id root@computer
ssh-copy-id root@cinder
  8. Network IP address planning
    External: 192.168.0.x/24  Management: 172.10.10.x/24  vxlan: 172.20.20.x/24  Data: 172.30.30.x/24
    br-ex: needs no address of its own (must be able to reach the external network)
controller ens33:192.168.0.40/24 ens34:172.10.10.40/24 ens35:172.20.20.40/24 set ONBOOT=yes
computer ens33:192.168.0.41/24 ens34:172.10.10.41/24 ens35:172.20.20.41/24 set ONBOOT=yes
cinder ens33:192.168.0.42/24 ens34:172.10.10.42/24 ens35:172.20.20.42/24 set ONBOOT=yes
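For reference, a minimal static ifcfg file for the controller's management NIC might look like the sketch below (values mirror the plan above; it is written to /tmp here for inspection — the real file would be /etc/sysconfig/network-scripts/ifcfg-ens34):

```shell
# Sketch of a static config for the controller management interface (ens34)
# Values taken from the address plan above; adjust per node
cat > /tmp/ifcfg-ens34 <<'EOF'
TYPE=Ethernet
BOOTPROTO=static
NAME=ens34
DEVICE=ens34
ONBOOT=yes
IPADDR=172.10.10.40
PREFIX=24
EOF

cat /tmp/ifcfg-ens34
```

Once the file is in place on the real path, `systemctl restart network` would apply it.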

二. OpenStack local repository

  1. Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
firewall-cmd --state
  2. Disable SELinux
sed -i '/^SELINUX=.*/c SELINUX=disabled' /etc/selinux/config
# SELINUXTYPE should remain "targeted"; "disabled" is not a valid SELINUXTYPE value
grep --color=auto '^SELINUX' /etc/selinux/config
setenforce 0
  3. Install download and editing tools
yum install wget vim ntpdate  -y
  4. Time synchronization
/usr/sbin/ntpdate ntp1.aliyun.com 
echo "*/3 * * * * /usr/sbin/ntpdate ntp1.aliyun.com  &> /dev/null" > /tmp/crontab
crontab /tmp/crontab

#sync the time: ntpdate ntp1.aliyun.com

  5. Switch to the Aliyun mirrors
rm -f /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

#create the openstack-pike Aliyun repo

echo '
[openstack-pike]
name=openstack-pike
baseurl=https://mirrors.aliyun.com/centos/$releasever/cloud/$basearch/openstack-pike/
gpgcheck=0
enabled=1
cost=88

[qemu-ev]
name=qemu-ev
baseurl=https://mirrors.aliyun.com/centos/$releasever/virt/$basearch/kvm-common/
gpgcheck=0
enabled=1
'>/etc/yum.repos.d/openstack-pike.repo

#create the ceph rpm-luminous Aliyun repo

echo '
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0

[ceph-source]
name=ceph-source
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS/
gpgcheck=0
'>/etc/yum.repos.d/rpm-luminous.repo

  6. Rebuild the yum cache
yum clean all && yum makecache
  7. Create the yum download directory
mkdir -p /www/share/yum
  8. Configure yum.conf
cp /etc/yum.conf{,.bak}
sed -i 's#^keepcache=0#keepcache=1#' /etc/yum.conf
sed -i 's/^cachedir/#cachedir/' /etc/yum.conf
sed -i '3i cachedir=/www/share/yum/$basearch/$releasever' /etc/yum.conf
head /etc/yum.conf
  9. Update the system and reboot the server
yum update -y && reboot
  10. Install the httpd and createrepo services
yum install httpd createrepo  -y
  11. Configure the HTTP shared directory
echo '#http share
Alias /share /www/share
<Directory "/www/share">
    Options Indexes FollowSymLinks
    IndexOptions NameWidth=* DescriptionWidth=* FoldersFirst
    IndexOptions SuppressIcon HTMLTable Charset=UTF-8 SuppressHTMLPreamble
    Order allow,deny
    Allow from all
    Require all granted
</Directory>
'>/etc/httpd/conf.d/share.conf

cp /etc/httpd/conf/httpd.conf{,.bak}

echo "
ServerName localhost
#hide the version banner
ServerSignature Off
ServerTokens Prod
">>/etc/httpd/conf/httpd.conf

systemctl enable httpd.service
systemctl restart httpd.service

#browse to 125.39.187.70/share ; if it loads, the share is working

  12. Create the yum repository
mkdir -p /www/share/centos7_rpm
createrepo -p /www/share/centos7_rpm/

  13. Create the repo file
echo "
[My_share]
name=My_Souce
baseurl=http://125.39.187.70/share/centos7_rpm/
gpgcheck=0
enabled=1
cost=88
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
">/www/share/Lan7.repo
  14. Move the yum-cached rpm packages into /www/share/centos7_rpm/
find /www/share/yum -name '*.rpm' |sed -r 's#.*#mv & /www/share/centos7_rpm/\n#'|bash
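The find | sed | bash pipeline above works, but breaks on unusual file names; find -exec does the same move more robustly. A self-contained rehearsal with dummy .rpm files (the /tmp paths are illustrative):

```shell
# Build a scratch cache tree with dummy packages
mkdir -p /tmp/yumcache/x86_64/7 /tmp/centos7_rpm
touch /tmp/yumcache/x86_64/7/a.rpm /tmp/yumcache/x86_64/7/b.rpm

# Equivalent of the pipeline: move every cached .rpm into the repo directory
find /tmp/yumcache -name '*.rpm' -exec mv {} /tmp/centos7_rpm/ \;

ls /tmp/centos7_rpm
```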
  15. Download packages that are not yet installed
yum install --downloadonly --downloaddir=/www/share/centos7_rpm/ -y mariadb mariadb-server mariadb-galera-server python2-PyMySQL galera xinetd rsync bash-completion percona-xtrabackup socat  gcc gcc-c++ net-tools openstack-utils
yum install --downloadonly --downloaddir=/www/share/centos7_rpm/ -y erlang rabbitmq-server lvm2 cifs-utils quota psmisc pcs pacemaker corosync fence-agents-all resource-agents
yum install --downloadonly --downloaddir=/www/share/centos7_rpm/ -y haproxy openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached openstack-glance
yum install --downloadonly --downloaddir=/www/share/centos7_rpm/ -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
yum install --downloadonly --downloaddir=/www/share/centos7_rpm/ -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ebtables openstack-dashboard openstack-cinder
yum install --downloadonly --downloaddir=/www/share/centos7_rpm/ -y openstack-selinux python-openstackclient openstack-nova-compute openstack-utils 
yum install --downloadonly --downloaddir=/www/share/centos7_rpm/ -y openstack-neutron-openvswitch ebtables ipset openstack-glance python-glance python-glanceclient
yum install --downloadonly --downloaddir=/www/share/centos7_rpm/ -y nfs-utils rpcbind ceph ceph-radosgw
yum install --downloadonly --downloaddir=/www/share/centos7_rpm/ -y ceph-deploy ceph ceph-radosgw

To download packages that are already installed:

 yum reinstall --downloadonly --downloaddir=/www/share/centos7_rpm/ -y <package-name>

  16. Update the repo metadata

createrepo --update -p /www/share/centos7_rpm/

Client side: using the local repository

  1. Install the wget and vim tools
yum install wget vim net-tools -y
  2. Use the local repository
rm -f /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/Lan7.repo http://125.39.187.70/share/Lan7.repo
  3. Rebuild the yum cache
yum clean all && yum makecache

三. Time synchronization server

Building the chrony time server

Server side:

  1. Install the chrony service
yum -y install chrony
  2. Enable at boot and start
systemctl enable chronyd
systemctl start chronyd
systemctl status chronyd
  3. Configure the chrony file
cp /etc/chrony.conf{,.bak} #back up the default config
echo "
# external NTP servers
server ntp6.aliyun.com iburst
server cn.ntp.org.cn iburst
server s2m.time.edu.cn iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
noclientlog
logchange 0.5
logdir /var/log/chrony
">/etc/chrony.conf

  4. Restart the chrony service
systemctl restart chronyd
systemctl status chronyd
  5. Check the chrony sources
chronyc sources -v

[root@no-data ~]# chronyc sources -v

210 Number of sources = 3

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6   377    37  +2103us[+4840us] +/-   26ms
^- 46.211.65.223.static.js.>     3   6   377    38  -3577us[ -849us] +/-   60ms
^? ns.pku.edu.cn                 0   8     0     -     +0ns[   +0ns] +/-    0ns
[root@no-data ~]# 

Client side:

  1. Install the chrony service
yum -y install chrony
  2. Enable at boot and start
systemctl enable chronyd
systemctl start chronyd
systemctl status chronyd
  3. Configure the chrony file
cp /etc/chrony.conf{,.bak} #back up the default config
echo "
server 125.39.187.70 iburst    
stratumweight 0    
driftfile /var/lib/chrony/drift    
rtcsync    
makestep 10 3      
bindcmdaddress 127.0.0.1    
bindcmdaddress ::1    
keyfile /etc/chrony.keys    
commandkey 1    
generatecommandkey    
noclientlog    
logchange 0.5    
logdir /var/log/chrony
">/etc/chrony.conf
  4. Restart the chrony service
systemctl restart chronyd
systemctl status chronyd
  5. Check the chrony sources
chronyc sources -v

[root@con-node1 ~]# chronyc sourcestats -v

210 Number of sources = 1
                             .- Number of sample points in measurement set.
                            /    .- Number of residual runs with same sign.
                           |    /    .- Length of measurement set (time).
                           |   |    /      .- Est. clock freq error (ppm).
                           |   |   |      /           .- Est. error in freq.
                           |   |   |     |           /         .- Est. offset.
                           |   |   |     |          |          |   On the -.
                           |   |   |     |          |          |   samples. \
                           |   |   |     |          |          |             |
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
yum-ntp                     0   0     0     +0.000   2000.000     +0ns  4000ms

四. MariaDB database installation and configuration

Deploying the MariaDB database

  1. Install the MariaDB database on controller
yum install mariadb mariadb-server python2-PyMySQL -y
  2. Configure MariaDB on controller
> /etc/my.cnf.d/server.cnf
echo "
[server]
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
bind-address = 172.10.10.40
[galera]
[mariadb]
">>/etc/my.cnf.d/server.cnf
  3. Set the MariaDB root password on controller (the password here is cnnhv2018; press Enter at the first prompt, then answer y to every question)
mysql_secure_installation

  4. Start the database and enable MariaDB at boot

systemctl enable mariadb.service
systemctl restart mariadb.service
systemctl status mariadb.service
systemctl list-unit-files |grep mariadb.service

五. RabbitMQ message queue

Deploying the RabbitMQ message queue

  1. Install rabbitmq on controller
yum install erlang  rabbitmq-server -y
  2. Enable and start the rabbitmq service on controller
systemctl enable rabbitmq-server.service
systemctl restart rabbitmq-server.service
systemctl status rabbitmq-server.service
systemctl list-unit-files |grep rabbitmq-server.service
  3. On controller, create the openstack user with password cnnhv2018
rabbitmqctl add_user openstack cnnhv2018
  4. On controller, grant the openstack user permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users
  5. On controller, check the listening port (rabbitmq uses port 5672)
netstat -ntlp |grep 5672
  6. On controller, enable the required RabbitMQ plugins
/usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management mochiweb webmachine rabbitmq_web_dispatch amqp_client rabbitmq_management_agent
  7. Restart the rabbitmq service on controller
systemctl restart rabbitmq-server.service
systemctl status rabbitmq-server.service
  8. List which plugins are enabled
/usr/lib/rabbitmq/bin/rabbitmq-plugins list
  9. Verify: open the following URL in a browser
http://192.168.0.40:15672

六. Keystone identity service

Deploying the Keystone identity service

  1. Create the keystone database on controller
mysql -uroot -pcnnhv2018
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'cnnhv2018';
exit
  2. Install keystone and memcached on controller
yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached openstack-utils -y
  3. Start the memcached service on controller and enable it at boot
systemctl enable memcached.service
systemctl restart memcached.service
systemctl status memcached.service
  4. Configure the /etc/keystone/keystone.conf file on controller
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
>/etc/keystone/keystone.conf
echo '
[DEFAULT]
transport_url = rabbit://openstack:cnnhv2018@controller

[database]
connection = mysql+pymysql://keystone:cnnhv2018@controller/keystone

[cache]
backend = oslo_cache.memcache_pool
enabled = true
memcache_servers = controller:11211

[memcache]
servers = controller:11211

[token]
expiration = 3600
provider = fernet
'>>/etc/keystone/keystone.conf
  5. Configure the httpd.conf and memcached files on controller
sed -i "s/#ServerName www.example.com:80/ServerName controller/" /etc/httpd/conf/httpd.conf
sed -i 's/OPTIONS*.*/OPTIONS="-l 127.0.0.1,::1,172.10.10.40"/' /etc/sysconfig/memcached
  6. Hook keystone into httpd on controller
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
  7. Sync the keystone database on controller
su -s /bin/sh -c "keystone-manage db_sync" keystone
  8. Initialize fernet on controller
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
  9. Start httpd on controller and enable it at boot
systemctl enable httpd.service
systemctl restart httpd.service
systemctl status httpd.service
systemctl list-unit-files |grep httpd.service
  10. Create the admin user and role on controller
keystone-manage bootstrap --bootstrap-password cnnhv2018 --bootstrap-username admin --bootstrap-project-name admin --bootstrap-role-name admin --bootstrap-service-name keystone --bootstrap-region-id RegionOne --bootstrap-admin-url http://controller:35357/v3 --bootstrap-internal-url http://controller:35357/v3 --bootstrap-public-url http://controller:5000/v3
  11. Verify on controller
openstack project list --os-username admin --os-project-name admin --os-user-domain-id default --os-project-domain-id default --os-identity-api-version 3 --os-auth-url http://controller:5000 --os-password cnnhv2018
  12. Create admin-openstack.sh on controller
echo '
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PASSWORD=cnnhv2018
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://controller:35357/v3
'>>/root/admin-openstack.sh
  13. Create demo-openstack.sh on controller
echo '
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=demo
export OS_PROJECT_NAME=demo
export OS_PASSWORD=cnnhv2018
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://controller:5000
'>>/root/demo-openstack.sh

  14. Create the service project on controller
source  /root/admin-openstack.sh
openstack project create --domain default --description "Service Project" service
  15. Create the demo project on controller
openstack project create --domain default --description "Demo Project" demo
  16. Create the demo user on controller
openstack user create --domain default demo --password cnnhv2018
  17. Create the user role on controller and assign it to the demo user
openstack role create user
openstack role add --project demo --user demo user
  18. Verify keystone on controller
unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue --os-password cnnhv2018
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue --os-password cnnhv2018

七. Glance image service

Deploying the Glance image service

  1. Create the glance database on controller
mysql -uroot -pcnnhv2018
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'cnnhv2018';
exit
  2. On controller, create the glance user and grant it admin
source /root/admin-openstack.sh
openstack user create --domain default glance --password cnnhv2018
openstack role add --project service --user glance admin
  3. Create the image service on controller
openstack service create --name glance --description "OpenStack Image service" image
  4. Create the glance endpoints on controller

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
  5. Install glance on controller
yum install openstack-glance -y
  6. Edit the glance config file /etc/glance/glance-api.conf on controller

cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
>/etc/glance/glance-api.conf

echo '
[DEFAULT]
debug = False
verbose = True
transport_url = rabbit://openstack:cnnhv2018@controller

[database]
connection = mysql+pymysql://glance:cnnhv2018@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
username = glance
password = cnnhv2018
project_name = service

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
'>>/etc/glance/glance-api.conf

  7. Edit the glance config file /etc/glance/glance-registry.conf on controller
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
>/etc/glance/glance-registry.conf
echo '
[DEFAULT]
debug = False
verbose = True
transport_url = rabbit://openstack:cnnhv2018@controller

[database]
connection = mysql+pymysql://glance:cnnhv2018@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = cnnhv2018

[paste_deploy]
flavor = keystone
'>>/etc/glance/glance-registry.conf

  8. Sync the glance database on controller
su -s /bin/sh -c "glance-manage db_sync" glance
  9. Start glance on controller and enable it at boot
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service

八. Nova compute service

Deploying the Nova compute service

  1. Create the nova databases on controller
mysql -uroot -pcnnhv2018
CREATE DATABASE nova;
CREATE DATABASE nova_api;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller' IDENTIFIED BY 'cnnhv2018';
flush privileges;
exit
  2. On controller, create the nova user and grant it admin
source admin-openstack.sh
openstack user create --domain default nova --password cnnhv2018
openstack role add --project service --user nova admin
  3. Create the compute service on controller
openstack service create --name nova --description "OpenStack Compute" compute
  4. Create the nova endpoints on controller
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
  5. On controller, create the placement user, service, and API endpoints
openstack user create --domain default placement --password cnnhv2018
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
  6. Install the nova component packages on controller
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

  7. Back up and recreate the nova.conf config file on controller

cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
>/etc/nova/nova.conf
echo '
[DEFAULT]
enabled_apis = osapi_compute,metadata
auth_strategy = keystone
my_ip = 192.168.0.40
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:cnnhv2018@controller

[database]
connection = mysql+pymysql://nova:cnnhv2018@controller/nova

[api_database]
connection = mysql+pymysql://nova:cnnhv2018@controller/nova_api

[scheduler]
discover_hosts_in_cells_interval = -1

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = cnnhv2018
service_token_roles_required = True

[vnc]
vncserver_listen = 192.168.0.40
vncserver_proxyclient_address = 192.168.0.40

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = cnnhv2018
os_region_name = RegionOne

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = cnnhv2018
service_metadata_proxy = True
metadata_proxy_shared_secret = cnnhv2018
'>>/etc/nova/nova.conf
  8. Configure nova-placement-api on controller
> /etc/httpd/conf.d/00-nova-placement-api.conf
echo "
Listen 8778
<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  <Directory /usr/bin>
     <IfVersion >= 2.4>
         Require all granted
     </IfVersion>
     <IfVersion < 2.4>
           Order allow,deny
           Allow from all
     </IfVersion>
</Directory>
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>

Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
">>/etc/httpd/conf.d/00-nova-placement-api.conf

  9. Set permissions on controller
chown root:root /etc/httpd/conf.d/00-nova-placement-api.conf
chmod 644 /etc/httpd/conf.d/00-nova-placement-api.conf
  10. Restart the httpd service on controller
systemctl restart httpd
systemctl status httpd
  11. Sync the nova_api database on controller
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
  12. On controller, confirm that the nova cell0 and cell1 registrations succeeded
nova-status upgrade check
nova-manage cell_v2 list_cells
  13. On controller, check whether the databases now contain tables
mysql -uroot -pcnnhv2018
use nova;
show tables;
use nova_api;
show tables;
exit

or

mysql -h controller -u nova -pcnnhv2018 -e "use nova_api;show tables;"
mysql -h controller  -u nova -pcnnhv2018 -e "use nova;show tables;" 
mysql -h controller  -u nova -pcnnhv2018 -e "use nova_cell0;show tables;"
  14. On controller, enable the services at boot and start them
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
  15. Check the services on controller
nova service-list 
openstack catalog list
nova-status upgrade check
openstack compute service list
#delete a cell
nova-manage cell_v2 delete_cell --cell_uuid  db9c9aa8-2360-49d6-9eed-925ee7728327

# #discover compute nodes; run this when adding a new compute node
#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

九. Neutron networking service

Deploying the Neutron networking service

  1. Create the neutron database on controller
mysql -uroot -pcnnhv2018
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'cnnhv2018';
exit
  2. On controller, create the neutron user and grant it admin
source admin-openstack.sh 
openstack user create --domain default neutron --password cnnhv2018
openstack role add --project service --user neutron admin
  3. Create the network service on controller
openstack service create --name neutron --description "OpenStack Networking" network
  4. Create the neutron endpoints on controller
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
  5. Install the neutron component packages on controller
yum  install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ebtables -y
  6. Back up and recreate neutron.conf on controller
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
> /etc/neutron/neutron.conf
echo '
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
auth_strategy = keystone
transport_url = rabbit://openstack:cnnhv2018@controller
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = cnnhv2018

[database]
connection = mysql+pymysql://neutron:cnnhv2018@controller/neutron

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = cnnhv2018

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
'>>/etc/neutron/neutron.conf

  7. Configure /etc/neutron/plugins/ml2/ml2_conf.ini on controller
> /etc/neutron/plugins/ml2/ml2_conf.ini

echo '
[ml2]
type_drivers = flat,vlan,vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
tenant_network_types = vxlan
path_mtu = 1500

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True
'>>/etc/neutron/plugins/ml2/ml2_conf.ini
  8. Configure openvswitch_agent.ini on controller
cp /etc/neutron/plugins/ml2/openvswitch_agent.ini  /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak 
> /etc/neutron/plugins/ml2/openvswitch_agent.ini 
echo '
[DEFAULT]
debug = false
verbose = true

[ovs]
bridge_mappings = provider:br-ex
local_ip = 172.20.20.40

[agent]
tunnel_types = vxlan
l2_population = True
prevent_arp_spoofing = True
arp_responder = true
enable_distributed_routing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
'>>/etc/neutron/plugins/ml2/openvswitch_agent.ini 
  9. Configure l3_agent.ini on controller
> /etc/neutron/l3_agent.ini

echo '
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
debug = false
verbose = true
agent_mode = dvr_snat
'>>/etc/neutron/l3_agent.ini

  10. Configure dhcp_agent.ini on controller
> /etc/neutron/dhcp_agent.ini

echo '
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
debug = false
'>>/etc/neutron/dhcp_agent.ini
  11. Configure metadata_agent.ini on controller
> /etc/neutron/metadata_agent.ini

echo '
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = cnnhv2018
metadata_workers = 4
verbose = True
debug = false
nova_metadata_protocol = http
'>>/etc/neutron/metadata_agent.ini 
  12. Create the symlink on controller
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  13. Sync the database on controller
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  14. Restart the openvswitch service on controller
systemctl enable openvswitch.service
systemctl restart openvswitch.service
systemctl status openvswitch.service
  15. On controller, add the external bridge (use ip a to check NIC names; the bridge port must reach the external network)
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens37
  16. Inspect on controller
ovs-vsctl show
  17. Restart the nova service on controller
systemctl restart openstack-nova-api.service  
systemctl status openstack-nova-api.service
  18. On controller, restart the neutron services and enable them at boot
systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl restart neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
systemctl status neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
  19. On controller, start neutron-l3-agent.service and enable it at boot
systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service
systemctl status neutron-l3-agent.service
  20. Verify on controller (the commands may need to be run several times; allow 3-5 minutes)
source /root/admin-openstack.sh 
neutron agent-list

[root@controller ~]# neutron agent-list

neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 495c666f-7bbf-4087-a05f-2b09252a6b9c | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| a84502b2-354c-4260-9276-f310656bf8d1 | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |
| f525f2c1-4bbc-4fb1-bbf0-dda2dac04064 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| febbdad1-1eb9-4d17-abee-27782445548a | Open vSwitch agent | controller |                   | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

十. Dashboard WEB service

Deploying the Dashboard WEB service

  1. Install openstack-dashboard on controller
yum install openstack-dashboard -y
  2. Empty the config file on controller
>/etc/openstack-dashboard/local_settings
  3. vim /etc/openstack-dashboard/local_settings
    and add the following content
# -*- coding: utf-8 -*-

import os

from django.utils.translation import ugettext_lazy as _


from openstack_dashboard import exceptions
from openstack_dashboard.settings import HORIZON_CONFIG

DEBUG = False


# WEBROOT is the location relative to Webserver root
# should end with a slash.
WEBROOT = '/dashboard/'
#LOGIN_URL = WEBROOT + 'auth/login/'
#LOGOUT_URL = WEBROOT + 'auth/logout/'
#
# LOGIN_REDIRECT_URL can be used as an alternative for
# HORIZON_CONFIG.user_home, if user_home is not set.
# Do not set it to '/home/', as this will cause circular redirect loop
#LOGIN_REDIRECT_URL = WEBROOT

# If horizon is running in production (DEBUG is False), set this
# with the list of host/domain names that the application can serve.
# For more information see:
# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
#ALLOWED_HOSTS = ['horizon.example.com', 'localhost']
ALLOWED_HOSTS = ['*', ]

# Set SSL proxy settings:
# Pass this header from the proxy after terminating the SSL,
# and don't forget to strip it from the client's request.
# For more information see:
# https://docs.djangoproject.com/en/dev/ref/settings/#secure-proxy-ssl-header
#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
#CSRF_COOKIE_SECURE = True
#SESSION_COOKIE_SECURE = True

# The absolute path to the directory where message files are collected.
# The message file must have a .json file extension. When the user logins to
# horizon, the message files collected are processed and displayed to the user.
#MESSAGES_PATH=None

# Overrides for OpenStack API versions. Use this setting to force the
# OpenStack dashboard to use a specific API version for a given service API.
# Versions specified here should be integers or floats, not strings.
# NOTE: The version should be formatted as it appears in the URL for the
# service API. For example, The identity service APIs have inconsistent
# use of the decimal point, so valid options would be 2.0 or 3.
# Minimum compute version to get the instance locked status is 2.9.
#OPENSTACK_API_VERSIONS = {
#    "data-processing": 1.1,
#    "identity": 3,
#    "image": 2,
#    "volume": 2,
#    "compute": 2,
#}
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

# Set this to True if running on a multi-domain model. When this is enabled, it
# will require the user to enter the Domain name in addition to the username
# for login.
#OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Overrides the default domain used when running on single-domain model
# with Keystone V3. All entities will be created in the default domain.
# NOTE: This value must be the ID of the default domain, NOT the name.
# Also, you will most likely have a value in the keystone policy file like this
#    "cloud_admin": "rule:admin_required and domain_id:<your domain id>"
# This value must match the domain id specified there.
#OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'

# Set this to True to enable panels that provide the ability for users to
# manage Identity Providers (IdPs) and establish a set of rules to map
# federation protocol attributes to Identity API attributes.
# This extension requires v3.0+ of the Identity API.
#OPENSTACK_KEYSTONE_FEDERATION_MANAGEMENT = False

# Set Console type:
# valid options are "AUTO"(default), "VNC", "SPICE", "RDP", "SERIAL" or None
# Set to None explicitly if you want to deactivate the console.
#CONSOLE_TYPE = "AUTO"

# If provided, a "Report Bug" link will be displayed in the site header
# which links to the value of this setting (ideally a URL containing
# information on how to report issues).
#HORIZON_CONFIG["bug_url"] = "http://bug-report.example.com"

# Show backdrop element outside the modal, do not close the modal
# after clicking on backdrop.
#HORIZON_CONFIG["modal_backdrop"] = "static"

# Specify a regular expression to validate user passwords.
#HORIZON_CONFIG["password_validator"] = {
#    "regex": '.*',
#    "help_text": _("Your password does not meet the requirements."),
#}

# Disable simplified floating IP address management for deployments with
# multiple floating IP pools or complex network requirements.
#HORIZON_CONFIG["simple_ip_management"] = False

# Turn off browser autocompletion for forms including the login form and
# the database creation workflow if so desired.
#HORIZON_CONFIG["password_autocomplete"] = "off"

# Setting this to True will disable the reveal button for password fields,
# including on the login form.
#HORIZON_CONFIG["disable_password_reveal"] = False

LOCAL_PATH = '/tmp'

# Set custom secret key:
# You can either set it to a specific value or you can let horizon generate a
# default secret key that is unique on this machine, e.i. regardless of the
# amount of Python WSGI workers (if used behind Apache+mod_wsgi): However,
# there may be situations where you would want to set this explicitly, e.g.
# when multiple dashboard instances are distributed on different machines
# (usually behind a load-balancer). Either you have to make sure that a session
# gets all requests routed to the same dashboard instance or you set the same
# SECRET_KEY for all of them.
SECRET_KEY='7810cf803727c0fb3cc4'

# We recommend you use memcached for development; otherwise after every reload
# of the django development server, you will have to login again. To use
# memcached set CACHES to something like
#CACHES = {
#    'default': {
#        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
#        'LOCATION': '127.0.0.1:11211',
#    },
#}

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': ['controller:11211'],
        'OPTIONS': {
            'DEAD_RETRY': 1,
            'SERVER_RETRIES': 1,
            'SOCKET_TIMEOUT': 1,
        },
    }
}

SESSION_ENGINE = "django.contrib.sessions.backends.cache"

# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

# Configure these for your outgoing email host
#EMAIL_HOST = 'smtp.my-company.com'
#EMAIL_PORT = 25
#EMAIL_HOST_USER = 'djangomail'
#EMAIL_HOST_PASSWORD = 'top-secret!'

# For multiple regions uncomment this configuration, and add (endpoint, title).
#AVAILABLE_REGIONS = [
#    ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
#    ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
#]

#OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_HOST = "192.168.0.40"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Enables keystone web single-sign-on if set to True.
#WEBSSO_ENABLED = False

# Determines which authentication choice to show as default.
#WEBSSO_INITIAL_CHOICE = "credentials"

# The list of authentication mechanisms which include keystone
# federation protocols and identity provider/federation protocol
# mapping keys (WEBSSO_IDP_MAPPING). Current supported protocol
# IDs are 'saml2' and 'oidc'  which represent SAML 2.0, OpenID
# Connect respectively.
# Do not remove the mandatory credentials mechanism.
# Note: The last two tuples are sample mapping keys to a identity provider
# and federation protocol combination (WEBSSO_IDP_MAPPING).
#WEBSSO_CHOICES = (
#    ("credentials", _("Keystone Credentials")),
#    ("oidc", _("OpenID Connect")),
#    ("saml2", _("Security Assertion Markup Language")),
#    ("acme_oidc", "ACME - OpenID Connect"),
#    ("acme_saml2", "ACME - SAML2"),
#)

# A dictionary of specific identity provider and federation protocol
# combinations. From the selected authentication mechanism, the value
# will be looked up as keys in the dictionary. If a match is found,
# it will redirect the user to a identity provider and federation protocol
# specific WebSSO endpoint in keystone, otherwise it will use the value
# as the protocol_id when redirecting to the WebSSO by protocol endpoint.
# NOTE: The value is expected to be a tuple formatted as: (<idp_id>, <protocol_id>).
#WEBSSO_IDP_MAPPING = {
#    "acme_oidc": ("acme", "oidc"),
#    "acme_saml2": ("acme", "saml2"),
#}

# Disable SSL certificate checks (useful for self-signed certificates):
#OPENSTACK_SSL_NO_VERIFY = True

# The CA certificate to use to verify SSL connections
#OPENSTACK_SSL_CACERT = '/path/to/cacert.pem'

# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}

# Setting this to True, will add a new "Retrieve Password" action on instance,
# allowing Admin session password retrieval/decryption.
#OPENSTACK_ENABLE_PASSWORD_RETRIEVE = False

# This setting allows deployers to control whether a token is deleted on log
# out. This can be helpful when there are often long running processes being
# run in the Horizon environment.
#TOKEN_DELETE_DISABLED = False

# The Launch Instance user experience has been significantly enhanced.
# You can choose whether to enable the new launch instance experience,
# the legacy experience, or both. The legacy experience will be removed
# in a future release, but is available as a temporary backup setting to ensure
# compatibility with existing deployments. Further development will not be
# done on the legacy experience. Please report any problems with the new
# experience via the Launchpad tracking system.
#
# Toggle LAUNCH_INSTANCE_LEGACY_ENABLED and LAUNCH_INSTANCE_NG_ENABLED to
# determine the experience to enable.  Set them both to true to enable
# both.
#LAUNCH_INSTANCE_LEGACY_ENABLED = True
#LAUNCH_INSTANCE_NG_ENABLED = False

# A dictionary of settings which can be used to provide the default values for
# properties found in the Launch Instance modal.
#LAUNCH_INSTANCE_DEFAULTS = {
#    'config_drive': False,
#    'enable_scheduler_hints': True
#    'disable_image': False,
#    'disable_instance_snapshot': False,
#    'disable_volume': False,
#    'disable_volume_snapshot': False,
#}

# The Xen Hypervisor has the ability to set the mount point for volumes
# attached to instances (other Hypervisors currently do not). Setting
# can_set_mount_point to True will add the option to set the mount point
# from the UI.
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': False,
    'requires_keypair': False,
    'enable_quotas': True
}

# The OPENSTACK_CINDER_FEATURES settings can be used to enable optional
# services provided by cinder that is not exposed by its extension API.
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}

# The OPENSTACK_NEUTRON_NETWORK settings can be used to enable optional
# services provided by neutron. Options currently available are load
# balancer service, security groups, quotas, VPN service.
OPENSTACK_NEUTRON_NETWORK = {
#    'enable_router': True,
#    'enable_quotas': True,
#    'enable_ipv6': True,
#    'enable_distributed_router': False,
#    'enable_ha_router': False,
#    'enable_lb': True,
#    'enable_firewall': True,
#    'enable_vpn': True,
#    'enable_fip_topology_check': True,
    'enable_distributed_router': False,
    'enable_firewall': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_quotas': True,
    'enable_security_group': True,
    'enable_vpn': False,
    'profile_support': None,
    # Default dns servers you would like to use when a subnet is
    # created.  This is only a default, users can still choose a different
    # list of dns servers when creating a new subnet.
    # The entries below are examples only, and are not appropriate for
    # real deployments
    # 'default_dns_nameservers': ["8.8.8.8", "8.8.4.4", "208.67.222.222"],

    # The profile_support option is used to detect if an external router can be
    # configured via the dashboard. When using specific plugins the
    # profile_support can be turned on if needed.
    ##'profile_support': None,
    #'profile_support': 'cisco',

    # Set which provider network types are supported. Only the network types
    # in this list will be available to choose from when creating a network.
    # Network types include local, flat, vlan, gre, vxlan and geneve.
    # 'supported_provider_types': ['*'],

    # You can configure available segmentation ID range per network type
    # in your deployment.
    # 'segmentation_id_range': {
    #     'vlan': [1024, 2048],
    #     'vxlan': [4094, 65536],
    # },

    # You can define additional provider network types here.
    # 'extra_provider_types': {
    #     'awesome_type': {
    #         'display_name': 'Awesome New Type',
    #         'require_physical_network': False,
    #         'require_segmentation_id': True,
    #     }
    # },

    # Set which VNIC types are supported for port binding. Only the VNIC
    # types in this list will be available to choose from when creating a
    # port.
    # VNIC types include 'normal', 'macvtap' and 'direct'.
    # Set to empty list or None to disable VNIC type selection.
    #'supported_vnic_types': ['*'],
}

# The OPENSTACK_HEAT_STACK settings can be used to disable password
# field required while launching the stack.
#OPENSTACK_HEAT_STACK = {
#    'enable_user_pass': True,
#}

# The OPENSTACK_IMAGE_BACKEND settings can be used to customize features
# in the OpenStack Dashboard related to the Image service, such as the list
# of supported image formats.
#OPENSTACK_IMAGE_BACKEND = {
#    'image_formats': [
#        ('', _('Select format')),
#        ('aki', _('AKI - Amazon Kernel Image')),
#        ('ami', _('AMI - Amazon Machine Image')),
#        ('ari', _('ARI - Amazon Ramdisk Image')),
#        ('docker', _('Docker')),
#        ('iso', _('ISO - Optical Disk Image')),
#        ('ova', _('OVA - Open Virtual Appliance')),
#        ('qcow2', _('QCOW2 - QEMU Emulator')),
#        ('raw', _('Raw')),
#        ('vdi', _('VDI - Virtual Disk Image')),
#        ('vhd', _('VHD - Virtual Hard Disk')),
#        ('vmdk', _('VMDK - Virtual Machine Disk')),
#    ],
#}

# The IMAGE_CUSTOM_PROPERTY_TITLES settings is used to customize the titles for
# image custom property attributes that appear on image detail pages.
IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "kernel_id": _("Kernel ID"),
    "ramdisk_id": _("Ramdisk ID"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}

# The IMAGE_RESERVED_CUSTOM_PROPERTIES setting is used to specify which image
# custom properties should not be displayed in the Image Custom Properties
# table.
IMAGE_RESERVED_CUSTOM_PROPERTIES = []

# Set to 'legacy' or 'direct' to allow users to upload images to glance via
# Horizon server. When enabled, a file form field will appear on the create
# image form. If set to 'off', there will be no file form field on the create
# image form. See documentation for deployment considerations.
#HORIZON_IMAGES_UPLOAD_MODE = 'legacy'

# Allow a location to be set when creating or updating Glance images.
# If using Glance V2, this value should be False unless the Glance
# configuration and policies allow setting locations.
#IMAGES_ALLOW_LOCATION = False

# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'publicURL'.
#OPENSTACK_ENDPOINT_TYPE = "publicURL"

# SECONDARY_ENDPOINT_TYPE specifies the fallback endpoint type to use in the
# case that OPENSTACK_ENDPOINT_TYPE is not present in the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is None. This
# value should differ from OPENSTACK_ENDPOINT_TYPE if used.
#SECONDARY_ENDPOINT_TYPE = None

# The number of objects (Swift containers/objects or images) to display
# on a single page before providing a paging element (a "more" link)
# to paginate results.
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20

# The size of chunk in bytes for downloading objects from Swift
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024

# The default number of lines displayed for instance console log.
INSTANCE_LOG_LENGTH = 35

# Specify a maximum number of items to display in a dropdown.
DROPDOWN_MAX_ITEMS = 30

# The timezone of the server. This should correspond with the timezone
# of your entire OpenStack installation, and hopefully be in UTC.
TIME_ZONE = "Asia/Shanghai"

# When launching an instance, the menu of available flavors is
# sorted by RAM usage, ascending. If you would like a different sort order,
# you can provide another flavor attribute as sorting key. Alternatively, you
# can provide a custom callback method to use for sorting. You can also provide
# a flag for reverse sort. For more info, see
# http://docs.python.org/2/library/functions.html#sorted
#CREATE_INSTANCE_FLAVOR_SORT = {
#    'key': 'name',
#     # or
#    'key': my_awesome_callback_method,
#    'reverse': False,
#}

# Set this to True to display an 'Admin Password' field on the Change Password
# form to verify that it is indeed the admin logged-in who wants to change
# the password.
#ENFORCE_PASSWORD_CHECK = False

# Modules that provide /auth routes that can be used to handle different types
# of user authentication. Add auth plugins that require extra route handling to
# this list.
#AUTHENTICATION_URLS = [
#    'openstack_auth.urls',
#]

# The Horizon Policy Enforcement engine uses these values to load per service
# policy rule files. The content of these files should match the files the
# OpenStack services are using to determine role based access control in the
# target installation.

# Path to directory containing policy.json files
POLICY_FILES_PATH = '/etc/openstack-dashboard'

# Map of local copy of service policy files.
# Please insure that your identity policy file matches the one being used on
# your keystone servers. There is an alternate policy file that may be used
# in the Keystone v3 multi-domain case, policy.v3cloudsample.json.
# This file is not included in the Horizon repository by default but can be
# found at
# http://git.openstack.org/cgit/openstack/keystone/tree/etc/ \
# policy.v3cloudsample.json
# Having matching policy files on the Horizon and Keystone servers is essential
# for normal operation. This holds true for all services and their policy files.
#POLICY_FILES = {
#    'identity': 'keystone_policy.json',
#    'compute': 'nova_policy.json',
#    'volume': 'cinder_policy.json',
#    'image': 'glance_policy.json',
#    'orchestration': 'heat_policy.json',
#    'network': 'neutron_policy.json',
#    'telemetry': 'ceilometer_policy.json',
#}

# TODO: (david-lyle) remove when plugins support adding settings.
# Note: Only used when trove-dashboard plugin is configured to be used by
# Horizon.
# Trove user and database extension support. By default support for
# creating users and databases on database instances is turned on.
# To disable these extensions set the permission here to something
# unusable such as ["!"].
#TROVE_ADD_USER_PERMS = []
#TROVE_ADD_DATABASE_PERMS = []

# Change this patch to the appropriate list of tuples containing
# a key, label and static directory containing two files:
# _variables.scss and _styles.scss
#AVAILABLE_THEMES = [
#    ('default', 'Default', 'themes/default'),
#    ('material', 'Material', 'themes/material'),
#]

LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'formatters': {
        'operation': {
            # The format of "%(message)s" is defined by
            # OPERATION_LOG_OPTIONS['format']
            'format': '%(asctime)s %(message)s'
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
        'operation': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'operation',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'horizon.operation_log': {
            'handlers': ['operation'],
            'level': 'INFO',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'heatclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'ceilometerclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}

# 'direction' should not be specified for all_tcp/udp/icmp.
# It is specified in the form.
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}

# Deprecation Notice:
#
# The setting FLAVOR_EXTRA_KEYS has been deprecated.
# Please load extra spec metadata into the Glance Metadata Definition Catalog.
#
# The sample quota definitions can be found in:
# <glance_source>/etc/metadefs/compute-quota.json
#
# The metadata definition catalog supports CLI and API:
#  $glance --os-image-api-version 2 help md-namespace-import
#  $glance-manage db_load_metadefs <directory_with_definition_files>
#
# See Metadata Definitions on: http://docs.openstack.org/developer/glance/

# TODO: (david-lyle) remove when plugins support settings natively
# Note: This is only used when the Sahara plugin is configured and enabled
# for use in Horizon.
# Indicate to the Sahara data processing service whether or not
# automatic floating IP allocation is in effect.  If it is not
# in effect, the user will be prompted to choose a floating IP
# pool for use in their cluster.  False by default.  You would want
# to set this to True if you were running Nova Networking with
# auto_assign_floating_ip = True.
#SAHARA_AUTO_IP_ALLOCATION_ENABLED = False

# The hash algorithm to use for authentication tokens. This must
# match the hash algorithm that the identity server and the
# auth_token middleware are using. Allowed values are the
# algorithms supported by Python's hashlib library.
#OPENSTACK_TOKEN_HASH_ALGORITHM = 'md5'

# AngularJS requires some settings to be made available to
# the client side. Some settings are required by in-tree / built-in horizon
# features. These settings must be added to REST_API_REQUIRED_SETTINGS in the
# form of ['SETTING_1','SETTING_2'], etc.
#
# You may remove settings from this list for security purposes, but do so at
# the risk of breaking a built-in horizon feature. These settings are required
# for horizon to function properly. Only remove them if you know what you
# are doing. These settings may in the future be moved to be defined within
# the enabled panel configuration.
# You should not add settings to this list for out of tree extensions.
# See: https://wiki.openstack.org/wiki/Horizon/RESTAPI
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
                              'LAUNCH_INSTANCE_DEFAULTS',
                              'OPENSTACK_IMAGE_FORMATS']

# Additional settings can be made available to the client side for
# extensibility by specifying them in REST_API_ADDITIONAL_SETTINGS
# !! Please use extreme caution as the settings are transferred via HTTP/S
# and are not encrypted on the browser. This is an experimental API and
# may be deprecated in the future without notice.
#REST_API_ADDITIONAL_SETTINGS = []

# DISALLOW_IFRAME_EMBED can be used to prevent Horizon from being embedded
# within an iframe. Legacy browsers are still vulnerable to a Cross-Frame
# Scripting (XFS) vulnerability, so this option allows extra security hardening
# where iframes are not used in deployment. Default setting is True.
# For more information see:
# http://tinyurl.com/anticlickjack
#DISALLOW_IFRAME_EMBED = True

# Help URL can be made available for the client. To provide a help URL, edit the
# following attribute to the URL of your choice.
#HORIZON_CONFIG["help_url"] = "http://openstack.mycompany.org"

# Settings for OperationLogMiddleware
# OPERATION_LOG_ENABLED is flag to use the function to log an operation on
# Horizon.
# mask_targets is arrangement for appointing a target to mask.
# method_targets is arrangement of HTTP method to output log.
# format is the log contents.
#OPERATION_LOG_ENABLED = False
#OPERATION_LOG_OPTIONS = {
#    'mask_fields': ['password'],
#    'target_methods': ['POST'],
#    'format': ("[%(domain_name)s] [%(domain_id)s] [%(project_name)s]"
#        " [%(project_id)s] [%(user_name)s] [%(user_id)s] [%(request_scheme)s]"
#        " [%(referer_url)s] [%(request_url)s] [%(message)s] [%(method)s]"
#        " [%(http_status)s] [%(param)s]"),
#}

# The default date range in the Overview panel meters - either <today> minus N
# days (if the value is integer N), or from the beginning of the current month
# until today (if set to None). This setting should be used to limit the amount
# of data fetched by default when rendering the Overview panel.
#OVERVIEW_DAYS_RANGE = 1

# To allow operators to require users provide a search criteria first
# before loading any data into the views, set the following dict
# attributes to True in each one of the panels you want to enable this feature.
# Follow the convention <dashboard>.<view>
#FILTER_DATA_FIRST = {
#    'admin.instances': False,
#    'admin.images': False,
#    'admin.networks': False,
#    'admin.routers': False,
#    'admin.volumes': False
#}

# Dict used to restrict user private subnet cidr range.
# An empty list means that user input will not be restricted
# for a corresponding IP version. By default, there is
# no restriction for IPv4 or IPv6. To restrict
# user private subnet cidr range set ALLOWED_PRIVATE_SUBNET_CIDR
# to something like
#ALLOWED_PRIVATE_SUBNET_CIDR = {
#    'ipv4': ['10.0.0.0/8', '192.168.0.0/16'],
#    'ipv6': ['fc00::/7']
#}
ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []}

# Project and user can have any attributes by keystone v3 mechanism.
# This settings can treat these attributes on Horizon.
# It means, when you show Create/Update modal, attribute below is
# shown and you can specify any value.
# If you'd like to display these extra data in project or user index table,
# Keystone v3 allows you to add extra properties to Project and Users.
# Horizon's customization (http://docs.openstack.org/developer/horizon/topics/customizing.html#horizon-customization-module-overrides)
# allows you to display this extra information in the Create/Update modal and
# the corresponding tables.
#PROJECT_TABLE_EXTRA_INFO = {
#   'phone_num': _('Phone Number'),
#}
#USER_TABLE_EXTRA_INFO = {
#   'phone_num': _('Phone Number'),
#}

  1. On controller, start the dashboard services and enable them at boot
systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service
  1. Log in to the dashboard
http://192.168.0.40/dashboard    (user: admin, password: cnnhv2018)
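Before logging in from a browser, the dashboard can be smoke-tested with curl. This is a sketch of my own (a 200, or a 302 redirect to the login page, both count as healthy):

```shell
# Return success only for HTTP status codes we consider healthy.
is_ok_status() {
    case "$1" in
        200|302) return 0 ;;   # OK, or redirect to the login page
        *)       return 1 ;;
    esac
}

# Probe the dashboard URL (assumes curl and network reachability).
check_dashboard() {
    code=$(curl -s -o /dev/null -w '%{http_code}' "$1")
    is_ok_status "$code" && echo "dashboard up ($code)" || echo "dashboard unreachable ($code)"
}
# check_dashboard http://192.168.0.40/dashboard/
```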

11. Compute-Node Nova and Neutron Deployment

Deploy the Nova compute service on the compute node.

  1. On computer, install the Nova-related packages
yum -y install openstack-selinux python-openstackclient openstack-nova-compute openstack-utils  

# Check the qemu-img version (too old a version makes nova-compute report down)

qemu-img --version
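The version check can be automated by comparing dotted version strings with GNU `sort -V`. In the usage sketch below, 2.5.0 is an assumed minimum for illustration, not a documented Pike requirement; use whatever your distro's nova-compute needs:

```shell
# True if version $1 >= version $2 (dotted version strings, GNU sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}
# On the compute node (2.5.0 is an assumed minimum, adjust as needed):
#   have=$(qemu-img --version | awk '{print $3}')
#   version_ge "$have" 2.5.0 || echo "qemu-img $have is too old for nova-compute"
```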
  1. On computer, edit the nova.conf configuration file (note: use virt_type = qemu when the compute node is itself a VM, virt_type = kvm on physical hardware)
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
>/etc/nova/nova.conf

echo '
[DEFAULT]
auth_strategy = keystone
my_ip = 192.168.0.41
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:cnnhv2018@controller

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = cnnhv2018

[placement]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = cnnhv2018
os_region_name = RegionOne

[vnc]
enabled = True
keymap = en-us
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.0.41
novncproxy_base_url = http://192.168.0.40:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[libvirt]
virt_type = qemu

[cinder]
os_region_name = RegionOne
'>>/etc/nova/nova.conf
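The virt_type note above boils down to one question: does the host expose hardware virtualization (Intel VT-x or AMD-V)? On a real host you would inspect the CPU flags in /proc/cpuinfo; the tiny helper below (hypothetical, not part of the original article) captures the decision rule so it can be tested in isolation:

```shell
# pick_virt_type: echoes "kvm" when the given CPU flag list contains
# vmx (Intel VT-x) or svm (AMD-V), otherwise "qemu".
pick_virt_type() {
  case " $1 " in
    *" vmx "*|*" svm "*) echo kvm ;;
    *) echo qemu ;;
  esac
}

# On a live host you would feed it the real flags, e.g.:
#   pick_virt_type "$(grep -m1 '^flags' /proc/cpuinfo)"
pick_virt_type "fpu vme vmx sse2"   # -> kvm
pick_virt_type "fpu vme sse2"       # -> qemu
```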
  1. On computer, enable libvirtd.service and openstack-nova-compute.service at boot and start them
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
  1. On controller, discover the compute node
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
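Instead of rerunning discover_hosts by hand every time a compute node is added, the Pike scheduler can discover hosts periodically. A sketch of the fragment to append to the controller's /etc/nova/nova.conf (followed by a restart of openstack-nova-scheduler.service); the 300-second interval is an arbitrary choice, not from the original article:

```ini
[scheduler]
# discover new cell hosts every 5 minutes
discover_hosts_in_cells_interval = 300
```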
  1. On controller, check the service status
source admin-openstack.sh
nova-status upgrade check
openstack compute service list

[root@controller ~]# nova-status upgrade check

+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2018-07-21T18:58:18.000000 |
|  2 | nova-scheduler   | controller | internal | enabled | up    | 2018-07-21T18:58:19.000000 |
|  3 | nova-conductor   | controller | internal | enabled | up    | 2018-07-21T18:58:19.000000 |
|  6 | nova-compute     | computer   | nova     | enabled | up    | 2018-07-21T18:58:11.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
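When a service shows State "down" (commonly caused by clock skew or a stopped agent), it helps to extract just the failing rows from the table above. A small sketch (hypothetical helper, assuming the default tabular output format shown):

```shell
# down_services: print "binary@host" for every row whose State column
# reads "down" in an `openstack compute service list` style table.
# Columns split on '|': $3 = Binary, $4 = Host, $7 = State.
down_services() {
  awk -F'|' '$7 ~ /down/ { gsub(/ /, "", $3); gsub(/ /, "", $4); print $3"@"$4 }'
}

# Demo against a canned table; live usage would be:
#   openstack compute service list | down_services
printf '%s\n' \
  '|  2 | nova-scheduler   | controller | internal | enabled | up   | t |' \
  '|  6 | nova-compute     | computer   | nova     | enabled | down | t |' \
  | down_services
# -> nova-compute@computer
```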

12. Cinder block storage service

Deploying the Cinder volume service on the block storage node

  1. On controller, create the database and grant privileges to the cinder user
mysql -uroot -pcnnhv2018
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cnnhv2018';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cnnhv2018';
flush privileges; 
exit
  1. On controller, create the cinder user and grant it the admin role
source /root/admin-openstack.sh 
openstack user create --domain default cinder --password cnnhv2018
openstack role add --project service --user cinder admin
  1. On controller, create the volume services
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
  1. On controller, create the endpoints
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
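A note on the URLs above: the backslashes only keep the shell from treating the parentheses specially. What Keystone stores is the literal template `http://controller:8776/v2/%(project_id)s`, and the client substitutes the project ID per request. A sketch of that substitution (the project ID below is made up):

```shell
# Simulate the %(project_id)s substitution Keystone performs at request
# time; "d0a9f5a6c3e44b1f" is a hypothetical project ID.
url='http://controller:8776/v2/%(project_id)s'
echo "$url" | sed 's/%(project_id)s/d0a9f5a6c3e44b1f/'
# -> http://controller:8776/v2/d0a9f5a6c3e44b1f
```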
  1. On controller, install the Cinder components
yum  install openstack-cinder -y
  1. On controller, edit the cinder.conf file
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
>/etc/cinder/cinder.conf
echo '
[DEFAULT]
transport_url = rabbit://openstack:cnnhv2018@controller
my_ip = 192.168.0.40
auth_strategy = keystone

[database]
connection = mysql+pymysql://cinder:cnnhv2018@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cnnhv2018

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
'>>/etc/cinder/cinder.conf
  1. On controller, sync the database
su -s /bin/sh -c "cinder-manage db sync" cinder
  1. On controller, start the services and enable them at boot
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
  1. On controller, verify that the Cinder services are up (only cinder-scheduler appears at this point; cinder-volume shows up once the block storage node below is deployed)
source /root/admin-openstack.sh 

cinder service-list

[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up    | 2018-07-21T20:22:46.000000 | -               |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

Installing Cinder on the block storage node

  1. On cinder, install the Cinder packages
yum  install openstack-cinder openstack-utils python-keystone scsi-target-utils targetcli  -y
  1. On cinder, edit the cinder.conf file
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
>/etc/cinder/cinder.conf
echo '
[DEFAULT]
auth_strategy = keystone
my_ip = 192.168.0.42
enabled_backends = ceph
glance_api_servers = http://controller:9292
glance_api_version = 2
enable_v1_api = True
enable_v2_api = True
enable_v3_api = True
storage_availability_zone = nova
default_availability_zone = nova
os_region_name = RegionOne
api_paste_config = /etc/cinder/api-paste.ini
transport_url = rabbit://openstack:cnnhv2018@controller

[database]
connection = mysql+pymysql://cinder:cnnhv2018@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cnnhv2018

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
'>>/etc/cinder/cinder.conf
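Note that `enabled_backends = ceph` above refers to a `[ceph]` backend section that this file does not define yet, so cinder-volume will stay down until one exists; the real values come out of the Ceph deployment in section 15. For orientation only, a typical RBD backend fragment looks like this (all values below are placeholders, not from the original article):

```ini
[ceph]
# Hypothetical RBD backend section matching enabled_backends = ceph;
# replace the values with those produced by the Ceph setup in section 15.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>
```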
  1. On cinder, start openstack-cinder-volume and target and enable them at boot
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service

13. Creating VXLAN networks, flavors, and routers

14. Cinder block storage service

15. Using Ceph as the backend storage
