Deploying a Three-Node OpenStack Ocata Lab Environment

I. Preparing the Test Environment

  1. Host nodes and network planning
    The physical host is a tower server with 40 CPU cores, 64 GB of RAM, an 800 GB SSD, and a 4 TB HDD.
    Operating system: Windows 7 x64
    Virtualization software: VMware Workstation 11
  2. System preparation
    Perform a minimal install of CentOS 7.2 (CentOS-7-x86_64-Minimal-1511.iso) on each node, then disable the firewall and SELinux.

    Disable the firewall:

    systemctl stop firewalld.service
    systemctl disable firewalld.service

    Disable SELinux:

    setenforce 0
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
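
    A quick way to confirm SELinux is no longer enforcing (setenforce 0 leaves it Permissive; after a reboot it reports Disabled):

    getenforce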
  3. Set the hostname on each of the three nodes
    hostnamectl set-hostname controller1
    hostnamectl set-hostname compute1
    hostnamectl set-hostname cinder

    Then add the following entries to /etc/hosts on every node:

    10.1.1.120 controller1
    10.1.1.121 compute1
    10.1.1.122 cinder
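
    A simple sanity check that name resolution and connectivity work between the nodes:

    ping -c 2 controller1
    ping -c 2 compute1
    ping -c 2 cinder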

  4. Install the required packages
    yum -y install ntp vim-enhanced wget bash-completion net-tools

  5. Synchronize the system time with NTP
    ntpdate cn.pool.ntp.org
    Edit /etc/ntp.conf to keep the time synchronized automatically:
    vim /etc/ntp.conf
    server cn.pool.ntp.org iburst
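
    After saving, start ntpd and confirm the upstream servers are reachable (ntpq -p should list the cn.pool.ntp.org peers):

    systemctl start ntpd
    systemctl enable ntpd
    ntpq -p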
  6. Set up a local OpenStack yum repository
    See my other post, "Building a Local yum Repository for OpenStack Ocata".

II. Installing MariaDB

  1. Install MariaDB
    yum install -y mariadb-server mariadb-client
  2. Configure MariaDB
    vim /etc/my.cnf.d/mariadb-openstack.cnf

    Add the following:

    [mysqld]
    default-storage-engine = innodb
    innodb_file_per_table
    collation-server = utf8_general_ci
    init-connect = 'SET NAMES utf8'
    character-set-server = utf8
    bind-address = 10.1.1.120

  3. Start the database and enable MariaDB at boot

    systemctl start mariadb.service
    systemctl enable mariadb.service
    systemctl status mariadb.service
    systemctl list-unit-files | grep mariadb.service
  4. Initialize MariaDB and set the root password
    mysql_secure_installation

III. Installing RabbitMQ

  1. Install RabbitMQ
    yum install -y erlang rabbitmq-server
  2. Start RabbitMQ and enable it at boot
    systemctl start rabbitmq-server.service
    systemctl enable rabbitmq-server.service
    systemctl status rabbitmq-server.service
    systemctl list-unit-files | grep rabbitmq-server.service
  3. Create the MQ user openstack
    rabbitmqctl add_user openstack xuml26
  4. Grant permissions to the openstack user
    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    rabbitmqctl set_user_tags openstack administrator
    rabbitmqctl list_users
  5. Verify RabbitMQ is listening on port 5672
    netstat -ntlp |grep 5672
  6. List the available RabbitMQ plugins
    /usr/lib/rabbitmq/bin/rabbitmq-plugins list
  7. Enable the relevant RabbitMQ plugins
    /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management mochiweb webmachine rabbitmq_web_dispatch amqp_client rabbitmq_management_agent
    After enabling the plugins, restart the rabbitmq service:
    systemctl restart rabbitmq-server

    Open http://123.45.67.120:15672 in a browser. The default credentials are guest/guest; openstack/xuml26 also works here since that user was tagged administrator.
    This web UI gives a clear, at-a-glance view of RabbitMQ's health and load.
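
    A browser-free sanity check, assuming the management plugin is up (its overview endpoint returns a JSON summary):

    curl -u openstack:xuml26 http://controller1:15672/api/overview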

IV. Installing and Configuring Keystone
1. Create the keystone database

mysql -uroot -p
CREATE DATABASE keystone;

2. Create the keystone database user and grant privileges

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'xuml26';
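
A quick check that the grants work (controller1 resolves to the bind address configured earlier):

mysql -ukeystone -pxuml26 -h controller1 -e "SHOW DATABASES;"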

3. Install Keystone and memcached
yum -y install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached openstack-utils
4. Start memcached and enable it at boot

systemctl start memcached.service
systemctl enable memcached.service
systemctl status memcached.service
systemctl list-unit-files | grep memcached.service
netstat -ntlp | grep 11211

5. Configure /etc/keystone/keystone.conf

cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
>/etc/keystone/keystone.conf
openstack-config --set /etc/keystone/keystone.conf DEFAULT transport_url rabbit://openstack:xuml26@controller1
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:xuml26@controller1/keystone
openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/keystone/keystone.conf cache enabled true
openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller1:11211
openstack-config --set /etc/keystone/keystone.conf memcache servers controller1:11211
openstack-config --set /etc/keystone/keystone.conf token expiration 3600
openstack-config --set /etc/keystone/keystone.conf token provider fernet

6. Configure httpd.conf

sed -i "s/#ServerName www.example.com:80/ServerName controller1/" /etc/httpd/conf/httpd.conf`

Configure memcached to listen on the controller's addresses:

sed -i 's/OPTIONS*.*/OPTIONS="-l 127.0.0.1,::1,10.1.1.120"/' /etc/sysconfig/memcached

7. Link Keystone into httpd

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

8. Sync the database

su -s /bin/sh -c "keystone-manage db_sync" keystone

9. Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
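
Both commands populate key repositories under /etc/keystone; a quick look confirms the keys were created:

ls /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/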

10. Start httpd and enable it at boot

systemctl start httpd.service
systemctl enable httpd.service
systemctl status httpd.service

11. Bootstrap the admin user and role

keystone-manage bootstrap \
--bootstrap-password xuml26 \
--bootstrap-username admin \
--bootstrap-project-name admin \
--bootstrap-role-name admin \
--bootstrap-service-name keystone \
--bootstrap-region-id RegionOne \
--bootstrap-admin-url http://controller1:35357/v3 \
--bootstrap-internal-url http://controller1:35357/v3 \
--bootstrap-public-url http://controller1:5000/v3

Verify:
openstack project list --os-username admin --os-project-name admin --os-user-domain-id default --os-project-domain-id default --os-identity-api-version 3 --os-auth-url http://controller1:5000 --os-password xuml26

  12. Create an environment file for the admin user:
    vim /root/admin-openrc

    Add the following:

    export OS_USER_DOMAIN_ID=default
    export OS_PROJECT_DOMAIN_ID=default
    export OS_USERNAME=admin
    export OS_PROJECT_NAME=admin
    export OS_PASSWORD=xuml26
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    export OS_AUTH_URL=http://controller1:35357/v3

    13. Create the service project

    source /root/admin-openrc
    openstack project create --domain default --description "Service Project" service

    14. Create the demo project
    openstack project create --domain default --description "Demo Project" demo
    15. Create the demo user

    openstack user create --domain default demo --password xuml26

    16. Create the user role and assign it to the demo user

    openstack role create user
    openstack role add --project demo --user demo user

    17. Verify Keystone

    unset OS_TOKEN OS_URL
    openstack --os-auth-url http://controller1:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue --os-password xuml26
    openstack --os-auth-url http://controller1:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue --os-password xuml26

V. Installing and Configuring Glance
1. Create the glance database

CREATE DATABASE glance;

2. Create the database user and grant privileges

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'xuml26';

3. Create the glance user and grant it the admin role

source /root/admin-openrc
openstack user create --domain default glance --password xuml26
openstack role add --project service --user glance admin

4. Create the image service

openstack service create --name glance --description "OpenStack Image service" image

5. Create the glance endpoints

openstack endpoint create --region RegionOne image public http://controller1:9292
openstack endpoint create --region RegionOne image internal http://controller1:9292
openstack endpoint create --region RegionOne image admin http://controller1:9292
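
To spot-check that all three endpoints registered, filter the endpoint list by service:

openstack endpoint list --service image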

6. Install the glance packages

yum -y install openstack-glance

7. Edit the glance configuration file /etc/glance/glance-api.conf

cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
>/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:xuml26@controller1
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:xuml26@controller1/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password xuml26
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

8. Edit the glance configuration file /etc/glance/glance-registry.conf:

cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
>/etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:xuml26@controller1
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:xuml26@controller1/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password xuml26
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

9. Sync the glance database

su -s /bin/sh -c "glance-manage db_sync" glance

10. Start glance and enable it at boot

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service
systemctl list-unit-files | grep openstack-glance-api.service
systemctl list-unit-files | grep openstack-glance-registry.service

11. Download a test image

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

12. Upload the image to glance

source /root/admin-openrc
glance image-create --name "cirros-0.3.4-x86_64" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress

List the images:

glance image-list

VI. Installing and Configuring Nova
1. Create the nova databases

CREATE DATABASE nova;
CREATE DATABASE nova_api;
CREATE DATABASE nova_cell0;

2. Create the database users and grant privileges

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller1' IDENTIFIED BY 'xuml26';
flush privileges;

3. Create the nova user and grant it the admin role

source /root/admin-openrc
openstack user create --domain default nova --password xuml26
openstack role add --project service --user nova admin

4. Create the compute service

openstack service create --name nova --description "OpenStack Compute" compute

5. Create the nova endpoints

openstack endpoint create --region RegionOne compute public http://controller1:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller1:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller1:8774/v2.1/%\(tenant_id\)s

6. Install the nova packages

yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-cert openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler

7. Edit the nova configuration file /etc/nova/nova.conf

cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
>/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.120
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:xuml26@controller1
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:xuml26@controller1/nova
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:xuml26@controller1/nova_api
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval -1
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password xuml26
openstack-config --set /etc/nova/nova.conf keystone_authtoken service_token_roles_required True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 10.1.1.120
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.120
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller1:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

8. Set up cells
Sync the nova databases:

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

Map cell0 to the nova_cell0 database created above:

nova-manage cell_v2 map_cell0 --database_connection mysql+pymysql://root:xuml26@controller1/nova_cell0

Create a regular cell named cell1; the compute nodes will be registered in this cell:

nova-manage cell_v2 create_cell --verbose --name cell1 --database_connection mysql+pymysql://root:xuml26@controller1/nova --transport-url rabbit://openstack:xuml26@controller1:5672/

Check the deployment status. Errors are expected at this point, since there are no compute nodes yet and placement has not been deployed:

nova-status upgrade check

Create and map cell0, and map any existing compute hosts and instances into cells:

nova-manage cell_v2 simple_cell_setup

List the cells that now exist:

nova-manage cell_v2 list_cells --verbose

9. Install placement
Starting with Ocata, the placement service must be installed and configured to take part in nova scheduling; without it, instances cannot be created.

yum install -y openstack-nova-placement-api

Create the placement user and service:

openstack user create --domain default placement --password xuml26
openstack role add --project service --user placement admin
openstack service create --name placement --description "OpenStack Placement" placement

Create the placement endpoints:

openstack endpoint create --region RegionOne placement public http://controller1:8778
openstack endpoint create --region RegionOne placement admin http://controller1:8778
openstack endpoint create --region RegionOne placement internal http://controller1:8778

Integrate placement into nova.conf:

openstack-config --set /etc/nova/nova.conf placement auth_url http://controller1:35357
openstack-config --set /etc/nova/nova.conf placement memcached_servers controller1:11211
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement project_domain_name default
openstack-config --set /etc/nova/nova.conf placement user_domain_name default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password xuml26
openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne

Edit the 00-nova-placement-api.conf file. If this step is skipped, instance creation will fail with access-forbidden errors from the placement API:

cd /etc/httpd/conf.d/
cp 00-nova-placement-api.conf 00-nova-placement-api.conf.bak
>00-nova-placement-api.conf
vim 00-nova-placement-api.conf

Add the following:

Listen 8778
<VirtualHost *:8778>
WSGIProcessGroup nova-placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
WSGIScriptAlias / /usr/bin/nova-placement-api
<Directory "/">
Order allow,deny
Allow from all
Require all granted
</Directory>
<IfVersion >= 2.4>
ErrorLogFormat "%M"
</IfVersion>
ErrorLog /var/log/nova/nova-placement-api.log
</VirtualHost>
Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
SetHandler wsgi-script
Options +ExecCGI
WSGIProcessGroup nova-placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>

Restart httpd:

systemctl restart httpd
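
The placement API should now answer on port 8778; its root URL returns a small JSON version document (a quick smoke test):

curl http://controller1:8778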

Check whether the configuration succeeded:

nova-status upgrade check

10. Start the nova services:

systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Enable the nova services at boot:
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Check the nova services:
systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl list-unit-files |grep openstack-nova-*

11. Verify the nova services

unset OS_TOKEN OS_URL
source /root/admin-openrc
nova service-list

List the endpoints and check that the output looks correct:

openstack endpoint list

VII. Installing and Configuring Neutron
1. Create the neutron database

CREATE DATABASE neutron;

2. Create the database user and grant privileges

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'xuml26';

3. Create the neutron user and grant it the admin role

source /root/admin-openrc
openstack user create --domain default neutron --password xuml26
openstack role add --project service --user neutron admin

4. Create the network service

openstack service create --name neutron --description "OpenStack Networking" network

5. Create the endpoints

openstack endpoint create --region RegionOne network public http://controller1:9696
openstack endpoint create --region RegionOne network internal http://controller1:9696
openstack endpoint create --region RegionOne network admin http://controller1:9696

6. Install the neutron packages

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

7. Edit the neutron configuration file /etc/neutron/neutron.conf

cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
>/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:xuml26@controller1
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password xuml26
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:xuml26@controller1/neutron
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller1:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password xuml26
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

8. Configure /etc/neutron/plugins/ml2/ml2_conf.ini

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 path_mtu 1500
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
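
openstack-config can also read values back, which is handy for spot-checking what was written (a quick check):

openstack-config --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers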

9. Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini DEFAULT debug false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eno16777736
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.120
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Note that eno16777736 must be the externally connected NIC; if the provider mapping points at a NIC without external connectivity, the VMs will be isolated from the outside world.
10. Configure /etc/neutron/l3_agent.ini

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT debug false

11. Configure /etc/neutron/dhcp_agent.ini

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT debug false

12. Update /etc/nova/nova.conf again; this step wires nova up to the neutron network service:

openstack-config --set /etc/nova/nova.conf neutron url http://controller1:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller1:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password xuml26
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret xuml26

13. Write dhcp-option-force=26,1450 into /etc/neutron/dnsmasq-neutron.conf (DHCP option 26 sets the instance MTU; 1450 leaves headroom for the ~50-byte VXLAN encapsulation overhead)

echo "dhcp-option-force=26,1450" >/etc/neutron/dnsmasq-neutron.conf

14. Configure /etc/neutron/metadata_agent.ini

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller1
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret xuml26
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_workers 4
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT debug false
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_protocol http

15. Create the plugin symlink

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

16. Sync the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

17. Restart the nova API service, since nova.conf was just changed

systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service

18. Restart the neutron services and enable them at boot

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

19. Start neutron-l3-agent.service and enable it at boot

systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service
systemctl status neutron-l3-agent.service

20. Verify the network service

source /root/admin-openrc
neutron ext-list
neutron agent-list

In the agent list, check that every agent's alive column shows a smiley (:-)).

VIII. Installing the Dashboard
1. Install the dashboard packages

yum install openstack-dashboard -y

2. Edit the configuration file /etc/openstack-dashboard/local_settings

vim /etc/openstack-dashboard/local_settings

Change the value of OPENSTACK_HOST (line 171) to controller1
3. Restart httpd and memcached so the dashboard takes effect

systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service

That completes the controller node. Open http://123.45.67.120/dashboard/ in a browser (e.g. Firefox) to reach the OpenStack web UI.

IX. Installing and Configuring Cinder
1. Create the database and user, and grant privileges

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'xuml26';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'xuml26';

2. Create the cinder user and grant it the admin role

source /root/admin-openrc
openstack user create --domain default cinder --password xuml26
openstack role add --project service --user cinder admin

3. Create the volume services

openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

4. Create the endpoints

openstack endpoint create --region RegionOne volume public http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 public http://controller1:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller1:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller1:8776/v2/%\(tenant_id\)s

5. Install the cinder packages

yum -y install openstack-cinder

6. Edit the cinder configuration file

cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
>/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:xuml26@controller1
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.120
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:xuml26@controller1/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password xuml26
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

7. Sync the database

su -s /bin/sh -c "cinder-manage db sync" cinder

8. Start the cinder services on the controller and enable them at boot

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

X. Deploying the Cinder Node
1. Install LVM on the cinder node

yum -y install lvm2

2. Start the service and enable it at boot

systemctl start lvm2-lvmetad.service
systemctl enable lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service

3. Create the LVM physical volume and volume group

fdisk -l
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
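
pvs and vgs should now show /dev/sdb and the new cinder-volumes group (a quick check):

pvs
vgs cinder-volumes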
  4. Edit lvm.conf on the cinder node
    vim /etc/lvm/lvm.conf

    Inside the devices { } section (around line 130), add:
    filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
    Then restart lvm2:

    systemctl restart lvm2-lvmetad.service

    5. Install openstack-cinder and targetcli

    yum -y install openstack-cinder openstack-utils targetcli python-keystone ntpdate

    6. Edit the cinder configuration file

    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
    >/etc/cinder/cinder.conf
    openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.122
    openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
    openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller1:9292
    openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2
    openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v1_api True
    openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v2_api True
    openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v3_api True
    openstack-config --set /etc/cinder/cinder.conf DEFAULT storage_availability_zone nova
    openstack-config --set /etc/cinder/cinder.conf DEFAULT default_availability_zone nova
    openstack-config --set /etc/cinder/cinder.conf DEFAULT os_region_name RegionOne
    openstack-config --set /etc/cinder/cinder.conf DEFAULT api_paste_config /etc/cinder/api-paste.ini
    openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:xuml26@controller1
    openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:xuml26@controller1/cinder
    openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller1:5000
    openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller1:35357
    openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller1:11211
    openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
    openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
    openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
    openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
    openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
    openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password xuml26
    openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
    openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
    openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
    openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
    openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

    7. Start openstack-cinder-volume and target and enable them at boot

    systemctl start openstack-cinder-volume.service target.service
    systemctl enable openstack-cinder-volume.service target.service
    systemctl status openstack-cinder-volume.service target.service

    8. Verify that the cinder services are healthy

    source /root/admin-openrc
    cinder service-list

    Both cinder-scheduler on controller1 and cinder-volume on cinder@lvm should show state "up".

XI. Deploying the Compute Node
Install the required packages:

yum -y install openstack-selinux python-openstackclient yum-plugin-priorities openstack-nova-compute openstack-utils ntpdate
  1. Configure nova.conf
    cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
    >/etc/nova/nova.conf
    openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.121
    openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
    openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
    openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:xuml26@controller1
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller1:5000
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller1:35357
    openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
    openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
    openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
    openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken password xuml26
    openstack-config --set /etc/nova/nova.conf placement auth_uri http://controller1:5000
    openstack-config --set /etc/nova/nova.conf placement auth_url http://controller1:35357
    openstack-config --set /etc/nova/nova.conf placement memcached_servers controller1:11211
    openstack-config --set /etc/nova/nova.conf placement auth_type password
    openstack-config --set /etc/nova/nova.conf placement project_domain_name default
    openstack-config --set /etc/nova/nova.conf placement user_domain_name default
    openstack-config --set /etc/nova/nova.conf placement project_name service
    openstack-config --set /etc/nova/nova.conf placement username placement
    openstack-config --set /etc/nova/nova.conf placement password xuml26
    openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
    openstack-config --set /etc/nova/nova.conf vnc enabled True
    openstack-config --set /etc/nova/nova.conf vnc keymap en-us
    openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
    openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.121
    openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://123.45.67.120:6080/vnc_auto.html
    openstack-config --set /etc/nova/nova.conf glance api_servers http://controller1:9292
    openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
  2. Enable and start libvirtd.service and openstack-nova-compute.service
    systemctl enable libvirtd.service openstack-nova-compute.service
    systemctl restart libvirtd.service openstack-nova-compute.service
    systemctl status libvirtd.service openstack-nova-compute.service
  3. Verify from the controller
    source /root/admin-openrc
    openstack compute service list

XII. Installing Neutron on the Compute Node

  1. Install the required packages
    yum -y install openstack-neutron-linuxbridge ebtables ipset
  2. Configure neutron.conf
    cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
    >/etc/neutron/neutron.conf
    openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
    openstack-config --set /etc/neutron/neutron.conf DEFAULT advertise_mtu True
    openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2
    openstack-config --set /etc/neutron/neutron.conf DEFAULT control_exchange neutron
    openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller1:8774/v2
    openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:xuml26@controller1
    openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller1:5000
    openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller1:35357
    openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211
    openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
    openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
    openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
    openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
    openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
    openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password xuml26
    openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
  3. Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
    openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.121
    openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
    openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
    openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  4. Configure nova.conf
    openstack-config --set /etc/nova/nova.conf neutron url http://controller1:9696
    openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller1:35357
    openstack-config --set /etc/nova/nova.conf neutron auth_type password
    openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
    openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
    openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
    openstack-config --set /etc/nova/nova.conf neutron project_name service
    openstack-config --set /etc/nova/nova.conf neutron username neutron
    openstack-config --set /etc/nova/nova.conf neutron password xuml26
  5. Restart and enable the relevant services
    systemctl restart libvirtd.service openstack-nova-compute.service
    systemctl enable neutron-linuxbridge-agent.service
    systemctl start neutron-linuxbridge-agent.service
    systemctl status libvirtd.service openstack-nova-compute.service neutron-linuxbridge-agent.service

XIII. Connecting the Compute Node to Cinder
1. For the compute node to use cinder, nova.conf must be updated:

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
systemctl restart openstack-nova-compute.service

2. Restart the nova API service on the controller:

systemctl restart openstack-nova-api.service

XIV. Finally, on the controller, run:

nova-manage cell_v2 simple_cell_setup
source /root/admin-openrc
neutron agent-list
nova-manage cell_v2 discover_hosts

Check that the newly added compute1 node shows up:
nova host-list


Reposted from blog.51cto.com/12114052/2418834