OpenStack: four service components and building an OpenStack environment

[Figure: OpenStack virtual machine creation flowchart]

I. OpenStack's four service components and their functions

1. The Keystone identity (authentication) service: some concepts

1) User:

A user of OpenStack.

2) Role:

A user is added to a role, which grants that user its operation permissions.

3) Tenant:

A collection of resources owned by a person, project, or organization. A tenant can hold multiple users, and permissions can be divided among those users to use the tenant's resources.

4) Token:

A token is a credential: after Keystone authenticates a user it returns a token to the browser, allowing login without re-entering credentials for a period of time. This is functionally similar to cookie-based session persistence, but different: a cookie records the browser's login information and cannot assign user access permissions, whereas a token stores the user's authentication information and is tied to the user's permissions.
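As a rough illustration of how these concepts map onto the OpenStack CLI (alice is a hypothetical user; the actual commands for this deployment appear in section III):

~]#openstack user create --domain default --password-prompt alice   #a User
~]#openstack role add --project demo --user alice user   #a Role granted within a Tenant (project)
~]#openstack token issue   #a Token issued by Keystone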

2. The Glance image service: components and functions

1)glance-api

Receives image deletion, upload, and read requests.

2)glance-registry

Interacts with the MySQL database to store and retrieve image metadata. Two tables in the database hold image information: the image table and the image-property table.
image table: stores the image file's format, size, and similar information
image-property table: stores customized image metadata

3)image-store

The interface through which images are saved and read; it is merely an interface.

4) Note

The Glance service does not need a message queue, but it must be configured with Keystone authentication and a database.

3. The Nova compute service: components and functions (one of OpenStack's earliest components)

1)nova-api

Receives and responds to external requests, and forwards received requests to the other service components through the message queue.

2)nova-compute

Creates virtual machines by calling the KVM module through libvirt. Nova is split between control nodes and compute nodes, and the Nova services communicate with one another through the message queue.

3) nova-scheduler

Schedules which physical host a virtual machine will be created on.

4) nova-placement-api

Monitors resource provider inventory and usage, e.g. tracking compute-node resource pool usage and IP allocation, and works with the scheduler to place instances on physical hosts.

5) nova-conductor

Middleware for compute nodes' database access: when nova-compute needs to fetch or update instance information in the database, it does not access the database directly but goes through the conductor. In larger clusters the conductor needs to be scaled out horizontally, but never onto the compute nodes.

6) nova-novncproxy

A VNC proxy used to display the virtual machine's console in a terminal interface.

7) nova-metadata-api

Receives metadata requests from virtual machines.

4. The Neutron networking service: components and features (formerly nova-network, since renamed neutron)

1) Network types: self-service networks and provider networks

Self-service network: users can create their own networks and connect them to external networks through virtual routers; this network type is rarely used.
Provider network: the virtual machine network is bridged onto the physical network and must be in the same network segment as the physical hosts; most deployments choose this type.

2)neutron-server

Exposes the OpenStack networking API to the outside, receives requests, and invokes the plugins.

3)plugin

Processes the requests received by neutron-server, maintains the logical network state, and calls agents to handle the requests.

4)neutron-linuxbridge-agent

Handles plugin requests and ensures the network provider implements the requested network functions.

5) Message queue

neutron-server, the agents, and the plugins communicate with and call one another through the message queue.

6) Network providers

Provide virtual or physical network devices, for example linux-bridge or physical switches that support the Neutron service.
Network, subnet, port, and router information is all stored in the database.

II. Environment preparation (all nodes run CentOS 7.2)

1. controll-node (control node)

1) Network interfaces

            eth0:192.168.1.10/24   
            eth1:192.168.23.100/24   

2) required packages:

            python-openstackclient  #OpenStack client package
            python2-PyMySQL  #MySQL database connector
            mariadb  #MySQL client, used for database connection tests
            python-memcached  #memcached client package
            openstack-keystone  #identity service package
            httpd
            mod_wsgi  #WSGI module for httpd
            openstack-glance  #image service package
            openstack-nova-api  #receives and responds to external requests
            openstack-nova-conductor
            openstack-nova-console
            openstack-nova-novncproxy
            openstack-nova-scheduler
            openstack-nova-placement-api
            openstack-neutron
            openstack-neutron-ml2  #ML2 layer-2 plugin
            openstack-neutron-linuxbridge
            ebtables
            openstack-dashboard

2. compute-node (compute node)

1) Network interfaces

     eth0:192.168.1.10/24   
     eth1:192.168.23.201/24

2) required packages:

     python-openstackclient
     openstack-nova-compute
     openstack-neutron-linuxbridge
     ebtables
     ipset 

3. mysql-node (database node)

1) Network interfaces

    eth0:192.168.1.41/24 

2) required packages:

     python-openstackclient  
     mariadb 
     mariadb-server  
     rabbitmq-server   
     memcached

4. Functions that must be disabled or enabled before the experiment, to avoid otherwise-avoidable failures

1) Disable the firewall.

2) Disable NetworkManager: otherwise bonded and bridged interfaces may fail to take effect.

3) Be sure to disable SELinux: it can cause inexplicable network problems.

4) Enable chrony clock synchronization and keep time synchronized across all nodes, to avoid the control node failing to discover the compute nodes and the experiment erroring out. The commands for all four steps are sketched below.
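A minimal sketch of these four steps on CentOS 7 (run on every node; assumes the distribution defaults for chrony are acceptable):

~]#systemctl stop firewalld && systemctl disable firewalld
~]#systemctl stop NetworkManager && systemctl disable NetworkManager
~]#setenforce 0   #takes effect immediately
~]#sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   #persists after reboot
~]#yum install chrony -y
~]#systemctl enable chronyd.service && systemctl start chronyd.service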

III. The OpenStack build process (the Ocata release)

1. Prepare the OpenStack repository and install base packages

1) Install the OpenStack repository on controll-node and compute-node:

~]#yum install centos-release-openstack-ocata -y 

2) Install the OpenStack client on all nodes:

~]#yum install python-openstackclient -y

3) Install the database-connection packages on controll-node:

Package for connecting to memcached:
~]#yum install python-memcached -y
Package for connecting to the MySQL database:
~]#yum install python2-PyMySQL -y
Install the MySQL client for remote connection tests:
~]#yum install mariadb -y

4) On mysql-node, install the MySQL database, the message queue, and memcached

Install the MySQL database:
     ~]#yum install mariadb mariadb-server -y
Edit the MySQL configuration files:
     ~]#vim /etc/my.cnf.d/openstack.cnf
    [mysqld]
    bind-address = 192.168.1.41
    default-storage-engine = innodb
    innodb_file_per_table = on
    max_connections = 4096
    collation-server = utf8_general_ci
    character-set-server = utf8
    ……
~]#vim /etc/my.cnf
……
[mysqld]
bind-address = 192.168.1.41
……
Start the MySQL service:
~]#systemctl enable mariadb.service && systemctl start mariadb.service
Run the database secure-installation command to remove anonymous users and passwordless login, keeping the database secure:
~]#mysql_secure_installation
Install the RabbitMQ message queue:
~]#yum install rabbitmq-server -y
~]#systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
~]#rabbitmqctl add_user openstack openstack  #add an openstack user
~]#rabbitmqctl set_permissions openstack ".*" ".*" ".*"  #grant configure, write, and read permissions
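An optional check that the RabbitMQ user and permissions were registered:

~]#rabbitmqctl list_users
~]#rabbitmqctl list_permissions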
Install memcached:
~]#yum install memcached -y
~]#vim /etc/sysconfig/memcached
     OPTIONS="-l 127.0.0.1,::1,192.168.1.41"
~]#systemctl enable memcached.service && systemctl start memcached.service

2. Deploying the Keystone identity service

1) On mysql-node, create the keystone database and an authorized keystone user

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
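Connectivity can be verified from controll-node with the mariadb client installed earlier (an optional sanity check using the credentials just granted):

~]#mysql -h 192.168.1.41 -u keystone -pkeystone keystone -e 'SELECT 1;'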

2) On controll-node, install the identity service packages and edit the Keystone configuration file

~]#yum install openstack-keystone httpd mod_wsgi -y
~]#vim /etc/keystone/keystone.conf
    [database]
    # ...
    connection = mysql+pymysql://keystone:[email protected]/keystone  #database connection string
    [token]
    # ...
    provider = fernet

3) On controll-node, run the Keystone initialization commands

~]#su -s /bin/sh -c "keystone-manage db_sync" keystone  #populate the keystone database
~]#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone  #initialize the Fernet key repository
~]#keystone-manage credential_setup --keystone-user keystone --keystone-group keystone  #initialize the credential keys
Bootstrap the Keystone service:
~]#keystone-manage bootstrap --bootstrap-password keystone --bootstrap-admin-url http://192.168.23.100:35357/v3/ --bootstrap-internal-url http://192.168.23.100:5000/v3/ --bootstrap-public-url http://192.168.23.100:5000/v3/ --bootstrap-region-id RegionOne
Edit the httpd configuration file:
~]#vim /etc/httpd/conf/httpd.conf
ServerName 192.168.23.100
Create a symlink for the WSGI configuration:
~]#ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the httpd service:
~]#systemctl enable httpd.service && systemctl start httpd.service
Export the OpenStack admin account, password, and related variables:
~]#export OS_USERNAME=admin
~]#export OS_PASSWORD=keystone
~]#export OS_PROJECT_NAME=admin
~]#export OS_USER_DOMAIN_NAME=Default
~]#export OS_PROJECT_DOMAIN_NAME=Default
~]#export OS_AUTH_URL=http://192.168.23.100:35357/v3
~]#export OS_IDENTITY_API_VERSION=3

4) On controll-node, create projects, users, and roles

Create a service project:
~]#openstack project create --domain default --description "Service Project" service
Create a demo project:
~]#openstack project create --domain default --description "Demo Project" demo
Create a demo user:
~]#openstack user create --domain default --password-prompt demo
Create a user role:
~]#openstack role create user
Add the demo user to the demo project and grant it the user role:
~]#openstack role add --project demo --user demo user
Edit keystone-paste.ini and remove the admin_token_auth option from the sections below, so that the temporary bootstrap account is disabled for security (a non-interactive way to do this is sketched after this step):
~]#vim /etc/keystone/keystone-paste.ini
remove 'admin_token_auth' from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
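One way to make the same edit non-interactively, assuming the stock keystone-paste.ini layout in which admin_token_auth appears only as a space-separated entry on those pipeline lines (verify the file afterwards):

~]#sed -i 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini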
Unset the temporary password and authentication-URL variables:
~]#unset OS_AUTH_URL OS_PASSWORD
Request an authentication token as the admin user (the admin password is prompted for):
~]#openstack --os-auth-url http://192.168.23.100:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Request an authentication token as the demo user (the demo password is prompted for):
~]#openstack --os-auth-url http://192.168.23.100:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue

5) On controll-node, create user authorization (environment) scripts for convenient access to the services later

The admin user's environment script:
~]#vim /data/admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_AUTH_URL=http://192.168.23.100:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
~]#chmod +x /data/admin-openrc
The demo user's environment script:
~]#vim /data/demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.23.100:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
~]#chmod +x /data/demo-openrc
Source the admin script and test access:
~]#. admin-openrc
~]#openstack token issue

3. Deploying the Glance image service

1) On mysql-node, create the glance database and an authorized glance user:

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'  IDENTIFIED BY 'glance';

2) On controll-node, run the Glance service-related commands

Source the admin script to run commands as admin:
~]#. admin-openrc
Create the glance user:
~]#openstack user create --domain default --password-prompt glance
Grant the glance user the admin role:
~]#openstack role add --project service --user glance admin
Create a service named glance of type image:
~]#openstack service create --name glance --description "OpenStack Image" image
Create the endpoints:
~]#openstack endpoint create --region RegionOne image public http://192.168.23.100:9292  #public endpoint
~]#openstack endpoint create --region RegionOne image internal http://192.168.23.100:9292  #internal endpoint
~]#openstack endpoint create --region RegionOne image admin http://192.168.23.100:9292  #admin endpoint
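The registered service and its endpoints can be listed as a quick check:

~]#openstack service list
~]#openstack endpoint list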

3) On controll-node, install the Glance package and edit the Glance configuration files

~]#yum install openstack-glance -y
~]#vim /etc/glance/glance-api.conf
[database]
# ...
connection = mysql+pymysql://glance:[email protected]/glance  #database connection string

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211  #memcached servers
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance  #glance service username
password = glance  #glance service password

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/   #image file storage path

~]#vim /etc/glance/glance-registry.conf
[database]
# ...
connection = mysql+pymysql://glance:[email protected]/glance  #glance database connection string

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
# ...
flavor = keystone
Populate the glance database tables in MySQL:
~]# su -s /bin/sh -c "glance-manage db_sync" glance
Enable and start all of the Glance services:
~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
~]#systemctl start openstack-glance-api.service openstack-glance-registry.service
Source the admin script:
~]#. admin-openrc
Download the CirrOS image file:
~]# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
Upload the image file to the Glance service:
~]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Verify the image was added successfully:
~]#openstack image list

4. Deploying the Nova compute service

1) On mysql-node, create the Nova databases and an authorized nova user

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY 'nova123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY 'nova123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY 'nova123456';

2) On controll-node, run the Nova service-related commands

Source the admin script:
~]#. admin-openrc
Create the nova user:
~]#openstack user create --domain default --password-prompt nova
Grant the nova user the admin role:
~]#openstack role add --project service --user nova admin
Create a service named nova of type compute:
~]#openstack service create --name nova --description "OpenStack Compute" compute
Create the endpoints:
~]#openstack endpoint create --region RegionOne compute public http://192.168.23.100:8774/v2.1  #public endpoint
~]#openstack endpoint create --region RegionOne compute internal http://192.168.23.100:8774/v2.1  #internal endpoint
~]#openstack endpoint create --region RegionOne compute admin http://192.168.23.100:8774/v2.1  #admin endpoint
Create the placement user:
~]#openstack user create --domain default --password-prompt placement
Grant the placement user the admin role:
~]#openstack role add --project service --user placement admin
Create a service named placement of type placement:
~]# openstack service create --name placement --description "Placement API" placement
Create the endpoints:
~]#openstack endpoint create --region RegionOne placement public http://192.168.23.100:8778  #public endpoint
~]#openstack endpoint create --region RegionOne placement internal http://192.168.23.100:8778  #internal endpoint
~]#openstack endpoint create --region RegionOne placement admin http://192.168.23.100:8778  #admin endpoint

3) On controll-node, install the Nova packages and edit the Nova configuration file

~]#yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

 ~]#vim /etc/nova/nova.conf
[api_database]
# ...
connection = mysql+pymysql://nova:[email protected]/nova_api  #nova_api database connection string

[database]
# ...
connection = mysql+pymysql://nova:[email protected]/nova  #nova database connection string

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]  #message queue URL
my_ip = 192.168.23.100   #management node IP (optional)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver  #disable the compute firewall driver

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211  #memcached servers
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova  #nova service username
password = nova123456  #nova service password

[vnc]
enabled = true
# ...
vncserver_listen = $my_ip  #VNC listen address
vncserver_proxyclient_address = $my_ip  #VNC proxy client address

[glance]
# ...
api_servers = http://192.168.23.100:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp  #lock file directory

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.23.100:35357/v3
username = placement
password = placement

~]#vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
     <IfVersion >= 2.4>
            Require all granted
     </IfVersion>
     <IfVersion < 2.4>
            Order allow,deny
            Allow from all
     </IfVersion>
</Directory>
Restart the httpd service:
~]#systemctl restart httpd

4) On controll-node, populate the Nova databases, register the cells, and start all Nova services

Import Nova's table data into the corresponding Nova databases in MySQL:
~]#su -s /bin/sh -c "nova-manage api_db sync" nova
~]#su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
~]#su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
~]#su -s /bin/sh -c "nova-manage db sync" nova
List the cells:
~]#nova-manage cell_v2 list_cells
Enable and start all of the Nova services:
~]#systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
~]#systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service

5) Deploy the Nova service on compute-node

Install the Nova packages on the compute node and edit the Nova configuration file:
~]#yum install openstack-nova-compute -y
~]#vim /etc/nova/nova.conf
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]
my_ip = 192.168.23.201   #compute node IP

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123456

[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.23.100:6080/vnc_auto.html

[glance]
# ...
api_servers = http://192.168.23.100:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.23.100:35357/v3
username = placement
password = placement
Check whether your compute node supports hardware acceleration for virtual machines; a return value of 1 or greater means it is supported:
~]#egrep -c '(vmx|svm)' /proc/cpuinfo
If your compute node does not support hardware acceleration for virtual machines, configure the [libvirt] option as follows:
~]#vim /etc/nova/nova.conf
[libvirt]
# ...
virt_type = qemu

6) Start the Nova services once the compute-node configuration is complete

~]#systemctl enable libvirtd.service openstack-nova-compute.service
~]#systemctl start libvirtd.service openstack-nova-compute.service
Note:
If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message "AMQP server on controller:5672 is unreachable" likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart the nova-compute service on the compute node.
Source the admin script:
~]#. admin-openrc
List the hypervisors to check their status:
~]# openstack hypervisor list 
Discover the compute hosts and add them to the cell database:
~]#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova  
To add a new compute node later, run nova-manage cell_v2 discover_hosts on the control node again, or configure an automatic discovery interval:
~]#vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300   #set an appropriate interval in seconds

7) After configuration, verify the Nova services on controll-node

~]#. admin-openrc
List all compute services:
~]#openstack compute service list
~]#openstack catalog list
 ~]#openstack image list
Check the Nova upgrade status:
~]#nova-status upgrade check

5. Deploying the Neutron networking service

1) On mysql-node, create the neutron database and an authorized neutron user

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

2) On controll-node, run the Neutron service-related commands

Source the admin script to run commands as admin:
~]#. admin-openrc
Create the neutron user:
~]#openstack user create --domain default --password-prompt neutron
Grant the neutron user the admin role:
~]#openstack role add --project service --user neutron admin
Create a service named neutron of type network:
~]#openstack service create --name neutron --description "OpenStack Networking" network
Create the endpoints:
~]#openstack endpoint create --region RegionOne network public http://192.168.23.100:9696  #public endpoint
~]#openstack endpoint create --region RegionOne network internal http://192.168.23.100:9696  #internal endpoint
~]#openstack endpoint create --region RegionOne network admin http://192.168.23.100:9696  #admin endpoint

3) On controll-node, install the Neutron packages and edit the related configuration files (using a provider-network type network as the example)

~]#yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

~]#vim /etc/neutron/neutron.conf
[database]
# ...
connection = mysql+pymysql://neutron:[email protected]/neutron

[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
# ...
auth_url = http://192.168.23.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

~]#vim /etc/neutron/plugins/ml2/ml2_conf.ini  #ML2 (Modular Layer 2) plugin configuration file
[ml2]    
# ...
type_drivers = flat,vlan  #enable the flat (bridged) and vlan (virtual LAN) network types
tenant_network_types =        #empty disables user-created (self-service) networks
mechanism_drivers = linuxbridge  #use the Linux bridge mechanism driver
extension_drivers = port_security  #enable the port security extension driver

[ml2_type_flat]
# ...
flat_networks = provider   #name of the provider virtual network

[securitygroup]
# ...
enable_ipset = true

~]#vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1  #map the virtual network to the physical NIC eth1; here provider is the virtual network name and must match the flat_networks value specified above

[vxlan]
enable_vxlan = false   #disable VXLAN, preventing users from creating their own networks

[securitygroup]   #enabling security groups allows restricting access rules for outside hosts
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

~]#vim /etc/neutron/dhcp_agent.ini  #DHCP agent configuration
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

~]#vim /etc/neutron/metadata_agent.ini
[DEFAULT]
# ...
nova_metadata_ip = 192.168.23.100
metadata_proxy_shared_secret = 123456

~]#vim /etc/nova/nova.conf
[neutron]
# ...
url = http://192.168.23.100:9696
auth_url = http://192.168.23.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
……

4) On controll-node, initialize and import the Neutron database and start all of the Neutron network services

Create a symlink for the network initialization configuration:
~]#ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini  
Import the tables Neutron generates into the neutron database in MySQL:
~]#su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova-api service:
~]#systemctl restart openstack-nova-api.service
Enable and start all of the Neutron services:
~]#systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
~]#systemctl start neutron-server.service  neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Provider networking works at layer 2, so this layer-3 service does not need to be enabled; it is required only in self-service network mode:
~]#systemctl enable neutron-l3-agent.service && systemctl start neutron-l3-agent.service 

5) Deploy the Neutron service on compute-node

Install the Neutron packages on the compute node and edit the Neutron configuration file:
~]#yum install openstack-neutron-linuxbridge ebtables ipset -y
~]#vim /etc/neutron/neutron.conf
Comment out any connection options under [database].

[DEFAULT]
# ...
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Choose the same networking options as the control node:
~]#vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1  #the virtual network name provider must match the control node

[vxlan]
enable_vxlan = false    #likewise disable VXLAN

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
~]#vim /etc/nova/nova.conf
[neutron]
# ...
url = http://192.168.23.100:9696
auth_url = http://192.168.23.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the Nova compute service:
~]#systemctl restart openstack-nova-compute.service
Enable and start the Neutron agent on the compute node:
~]#systemctl enable neutron-linuxbridge-agent.service && systemctl start neutron-linuxbridge-agent.service
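With both nodes configured, the agents can be verified from controll-node (with admin-openrc sourced). In this provider-network topology the expected agents are a Linux bridge agent on each node plus the DHCP and metadata agents on the controller:

~]#openstack network agent list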

6. Deploying the dashboard (web access terminal)

1) On controll-node, install the dashboard packages and edit the dashboard configuration file

~]#yum install openstack-dashboard -y
~]#vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.23.100"  #controller host
ALLOWED_HOSTS = ['*']  #allow all hosts to access the dashboard

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'  #store session data in the cache

CACHES = {   #memcached session cache configuration
        'default': {
                 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                 'LOCATION': '192.168.1.41:11211',
        }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST  #Identity API v3 endpoint

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True  #enable multi-domain support

OPENSTACK_API_VERSIONS = {   
        "identity": 3,
        "image": 2,
        "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"  #default domain for dashboard logins

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"  #default role for users created via the dashboard

OPENSTACK_NEUTRON_NETWORK = {    #disable layer-3 network features for the provider network type
        'enable_router': False,
        'enable_quotas': False,
        'enable_ipv6': False,
        'enable_distributed_router': False,
        'enable_ha_router': False,
        'enable_lb': False,
        'enable_firewall': False,
        'enable_vpn': False,
        'enable_fip_topology_check': False,
}

TIME_ZONE = "Asia/Shanghai"  #time zone setting
Restart the httpd and memcached services:
~]# systemctl restart httpd.service memcached.service
Test access to the dashboard page:
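The dashboard should now be reachable in a browser at http://192.168.23.100/dashboard (the default path installed by the openstack-dashboard package); log in with domain default and the admin or demo credentials created earlier.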


7. Create a virtual network

1) Source the admin authorization script

~]#. admin-openrc

2) Create a shared external network named provider, with network type flat, mapped to the provider physical network:

~]#openstack network create  --share --external --provider-physical-network provider --provider-network-type flat provider

3) On the provider virtual network, create a subnet (also named provider) with allocation pool 192.168.23.10-192.168.23.99 and subnet range 192.168.23.0/24:

~]#openstack subnet create --network provider --allocation-pool start=192.168.23.10,end=192.168.23.99 --dns-nameserver 192.168.23.1 --gateway 192.168.23.1 --subnet-range 192.168.23.0/24 provider
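Both objects can be checked afterwards:

~]#openstack network list
~]#openstack subnet list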

8. Two ways to create a virtual machine instance

Method one: create the instance directly in the web dashboard

1) As administrator, click to create a flavor (instance type).
2) Select the flavor list.
3) Click to create a flavor.
4) Fill in the flavor's vCPU count, memory size, disk size, and other information, then click to create the flavor.
5) View the created flavor.
6) Select a project, then select Instances.
7) Select Launch Instance.
8) Fill in the name of the virtual machine being created.
9) Select the image source cirros.
10) Select the flavor created earlier, then launch the instance.
11) The instance is being created; wait for it to finish spawning.
(The original post included a dashboard screenshot for each step.)

Method two: create the virtual machine directly from the command line on the control node

1) The virtual machine creation command

~]#openstack server create --flavor m1.nano --image cirros   --nic net-id=06a17cc8-57b8-4823-a6df-28da24061d8a --security-group default test-vm
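Note: the m1.nano flavor referenced above must already exist (for example, created in the dashboard in method one). If it does not, a minimal flavor can be created from the CLI first; this sketch mirrors the small test flavor from the official install guide:

~]#openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

The network ID passed to --nic can be looked up with openstack network list.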

2) Notes on the command options

    server create  #create a virtual machine instance
    --flavor  #the flavor, i.e. the vCPU count, memory size, disk size, and other configuration of the virtual machine
    --nic net-id=  #the ID of the network on which the virtual machine is created
    --image  #the name of the image to boot from
    --security-group default  #attach the instance to the default security group
    test-vm  #the name of the new virtual machine
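Once the create command returns, the instance's build status can be watched and a console URL obtained:

~]#openstack server list
~]#openstack console url show test-vm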


Origin: blog.51cto.com/14234542/2415140