CHUCK'S HANDS-ON GUIDE TO OPENSTACK

Part 1: A First Look at OpenStack

1.1 Introduction to OpenStack

 OpenStack is a collection of open source software projects that lets enterprises and service providers build and run their own cloud compute and storage infrastructure. Rackspace and NASA were the two key initial contributors: Rackspace contributed the code of its "Cloud Files" platform, which became the core of OpenStack Object Storage, while NASA brought its "Nebula" platform, which formed the rest of OpenStack. Today the OpenStack Foundation has more than 150 members, including well-known companies such as Canonical, Dell, and Citrix.

1.2 The Major OpenStack Components

1.2.1 How the Components Fit Together (diagram)

1.2.2 A Look at Each Component

  • OpenStack Identity (Keystone)

      Keystone provides authentication and access-policy services for all OpenStack components. It exposes its own REST interface (based on the Identity API) and primarily authenticates and authorizes (but is not limited to) Swift, Glance, and Nova. In practice, authorization means verifying that the source of each request is legitimate.
      Keystone supports two ways of authenticating: username/password and tokens. In addition, Keystone provides three services:
    a. Token service: holds the authorization information of authenticated users
    b. Catalog service: holds the list of services a user may legitimately use
    c. Policy service: lets Keystone grant specific access rights to particular users or groups

Identity service concepts

1) Keystone explained with a hotel analogy
User: the hotel guest
Credentials: the guest's ID card
Authentication: checking that ID card
Token: the room key card
Project: the tenant, i.e. the booking the guest belongs to
Service: a category of service the hotel offers, e.g. dining or entertainment
Endpoint: one concrete service, e.g. the barbecue stand or the badminton court
Role: the VIP level; the higher the level, the more you are allowed to do
2) Keystone concepts in detail
a. Endpoint: like Nova, Swift, and Glance, every OpenStack service has a dedicated port and URL, which we call its endpoint.
b. User: someone authorized by Keystone.
Note: a user represents an individual; OpenStack grants services to users. A user holds credentials and may be assigned to one or more tenants. After authentication, a separate token is issued for each tenant.
c. Service: broadly, any component that connects to or is managed through Keystone is called a service; Glance, for example, is a service of Keystone.
d. Role: to keep things secure, the roles associated with a user determine which operations that user may perform. Note: a role is a set of rights within a tenant that allows a given user to access or perform particular operations. Roles are logical groupings of permissions, so common permissions can be grouped once and bound to the users of a given tenant.
e. Project (tenant): a project with a full set of service endpoints and members holding specific roles. Note: a tenant maps to a Nova "project-id"; in object storage a tenant can own multiple containers. Depending on how it is used, a tenant can represent a customer, an account, an organization, or a project.
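
As a concrete illustration of the token service, the sketch below requests a token from the Keystone v3 API with curl. This is a minimal example assuming the endpoint and admin password configured later in this guide (http://192.168.56.11:5000 and "admin"); the X-Subject-Token response header carries the issued token:

  1. curl -si -H "Content-Type: application/json" \
  2.   -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"admin","domain":{"id":"default"},"password":"admin"}}}}}' \
  3.   http://192.168.56.11:5000/v3/auth/tokens | grep ^X-Subject-Token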

  • OpenStack Dashboard (Horizon)

      Horizon is a web control panel for managing and controlling OpenStack services. It can manage instances and images, create key pairs, attach volumes to instances, work with Swift containers, and more. Users can also reach an instance directly from the dashboard through the console or VNC. In short, Horizon offers:
    a. Instance management: create and terminate instances, view console logs, connect via VNC, attach volumes, etc.
    b. Access and security: create security groups, manage key pairs, assign floating IPs, etc.
    c. Flavors: configure different virtual hardware templates
    d. Image management: edit or delete images
    e. View the service catalog
    f. Manage users, quotas, and project usage
    g. User management: create users, etc.
    h. Volume management: create volumes and snapshots
    i. Object storage: create and delete containers and objects
    j. Download environment variable files for a project

  • OpenStack Compute (Nova)

Nova at a glance

API: receives and responds to external requests; supports both the OpenStack API and the EC2 API

The nova-api component implements the RESTful API and is the only entry point into Nova from the outside. It receives external requests and passes them to the other service components through the message queue. Because it is also EC2-API compatible, Nova can be managed day to day with EC2 tooling.

Cert: handles certificates (used for authentication)
Scheduler: schedules instances onto compute hosts

The Nova scheduler decides which host (compute node) a virtual machine is created on, normally by filtering the candidate compute nodes and then weighting them (see the configuration sketch after this component list).
1) Filtering
Start from the unfiltered list of hosts, then apply the filter properties to keep only the compute nodes that meet the requirements.

2) Weighting
After filtering, compute a weight for each remaining host and pick one according to the weighing policy (this is done for every virtual machine being created).

Note: by default, OpenStack does not let you pin a VM to a specific compute node.
For more on Nova, see the Nova filter scheduler documentation.

Conductor: middleware that mediates the compute nodes' access to the database
Consoleauth: authorization for console access
Novncproxy: the VNC proxy
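
To make the filter-and-weigh flow above concrete, here is an illustrative nova.conf snippet using standard Liberty-era scheduler options. These options are not set anywhere in this guide, and the exact filter list shown is only an example:

  1. # [DEFAULT] section of /etc/nova/nova.conf (illustrative, not part of this install)
  2. scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
  3. scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter
  4. ram_weight_multiplier=1.0
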
  • OpenStack Object Storage (Swift)

      Swift provides OpenStack with distributed, persistent virtual object storage, similar to Amazon Web Services' S3. Swift can store objects across many nodes at very large scale, with built-in redundancy and failover management, and it can also handle archiving and media streaming. It is particularly efficient for large objects (gigabytes) and large numbers of objects.

Swift features and characteristics
  • Massive object storage
  • Storage of large files (objects)
  • Data redundancy management
  • Archiving: handling large data sets
  • Data containers for virtual machines and cloud applications
  • Media streaming
  • Secure object storage
  • Backup and archiving
  • Good scalability
Swift components
  • Swift account
  • Swift container
  • Swift object
  • Swift proxy
  • Swift ring
Swift proxy server

  Users interact with Swift through the Swift API via the proxy server. The proxy is the gatekeeper that accepts external requests: it looks up the location of the relevant entity and routes the request to it.
The proxy also handles failover, re-routing requests for entities that have failed or been moved.

Swift object server

  The object server is a simple blob store that handles storing, retrieving, and deleting object data on local storage. Objects are stored as ordinary binary files on the filesystem, with metadata kept in extended file attributes (xattrs). Note: xattrs are supported by ext3/ext4, XFS, Btrfs, JFS, and ReiserFS on Linux, though not all of these have been thoroughly validated for this use; XFS is generally considered the best choice today.

Swift container server

  The container server lists the objects in a container. By default the object listings are stored as SQLite files (translator's note: they can also be kept in MySQL, and the installation described here uses MySQL as an example). The container server also tracks the number of objects in a container and the container's storage usage.

Swift account server

  The account server is similar to the container server, except that it lists the containers owned by an account.

Ring (the index ring)

  A ring records the location of the physical objects stored in Swift. It is a virtual mapping from entity names to real physical storage locations, like an index service for finding the physical location of entities across the cluster. Accounts, containers, and objects each have their own ring.
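
For reference, rings are built offline with the swift-ring-builder tool and then distributed to the Swift nodes. The following is a minimal sketch only; Swift is not actually deployed in this guide, and the disk name sdb1 is hypothetical:

  1. swift-ring-builder object.builder create 10 3 1    # 2^10 partitions, 3 replicas, 1 hour between partition moves
  2. swift-ring-builder object.builder add r1z1-192.168.56.11:6000/sdb1 100    # add a device with weight 100
  3. swift-ring-builder object.builder rebalance         # assign partitions and write object.ring.gz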

  • OpenStack Block Storage (Cinder)

      API service: accepts and handles REST requests and places them on the RabbitMQ queue; Cinder exposes the Volume API v2.
      Scheduler service: responds to requests and reads from and writes to the block storage database to maintain state. It interacts with other processes through the message queue, and through its driver architecture it can talk to a wide range of storage providers, whether hardware or software.
      Volume service: runs on the storage nodes and manages the storage space. Every storage node runs a Volume service, and a number of such nodes together form a storage pool; drivers are used to support different types and models of storage back ends.

  • OpenStack Image service (glance)

      Glance has three main parts: glance-api, glance-registry, and the image store.
    glance-api: accepts requests to create, delete, and read images
    glance-registry: the cloud's image registry service

  • OpenStack Networking (Neutron)

    Not covered in detail here; a full walkthrough comes later.

Part 2: Preparing the Environment

2.1 Machines

  This walkthrough uses VMware virtual machines, as follows:

  • Controller node
    hostname: linux-node1.oldboyedu.com
    IP address: 192.168.56.11, NIC: NAT, eth0
    OS and hardware: CentOS 7.1, 2 GB RAM, 50 GB disk
  • Compute node:
    hostname: linux-node2.oldboyedu.com
    IP address: 192.168.56.12, NIC: NAT, eth0
    OS and hardware: CentOS 7.1, 2 GB RAM, 50 GB disk
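
It is convenient (though not strictly required) for the two hostnames to resolve on both machines; a minimal /etc/hosts sketch using the addresses above, to be run on each node:

  1. cat >> /etc/hosts <<EOF
  2. 192.168.56.11 linux-node1.oldboyedu.com linux-node1
  3. 192.168.56.12 linux-node2.oldboyedu.com linux-node2
  4. EOF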

2.2 OpenStack Releases

This guide uses the latest release, Liberty (L); the other releases are shown in the figure below.

2.3 Installing the Component Packages

2.3.1 Controller Node

  • Base
  1. yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -y
  2. yum install centos-release-openstack-liberty -y
  3. yum install python-openstackclient -y
  • MySQL
  1. yum install mariadb mariadb-server MySQL-python -y
  • RabbitMQ
  1. yum install rabbitmq-server -y
  • Keystone
  1. yum install openstack-keystone httpd mod_wsgi memcached python-memcached -y
  • Glance
  1. yum install openstack-glance python-glance python-glanceclient -y
  • Nova
  1. yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y
  • Neutron
  1. yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
  • Dashboard
  1. yum install openstack-dashboard -y

2.3.2 Compute Node

  • Base
  1. yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
  2. yum install centos-release-openstack-liberty -y
  3. yum install python-openstackclient -y
  • Nova (on linux-node2.oldboyedu.com)
  1. yum install openstack-nova-compute sysfsutils -y
  • Neutron
  1. yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y

Part 3: Hands-On OpenStack: the Controller Node

3.1 Time Synchronization on CentOS 7 with chrony

Install chrony

  1. [root@linux-node1 ~]# yum install -y chrony

Edit its configuration file

  1. [root@linux-node1 ~]# vim /etc/chrony.conf
  2. allow 192.168/16

Enable chronyd at boot and start it

  1. [root@linux-node1 ~]#systemctl enable chronyd.service
  2. [root@linux-node1 ~]#systemctl start chronyd.service
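
To confirm that chronyd is actually reachable and synchronizing, you can list its time sources (the output varies with your NTP servers):

  1. [root@linux-node1 ~]# chronyc sources -v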

Set the CentOS 7 time zone

  1. [root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai

Check the time zone and time

  1. [root@linux-node1 ~]# timedatectl status
  2. Local time: Tue 2015-12-15 12:19:55 CST
  3. Universal time: Tue 2015-12-15 04:19:55 UTC
  4. RTC time: Sun 2015-12-13 15:35:33
  5. Timezone: Asia/Shanghai (CST, +0800)
  6. NTP enabled: yes
  7. NTP synchronized: no
  8. RTC in local TZ: no
  9. DST active: n/a
  10. [root@linux-node1 ~]# date
  11. Tue Dec 15 12:19:57 CST 2015

3.2 Getting Started with MySQL

Every OpenStack component except Horizon uses a database. This guide uses MySQL, which on CentOS 7 ships as MariaDB.
Copy the sample configuration file

  1. [root@linux-node1 ~]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf

Edit the MySQL configuration

  1. [root@linux-node1 ~]# vim /etc/my.cnf  (add the following under the [mysqld] section)
  2. [mysqld]
  3. default-storage-engine = innodb     # default storage engine
  4. innodb_file_per_table               # one tablespace file per table
  5. collation-server = utf8_general_ci  # default collation
  6. init-connect = 'SET NAMES utf8'     # character set for new connections
  7. character-set-server = utf8         # default character set for new databases

Enable MariaDB at boot and start it

  1. [root@linux-node1 ~]# systemctl enable mariadb.service
  2. ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
  3. [root@linux-node1 ~]# systemctl start mariadb.service

Set the MySQL root password

  1. [root@linux-node1 ~]# mysql_secure_installation

Create the databases for all the components and grant privileges

  1. [root@linux-node1 ~]# mysql -uroot -p123456

Run the following SQL

  1. CREATE DATABASE keystone;
  2. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
  3. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
  4. CREATE DATABASE glance;
  5. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
  6. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
  7. CREATE DATABASE nova;
  8. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
  9. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
  10. CREATE DATABASE neutron;
  11. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
  12. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
  13. CREATE DATABASE cinder;
  14. GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
  15. GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
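
A quick way to verify the grants is to connect as one of the new users over the network address the services will use, for example:

  1. [root@linux-node1 ~]# mysql -h 192.168.56.11 -ukeystone -pkeystone -e "SHOW DATABASES;"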

3.3 The RabbitMQ Message Queue

  SOA (service-oriented architecture) is a component model that ties an application's functional units ("services") together through well-defined interfaces and contracts between those services. The interfaces are defined in a neutral way, independent of the hardware platform, operating system, and programming language used to implement the services, so services built on very different systems can interact in a uniform, generic way.
OpenStack follows this SOA approach: components are loosely coupled and deployed independently, each component can be both a consumer and a provider to the others, and they communicate through a message queue (OpenStack supports RabbitMQ, ZeroMQ, and Qpid). If one service goes down, the rest keep running.

  1. Enable RabbitMQ at boot and start it
  2. [root@linux-node1 ~]# systemctl enable rabbitmq-server.service
  3. ln -s '/usr/lib/systemd/system/rabbitmq-server.service' '/etc/systemd/system/multi-user.target.wants/rabbitmq-server.service'
  4. [root@linux-node1 ~]# systemctl start rabbitmq-server.service

Create a RabbitMQ user and grant it permissions

  1. [root@linux-node1 ~]# rabbitmqctl add_user openstack openstack
  2. [root@linux-node1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Enable the RabbitMQ web management plugin

  1. [root@linux-node1 ~]# rabbitmq-plugins enable rabbitmq_management

Restart RabbitMQ

  1. [root@linux-node1 ~]# systemctl restart rabbitmq-server.service

Check RabbitMQ's ports: 5672 is the AMQP service port, 15672 the web management port, and 25672 the clustering port

  1. [root@linux-node1 ~]# netstat -lntup |grep 5672
  2. tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 52448/beam
  3. tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 52448/beam
  4. tcp6 0 0 :::5672 :::* LISTEN 52448/beam

In the web UI, add the openstack user and set its permissions; for the first login, the account and password must both be guest.

Set the user's tag (role) to administrator and set the openstack user's password.

To monitor RabbitMQ programmatically, you can use the management HTTP API, as in the example below.
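
For example, with the management plugin enabled and the openstack user tagged as administrator (as done above), the same information shown in the web UI can be fetched from the HTTP API:

  1. [root@linux-node1 ~]# curl -s -u openstack:openstack http://192.168.56.11:15672/api/overview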

3.4 The Keystone Component

Edit the Keystone configuration file

  1. [root@linux-node1 opt]# vim /etc/keystone/keystone.conf
  2. admin_token = 863d35676a5632e846d9
  3. (used for bootstrapping, before any user exists to authenticate with; generate this value randomly with openssl, see below)
  4. connection = mysql://keystone:keystone@192.168.56.11/keystone
  5. (the database connection string; the three "keystone" values are the MySQL user name, its password, and the keystone database name)
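
The admin_token above is just a random string; one way to generate such a value, as the note suggests, is:

  1. [root@linux-node1 ~]# openssl rand -hex 10
  2. 863d35676a5632e846d9   (example output; yours will differ)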

Switch to the keystone user and populate the keystone database

  1. [root@linux-node1 opt]# su -s /bin/sh -c "keystone-manage db_sync" keystone
  2. [root@linux-node1 keystone]# cd /var/log/keystone/
  3. [root@linux-node1 keystone]# ll
  4. total 8
  5. -rw-r--r-- 1 keystone keystone 7064 Dec 15 14:43 keystone.log  (running the db sync as the keystone user means the log file is owned by keystone, which will write to it when it starts; if you run the sync as root, Keystone will later fail to start as the keystone user)
  6. 31:verbose = true  (enable verbose output)
  7. 1229:servers = 192.168.57.11:11211  (set the servers option to the memcached address)
  8. 1634:driver = sql  (use the default SQL driver)
  9. 1827:provider = uuid  (use the UUID token provider)
  10. 1832:driver = memcache  (tokens generated from username/password logins are stored in memcached for fast access)

Review the changes

  1. [root@linux-node1 keystone]# grep -n "^[a-Z]" /etc/keystone/keystone.conf
  2. 12:admin_token = 863d35676a5632e846d9
  3. 31:verbose = true
  4. 419:connection = mysql://keystone:keystone@192.168.56.11/keystone
  5. 1229:servers = 192.168.57.11:11211
  6. 1634:driver = sql
  7. 1827:provider = uuid
  8. 1832:driver = memcache

Check that the database tables were created

  1. MariaDB [keystone]> show tables;
  2. +------------------------+
  3. | Tables_in_keystone |
  4. +------------------------+
  5. | access_token |
  6. | assignment |
  7. | config_register |
  8. | consumer |
  9. | credential |
  10. | domain |
  11. | endpoint |
  12. | endpoint_group |
  13. | federation_protocol |
  14. | group |
  15. | id_mapping |
  16. | identity_provider |
  17. | idp_remote_ids |
  18. | mapping |
  19. | migrate_version |
  20. | policy |
  21. | policy_association |
  22. | project |
  23. | project_endpoint |
  24. | project_endpoint_group |
  25. | region |
  26. | request_token |
  27. | revocation_event |
  28. | role |
  29. | sensitive_config |
  30. | service |
  31. | service_provider |
  32. | token |
  33. | trust |
  34. | trust_role |
  35. | user |
  36. | user_group_membership |
  37. | whitelisted_config |
  38. +------------------------+
  39. 33 rows in set (0.00 sec)

Add an Apache wsgi-keystone configuration file; port 5000 serves the public API and port 35357 the admin API

  1. [root@linux-node1 keystone]# cat /etc/httpd/conf.d/wsgi-keystone.conf
  2. Listen 5000
  3. Listen 35357
  4. <VirtualHost *:5000>
  5. WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  6. WSGIProcessGroup keystone-public
  7. WSGIScriptAlias / /usr/bin/keystone-wsgi-public
  8. WSGIApplicationGroup %{GLOBAL}
  9. WSGIPassAuthorization On
  10. <IfVersion >= 2.4>
  11. ErrorLogFormat "%{cu}t %M"
  12. </IfVersion>
  13. ErrorLog /var/log/httpd/keystone-error.log
  14. CustomLog /var/log/httpd/keystone-access.log combined
  15. <Directory /usr/bin>
  16. <IfVersion >= 2.4>
  17. Require all granted
  18. </IfVersion>
  19. <IfVersion < 2.4>
  20. Order allow,deny
  21. Allow from all
  22. </IfVersion>
  23. </Directory>
  24. </VirtualHost>
  25. <VirtualHost *:35357>
  26. WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  27. WSGIProcessGroup keystone-admin
  28. WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
  29. WSGIApplicationGroup %{GLOBAL}
  30. WSGIPassAuthorization On
  31. <IfVersion >= 2.4>
  32. ErrorLogFormat "%{cu}t %M"
  33. </IfVersion>
  34. ErrorLog /var/log/httpd/keystone-error.log
  35. CustomLog /var/log/httpd/keystone-access.log combined
  36. <Directory /usr/bin>
  37. <IfVersion >= 2.4>
  38. Require all granted
  39. </IfVersion>
  40. <IfVersion < 2.4>
  41. Order allow,deny
  42. Allow from all
  43. </IfVersion>
  44. </Directory>
  45. </VirtualHost>

Configure Apache's ServerName; leaving it unset causes problems for the Keystone service

  1. [root@linux-node1 httpd]# vim conf/httpd.conf
  2. ServerName 192.168.56.11:80
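
Before starting Apache you can check the configuration syntax:

  1. [root@linux-node1 httpd]# httpd -t
  2. Syntax OK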

Enable and start memcached and httpd (Keystone runs inside httpd)

  1. [root@linux-node1 httpd]# systemctl enable memcached httpd
  2. ln -s '/usr/lib/systemd/system/memcached.service' '/etc/systemd/system/multi-user.target.wants/memcached.service'
  3. ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
  4. [root@linux-node1 httpd]# systemctl start memcached httpd

Check which ports httpd is listening on

  1. [root@linux-node1 httpd]# netstat -lntup|grep httpd
  2. tcp6 0 0 :::5000 :::* LISTEN 70482/httpd
  3. tcp6 0 0 :::80 :::* LISTEN 70482/httpd
  4. tcp6 0 0 :::35357 :::* LISTEN 70482/httpd

Create users and connect to Keystone. There are two ways to pass credentials: appending options on the command line (see the client's --help) or using environment variables. Below we use environment variables, setting the token, the API endpoint, and the API version (explicit versioning suits an SOA design):

  1. [root@linux-node1 ~]# export OS_TOKEN=863d35676a5632e846d9
  2. [root@linux-node1 ~]# export OS_URL=http://192.168.56.11:35357/v3
  3. [root@linux-node1 ~]# export OS_IDENTITY_API_VERSION=3

Create the admin project

  1. [root@linux-node1 httpd]# openstack project create --domain default --description "Admin Project" admin
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | Admin Project |
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | 45ec9f72892c404897d0f7d6668d7a53 |
  9. | is_domain | False |
  10. | name | admin |
  11. | parent_id | None |
  12. +-------------+----------------------------------+

Create the admin user and set its password (in production, use a strong one)

  1. [root@linux-node1 httpd]# openstack user create --domain default --password-prompt admin
  2. User Password:
  3. Repeat User Password:
  4. +-----------+----------------------------------+
  5. | Field | Value |
  6. +-----------+----------------------------------+
  7. | domain_id | default |
  8. | enabled | True |
  9. | id | bb6d73c0b07246fb8f26025bb72c06a1 |
  10. | name | admin |
  11. +-----------+----------------------------------+

Create the admin role

  1. [root@linux-node1 httpd]# openstack role create admin
  2. +-------+----------------------------------+
  3. | Field | Value |
  4. +-------+----------------------------------+
  5. | id | b0bd00e6164243ceaa794db3250f267e |
  6. | name | admin |
  7. +-------+----------------------------------+

Add the admin user to the admin project with the admin role, linking role, project, and user together

  1. [root@linux-node1 httpd]# openstack role add --project admin --user admin admin

Create an ordinary user demo, a demo project, and a user role, and associate them

  1. [root@linux-node1 httpd]# openstack project create --domain default --description "Demo Project" demo
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | Demo Project |
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | 4a213e53e4814685859679ff1dcb559f |
  9. | is_domain | False |
  10. | name | demo |
  11. | parent_id | None |
  12. +-------------+----------------------------------+
  13. [root@linux-node1 httpd]# openstack user create --domain default --password=demo demo
  14. +-----------+----------------------------------+
  15. | Field | Value |
  16. +-----------+----------------------------------+
  17. | domain_id | default |
  18. | enabled | True |
  19. | id | eb29c091e0ec490cbfa5d11dc2388766 |
  20. | name | demo |
  21. +-----------+----------------------------------+
  22. [root@linux-node1 httpd]# openstack role create user
  23. +-------+----------------------------------+
  24. | Field | Value |
  25. +-------+----------------------------------+
  26. | id | 4b36460ef1bd42daaf67feb19a8a55cf |
  27. | name | user |
  28. +-------+----------------------------------+
  29. [root@linux-node1 httpd]# openstack role add --project demo --user demo user

Create a service project; it will hold the service users for Nova, Neutron, Glance, and the other components

  1. [root@linux-node1 httpd]# openstack project create --domain default --description "Service Project" service
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | Service Project |
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | 0399778f38934986a923c96d8dc92073 |
  9. | is_domain | False |
  10. | name | service |
  11. | parent_id | None |
  12. +-------------+----------------------------------+

List the users, roles, and projects just created

  1. [root@linux-node1 httpd]# openstack user list
  2. +----------------------------------+-------+
  3. | ID | Name |
  4. +----------------------------------+-------+
  5. | bb6d73c0b07246fb8f26025bb72c06a1 | admin |
  6. | eb29c091e0ec490cbfa5d11dc2388766 | demo |
  7. +----------------------------------+-------+
  8. [root@linux-node1 httpd]# openstack project list
  9. +----------------------------------+---------+
  10. | ID | Name |
  11. +----------------------------------+---------+
  12. | 0399778f38934986a923c96d8dc92073 | service |
  13. | 45ec9f72892c404897d0f7d6668d7a53 | admin |
  14. | 4a213e53e4814685859679ff1dcb559f | demo |
  15. +----------------------------------+---------+
  16. [root@linux-node1 httpd]# openstack role list
  17. +----------------------------------+-------+
  18. | ID | Name |
  19. +----------------------------------+-------+
  20. | 4b36460ef1bd42daaf67feb19a8a55cf | user |
  21. | b0bd00e6164243ceaa794db3250f267e | admin |
  22. +----------------------------------+-------+

Register the Keystone service itself: Keystone is the registry, but it too has to be registered.
Create the identity service entry

  1. [root@linux-node1 httpd]# openstack service create --name keystone --description "OpenStack Identity" identity
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | OpenStack Identity |
  6. | enabled | True |
  7. | id | 46228b6dae2246008990040bbde371c3 |
  8. | name | keystone |
  9. | type | identity |
  10. +-------------+----------------------------------+

Create the three endpoint types: public (externally visible), internal (for internal use), and admin (for administration)

  1. [root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity public http://192.168.56.11:5000/v2.0
  2. +--------------+----------------------------------+
  3. | Field | Value |
  4. +--------------+----------------------------------+
  5. | enabled | True |
  6. | id | 1143dcd58b6848a1890c3f2b9bf101d5 |
  7. | interface | public |
  8. | region | RegionOne |
  9. | region_id | RegionOne |
  10. | service_id | 46228b6dae2246008990040bbde371c3 |
  11. | service_name | keystone |
  12. | service_type | identity |
  13. | url | http://192.168.56.11:5000/v2.0 |
  14. +--------------+----------------------------------+
  15. [root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity internal http://192.168.56.11:5000/v2.0
  16. +--------------+----------------------------------+
  17. | Field | Value |
  18. +--------------+----------------------------------+
  19. | enabled | True |
  20. | id | 496f648007a04e5fbe99b62ed8a76acd |
  21. | interface | internal |
  22. | region | RegionOne |
  23. | region_id | RegionOne |
  24. | service_id | 46228b6dae2246008990040bbde371c3 |
  25. | service_name | keystone |
  26. | service_type | identity |
  27. | url | http://192.168.56.11:5000/v2.0 |
  28. +--------------+----------------------------------+
  29. [root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity admin http://192.168.56.11:35357/v2.0
  30. +--------------+----------------------------------+
  31. | Field | Value |
  32. +--------------+----------------------------------+
  33. | enabled | True |
  34. | id | 28283cbf90b5434ba7a8780fac9308df |
  35. | interface | admin |
  36. | region | RegionOne |
  37. | region_id | RegionOne |
  38. | service_id | 46228b6dae2246008990040bbde371c3 |
  39. | service_name | keystone |
  40. | service_type | identity |
  41. | url | http://192.168.56.11:35357/v2.0 |
  42. +--------------+----------------------------------+

List the endpoints

  1. [root@linux-node1 httpd]# openstack endpoint list
  2. +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
  3. | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
  4. +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
  5. | 1143dcd58b6848a1890c3f2b9bf101d5 | RegionOne | keystone | identity | True | public | http://192.168.56.11:5000/v2.0 |
  6. | 28283cbf90b5434ba7a8780fac9308df | RegionOne | keystone | identity | True | admin | http://192.168.56.11:35357/v2.0 |
  7. | 496f648007a04e5fbe99b62ed8a76acd | RegionOne | keystone | identity | True | internal | http://192.168.56.11:5000/v2.0 |
  8. +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+

Connect to Keystone and request a token. Now that users with passwords exist we no longer authenticate with the admin token, so the token environment variables must be unset first

  1. [root@linux-node1 httpd]# unset OS_TOKEN
  2. [root@linux-node1 httpd]# unset OS_URL
  3. [root@linux-node1 httpd]#openstack --os-auth-url http://192.168.56.11:35357/v3
  4. --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
  5. Password:
  6. +------------+----------------------------------+
  7. | Field | Value |
  8. +------------+----------------------------------+
  9. | expires | 2015-12-16T17:45:52.926050Z |
  10. | id | ba1d3c403bf34759b239176594001f8b |
  11. | project_id | 45ec9f72892c404897d0f7d6668d7a53 |
  12. | user_id | bb6d73c0b07246fb8f26025bb72c06a1 |
  13. +------------+----------------------------------+

Write environment files for the admin and demo users and make them executable; from now on, just source the appropriate file before running commands

  1. [root@linux-node1 ~]# cat admin-openrc.sh
  2. export OS_PROJECT_DOMAIN_ID=default
  3. export OS_USER_DOMAIN_ID=default
  4. export OS_PROJECT_NAME=admin
  5. export OS_TENANT_NAME=admin
  6. export OS_USERNAME=admin
  7. export OS_PASSWORD=admin
  8. export OS_AUTH_URL=http://192.168.56.11:35357/v3
  9. export OS_IDENTITY_API_VERSION=3
  10. [root@linux-node1 ~]# cat demo-openrc.sh
  11. export OS_PROJECT_DOMAIN_ID=default
  12. export OS_USER_DOMAIN_ID=default
  13. export OS_PROJECT_NAME=demo
  14. export OS_TENANT_NAME=demo
  15. export OS_USERNAME=demo
  16. export OS_PASSWORD=demo
  17. export OS_AUTH_URL=http://192.168.56.11:5000/v3
  18. export OS_IDENTITY_API_VERSION=3
  19. [root@linux-node1 ~]# chmod +x demo-openrc.sh
  20. [root@linux-node1 ~]# chmod +x admin-openrc.sh
  21. [root@linux-node1 ~]# source admin-openrc.sh
  22. [root@linux-node1 ~]# openstack token issue
  23. +------------+----------------------------------+
  24. | Field | Value |
  25. +------------+----------------------------------+
  26. | expires | 2015-12-16T17:54:06.632906Z |
  27. | id | ade4b0c451b94255af1e96736555db75 |
  28. | project_id | 45ec9f72892c404897d0f7d6668d7a53 |
  29. | user_id | bb6d73c0b07246fb8f26025bb72c06a1 |
  30. +------------+----------------------------------+

3.5 Deploying Glance

Edit the glance-api and glance-registry configuration files and sync the database

  1. [root@linux-node1 glance]# vim glance-api.conf
  2. 538 connection=mysql://glance:glance@192.168.56.11/glance
  3. [root@linux-node1 glance]# vim glance-registry.conf
  4. 363 connection=mysql://glance:glance@192.168.56.11/glance
  5. [root@linux-node1 glance]# su -s /bin/sh -c "glance-manage db_sync" glance
  6. No handlers could be found for logger "oslo_config.cfg"  (this warning can be ignored)

Check the tables created in the glance database

  1. MariaDB [(none)]> use glance;
  2. Database changed
  3. MariaDB [glance]> show tables;
  4. +----------------------------------+
  5. | Tables_in_glance |
  6. +----------------------------------+
  7. | artifact_blob_locations |
  8. | artifact_blobs |
  9. | artifact_dependencies |
  10. | artifact_properties |
  11. | artifact_tags |
  12. | artifacts |
  13. | image_locations |
  14. | image_members |
  15. | image_properties |
  16. | image_tags |
  17. | images |
  18. | metadef_namespace_resource_types |
  19. | metadef_namespaces |
  20. | metadef_objects |
  21. | metadef_properties |
  22. | metadef_resource_types |
  23. | metadef_tags |
  24. | migrate_version |
  25. | task_info |
  26. | tasks |
  27. +----------------------------------+
  28. 20 rows in set (0.00 sec)

Configure Glance to use Keystone; every service needs its own Keystone user

  1. [root@linux-node1 ~]# source admin-openrc.sh
  2. [root@linux-node1 ~]# openstack user create --domain default --password=glance glance
  3. +-----------+----------------------------------+
  4. | Field | Value |
  5. +-----------+----------------------------------+
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | f4c340ba02bf44bf83d5c3ccfec77359 |
  9. | name | glance |
  10. +-----------+----------------------------------+
  11. [root@linux-node1 ~]# openstack role add --project service --user glance admin

Edit the glance-api configuration file to integrate Keystone and MySQL

  1. [root@linux-node1 glance]# vim glance-api.conf
  2. 978 auth_uri = http://192.168.56.11:5000
  3. 979 auth_url = http://192.168.56.11:35357
  4. 980 auth_plugin = password
  5. 981 project_domain_id = default
  6. 982 user_domain_id = default
  7. 983 project_name = service
  8. 984 username = glance
  9. 985 password = glance
  10. 1485 flavor=keystone
  11. 491 notification_driver = noop  (the image service does not need the message queue)
  12. 642 default_store=file  (store images as files)
  13. 701 filesystem_store_datadir=/var/lib/glance/images/  (where the image files live)
  14. 363 verbose=True  (enable verbose output)

Edit the glance-registry configuration file to integrate Keystone and MySQL

  19. [root@linux-node1 glance]# vim glance-registry.conf
  20. 188:verbose=True
  21. 316:notification_driver =noop
  22. 767 auth_uri = http://192.168.56.11:5000
  23. 768 auth_url = http://192.168.56.11:35357
  24. 769 auth_plugin = password
  25. 770 project_domain_id = default
  26. 771 user_domain_id = default
  27. 772 project_name = service
  28. 773 username = glance
  29. 774 password = glance
  30. 1256:flavor=keystone

Check the modified Glance configuration

  34. [root@linux-node1 ~]# grep -n '^[a-z]' /etc/glance/glance-api.conf
  35. 363:verbose=True
  36. 491:notification_driver = noop
  37. 538:connection=mysql://glance:glance@192.168.56.11/glance
  38. 642:default_store=file
  39. 701:filesystem_store_datadir=/var/lib/glance/images/
  40. 978:auth_uri = http://192.168.56.11:5000
  41. 979:auth_url = http://192.168.56.11:35357
  42. 980:auth_plugin = password
  43. 981:project_domain_id = default
  44. 982:user_domain_id = default
  45. 983:project_name = service
  46. 984:username = glance
  47. 985:password = glance
  48. 1485:flavor=keystone
  49. [root@linux-node1 ~]# grep -n '^[a-z]' /etc/glance/glance-registry.conf
  50. 188:verbose=True
  51. 316:notification_driver =noop
  52. 363:connection=mysql://glance:glance@192.168.56.11/glance
  53. 767:auth_uri = http://192.168.56.11:5000
  54. 768:auth_url = http://192.168.56.11:35357
  55. 769:auth_plugin = password
  56. 770:project_domain_id = default
  57. 771:user_domain_id = default
  58. 772:project_name = service
  59. 773:username = glance
  60. 774:password = glance
  61. 1256:flavor=keystone

Enable Glance at boot and start its services

  65. [root@linux-node1 ~]# systemctl enable openstack-glance-api
  66. ln -s '/usr/lib/systemd/system/openstack-glance-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-api.service'
  67. [root@linux-node1 ~]# systemctl enable openstack-glance-registry
  68. ln -s '/usr/lib/systemd/system/openstack-glance-registry.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service'
  69. [root@linux-node1 ~]# systemctl start openstack-glance-api
  70. [root@linux-node1 ~]# systemctl start openstack-glance-registry

Check which ports Glance is using: 9191 is the registry port and 9292 the API port

  1. [root@linux-node1 ~]# netstat -lntup|egrep "9191|9292"
  2. tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 13180/python2
  3. tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 13162/python2

Register the Glance service in Keystone so that other services can call it

  7. [root@linux-node1 ~]# source admin-openrc.sh
  8. [root@linux-node1 ~]# openstack service create --name glance --description "OpenStack Image service" image
  9. +-------------+----------------------------------+
  10. | Field | Value |
  11. +-------------+----------------------------------+
  12. | description | OpenStack Image service |
  13. | enabled | True |
  14. | id | cc8b4b4c712f47aa86e2d484c20a65c8 |
  15. | name | glance |
  16. | type | image |
  17. +-------------+----------------------------------+
  18. [root@linux-node1 ~]# openstack endpoint create --region RegionOne image public http://192.168.56.11:9292
  19. +--------------+----------------------------------+
  20. | Field | Value |
  21. +--------------+----------------------------------+
  22. | enabled | True |
  23. | id | 56cf6132fef14bfaa01c380338f485a6 |
  24. | interface | public |
  25. | region | RegionOne |
  26. | region_id | RegionOne |
  27. | service_id | cc8b4b4c712f47aa86e2d484c20a65c8 |
  28. | service_name | glance |
  29. | service_type | image |
  30. | url | http://192.168.56.11:9292 |
  31. +--------------+----------------------------------+
  32. [root@linux-node1 ~]# openstack endpoint create --region RegionOne image internal http://192.168.56.11:9292
  33. +--------------+----------------------------------+
  34. | Field | Value |
  35. +--------------+----------------------------------+
  36. | enabled | True |
  37. | id | 8005e8fcd85f4ea281eb9591c294e760 |
  38. | interface | internal |
  39. | region | RegionOne |
  40. | region_id | RegionOne |
  41. | service_id | cc8b4b4c712f47aa86e2d484c20a65c8 |
  42. | service_name | glance |
  43. | service_type | image |
  44. | url | http://192.168.56.11:9292 |
  45. +--------------+----------------------------------+
  46. [root@linux-node1 ~]# openstack endpoint create --region RegionOne image admin http://192.168.56.11:9292
  47. +--------------+----------------------------------+
  48. | Field | Value |
  49. +--------------+----------------------------------+
  50. | enabled | True |
  51. | id | 2b55d6db62eb47e9b8993d23e36111e0 |
  52. | interface | admin |
  53. | region | RegionOne |
  54. | region_id | RegionOne |
  55. | service_id | cc8b4b4c712f47aa86e2d484c20a65c8 |
  56. | service_name | glance |
  57. | service_type | image |
  58. | url | http://192.168.56.11:9292 |
  59. +--------------+----------------------------------+

Add the image API version to the admin and demo environment files so the clients know which Glance API version to use; run this in the directory that contains admin-openrc.sh

  1. [root@linux-node1 ~]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
  2. export OS_IMAGE_API_VERSION=2
  3. [root@linux-node1 ~]# tail -1 admin-openrc.sh
  4. export OS_IMAGE_API_VERSION=2
  5. [root@linux-node1 ~]# tail -1 demo-openrc.sh
  6. export OS_IMAGE_API_VERSION=2

If glance image-list returns an empty table like the one below, Glance is configured correctly (there are simply no images yet)

  1. [root@linux-node1 ~]# glance image-list
  2. +----+------+
  3. | ID | Name |
  4. +----+------+
  5. +----+------+

Download a test image

  1. [root@linux-node1 ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
  2. --2015-12-17 02:12:55-- http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
  3. Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 69.163.241.114
  4. Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|69.163.241.114|:80... connected.
  5. HTTP request sent, awaiting response... 200 OK
  6. Length: 13287936 (13M) [text/plain]
  7. Saving to: cirros-0.3.4-x86_64-disk.img
  8. 100%[======================================>] 13,287,936 127KB/s in 71s
  9. 2015-12-17 02:14:08 (183 KB/s) - cirros-0.3.4-x86_64-disk.img saved [13287936/13287936]

Upload the image to Glance; run this from the directory where the image was downloaded

  1. [root@linux-node1 ~]# glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
  2. [=============================>] 100%
  3. +------------------+--------------------------------------+
  4. | Property | Value |
  5. +------------------+--------------------------------------+
  6. | checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
  7. | container_format | bare |
  8. | created_at | 2015-12-16T18:16:46Z |
  9. | disk_format | qcow2 |
  10. | id | 4b36361f-1946-4026-b0cb-0f7073d48ade |
  11. | min_disk | 0 |
  12. | min_ram | 0 |
  13. | name | cirros |
  14. | owner | 45ec9f72892c404897d0f7d6668d7a53 |
  15. | protected | False |
  16. | size | 13287936 |
  17. | status | active |
  18. | tags | [] |
  19. | updated_at | 2015-12-16T18:16:47Z
  20. |
  21. | virtual_size | None |
  22. | visibility | public |
  23. +------------------+--------------------------------------+

Verify the uploaded image

  1. [root@linux-node1 ~]# glance image-list
  2. +--------------------------------------+--------+
  3. | ID | Name |
  4. +--------------------------------------+--------+
  5. | 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros |
  6. +--------------------------------------+--------+
  7. [root@linux-node1 ~]# cd /var/lib/glance/images/
  8. [root@linux-node1 images]# ls
  9. 4b36361f-1946-4026-b0cb-0f7073d48ade  (matches the image ID above)
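
If the qemu-img tool is installed, you can also inspect the stored file to confirm it really is the uploaded qcow2 image (the ID below is the one from this walkthrough):

  1. [root@linux-node1 images]# qemu-img info 4b36361f-1946-4026-b0cb-0f7073d48ade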

3.6 Deploying Nova on the Controller Node

Create the nova user, add it to the service project, and grant it the admin role

  1. [root@linux-node1 ~]# source admin-openrc.sh
  2. [root@linux-node1 ~]# openstack user create --domain default --password=nova nova
  3. +-----------+----------------------------------+
  4. | Field | Value |
  5. +-----------+----------------------------------+
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | 73659413d2a842dc82971a0fc531e7b9 |
  9. | name | nova |
  10. +-----------+----------------------------------+
  11. [root@linux-node1 ~]# openstack role add --project service --user nova admin

Edit the Nova configuration file; the result should look like this

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/nova/nova.conf
  2. 61:rpc_backend=rabbit  (use the RabbitMQ message queue)
  3. 124:my_ip=192.168.56.11  (a variable, for convenient reuse below)
  4. 268:enabled_apis=osapi_compute,metadata  (disables the EC2 API)
  5. 425:auth_strategy=keystone  (authenticate through Keystone; note this option lives in the [DEFAULT] section)
  6. 1053:network_api_class=nova.network.neutronv2.api.API  (use Neutron for networking; the dots are the Python module path)
  7. 1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver  (the class formerly named LinuxBridgeInterfaceDriver is now NeutronLinuxBridgeInterfaceDriver)
  8. 1331:security_group_api=neutron  (let Neutron handle security groups)
  9. 1370:debug=true
  10. 1374:verbose=True
  11. 1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver  (disable Nova's own firewalling)
  12. 1828:vncserver_listen= $my_ip  (VNC listen address)
  13. 1832:vncserver_proxyclient_address= $my_ip  (VNC proxy client address)
  14. 2213:connection=mysql://nova:nova@192.168.56.11/nova
  15. 2334:host=$my_ip  (the Glance host)
  16. 2546:auth_uri = http://192.168.56.11:5000
  17. 2547:auth_url = http://192.168.56.11:35357
  18. 2548:auth_plugin = password
  19. 2549:project_domain_id = default
  20. 2550:user_domain_id = default
  21. 2551:project_name = service  (use the service project)
  22. 2552:username = nova
  23. 2553:password = nova
  24. 3807:lock_path=/var/lib/nova/tmp  (lock file path)
  25. 3970:rabbit_host=192.168.56.11  (the RabbitMQ host)
  26. 3974:rabbit_port=5672  (the RabbitMQ port)
  27. 3986:rabbit_userid=openstack  (the RabbitMQ user)
  28. 3990:rabbit_password=openstack  (the RabbitMQ password)

Sync the database

  1. [root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
  2. MariaDB [nova]> use nova;
  3. Database changed
  4. MariaDB [nova]> show tables;
  5. +--------------------------------------------+
  6. | Tables_in_nova |
  7. +--------------------------------------------+
  8. | agent_builds |
  9. | aggregate_hosts |
  10. | aggregate_metadata |
  11. | aggregates |
  12. | block_device_mapping |
  13. | bw_usage_cache |
  14. | cells |
  15. | certificates |
  16. | compute_nodes |
  17. | console_pools |
  18. | consoles |
  19. | dns_domains |
  20. | fixed_ips |
  21. | floating_ips |
  22. | instance_actions |
  23. | instance_actions_events |
  24. | instance_extra |
  25. | instance_faults |
  26. | instance_group_member |
  27. | instance_group_policy |
  28. | instance_groups |
  29. | instance_id_mappings |
  30. | instance_info_caches |
  31. | instance_metadata |
  32. | instance_system_metadata |
  33. | instance_type_extra_specs |
  34. | instance_type_projects |
  35. | instance_types |
  36. | instances |
  37. | key_pairs |
  38. | migrate_version |
  39. | migrations |
  40. | networks |
  41. | pci_devices |
  42. | project_user_quotas |
  43. | provider_fw_rules |
  44. | quota_classes |
  45. | quota_usages |
  46. | quotas |
  47. | reservations |
  48. | s3_images |
  49. | security_group_default_rules |
  50. | security_group_instance_association |
  51. | security_group_rules |
  52. | security_groups |
  53. | services |
  54. | shadow_agent_builds |
  55. | shadow_aggregate_hosts |
  56. | shadow_aggregate_metadata |
  57. | shadow_aggregates |
  58. | shadow_block_device_mapping |
  59. | shadow_bw_usage_cache |
  60. | shadow_cells |
  61. | shadow_certificates |
  62. | shadow_compute_nodes |
  63. | shadow_console_pools |
  64. | shadow_consoles |
  65. | shadow_dns_domains |
  66. | shadow_fixed_ips |
  67. | shadow_floating_ips |
  68. | shadow_instance_actions |
  69. | shadow_instance_actions_events |
  70. | shadow_instance_extra |
  71. | shadow_instance_faults |
  72. | shadow_instance_group_member |
  73. | shadow_instance_group_policy |
  74. | shadow_instance_groups |
  75. | shadow_instance_id_mappings |
  76. | shadow_instance_info_caches |
  77. | shadow_instance_metadata |
  78. | shadow_instance_system_metadata |
  79. | shadow_instance_type_extra_specs |
  80. | shadow_instance_type_projects |
  81. | shadow_instance_types |
  82. | shadow_instances |
  83. | shadow_key_pairs |
  84. | shadow_migrate_version |
  85. | shadow_migrations |
  86. | shadow_networks |
  87. | shadow_pci_devices |
  88. | shadow_project_user_quotas |
  89. | shadow_provider_fw_rules |
  90. | shadow_quota_classes |
  91. | shadow_quota_usages |
  92. | shadow_quotas |
  93. | shadow_reservations |
  94. | shadow_s3_images |
  95. | shadow_security_group_default_rules |
  96. | shadow_security_group_instance_association |
  97. | shadow_security_group_rules |
  98. | shadow_security_groups |
  99. | shadow_services |
  100. | shadow_snapshot_id_mappings |
  101. | shadow_snapshots |
  102. | shadow_task_log |
  103. | shadow_virtual_interfaces |
  104. | shadow_volume_id_mappings |
  105. | shadow_volume_usage_cache |
  106. | snapshot_id_mappings |
  107. | snapshots |
  108. | tags |
  109. | task_log |
  110. | virtual_interfaces |
  111. | volume_id_mappings |
  112. | volume_usage_cache |
  113. +--------------------------------------------+
  114. 105 rows in set (0.01 sec)

Start all of the Nova services

  1. [root@linux-node1 ~]# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
  2. [root@linux-node1 ~]# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Register Nova in Keystone, then check that the controller-side Nova services are running

  1. [root@linux-node1 ~]# openstack service create --name nova --description "OpenStack Compute" compute
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | OpenStack Compute |
  6. | enabled | True |
  7. | id | f5873e5f21994da882599c9866e28d55 |
  8. | name | nova |
  9. | type | compute |
  10. +-------------+----------------------------------+
  11. [root@linux-node1 ~]# openstack endpoint create --region RegionOne compute public http://192.168.56.11:8774/v2/%\(tenant_id\)s
  12. +--------------+--------------------------------------------+
  13. | Field | Value |
  14. +--------------+--------------------------------------------+
  15. | enabled | True |
  16. | id | 23e9132aeb3a4dcb8689aa1933ad7301 |
  17. | interface | public |
  18. | region | RegionOne |
  19. | region_id | RegionOne |
  20. | service_id | f5873e5f21994da882599c9866e28d55 |
  21. | service_name | nova |
  22. | service_type | compute |
  23. | url | http://192.168.56.11:8774/v2/%(tenant_id)s |
  24. +--------------+--------------------------------------------+
  25. [root@linux-node1 ~]# openstack endpoint create --region RegionOne compute internal http://192.168.56.11:8774/v2/%\(tenant_id\)s
  26. +--------------+--------------------------------------------+
  27. | Field | Value |
  28. +--------------+--------------------------------------------+
  29. | enabled | True |
  30. | id | 1d67f3630a0f413e9d6ff53bcc657fb6 |
  31. | interface | internal |
  32. | region | RegionOne |
  33. | region_id | RegionOne |
  34. | service_id | f5873e5f21994da882599c9866e28d55 |
  35. | service_name | nova |
  36. | service_type | compute |
  37. | url | http://192.168.56.11:8774/v2/%(tenant_id)s |
  38. +--------------+--------------------------------------------+
  39. [root@linux-node1 ~]# openstack endpoint create --region RegionOne compute admin http://192.168.56.11:8774/v2/%\(tenant_id\)s
  40. +--------------+--------------------------------------------+
  41. | Field | Value |
  42. +--------------+--------------------------------------------+
  43. | enabled | True |
  44. | id | b7f7c210becc4e54b76bb454966582e4 |
  45. | interface | admin |
  46. | region | RegionOne |
  47. | region_id | RegionOne |
  48. | service_id | f5873e5f21994da882599c9866e28d55 |
  49. | service_name | nova |
  50. | service_type | compute |
  51. | url | http://192.168.56.11:8774/v2/%(tenant_id)s |
  52. +--------------+--------------------------------------------+
  53. [root@linux-node1 ~]# openstack host list
  54. +---------------------------+-------------+----------+
  55. | Host Name | Service | Zone |
  56. +---------------------------+-------------+----------+
  57. | linux-node1.oldboyedu.com | conductor | internal |
  58. | linux-node1.oldboyedu.com | consoleauth | internal |
  59. | linux-node1.oldboyedu.com | cert | internal |
  60. | linux-node1.oldboyedu.com | scheduler | internal |
  61. +---------------------------+-------------+----------+
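
An equivalent check that also shows whether each service is up is nova service-list:

  1. [root@linux-node1 ~]# nova service-list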

3.7 Deploying Nova Compute on the Compute Node

  • Nova compute at a glance

    nova-compute usually runs on the compute nodes; it receives requests over the message queue and manages the VM life cycle.
    nova-compute manages KVM through libvirt, Xen through the XenAPI, and so on.
  • Configure time synchronization
    Edit the chrony configuration file
  1. [root@linux-node1 ~]# vim /etc/chrony.conf
  2. server 192.168.56.11 iburst  (keep only this one server line, pointing at the controller)

Enable chronyd at boot and start it

  1. [root@linux-node1 ~]#systemctl enable chronyd.service
  2. [root@linux-node1 ~]#systemctl start chronyd.service

Set the CentOS 7 time zone

  1. [root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai

Check the time zone and time

  5. [root@linux-node ~]# timedatectl status
  6. Local time: Fri 2015-12-18 00:12:26 CST
  7. Universal time: Thu 2015-12-17 16:12:26 UTC
  8. RTC time: Sun 2015-12-13 15:32:36
  9. Timezone: Asia/Shanghai (CST, +0800)
  10. NTP enabled: yes
  11. NTP synchronized: no
  12. RTC in local TZ: no
  13. DST active: n/a
  14. [root@linux-node1 ~]# date
  15. Fri Dec 18 00:12:43 CST 2015
  • Start deploying the compute node
    Edit the configuration on the compute node, starting from a copy of the controller's nova.conf
  1. [root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/  (run this scp on the controller)

The filtered configuration after the edits

  1. [root@linux-node ~]# grep -n '^[a-Z]' /etc/nova/nova.conf
  2. 61:rpc_backend=rabbit
  3. 124:my_ip=192.168.56.12  (changed to this node's IP)
  4. 268:enabled_apis=osapi_compute,metadata
  5. 425:auth_strategy=keystone
  6. 1053:network_api_class=nova.network.neutronv2.api.API
  7. 1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
  8. 1331:security_group_api=neutron
  9. 1370:debug=true
  10. 1374:verbose=True
  11. 1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
  12. 1820:novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html  (the novncproxy address and port on the controller)
  13. 1828:vncserver_listen=0.0.0.0  (VNC listens on all addresses)
  14. 1832:vncserver_proxyclient_address= $my_ip
  15. 1835:vnc_enabled=true  (enable VNC)
  16. 1838:vnc_keymap=en-us  (US English keymap)
  17. 2213:connection=mysql://nova:nova@192.168.56.11/nova
  18. 2334:host=192.168.56.11
  19. 2546:auth_uri = http://192.168.56.11:5000
  20. 2547:auth_url = http://192.168.56.11:35357
  21. 2548:auth_plugin = password
  22. 2549:project_domain_id = default
  23. 2550:user_domain_id = default
  24. 2551:project_name = service
  25. 2552:username = nova
  26. 2553:password = nova
  27. 2727:virt_type=kvm  (use KVM; this requires CPU virtualization support, which you can check with grep "vmx" /proc/cpuinfo, see below)
  28. 3807:lock_path=/var/lib/nova/tmp
  29. 3970:rabbit_host=192.168.56.11
  30. 3974:rabbit_port=5672
  31. 3986:rabbit_userid=openstack
  32. 3990:rabbit_password=openstack
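
Before relying on virt_type=kvm, confirm that the compute node's CPU exposes hardware virtualization; if the count below is 0, the usual fallback is virt_type=qemu:

  1. [root@linux-node ~]# egrep -c '(vmx|svm)' /proc/cpuinfo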

Enable and start libvirt and nova-compute on the compute node

  1. [root@linux-node ~]# systemctl enable libvirtd openstack-nova-compute
  2. ln -s '/usr/lib/systemd/system/openstack-nova-compute.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service'
  3. [root@linux-node ~]# systemctl start libvirtd openstack-nova-compute
  • Back on the controller, list the registered hosts; the last line, the compute service, is the newly registered compute node
  1. [root@linux-node1 ~]# openstack host list
  2. +---------------------------+-------------+----------+
  3. | Host Name | Service | Zone |
  4. +---------------------------+-------------+----------+
  5. | linux-node1.oldboyedu.com | conductor | internal |
  6. | linux-node1.oldboyedu.com | consoleauth | internal |
  7. | linux-node1.oldboyedu.com | cert | internal |
  8. | linux-node1.oldboyedu.com | scheduler | internal |
  9. | linux-node.oldboyedu.com | compute | nova |
  10. +---------------------------+-------------+----------+

On the controller, verify that Nova can talk to Glance, and that Nova can reach Keystone

  1. [root@linux-node1 ~]# nova image-list
  2. +--------------------------------------+--------+--------+--------+
  3. | ID | Name | Status | Server |
  4. +--------------------------------------+--------+--------+--------+
  5. | 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros | ACTIVE | |
  6. +--------------------------------------+--------+--------+--------+
  7. [root@linux-node1 ~]# nova endpoints
  8. WARNING: keystone has no endpoint in ! Available endpoints for this service:
  9. +-----------+----------------------------------+
  10. | keystone | Value |
  11. +-----------+----------------------------------+
  12. | id | 1143dcd58b6848a1890c3f2b9bf101d5 |
  13. | interface | public |
  14. | region | RegionOne |
  15. | region_id | RegionOne |
  16. | url | http://192.168.56.11:5000/v2.0 |
  17. +-----------+----------------------------------+
  18. +-----------+----------------------------------+
  19. | keystone | Value |
  20. +-----------+----------------------------------+
  21. | id | 28283cbf90b5434ba7a8780fac9308df |
  22. | interface | admin |
  23. | region | RegionOne |
  24. | region_id | RegionOne |
  25. | url | http://192.168.56.11:35357/v2.0 |
  26. +-----------+----------------------------------+
  27. +-----------+----------------------------------+
  28. | keystone | Value |
  29. +-----------+----------------------------------+
  30. | id | 496f648007a04e5fbe99b62ed8a76acd |
  31. | interface | internal |
  32. | region | RegionOne |
  33. | region_id | RegionOne |
  34. | url | http://192.168.56.11:5000/v2.0 |
  35. +-----------+----------------------------------+
  36. WARNING: nova has no endpoint in ! Available endpoints for this service:
  37. +-----------+---------------------------------------------------------------+
  38. | nova | Value |
  39. +-----------+---------------------------------------------------------------+
  40. | id | 1d67f3630a0f413e9d6ff53bcc657fb6 |
  41. | interface | internal |
  42. | region | RegionOne |
  43. | region_id | RegionOne |
  44. | url | http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53 |
  45. +-----------+---------------------------------------------------------------+
  46. +-----------+---------------------------------------------------------------+
  47. | nova | Value |
  48. +-----------+---------------------------------------------------------------+
  49. | id | 23e9132aeb3a4dcb8689aa1933ad7301 |
  50. | interface | public |
  51. | region | RegionOne |
  52. | region_id | RegionOne |
  53. | url | http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53 |
  54. +-----------+---------------------------------------------------------------+
  55. +-----------+---------------------------------------------------------------+
  56. | nova | Value |
  57. +-----------+---------------------------------------------------------------+
  58. | id | b7f7c210becc4e54b76bb454966582e4 |
  59. | interface | admin |
  60. | region | RegionOne |
  61. | region_id | RegionOne |
  62. | url | http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53 |
  63. +-----------+---------------------------------------------------------------+
  64. WARNING: glance has no endpoint in ! Available endpoints for this service:
  65. +-----------+----------------------------------+
  66. | glance | Value |
  67. +-----------+----------------------------------+
  68. | id | 2b55d6db62eb47e9b8993d23e36111e0 |
  69. | interface | admin |
  70. | region | RegionOne |
  71. | region_id | RegionOne |
  72. | url | http://192.168.56.11:9292 |
  73. +-----------+----------------------------------+
  74. +-----------+----------------------------------+
  75. | glance | Value |
  76. +-----------+----------------------------------+
  77. | id | 56cf6132fef14bfaa01c380338f485a6 |
  78. | interface | public |
  79. | region | RegionOne |
  80. | region_id | RegionOne |
  81. | url | http://192.168.56.11:9292 |
  82. +-----------+----------------------------------+
  83. +-----------+----------------------------------+
  84. | glance | Value |
  85. +-----------+----------------------------------+
  86. | id | 8005e8fcd85f4ea281eb9591c294e760 |
  87. | interface | internal |
  88. | region | RegionOne |
  89. | region_id | RegionOne |
  90. | url | http://192.168.56.11:9292 |
  91. +-----------+----------------------------------+

3.8 Deploying the Neutron Service

Register the Neutron service

  1. [root@linux-node1 ~]# source admin-openrc.sh
  2. [root@linux-node1 ~]# openstack service create --name neutron --description "OpenStack Networking" network
  3. +-------------+----------------------------------+
  4. | Field | Value |
  5. +-------------+----------------------------------+
  6. | description | OpenStack Networking |
  7. | enabled | True |
  8. | id | e698fc8506634b05b250e9fdd8205565 |
  9. | name | neutron |
  10. | type | network |
  11. +-------------+----------------------------------+
  12. [root@linux-node1 ~]# openstack endpoint create --region RegionOne network public http://192.168.56.11:9696
  13. +--------------+----------------------------------+
  14. | Field | Value |
  15. +--------------+----------------------------------+
  16. | enabled | True |
  17. | id | 3cf4a13ec1b94e66a47e27bfccd95318 |
  18. | interface | public |
  19. | region | RegionOne |
  20. | region_id | RegionOne |
  21. | service_id | e698fc8506634b05b250e9fdd8205565 |
  22. | service_name | neutron |
  23. | service_type | network |
  24. | url | http://192.168.56.11:9696 |
  25. +--------------+----------------------------------+
  26. [root@linux-node1 ~]# openstack endpoint create --region RegionOne network internal http://192.168.56.11:9696
  27. +--------------+----------------------------------+
  28. | Field | Value |
  29. +--------------+----------------------------------+
  30. | enabled | True |
  31. | id | 5cd1e54d14f046dda2f7bf45b418f54c |
  32. | interface | internal |
  33. | region | RegionOne |
  34. | region_id | RegionOne |
  35. | service_id | e698fc8506634b05b250e9fdd8205565 |
  36. | service_name | neutron |
  37. | service_type | network |
  38. | url | http://192.168.56.11:9696 |
  39. +--------------+----------------------------------+
  40. [root@linux-node1 ~]# openstack endpoint create --region RegionOne network admin http://192.168.56.11:9696
  41. +--------------+----------------------------------+
  42. | Field | Value |
  43. +--------------+----------------------------------+
  44. | enabled | True |
  45. | id | 2c68cb45730d470691e6a3f0656eff03 |
  46. | interface | admin |
  47. | region | RegionOne |
  48. | region_id | RegionOne |
  49. | service_id | e698fc8506634b05b250e9fdd8205565 |
  50. | service_name | neutron |
  51. | service_type | network |
  52. | url | http://192.168.56.11:9696 |
  53. +--------------+----------------------------------+
  54. Create the neutron user, add it to the service project, and grant it the admin role
  55. [root@linux-node1 config]# openstack user create --domain default --password=neutron neutron
  56. +-----------+----------------------------------+
  57. | Field | Value |
  58. +-----------+----------------------------------+
  59. | domain_id | default |
  60. | enabled | True |
  61. | id | 5143854f317541d68efb8bba8b2539fc |
  62. | name | neutron |
  63. +-----------+----------------------------------+
  64. [root@linux-node1 config]# openstack role add --project service --user neutron admin

Edit the Neutron configuration file

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/neutron.conf
  2. 20:state_path = /var/lib/neutron
  3. 60:core_plugin = ml2  (the core plugin is ML2)
  4. 77:service_plugins = router  (the router service plugin)
  5. 92:auth_strategy = keystone
  6. 360:notify_nova_on_port_status_changes = True  (notify Nova when a port's status changes)
  7. 364:notify_nova_on_port_data_changes = True  (notify Nova when a port's data changes)
  9. 367:nova_url = http://192.168.56.11:8774/v2
  10. 573:rpc_backend=rabbit
  11. 717:auth_uri = http://192.168.56.11:5000
  12. 718:auth_url = http://192.168.56.11:35357
  13. 719:auth_plugin = password
  14. 720:project_domain_id = default
  15. 721:user_domain_id = default
  16. 722:project_name = service
  17. 723:username = neutron
  18. 724:password = neutron
  19. 737:connection = mysql://neutron:neutron@192.168.56.11:3306/neutron
  20. 780:auth_url = http://192.168.56.11:35357
  21. 781:auth_plugin = password
  22. 782:project_domain_id = default
  23. 783:user_domain_id = default
  24. 784:region_name = RegionOne
  25. 785:project_name = service
  26. 786:username = nova
  27. 787:password = nova
  28. 818:lock_path = $state_path/lock
  29. 998:rabbit_host = 192.168.56.11
  30. 1002:rabbit_port = 5672
  31. 1014:rabbit_userid = openstack
  32. 1018:rabbit_password = openstack

Edit the ML2 configuration file (ML2 is explained in more detail later)

  1. [root@linux-node1 ~]# grep "^[a-Z]" /etc/neutron/plugins/ml2/ml2_conf.ini
  2. type_drivers = flat,vlan,gre,vxlan,geneve  (the available type drivers)
  3. tenant_network_types = vlan,gre,vxlan,geneve  (tenant network types)
  4. mechanism_drivers = openvswitch,linuxbridge  (the supported mechanism drivers)
  5. extension_drivers = port_security  (the port security extension)
  6. flat_networks = physnet1  (use a single flat network, on the same network as the host)
  7. enable_ipset = True

Edit the Linux bridge agent configuration file

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  2. 9:physical_interface_mappings = physnet1:eth0  (map physnet1 to the eth0 interface)
  3. 16:enable_vxlan = false  (disable VXLAN)
  4. 51:prevent_arp_spoofing = True
  5. 57:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  6. 61:enable_security_group = True

Edit the DHCP agent configuration file

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/dhcp_agent.ini
  2. 27:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  3. 31:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq  (use dnsmasq as the DHCP service)
  4. 52:enable_isolated_metadata = true

Edit the metadata_agent.ini configuration file

  1. [root@linux-node1 config]# grep -n "^[a-Z]" /etc/neutron/metadata_agent.ini
  2. 4:auth_uri = http://192.168.56.11:5000
  3. 5:auth_url = http://192.168.56.11:35357
  4. 6:auth_region = RegionOne
  5. 7:auth_plugin = password
  6. 8:project_domain_id = default
  7. 9:user_domain_id = default
  8. 10:project_name = service
  9. 11:username = neutron
  10. 12:password = neutron
  11. 29:nova_metadata_ip = 192.168.56.11
  12. 52:metadata_proxy_shared_secret = neutron

在控制节点的nova配置文件(/etc/nova/nova.conf)中添加关于neutron的配置,添加如下内容到neutron模块即可

  1. 3033:url = http://192.168.56.11:9696
  2. 3034:auth_url = http://192.168.56.11:35357
  3. 3035:auth_plugin = password
  4. 3036:project_domain_id = default
  5. 3037:user_domain_id = default
  6. 3038:region_name = RegionOne
  7. 3039:project_name = service
  8. 3040:username = neutron
  9. 3041:password = neutron
  10. 3043:service_metadata_proxy = True
  11. 3044:metadata_proxy_shared_secret = neutron

创建ml2的软连接

  1. [root@linux-node1 config]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

同步neutron数据库,并检查结果

  1. [root@linux-node1 config]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  2. MariaDB [(none)]> use neutron;
  3. Database changed
  4. MariaDB [neutron]> show tables;
  5. +-----------------------------------------+
  6. | Tables_in_neutron |
  7. +-----------------------------------------+
  8. | address_scopes |
  9. | agents |
  10. | alembic_version |
  11. | allowedaddresspairs |
  12. | arista_provisioned_nets |
  13. | arista_provisioned_tenants |
  14. | arista_provisioned_vms |
  15. | brocadenetworks |
  16. | brocadeports |
  17. | cisco_csr_identifier_map |
  18. | cisco_hosting_devices |
  19. | cisco_ml2_apic_contracts |
  20. | cisco_ml2_apic_host_links |
  21. | cisco_ml2_apic_names |
  22. | cisco_ml2_n1kv_network_bindings |
  23. | cisco_ml2_n1kv_network_profiles |
  24. | cisco_ml2_n1kv_policy_profiles |
  25. | cisco_ml2_n1kv_port_bindings |
  26. | cisco_ml2_n1kv_profile_bindings |
  27. | cisco_ml2_n1kv_vlan_allocations |
  28. | cisco_ml2_n1kv_vxlan_allocations |
  29. | cisco_ml2_nexus_nve |
  30. | cisco_ml2_nexusport_bindings |
  31. | cisco_port_mappings |
  32. | cisco_router_mappings |
  33. | consistencyhashes |
  34. | csnat_l3_agent_bindings |
  35. | default_security_group |
  36. | dnsnameservers |
  37. | dvr_host_macs |
  38. | embrane_pool_port |
  39. | externalnetworks |
  40. | extradhcpopts |
  41. | firewall_policies |
  42. | firewall_rules |
  43. | firewalls |
  44. | flavors |
  45. | flavorserviceprofilebindings |
  46. | floatingips |
  47. | ha_router_agent_port_bindings |
  48. | ha_router_networks |
  49. | ha_router_vrid_allocations |
  50. | healthmonitors |
  51. | ikepolicies |
  52. | ipallocationpools |
  53. | ipallocations |
  54. | ipamallocationpools |
  55. | ipamallocations |
  56. | ipamavailabilityranges |
  57. | ipamsubnets |
  58. | ipavailabilityranges |
  59. | ipsec_site_connections |
  60. | ipsecpeercidrs |
  61. | ipsecpolicies |
  62. | lsn |
  63. | lsn_port |
  64. | maclearningstates |
  65. | members |
  66. | meteringlabelrules |
  67. | meteringlabels |
  68. | ml2_brocadenetworks |
  69. | ml2_brocadeports |
  70. | ml2_dvr_port_bindings |
  71. | ml2_flat_allocations |
  72. | ml2_geneve_allocations |
  73. | ml2_geneve_endpoints |
  74. | ml2_gre_allocations |
  75. | ml2_gre_endpoints |
  76. | ml2_network_segments |
  77. | ml2_nexus_vxlan_allocations |
  78. | ml2_nexus_vxlan_mcast_groups |
  79. | ml2_port_binding_levels |
  80. | ml2_port_bindings |
  81. | ml2_ucsm_port_profiles |
  82. | ml2_vlan_allocations |
  83. | ml2_vxlan_allocations |
  84. | ml2_vxlan_endpoints |
  85. | multi_provider_networks |
  86. | networkconnections |
  87. | networkdhcpagentbindings |
  88. | networkgatewaydevicereferences |
  89. | networkgatewaydevices |
  90. | networkgateways |
  91. | networkqueuemappings |
  92. | networkrbacs |
  93. | networks |
  94. | networksecuritybindings |
  95. | neutron_nsx_network_mappings |
  96. | neutron_nsx_port_mappings |
  97. | neutron_nsx_router_mappings |
  98. | neutron_nsx_security_group_mappings |
  99. | nexthops |
  100. | nsxv_edge_dhcp_static_bindings |
  101. | nsxv_edge_vnic_bindings |
  102. | nsxv_firewall_rule_bindings |
  103. | nsxv_internal_edges |
  104. | nsxv_internal_networks |
  105. | nsxv_port_index_mappings |
  106. | nsxv_port_vnic_mappings |
  107. | nsxv_router_bindings |
  108. | nsxv_router_ext_attributes |
  109. | nsxv_rule_mappings |
  110. | nsxv_security_group_section_mappings |
  111. | nsxv_spoofguard_policy_network_mappings |
  112. | nsxv_tz_network_bindings |
  113. | nsxv_vdr_dhcp_bindings |
  114. | nuage_net_partition_router_mapping |
  115. | nuage_net_partitions |
  116. | nuage_provider_net_bindings |
  117. | nuage_subnet_l2dom_mapping |
  118. | ofcfiltermappings |
  119. | ofcnetworkmappings |
  120. | ofcportmappings |
  121. | ofcroutermappings |
  122. | ofctenantmappings |
  123. | packetfilters |
  124. | poolloadbalanceragentbindings |
  125. | poolmonitorassociations |
  126. | pools |
  127. | poolstatisticss |
  128. | portbindingports |
  129. | portinfos |
  130. | portqueuemappings |
  131. | ports |
  132. | portsecuritybindings |
  133. | providerresourceassociations |
  134. | qos_bandwidth_limit_rules |
  135. | qos_network_policy_bindings |
  136. | qos_policies |
  137. | qos_port_policy_bindings |
  138. | qosqueues |
  139. | quotas |
  140. | quotausages |
  141. | reservations |
  142. | resourcedeltas |
  143. | router_extra_attributes |
  144. | routerl3agentbindings |
  145. | routerports |
  146. | routerproviders |
  147. | routerroutes |
  148. | routerrules |
  149. | routers |
  150. | securitygroupportbindings |
  151. | securitygrouprules |
  152. | securitygroups |
  153. | serviceprofiles |
  154. | sessionpersistences |
  155. | subnetpoolprefixes |
  156. | subnetpools |
  157. | subnetroutes |
  158. | subnets |
  159. | tz_network_bindings |
  160. | vcns_router_bindings |
  161. | vips |
  162. | vpnservices |
  163. +-----------------------------------------+
  164. 155 rows in set (0.00 sec)

重启nova-api,并启动neutron服务

  1. [root@linux-node1 config]# systemctl restart openstack-nova-api
  2. [root@linux-node1 config]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  3. [root@linux-node1 config]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

检查neutron-agent结果

  1. [root@linux-node1 config]# neutron agent-list
  2. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
  3. | id | agent_type | host | alive | admin_state_up | binary |
  4. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
  5. | 5a9a522f-e2dc-42dc-ab37-b26da0bfe416 | Metadata agent | linux-node1.oldboyedu.com | :-) | True | neutron-metadata-agent |
  6. | 8ba06bd7-896c-47aa-a733-8a9a9822361c | DHCP agent | linux-node1.oldboyedu.com | :-) | True | neutron-dhcp-agent |
  7. | f16eef03-4592-4352-8d5e-c08fb91dc983 | Linux bridge agent | linux-node1.oldboyedu.com | :-) | True | neutron-linuxbridge-agent |
  8. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+

开始部署neutron的计算节点,在这里直接scp过去,不需要做任何更改

  1. [root@linux-node1 config]# scp /etc/neutron/neutron.conf 192.168.56.12:/etc/neutron/
  2. [root@linux-node1 config]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/

修改计算节点的nova配置,添加如下内容到neutron模块即可

  1. 3033:url = http://192.168.56.11:9696
  2. 3034:auth_url = http://192.168.56.11:35357
  3. 3035:auth_plugin = password
  4. 3036:project_domain_id = default
  5. 3037:user_domain_id = default
  6. 3038:region_name = RegionOne
  7. 3039:project_name = service
  8. 3040:username = neutron
  9. 3041:password = neutron
  10. 3043:service_metadata_proxy = True
  11. 3044:metadata_proxy_shared_secret = neutron

复制linuxbridge_agent文件到计算节点(无需更改),并在计算节点上创建ml2软连接

  1. [root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/
  2. [root@linux-node ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

重启计算节点的nova-compute

  1. [root@linux-node ml2]# systemctl restart openstack-nova-compute.service

计算节点上启动neutron-linuxbridge-agent服务

  1. [root@linux-node ml2]# systemctl restart openstack-nova-compute.service
  2. [root@linux-node ml2]# systemctl enable neutron-linuxbridge-agent.service
  3. ln -s '/usr/lib/systemd/system/neutron-linuxbridge-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service'
  4. [root@linux-node ml2]# systemctl start neutron-linuxbridge-agent.service

检查neutron agent的结果,共有四个agent(控制节点三个,计算节点一个)即代表正确

  1. [root@linux-node1 config]# neutron agent-list
  2. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
  3. | id | agent_type | host | alive | admin_state_up | binary |
  4. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
  5. | 5a9a522f-e2dc-42dc-ab37-b26da0bfe416 | Metadata agent | linux-node1.oldboyedu.com | :-) | True | neutron-metadata-agent |
  6. | 7d81019e-ca3b-4b32-ae32-c3de9452ef9d | Linux bridge agent | linux-node.oldboyedu.com | :-) | True | neutron-linuxbridge-agent |
  7. | 8ba06bd7-896c-47aa-a733-8a9a9822361c | DHCP agent | linux-node1.oldboyedu.com | :-) | True | neutron-dhcp-agent |
  8. | f16eef03-4592-4352-8d5e-c08fb91dc983 | Linux bridge agent | linux-node1.oldboyedu.com | :-) | True | neutron-linuxbridge-agent |
  9. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+

四、创建一台虚拟机

图解网络,并创建一个真实的桥接网络 
 
 
创建一个单一扁平网络(名字:flat),网络类型为flat,网络是共享的(shared),网络提供者为physnet1,它是和eth0关联起来的

  1. [root@linux-node1 ~]# source admin-openrc.sh
  2. [root@linux-node1 ~]# neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
  3. Created a new network:
  4. +---------------------------+--------------------------------------+
  5. | Field | Value |
  6. +---------------------------+--------------------------------------+
  7. | admin_state_up | True |
  8. | id | 7a3c7391-cea7-47eb-a0ef-f7b18010c984 |
  9. | mtu | 0 |
  10. | name | flat |
  11. | port_security_enabled | True |
  12. | provider:network_type | flat |
  13. | provider:physical_network | physnet1 |
  14. | provider:segmentation_id | |
  15. | router:external | False |
  16. | shared | True |
  17. | status | ACTIVE |
  18. | subnets | |
  19. | tenant_id | 45ec9f72892c404897d0f7d6668d7a53 |
  20. +---------------------------+--------------------------------------+

对上一步创建的网络创建一个子网,名字为flat-subnet,指定分配地址池,并设置dns和网关

  1. [root@linux-node1 ~]# neutron subnet-create flat 192.168.56.0/24 --name flat-subnet --allocation-pool start=192.168.56.100,end=192.168.56.200 --dns-nameserver 192.168.56.2 --gateway 192.168.56.2
  2. Created a new subnet:
  3. +-------------------+------------------------------------------------------+
  4. | Field | Value |
  5. +-------------------+------------------------------------------------------+
  6. | allocation_pools | {"start": "192.168.56.100", "end": "192.168.56.200"} |
  7. | cidr | 192.168.56.0/24 |
  8. | dns_nameservers | 192.168.56.2 |
  9. | enable_dhcp | True |
  10. | gateway_ip | 192.168.56.2 |
  11. | host_routes | |
  12. | id | 6841c8ae-78f6-44e2-ab74-7411108574c2 |
  13. | ip_version | 4 |
  14. | ipv6_address_mode | |
  15. | ipv6_ra_mode | |
  16. | name | flat-subnet |
  17. | network_id | 7a3c7391-cea7-47eb-a0ef-f7b18010c984 |
  18. | subnetpool_id | |
  19. | tenant_id | 45ec9f72892c404897d0f7d6668d7a53 |
  20. +-------------------+------------------------------------------------------+

查看创建的网络和子网

  1. [root@linux-node1 ~]# neutron net-list
  2. +--------------------------------------+------+------------------------------------------------------+
  3. | id | name | subnets |
  4. +--------------------------------------+------+------------------------------------------------------+
  5. | 7a3c7391-cea7-47eb-a0ef-f7b18010c984 | flat | 6841c8ae-78f6-44e2-ab74-7411108574c2 192.168.56.0/24 |
  6. +--------------------------------------+------+------------------------------------------------------+

注:创建虚拟机之前,由于一个网络下不能存在多个dhcp,所以一定要先关闭该网络中其他的dhcp服务 
下面开始正式创建虚拟机,为了可以连上所创建的虚拟机,在这里要创建一对公钥和私钥,并添加到openstack中

  1. [root@linux-node1 ~]# source demo-openrc.sh
  2. [root@linux-node1 ~]# ssh-keygen -q -N ""
  3. Enter file in which to save the key (/root/.ssh/id_rsa):
  4. [root@linux-node1 ~]# nova keypair-add --pub-key .ssh/id_rsa.pub mykey
  5. [root@linux-node1 ~]# nova keypair-list
  6. +-------+-------------------------------------------------+
  7. | Name | Fingerprint |
  8. +-------+-------------------------------------------------+
  9. | mykey | 9f:25:57:44:45:a3:6d:0d:4b:e7:ca:3a:9c:67:32:6f |
  10. +-------+-------------------------------------------------+
  11. [root@linux-node1 ~]# ls .ssh/
  12. id_rsa id_rsa.pub known_hosts

为默认安全组(default)添加规则,放行icmp和22端口

  1. [root@linux-node1 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
  2. +-------------+-----------+---------+-----------+--------------+
  3. | IP Protocol | From Port | To Port | IP Range | Source Group |
  4. +-------------+-----------+---------+-----------+--------------+
  5. | icmp | -1 | -1 | 0.0.0.0/0 | |
  6. +-------------+-----------+---------+-----------+--------------+
  7. [root@linux-node1 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
  8. +-------------+-----------+---------+-----------+--------------+
  9. | IP Protocol | From Port | To Port | IP Range | Source Group |
  10. +-------------+-----------+---------+-----------+--------------+
  11. | tcp | 22 | 22 | 0.0.0.0/0 | |
  12. +-------------+-----------+---------+-----------+--------------+

创建虚拟机之前要确认虚拟机类型flavor(相当于EC2的instance type)、需要的镜像(EC2的AMI)、需要的网络(EC2的VPC)、安全组(EC2的sg)

  1. [root@linux-node1 ~]# nova flavor-list
  2. +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  3. | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  4. +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  5. | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
  6. | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
  7. | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
  8. | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
  9. | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
  10. +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  11. [root@linux-node1 ~]# nova image-list
  12. +--------------------------------------+--------+--------+--------+
  13. | ID | Name | Status | Server |
  14. +--------------------------------------+--------+--------+--------+
  15. | 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros | ACTIVE | |
  16. +--------------------------------------+--------+--------+--------+
  17. [root@linux-node1 ~]# neutron net-list
  18. +--------------------------------------+------+------------------------------------------------------+
  19. | id | name | subnets |
  20. +--------------------------------------+------+------------------------------------------------------+
  21. | 7a3c7391-cea7-47eb-a0ef-f7b18010c984 | flat | 6841c8ae-78f6-44e2-ab74-7411108574c2 192.168.56.0/24 |
  22. +--------------------------------------+------+------------------------------------------------------+
  23. [root@linux-node1 ~]# nova secgroup-list
  24. +--------------------------------------+---------+------------------------+
  25. | Id | Name | Description |
  26. +--------------------------------------+---------+------------------------+
  27. | 2946cecd-0933-45d0-a6e2-0606abe418ee | default | Default security group |
  28. +--------------------------------------+---------+------------------------+

创建一台虚拟机,类型为m1.tiny,镜像为cirros(上文wget的),网络id为neutron net-list查出来的,安全组就是默认的default,选择刚刚创建的key-pair,虚拟机的名字为hello-instance

  1. [root@linux-node1 ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=7a3c7391-cea7-47eb-a0ef-f7b18010c984 --security-group default --key-name mykey hello-instance
  2. +--------------------------------------+-----------------------------------------------+
  3. | Property | Value |
  4. +--------------------------------------+-----------------------------------------------+
  5. | OS-DCF:diskConfig | MANUAL |
  6. | OS-EXT-AZ:availability_zone | |
  7. | OS-EXT-STS:power_state | 0 |
  8. | OS-EXT-STS:task_state | scheduling |
  9. | OS-EXT-STS:vm_state | building |
  10. | OS-SRV-USG:launched_at | - |
  11. | OS-SRV-USG:terminated_at | - |
  12. | accessIPv4 | |
  13. | accessIPv6 | |
  14. | adminPass | JPp9rX5UBYcW |
  15. | config_drive | |
  16. | created | 2015-12-17T02:03:38Z |
  17. | flavor | m1.tiny (1) |
  18. | hostId | |
  19. | id | bb71867c-4078-4984-bf5a-f10bd84ba72b |
  20. | image | cirros (4b36361f-1946-4026-b0cb-0f7073d48ade) |
  21. | key_name | mykey |
  22. | metadata | {} |
  23. | name | hello-instance |
  24. | os-extended-volumes:volumes_attached | [] |
  25. | progress | 0 |
  26. | security_groups | default |
  27. | status | BUILD |
  28. | tenant_id | 4a213e53e4814685859679ff1dcb559f |
  29. | updated | 2015-12-17T02:03:41Z |
  30. | user_id | eb29c091e0ec490cbfa5d11dc2388766 |
  31. +--------------------------------------+-----------------------------------------------+

查看所创建的虚拟机状态

  1. [root@linux-node1 ~]# nova list
  2. +--------------------------------------+----------------+--------+------------+-------------+---------------------+
  3. | ID | Name | Status | Task State | Power State | Networks |
  4. +--------------------------------------+----------------+--------+------------+-------------+---------------------+
  5. | bb71867c-4078-4984-bf5a-f10bd84ba72b | hello-instance | ACTIVE | - | Running | flat=192.168.56.101 |
  6. +--------------------------------------+----------------+--------+------------+-------------+---------------------+

ssh连接到所创建的虚拟机

  1. [root@linux-node1 ~]# ssh cirros@192.168.56.101

通过vnc生成URL,在web界面上连接虚拟机

  1. [root@linux-node1 ~]# nova get-vnc-console hello-instance novnc
  2. +-------+------------------------------------------------------------------------------------+
  3. | Type | Url |
  4. +-------+------------------------------------------------------------------------------------+
  5. | novnc | http://192.168.56.11:6080/vnc_auto.html?token=1af18bea-5a64-490e-8251-29c8bed36125 |
  6. +-------+------------------------------------------------------------------------------------+

五、深入Neutron讲解

5.1 虚拟机网卡和网桥

  1. [root@linux-node1 ~]# ifconfig
  2. brq7a3c7391-ce: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
  3. inet 192.168.56.11 netmask 255.255.255.0 broadcast 192.168.56.255
  4. inet6 fe80::a812:a1ff:fe7b:b829 prefixlen 64 scopeid 0x20<link>
  5. ether 00:0c:29:34:98:f2 txqueuelen 0 (Ethernet)
  6. RX packets 60177 bytes 17278837 (16.4 MiB)
  7. RX errors 0 dropped 0 overruns 0 frame 0
  8. TX packets 52815 bytes 14671641 (13.9 MiB)
  9. TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
  10. eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
  11. inet6 fe80::20c:29ff:fe34:98f2 prefixlen 64 scopeid 0x20<link>
  12. ether 00:0c:29:34:98:f2 txqueuelen 1000 (Ethernet)
  13. RX packets 67008 bytes 19169606 (18.2 MiB)
  14. RX errors 0 dropped 0 overruns 0 frame 0
  15. TX packets 56855 bytes 17779848 (16.9 MiB)
  16. TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
  17. lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
  18. inet 127.0.0.1 netmask 255.0.0.0
  19. inet6 ::1 prefixlen 128 scopeid 0x10<host>
  20. loop txqueuelen 0 (Local Loopback)
  21. RX packets 432770 bytes 161810178 (154.3 MiB)
  22. RX errors 0 dropped 0 overruns 0 frame 0
  23. TX packets 432770 bytes 161810178 (154.3 MiB)
  24. TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
  25. tap34ea740c-a6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
  26. inet6 fe80::6c67:5fff:fe56:58a4 prefixlen 64 scopeid 0x20<link>
  27. ether 6e:67:5f:56:58:a4 txqueuelen 1000 (Ethernet)
  28. RX packets 75 bytes 8377 (8.1 KiB)
  29. RX errors 0 dropped 0 overruns 0 frame 0
  30. TX packets 1495 bytes 139421 (136.1 KiB)
  31. TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

查看网桥状态

  1. [root@linux-node1 ~]# brctl show
  2. bridge name bridge id STP enabled interfaces
  3. brq7a3c7391-ce 8000.000c293498f2 no eth0
  4. tap34ea740c-a6

brq7a3c7391-ce(网桥):可以理解为一个小交换机,网桥上的设备都和eth0能通(数据链路层),其中tap34ea740c-a6作为虚拟机的网卡,从而实现通信
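
可以在计算节点上进一步核对这层对应关系(下面只是一个检查思路的示例,实例的libvirt名称如instance-00000001为假设值,请以virsh list的实际输出为准):

```bash
# 找到实例对应的libvirt名称(例如 instance-00000001,仅为示例)
virsh list --all

# 查看该实例的虚拟网卡,可以看到tap设备以及它挂在哪个网桥上
virsh domiflist instance-00000001

# 对照网桥成员,确认eth0和tap设备挂在同一个网桥brqxxxx上
brctl show
```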

5.2 不同场景网络类型和OpenStack网络分层

5.2.1 Openstack网络分类

5.2.2 Openstack网络分层

首先网络分层肯定是基于OSI七层模型的,在这里就不再赘述,只对Openstack的网络进行分层讲解

  • 网络:在实际的物理环境下,我们使用交换机或者集线器把多个计算机连接起来形成了网络;在Neutron的世界里,网络也是将多个不同的云主机连接起来。
  • 子网:在实际的物理环境下,我们可以将一个网络划分成多个逻辑子网;在Neutron的世界里,子网也是隶属于网络之下的。
  • 端口:在实际的物理环境下,每个子网或者每个网络都有很多端口,比如交换机端口用来供计算机连接;在Neutron的世界里,端口也是隶属于子网下的,云主机的网卡会对应到一个端口上。
  • 路由器:在实际的网络环境下,不同网络或者不同逻辑子网之间如果需要通信,需要通过路由器进行路由;在Neutron的世界里路由器也是这个作用,用来连接不同的网络或者子网(这些概念对应的命令行示例见下)。
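
下面给出一组与上述概念对应的命令示例(仅作演示:网络沿用上文创建的flat,路由器名demo-router为假设值,本文的flat网络并没有真正用到路由器):

```bash
source admin-openrc.sh

neutron net-list        # 网络
neutron subnet-list     # 子网
neutron port-list       # 端口,云主机网卡、dhcp等都会各自对应一个port

# 路由器(仅演示概念,demo-router为假设的名字)
neutron router-create demo-router
neutron router-list
```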

5.3 五种neutron常见的模型

  • 单一平面网络(也叫大二层网络,最初的 nova-network 网络模型) 
    单一平面网络的缺点: 
    a.存在单一网络瓶颈,缺乏可伸缩性。 
    b.缺乏合适的多租户隔离。 
    c.容易发生广播风暴,而且不能使用keepalived(vrrp组播) 
  • 多平面网络 
  • 混合平面私有网络 
  • 通过私有网络实现运营商路由功能 
  • 通过私有网络实现每个租户创建自己专属的网络区段 

5.4 图解Neutron服务的几大组件

  • ML2(The Modular Layer2):提供一个新的插件ML2,这个插件可以作为一个框架同时支持不同的二层网络,起到中间协调的作用:通过ml2 
    调用linuxbridge、openvswitch和其他商业插件,保证了可以同时使用linuxbridge、openvswitch和其他商业插件。
  • DHCP-Agent:为虚拟机分配IP地址,在创建虚拟机之前,创建了一个IP地址池,就是为了给虚拟机分配IP地址的。具体如下 
    27 interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver Dhcp-agent需要配置与plugin对应的interface_driver 
    31 dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq 当启动一个实例时,分配和配置ip的过程包含一个把ip地址写进dnsmasq配置的进程,接着启动或重载dnsmasq。通常,OpenStack在每个网络中只由一个neutron-dhcp-agent负责spawn一个dnsmasq,所以一个庞大的网络(包含所有子网)中只会有一个dnsmasq提供服务。理论上以及根据实际的实验室测试,dnsmasq应该能每秒处理1000个DHCP请求 
    52 enable_isolated_metadata = true 启用独立的metadata,后续会有说明(dnsmasq进程的检查示例见本列表之后)
  • L3-agent:名字为neutron-l3-agent,为客户机访问外部网络提供3层转发服务。也部署在网络节点上。
  • LBaas:负载均衡即服务(Load Balancing as a Service)。后续会有说明
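
可以在控制(网络)节点上简单核对DHCP-Agent的工作方式(示例命令,输出以实际环境为准):

```bash
# 每个启用dhcp的网络会由neutron-dhcp-agent拉起一个dnsmasq进程
ps -ef | grep [d]nsmasq

# dnsmasq运行在对应网络的命名空间里,可结合ip netns确认
ip netns list
```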

六、虚拟机知多少

虚拟机对于宿主机来说,只是一个进程,通过libvirt调用kvm来管理虚拟机,当然也可以使用virsh工具来管理虚拟机,可参考下面的示例
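
一个简单的验证示例(在计算节点上执行,实例名称以实际输出为准):

```bash
# 虚拟机在宿主机上就是一个qemu-kvm进程
ps -ef | grep [q]emu-kvm

# 通过libvirt(virsh)查看由nova管理的虚拟机
virsh list --all
```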

查看所建虚拟机的真实内容

切换到虚拟机默认的存放路径

  1. [root@linux-node ~]# cd /var/lib/nova/instances/
  2. [root@linux-node instances]# ls
  3. _base bb71867c-4078-4984-bf5a-f10bd84ba72b compute_nodes locks
  • bb71867c-4078-4984-bf5a-f10bd84ba72b目录为虚拟机的ID(可通过nova list查看),详细内容如下 
    console.log 终端输出到此文件中 
    disk 虚拟磁盘,后端文件/var/lib/nova/instances/_base/96bfe896f3aaff3091e7e222df51f2545,使用的是copy on write模式,基础镜像就是这里的后端文件,只有变动的内容才放到disk文件中
  1. [root@linux-node bb71867c-4078-4984-bf5a-f10bd84ba72b]# file disk
  2. disk: QEMU QCOW Image (v3), has backing file (path /var/lib/nova/instances/_base/96bfe896f3aaff3091e7e222df51f2545), 1073741824 bytes
  3. [root@linux-node bb71867c-4078-4984-bf5a-f10bd84ba72b]# qemu-img info disk
  4. image: disk
  5. file format: qcow2
  6. virtual size: 1.0G (1073741824 bytes)
  7. disk size: 2.3M
  8. cluster_size: 65536
  9. backing file: /var/lib/nova/instances/_base/96bfe896f3aaff3091e7e222df51f254516fee9c
  10. Format specific information:
  11. compat: 1.1
  12. lazy refcounts: false
  • disk.info disk的详情
  1. [root@linux-node bb71867c-4078-4984-bf5a-f10bd84ba72b]# qemu-img info disk.info
  2. image: disk.info
  3. file format: raw
  4. virtual size: 512 (512 bytes)
  5. disk size: 4.0K

libvirt.xml 就是libvirt自动生成的xml。不要手动改动此xml,因为它是启动虚拟机时动态生成的,改了也不会生效
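
如果想查看当前生效的xml,可以用virsh直接导出(示例,instance-00000001为假设的libvirt名称,以virsh list输出为准):

```bash
# 导出虚拟机当前生效的libvirt xml,仅供查看
virsh dumpxml instance-00000001 | less
```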

  • compute_nodes记录了主机名和时间戳
  1. [root@linux-node instances]# cat compute_nodes
  2. {"linux-node.oldboyedu.com": 1450560590.116144}
  • locks目录:类似于写shell脚本时的lock文件

学习metadata

  • metadata(元数据) 
    在创建虚拟机时可以添加或者修改虚拟机的默认属性,例如主机名,key-pair,ip地址等 
    在新创建的虚拟机上查看metadata的数据,这些内容都是通过metadata服务提供并注入的
  1. $ curl http://169.254.169.254/2009-04-04/meta-data
  3. ami-id
  4. ami-launch-index
  5. ami-manifest-path
  6. block-device-mapping/
  7. hostname
  8. instance-action
  9. instance-id
  10. instance-type
  11. local-hostname
  12. local-ipv4
  13. placement/
  14. public-hostname
  15. public-ipv4
  16. public-keys/
  17. reservation-id
  18. security-groups
  • 查看路由
  1. $ ip ro li
  2. default via 192.168.56.2 dev eth0
  3. 169.254.169.254 via 192.168.56.100 dev eth0
  4. 192.168.56.0/24 dev eth0 src 192.168.56.101
  • 在控制节点查看网络的命名空间ns
  1. [root@linux-node1 ~]# ip netns li
  2. qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984
  • 查看上述ns的具体网卡情况,也就是在命名空间中使用ip ad li并查看端口占用情况
  1. [root@linux-node1 ~]# ip netns exec qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984 ip ad li
  2. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
  3. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  4. inet 127.0.0.1/8 scope host lo
  5. valid_lft forever preferred_lft forever
  6. inet6 ::1/128 scope host
  7. valid_lft forever preferred_lft forever
  8. 2: ns-34ea740c-a6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
  9. link/ether fa:16:3e:93:01:0e brd ff:ff:ff:ff:ff:ff
  10. inet 192.168.56.100/24 brd 192.168.56.255 scope global ns-34ea740c-a6
  11. valid_lft forever preferred_lft forever
  12. inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-34ea740c-a6
  13. valid_lft forever preferred_lft forever
  14. inet6 fe80::f816:3eff:fe93:10e/64 scope link
  15. valid_lft forever preferred_lft forever
  17. [root@linux-node1 ~]# ip netns exec qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984 netstat -lntup
  18. Active Internet connections (only servers)
  19. Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
  20. tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3875/python2
  21. tcp 0 0 192.168.56.100:53 0.0.0.0:* LISTEN 3885/dnsmasq
  22. tcp 0 0 169.254.169.254:53 0.0.0.0:* LISTEN 3885/dnsmasq
  23. tcp6 0 0 fe80::f816:3eff:fe93:53 :::* LISTEN 3885/dnsmasq
  24. udp 0 0 192.168.56.100:53 0.0.0.0:* 3885/dnsmasq
  25. udp 0 0 169.254.169.254:53 0.0.0.0:* 3885/dnsmasq
  26. udp 0 0 0.0.0.0:67 0.0.0.0:* 3885/dnsmasq
  27. udp6 0 0 fe80::f816:3eff:fe93:53 :::* 3885/dnsmasq
  • 总结 
    命名空间ns的ip地址是dhcp服务分配的192.168.56.100,另外还有一个169.254.169.254的ip,并在其中启用了一个http服务(不仅提供metadata,还提供dns解析等)。这是因为在neutron的dhcp-agent配置文件中启用了enable_isolated_metadata = true(并配合nova中的service_metadata_proxy = True)而生效的, 
    所以虚拟机里到169.254.169.254的路由是命名空间通过dhcp推送的(ip ro li可以查看到);key-pair则是虚拟机启动时通过类似curl的脚本从metadata接口取回公钥放到.ssh目录下并改名为authorized_keys,其他注入项同理。下面给出一个手动模拟的示例
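
下面是一个在虚拟机里手动模拟key-pair注入过程的示意(路径为EC2兼容的metadata接口,实际环境中这一步由镜像里的init脚本自动完成):

```bash
# 在虚拟机内从metadata接口取回注入的公钥
curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key

# 追加到authorized_keys即可实现免密登录(镜像的启动脚本做的就是类似的事情)
mkdir -p ~/.ssh
curl -s http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```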

七、Dashboard演示

7.1 编辑dashboard的配置文件

  1. [root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings
  2. 29 ALLOWED_HOSTS = ['*', 'localhost'] 哪些主机可以访问dashboard,'*'表示不限制来源
  3. 138 OPENSTACK_HOST = "192.168.56.11" 改成keystone的地址
  4. 140 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" keystone中之前创建的默认角色
  5. 108 CACHES = {
  6. 109 'default': {
  7. 110 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
  8. 111 'LOCATION': '192.168.56.11:11211',
  9. 112 }
  10. 113 } 打开使用memcached
  11. 320 TIME_ZONE = "Asia/Shanghai" 设置时区

重启apache

  1. [root@linux-node1 ~]# systemctl restart httpd

7.2 操作dashboard

7.2.1 登录dashboard

使用keystone的demo用户登录(只有在管理员admin权限下才能看到所有instance) 

7.2.2 删除之前的虚拟机并重新创建一台虚拟机

了解针对虚拟机的各个状态操作(列表之后附常用操作对应的nova命令行示例) 

  • 绑定浮动ip:Eip
  • 绑定/解绑接口:绑定或者解绑网络接口(网卡)
  • 编辑云主机:修改云主机的参数
  • 编辑安全组:修改security group的参数
  • 控制台:novnc控制台
  • 查看日志:查看console.log
  • 中止实例:stop虚拟机
  • 挂起实例:save 状态
  • 废弃实例:将实例暂时留存
  • 调整云主机大小: 调整其type
  • 锁定/解锁实例:锁定/解锁这个云主机
  • 软重启实例:正常重启,先stop后start
  • 硬重启实例:类似于断电重启
  • 关闭实例: shutdown该实例
  • 重建云主机:重新build一个同样的云主机
  • 终止实例: 删除云主机
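
上面这些界面操作大多有对应的nova命令行(示例,hello-instance为上文创建的虚拟机,命令在控制节点执行):

```bash
nova stop hello-instance            # 中止实例
nova start hello-instance           # 启动实例
nova reboot hello-instance          # 软重启
nova reboot --hard hello-instance   # 硬重启
nova suspend hello-instance         # 挂起实例
nova resume hello-instance          # 从挂起状态恢复
nova console-log hello-instance     # 查看日志(console.log)
nova delete hello-instance          # 终止(删除)实例
```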

7.2.3 launch instance

八、cinder

8.1 存储的三大分类

块存储:硬盘,磁盘阵列DAS,SAN存储 
文件存储:nfs,GlusterFS,Ceph(PB级分布式文件系统),MooseFS(缺点:Metadata数据丢失,虚拟机就毁了) 
对象存储:swift,S3

8.2 cinder控制节点的部署

安装cinder

  1. [root@linux-node1 ~]# yum install openstack-cinder python-cinderclient -y

修改cinder配置文件,修改后结果如下

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/cinder/cinder.conf
  2. 421:glance_host = 192.168.56.11 配置glance服务的主机
  3. 536:auth_strategy = keystone 使用keystone做认证
  4. 2294:rpc_backend = rabbit 使用rabbitmq消息队列
  5. 2516:connection = mysql://cinder:cinder@192.168.56.11/cinder 配置mysql地址
  6. 2641:auth_uri = http://192.168.56.11:5000
  7. 2642:auth_url = http://192.168.56.11:35357
  8. 2643:auth_plugin = password
  9. 2644:project_domain_id = default
  10. 2645:user_domain_id = default
  11. 2646:project_name = service
  12. 2647:username = cinder
  13. 2648:password = cinder
  14. 2873:lock_path = /var/lib/cinder/tmp 锁路径
  15. 3172:rabbit_host = 192.168.56.11 rabbitmq的主机
  16. 3176:rabbit_port = 5672 rabbitmq的端口
  17. 3188:rabbit_userid = openstack rabbitmq的用户
  18. 3192:rabbit_password = openstack rabbitmq的密码

修改nova的配置文件

  1. [root@linux-node1 ~]# vim /etc/nova/nova.conf
  2. 2145 os_region_name = RegionOne 通知nova使用cinder

执行同步数据库操作

  1. [root@linux-node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

检查导入数据库的结果

  1. [root@linux-node1 ~]# mysql -ucinder -pcinder -e "use cinder;show tables;"
  2. +----------------------------+
  3. | Tables_in_cinder |
  4. +----------------------------+
  5. | backups |
  6. | cgsnapshots |
  7. | consistencygroups |
  8. | driver_initiator_data |
  9. | encryption |
  10. | image_volume_cache_entries |
  11. | iscsi_targets |
  12. | migrate_version |
  13. | quality_of_service_specs |
  14. | quota_classes |
  15. | quota_usages |
  16. | quotas |
  17. | reservations |
  18. | services |
  19. | snapshot_metadata |
  20. | snapshots |
  21. | transfers |
  22. | volume_admin_metadata |
  23. | volume_attachment |
  24. | volume_glance_metadata |
  25. | volume_metadata |
  26. | volume_type_extra_specs |
  27. | volume_type_projects |
  28. | volume_types |
  29. | volumes |
  30. +----------------------------+

创建一个cinder用户,加入service项目,给予admin角色

  1. [root@linux-node1 ~]# openstack user create --domain default --password-prompt cinder
  2. User Password:
  3. Repeat User Password:(密码就是配置文件中配置的2648行)
  4. +-----------+----------------------------------+
  5. | Field | Value |
  6. +-----------+----------------------------------+
  7. | domain_id | default |
  8. | enabled | True |
  9. | id | 096964bd44124624ba7da2e13a4ebd92 |
  10. | name | cinder |
  11. +-----------+----------------------------------+
  12. [root@linux-node1 ~]# openstack role add --project service --user cinder admin

重启nova-api服务和启动cinder服务

  1. [root@linux-node1 ~]# systemctl restart openstack-nova-api.service
  2. [root@linux-node1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
  3. Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
  4. Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
  5. [root@linux-node1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

创建服务(包含V1和V2)

  1. [root@linux-node1 ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | OpenStack Block Storage |
  6. | enabled | True |
  7. | id | 57d5d78509dd4ed8b9878d312b8be26d |
  8. | name | cinder |
  9. | type | volume |
  10. +-------------+----------------------------------+
  11. [root@linux-node1 ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
  12. +-------------+----------------------------------+
  13. | Field | Value |
  14. +-------------+----------------------------------+
  15. | description | OpenStack Block Storage |
  16. | enabled | True |
  17. | id | bac129a7b6494e73947e83e56145c1c4 |
  18. | name | cinderv2 |
  19. | type | volumev2 |
  20. +-------------+----------------------------------+

分别对V1和V2创建三个环境(admin,internal,public)的endpoint

  1. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volume public http://192.168.56.11:8776/v1/%\(tenant_id\)s
  2. +--------------+--------------------------------------------+
  3. | Field | Value |
  4. +--------------+--------------------------------------------+
  5. | enabled | True |
  6. | id | 151da63772d7444297c3e0321264eabe |
  7. | interface | public |
  8. | region | RegionOne |
  9. | region_id | RegionOne |
  10. | service_id | 57d5d78509dd4ed8b9878d312b8be26d |
  11. | service_name | cinder |
  12. | service_type | volume |
  13. | url | http://192.168.56.11:8776/v1/%(tenant_id)s |
  14. +--------------+--------------------------------------------+
  15. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volume internal http://192.168.56.11:8776/v1/%\(tenant_id\)s
  16. +--------------+--------------------------------------------+
  17. | Field | Value |
  18. +--------------+--------------------------------------------+
  19. | enabled | True |
  20. | id | 67b5a787d6784184a296a46e46c66d7a |
  21. | interface | internal |
  22. | region | RegionOne |
  23. | region_id | RegionOne |
  24. | service_id | 57d5d78509dd4ed8b9878d312b8be26d |
  25. | service_name | cinder |
  26. | service_type | volume |
  27. | url | http://192.168.56.11:8776/v1/%(tenant_id)s |
  28. +--------------+--------------------------------------------+
  29. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volume admin http://192.168.56.11:8776/v1/%\(tenant_id\)s
  30. +--------------+--------------------------------------------+
  31. | Field | Value |
  32. +--------------+--------------------------------------------+
  33. | enabled | True |
  34. | id | 719d5f3b1b034d7fb4fe577ff8f0f9ff |
  35. | interface | admin |
  36. | region | RegionOne |
  37. | region_id | RegionOne |
  38. | service_id | 57d5d78509dd4ed8b9878d312b8be26d |
  39. | service_name | cinder |
  40. | service_type | volume |
  41. | url | http://192.168.56.11:8776/v1/%(tenant_id)s |
  42. +--------------+--------------------------------------------+
  43. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volumev2 public http://192.168.56.11:8776/v2/%\(tenant_id\)s
  44. +--------------+--------------------------------------------+
  45. | Field | Value |
  46. +--------------+--------------------------------------------+
  47. | enabled | True |
  48. | id | 140ea418e1c842c8ba2669d0eda47577 |
  49. | interface | public |
  50. | region | RegionOne |
  51. | region_id | RegionOne |
  52. | service_id | bac129a7b6494e73947e83e56145c1c4 |
  53. | service_name | cinderv2 |
  54. | service_type | volumev2 |
  55. | url | http://192.168.56.11:8776/v2/%(tenant_id)s |
  56. +--------------+--------------------------------------------+
  57. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volumev2 internal http://192.168.56.11:8776/v2/%\(tenant_id\)s
  58. +--------------+--------------------------------------------+
  59. | Field | Value |
  60. +--------------+--------------------------------------------+
  61. | enabled | True |
  62. | id | e1871461053449a0a9ed1dd93e2de002 |
  63. | interface | internal |
  64. | region | RegionOne |
  65. | region_id | RegionOne |
  66. | service_id | bac129a7b6494e73947e83e56145c1c4 |
  67. | service_name | cinderv2 |
  68. | service_type | volumev2 |
  69. | url | http://192.168.56.11:8776/v2/%(tenant_id)s |
  70. +--------------+--------------------------------------------+
  71. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volumev2 admin http://192.168.56.11:8776/v2/%\(tenant_id\)s
  72. +--------------+--------------------------------------------+
  73. | Field | Value |
  74. +--------------+--------------------------------------------+
  75. | enabled | True |
  76. | id | 1b4f7495b4c5423fa8d541e6d917d3b9 |
  77. | interface | admin |
  78. | region | RegionOne |
  79. | region_id | RegionOne |
  80. | service_id | bac129a7b6494e73947e83e56145c1c4 |
  81. | service_name | cinderv2 |
  82. | service_type | volumev2 |
  83. | url | http://192.168.56.11:8776/v2/%(tenant_id)s |
  84. +--------------+--------------------------------------------+

8.3 cinder存储节点的部署(此处使用nova的计算节点)

  本文中cinder后端存储使用ISCSI(类似于nova-compute使用kvm),ISCSI基于LVM:在定义好的VG中,每创建一个云硬盘,就会新增一个LV,并通过ISCSI发布出去。 
在存储节点上加一个硬盘 
 
查看磁盘添加情况

  1. [root@linux-node ~]# fdisk -l
  2. Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
  3. Units = sectors of 1 * 512 = 512 bytes
  4. Sector size (logical/physical): 512 bytes / 512 bytes
  5. I/O size (minimum/optimal): 512 bytes / 512 bytes
  6. Disk label type: dos
  7. Disk identifier: 0x000bd159
  8. Device Boot Start End Blocks Id System
  9. /dev/sda1 * 2048 2099199 1048576 83 Linux
  10. /dev/sda2 2099200 35653631 16777216 82 Linux swap / Solaris
  11. /dev/sda3 35653632 104857599 34601984 83 Linux
  12. Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
  13. Units = sectors of 1 * 512 = 512 bytes
  14. Sector size (logical/physical): 512 bytes / 512 bytes
  15. I/O size (minimum/optimal): 512 bytes / 512 bytes

创建一个pv和vg(名为cinder-volumes)

  1. [root@linux-node ~]# pvcreate /dev/sdb
  2. Physical volume "/dev/sdb" successfully created
  3. [root@linux-node ~]# vgcreate cinder-volumes /dev/sdb
  4. Volume group "cinder-volumes" successfully created

在lvm的配置文件中添加filter,只允许LVM扫描/dev/sdb并拒绝其他设备,避免扫描到云硬盘内虚拟机自建的LV

  1. [root@linux-node ~]# vim /etc/lvm/lvm.conf
  2. 107 filter = [ "a/sdb/", "r/.*/"]

存储节点安装

  1. [root@linux-node ~]# yum install openstack-cinder targetcli python-oslo-policy -y

修改存储节点的配置文件,在这里直接拷贝控制节点的文件

  1. [root@linux-node1 ~]# scp /etc/cinder/cinder.conf 192.168.56.12:/etc/cinder/cinder.conf
  2. [root@linux-node ~]# grep -n "^[a-Z]" /etc/cinder/cinder.conf
  3. 421:glance_host = 192.168.56.11
  4. 536:auth_strategy = keystone
  5. 540:enabled_backends = lvm 使用的后端是lvm,名称要与文末追加的[lvm]小节对应(小节名也可以自定义,比如hehe,两处保持一致即可)
  6. 2294:rpc_backend = rabbit
  7. 2516:connection = mysql://cinder:cinder@192.168.56.11/cinder
  8. 2641:auth_uri = http://192.168.56.11:5000
  9. 2642:auth_url = http://192.168.56.11:35357
  10. 2643:auth_plugin = password
  11. 2644:project_domain_id = default
  12. 2645:user_domain_id = default
  13. 2646:project_name = service
  14. 2647:username = cinder
  15. 2648:password = cinder
  16. 2873:lock_path = /var/lib/cinder/tmp
  17. 3172:rabbit_host = 192.168.56.11
  18. 3176:rabbit_port = 5672
  19. 3188:rabbit_userid = openstack
  20. 3192:rabbit_password = openstack
  21. 3414:[lvm] 此行不是grep过滤出来的,因为是在配置文件最后添加上的,其对应的是540行的lvm
  22. 3415:volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver 使用lvm后端存储
  23. 3416:volume_group = cinder-volumes vg的名称:刚才创建的
  24. 3417:iscsi_protocol = iscsi 使用iscsi协议
  25. 3418:iscsi_helper = lioadm

启动存储节点的cinder

  1. [root@linux-node ~]# systemctl enable openstack-cinder-volume.service target.service
  2. ln -s '/usr/lib/systemd/system/openstack-cinder-volume.service' '/etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service'
  3. ln -s '/usr/lib/systemd/system/target.service' '/etc/systemd/system/multi-user.target.wants/target.service'
  4. [root@linux-node ~]# systemctl start openstack-cinder-volume.service target.service

查看云硬盘服务状态(如果是虚拟机作为宿主机,时间不同步,会产生问题)

  1. [root@linux-node1 ~]# cinder service-list
  2. +------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
  3. | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
  4. +------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
  5. | cinder-scheduler | linux-node1.oldboyedu.com | nova | enabled | up | 2015-12-25T03:17:31.000000 | - |
  6. | cinder-volume | linux-node.oldboyedu.com@lvm | nova | enabled | up | 2015-12-25T03:17:29.000000 | - |
  7. +------------------+------------------------------+------+---------+-------+----------------------------+-----------------+

创建一个云硬盘 

将云硬盘挂载到虚拟机上,在虚拟机实例详情可以查看到 
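
除了dashboard,也可以用命令行完成创建和挂载,并顺便在存储节点上确认对应的LV(示意,云硬盘名demo-vol为假设值,volume的ID以cinder list的输出为准):

```bash
# 控制节点:创建一块1G的云硬盘并挂给虚拟机
source demo-openrc.sh
cinder create --display-name demo-vol 1     # v2接口下参数为 --name
cinder list
nova volume-attach hello-instance <volume-id>

# 存储节点:每块云硬盘对应cinder-volumes这个VG下的一个LV
lvs cinder-volumes
```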
 
在虚拟机中对挂载的硬盘进行分区、格式化。如果暂时不再使用这个云硬盘,一定不要直接删除,生产环境尤其要注意,否则虚拟机可能会出现error;应先在虚拟机里umount,确认已卸载后,再到dashboard中进行卸载并删除云硬盘

  1. $ sudo fdisk -l
  2. Disk /dev/vda: 1073 MB, 1073741824 bytes
  3. 255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
  4. Units = sectors of 1 * 512 = 512 bytes
  5. Sector size (logical/physical): 512 bytes / 512 bytes
  6. I/O size (minimum/optimal): 512 bytes / 512 bytes
  7. Disk identifier: 0x00000000
  8. Device Boot Start End Blocks Id System
  9. /dev/vda1 * 16065 2088449 1036192+ 83 Linux
  10. Disk /dev/vdb: 3221 MB, 3221225472 bytes
  11. 16 heads, 63 sectors/track, 6241 cylinders, total 6291456 sectors
  12. Units = sectors of 1 * 512 = 512 bytes
  13. Sector size (logical/physical): 512 bytes / 512 bytes
  14. I/O size (minimum/optimal): 512 bytes / 512 bytes
  15. Disk identifier: 0x00000000
  16. Disk /dev/vdb doesn't contain a valid partition table
  17. $ sudo fdisk /dev/vdb
  18. Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
  19. Building a new DOS disklabel with disk identifier 0xfb4dbd94.
  20. Changes will remain in memory only, until you decide to write them.
  21. After that, of course, the previous content won't be recoverable.
  22. Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
  23. Command (m for help): n
  24. Partition type:
  25. p primary (0 primary, 0 extended, 4 free)
  26. e extended
  27. Select (default p): p
  28. Partition number (1-4, default 1):
  29. Using default value 1
  30. First sector (2048-6291455, default 2048):
  31. Using default value 2048
  32. Last sector, +sectors or +size{K,M,G} (2048-6291455, default 6291455):
  33. Using default value 6291455
  34. Command (m for help): w
  35. The partition table has been altered!
  36. Calling ioctl() to re-read partition table.
  37. Syncing disks.
  38. $ sudo mkfs.ext4 /dev/vdb\1
  39. mke2fs 1.42.2 (27-Mar-2012)
  40. Filesystem label=
  41. OS type: Linux
  42. Block size=4096 (log=2)
  43. Fragment size=4096 (log=2)
  44. Stride=0 blocks, Stripe width=0 blocks
  45. 196608 inodes, 786176 blocks
  46. 39308 blocks (5.00%) reserved for the super user
  47. First data block=0
  48. Maximum filesystem blocks=805306368
  49. 24 block groups
  50. 32768 blocks per group, 32768 fragments per group
  51. 8192 inodes per group
  52. Superblock backups stored on blocks:
  53. 32768, 98304, 163840, 229376, 294912
  54. Allocating group tables: done
  55. Writing inode tables: done
  56. Creating journal (16384 blocks): done
  57. Writing superblocks and filesystem accounting information: done
  58. $ sudo mkfs.ext4 /dev/vdbb1
  59. mke2fs 1.42.2 (27-Mar-2012)
  60. Could not stat /dev/vdbb1 --- No such file or directory
  61. The device apparently does not exist; did you specify it correctly?
  62. $ sudo mkfs.ext4 /dev/vdb\1
  63. mke2fs 1.42.2 (27-Mar-2012)
  64. Filesystem label=
  65. OS type: Linux
  66. Block size=4096 (log=2)
  67. Fragment size=4096 (log=2)
  68. Stride=0 blocks, Stripe width=0 blocks
  69. 196608 inodes, 786176 blocks
  70. 39308 blocks (5.00%) reserved for the super user
  71. First data block=0
  72. Maximum filesystem blocks=805306368
  73. 24 block groups
  74. 32768 blocks per group, 32768 fragments per group
  75. 8192 inodes per group
  76. Superblock backups stored on blocks:
  77. 32768, 98304, 163840, 229376, 294912
  78. Allocating group tables: done
  79. Writing inode tables: done
  80. Creating journal (16384 blocks): done
  81. Writing superblocks and filesystem accounting information: done
  82. $ sudo mkdir /data
  83. $ sudo mount /dev/vdb1 /data
  84. $ df -h
  85. Filesystem Size Used Available Use% Mounted on
  86. /dev 242.3M 0 242.3M 0% /dev
  87. /dev/vda1 23.2M 18.0M 4.0M 82% /
  88. tmpfs 245.8M 0 245.8M 0% /dev/shm
  89. tmpfs 200.0K 72.0K 128.0K 36% /run
  90. /dev/vdb1 3.0G 68.5M 2.7G 2% /data

从云硬盘启动一个虚拟机,先创建一个demo2的云硬盘 
 

九、虚拟机创建流程:


 
第一阶段:用户操作 
1)用户使用Dashboard或者CLI连接keystone,发送用户名和密码,待keystone验证通过,keystone会返回给dashboard一个authtoken 
2)Dashboard会带着上述的authtoken访问nova-api进行创建虚拟机请求 
3)nova-api会通过keystone校验dashboard带来的authtoken是否有效。 
第二阶段:nova内组件交互阶段 
4)nova-api把用户要创建的虚拟机的信息记录到数据库中. 
5)nova-api使用rpc-call的方式发送请求给消息队列 
6)nova-scheduler获取消息队列中的消息 
7)nova-scheduler查询数据库中要创建的虚拟机信息和计算节点的信息,进行调度 
8)nova-scheduler把调度后的信息发送给消息队列 
9)nova-compute获取nova-scheduler发送给queue的消息 
10)nova-compute通过消息队列发送消息给nova-conductor,想要获取数据库中要创建的虚拟机信息 
11)nova-conductor获取消息队列的消息 
12)nova-conductor读取数据库中要创建虚拟机的信息 
13)nova-conductor把从数据库获取的消息返回给消息队列 
14)nova-compute获取nova-conductor返回给消息队列的信息 
第三阶段:nova和其他组件进行交互 
15)nova-compute通过authtoken和数据库返回的镜像id请求glance服务 
16)glance会通过keystone进行认证 
17)glance验证通过后把镜像返回给nova-compute 
18)nova-compute通过authtoken和数据库返回的网络id请求neutron服务 
19)neutron会通过keystone进行认证 
20)neutron验证通过后把网络分配情况返回给nova-compute 
21)nova-compute通过authtoken和数据库返回的云硬盘信息请求cinder服务 
22)cinder会通过keystone进行认证 
23)cinder验证通过后把云硬盘分配情况返回给nova-compute 
第四阶段:nova创建虚拟机 
24)nova-compute通过libvirt调用kvm根据已有的信息创建虚拟机,动态生成xml 
25)nova-api会不断的在数据库中查询信息并在dashboard显示虚拟机的状态 
生产场景注意事项: 
1、新加的一个计算节点,创建虚拟机时间会很长,因为第一次使用计算节点,没有镜像,计算节点要把glance的镜像放在后端文件(/var/lib/nova/instance/_base)下, 
镜像如果很大,自然会需要很长时间,然后才会在后端文件的基础上创建虚拟机(写时复制copy on write)。 
2、创建虚拟机失败的原因之一:创建网桥失败。要保证eth0网卡配置文件的BOOTPROTO是static而不是dhcp。
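
排查创建流程时,可以拿实例的ID在各服务日志里把整个过程串起来看(示例,日志路径为RDO包的默认位置,实例ID以nova list输出为准):

```bash
# 控制节点:追踪api/scheduler/conductor对该实例的处理
INSTANCE_ID=bb71867c-4078-4984-bf5a-f10bd84ba72b
grep $INSTANCE_ID /var/log/nova/nova-api.log /var/log/nova/nova-scheduler.log /var/log/nova/nova-conductor.log

# 计算节点:查看nova-compute实际的创建动作
grep $INSTANCE_ID /var/log/nova/nova-compute.log
```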

十、负载均衡即服务LBaaS

10.1 使用neutron-lbaas

  1. [root@linux-node1 ~]# yum install openstack-neutron-lbaas python-neutron-lbaas -y

安装haproxy,openstack默认使用haproxy作为代理

  1. [root@linux-node1 ~]# yum install haproxy -y

修改lbaas-agent和neutron配置文件,并重启neutron服务

  1. [root@linux-node1 ~]# vim /etc/neutron/lbaas_agent.ini
  2. 16 interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
  3. 31 device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
  4. [root@linux-node1 ~]# vim /etc/neutron/neutron.conf
  5. 77 service_plugins = router,lbaas
  6. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/neutron_lbaas.conf
  7. 64:service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
  8. [root@linux-node1 ~]# systemctl restart neutron-server
  9. [root@linux-node1 ~]# systemctl enable neutron-lbaas-agent.service
  10. [root@linux-node1 ~]# systemctl start neutron-lbaas-agent.service

使用lbaas创建一个http的负载均衡 
 
在此负载均衡下加一个http节点(此节点使用的不是cirros镜像) 
 
查看命名空间,以及命名空间中的端口占用情况

  1. [root@linux-node1 ~]# ip netns li
  2. qlbaas-1f6d0ac9-32ee-496b-a183-7eaa85aeb2db
  3. qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984
  4. [root@linux-node1 ~]# ip netns li
  5. qlbaas-244327fe-a339-4cfd-a7a8-1be95903d3de
  6. qlbaas-1f6d0ac9-32ee-496b-a183-7eaa85aeb2db
  7. qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984
  8. [root@linux-node1 ~]# ip netns exec qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984 netstat -lntup
  9. Active Internet connections (only servers)
  10. Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
  11. tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3875/python2
  12. tcp 0 0 192.168.56.100:53 0.0.0.0:* LISTEN 31752/dnsmasq
  13. tcp 0 0 169.254.169.254:53 0.0.0.0:* LISTEN 31752/dnsmasq
  14. tcp6 0 0 fe80::f816:3eff:fe93:53 :::* LISTEN 31752/dnsmasq
  15. udp 0 0 192.168.56.100:53 0.0.0.0:* 31752/dnsmasq
  16. udp 0 0 169.254.169.254:53 0.0.0.0:* 31752/dnsmasq
  17. udp 0 0 0.0.0.0:67 0.0.0.0:* 31752/dnsmasq
  18. udp6 0 0 fe80::f816:3eff:fe93:53 :::* 31752/dnsmasq

查看控制节点自动生成的haproxy配置文件

  1. [root@linux-node1 ~]# cat /var/lib/neutron/lbaas/244327fe-a339-4cfd-a7a8-1be95903d3de/conf
  2. global
  3. daemon
  4. user nobody
  5. group haproxy
  6. log /dev/log local0
  7. log /dev/log local1 notice
  8. stats socket /var/lib/neutron/lbaas/244327fe-a339-4cfd-a7a8-1be95903d3de/sock mode 0666 level user
  9. defaults
  10. log global
  11. retries 3
  12. option redispatch
  13. timeout connect 5000
  14. timeout client 50000
  15. timeout server 50000
  16. frontend c16c7cf0-089f-4610-9fe2-724abb1bd145
  17. option tcplog
  18. bind 192.168.56.200:80
  19. mode http
  20. default_backend 244327fe-a339-4cfd-a7a8-1be95903d3de
  21. maxconn 2
  22. option forwardfor
  23. backend 244327fe-a339-4cfd-a7a8-1be95903d3de
  24. mode http
  25. balance roundrobin
  26. option forwardfor
  27. timeout check 30s
  28. option httpchk GET /
  29. http-check expect rstatus 200
  30. stick-table type ip size 10k
  31. stick on src
  32. server b6e8f6cc-9b3c-4936-9932-21330536e2fe 192.168.56.108:80 weight 5 check inter 30s fall 10

添加vip,关联浮动ip,搞定! 
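
可以用curl直接访问VIP验证负载均衡是否生效(192.168.56.200来自上面haproxy配置中的bind地址,若已关联浮动ip也可以改用浮动ip测试):

```bash
# 多请求几次,观察是否都能从后端节点正常返回200(本例后端只有一台192.168.56.108)
for i in 1 2 3; do curl -s -o /dev/null -w "%{http_code}\n" http://192.168.56.200/; done
```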
 

十一、扩展

11.1 所加镜像不知道密码,需要修改

修改dashboard的配置文件,重启服务

  1. [root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings
  2. 201 OPENSTACK_HYPERVISOR_FEATURES = {
  3. 202 'can_set_mount_point': True,
  4. 203 'can_set_password': True,
  5. 204 'requires_keypair': True,

修改计算节点的nova配置文件,重启服务

  1. [root@linux-node ~]# vim /etc/nova/nova.conf
  2. 2735 inject_password=true
  3. [root@linux-node ~]# systemctl restart openstack-nova-compute.service

11.2 openstack网络类型选择

1、Flat :主机数量限制(253),自己做私有云足够了 
2、VLAN :受Vlan4096限制 
3、GRE:三层隧道协议,通过封装和解封装进行传输,只能使用openvswitch,不支持linuxbridge。缺点:二层协议上升到三层,效率降低 
4、vxlan:源自VMware等厂商推动的技术,解决了vlan数量不足的问题,也克服了GRE点对点扩展性差的缺点,把二层的数据封装在UDP协议中传输,突破了Vlan的限制,要使用上文所说的L3-Agent(启用vxlan的配置示意见下)
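
如果要把租户网络切换成vxlan,大致需要调整ml2和linuxbridge-agent的如下配置(仅为示意:VNI范围、local_ip均为假设值;openstack-config来自openstack-utils包,没有安装的话直接手工编辑对应文件即可):

```bash
# ml2:租户网络类型改为vxlan,并指定VNI范围(假设值1:1000)
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000

# linuxbridge-agent:开启vxlan,local_ip为本节点用于隧道的IP(假设值)
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.56.12
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True

# 修改后在对应节点重启 neutron-server / neutron-linuxbridge-agent 生效
```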

11.3 私有云上线

1)开发测试云,用二手机器即可 
2)生产私有云 
3)实现桌面虚拟化

原文链接:http://www.chuck-blog.com/chuck/294.html

一、OpenStack初探

1.1 OpenStack简介

 OpenStack是一整套开源软件项目的综合,它允许企业或服务提供者建立、运行自己的云计算和存储设施。Rackspace与NASA是最初重要的两个贡献者,前者提供了“云文件”平台代码,该平台增强了OpenStack对象存储部分的功能,而后者带来了“Nebula”平台形成了OpenStack其余的部分。而今,OpenStack基金会已经有150多个会员,包括很多知名公司如“Canonical、DELL、Citrix”等。

1.2 OpenStack的几大组件

1.2.1 图解各大组件之间关系

1.2.2 谈谈openstack的组件

  • OpenStack 认证(keystone)

      Keystone为所有的OpenStack组件提供认证和访问策略服务,它依赖自身REST(基于Identity API)系统进行工作,主要对(但不限于)Swift、Glance、Nova等进行认证与授权。事实上,授权通过对动作消息来源者请求的合法性进行鉴定 
      Keystone采用两种授权方式,一种基于用户名/密码,另一种基于令牌(Token)。除此之外,Keystone提供以下三种服务: 
    a.令牌服务:含有授权用户的授权信息 
    b.目录服务:含有用户合法操作的可用服务列表 
    c.策略服务:利用Keystone具体指定用户或群组某些访问权限 

认证服务组件

1)通过宾馆对比keystone 
User 住宾馆的人 
Credentials 身份证 
Authentication 认证你的身份证 
Token 房卡 
project 组间 
Service 宾馆可以提供的服务类别,比如,饮食类,娱乐类 
Endpoint 具体的一种服务,比如吃烧烤,打羽毛球 
Role VIP 等级,VIP越高,享有越高的权限 
2)keystone组件详细说明 
a.服务入口endpoint:如Nova、Swift和Glance一样每个OpenStack服务都拥有一个指定的端口和专属的URL,我们称其为入口(endpoints)。 
b.用户user:Keystone授权使用者 
注:代表一个个体,OpenStack以用户的形式来授权服务给它们。用户拥有证书(credentials),且可能分配给一个或多个租户。经过验证后,会为每个单独的租户提供一个特定的令牌。 
c.服务service:总体而言,任何通过Keystone进行连接或管理的组件都被称为服务。举个例子,我们可以称Glance为Keystone的服务。 
d.角色role:为了维护安全限定,就内特定用户可执行的操作而言,该用户关联的角色是非常重要的。注:一个角色是应是某个租户的使用权限集合,以允许某个指定用户访问或使用特定操作。角色是使用权限的逻辑分组,它使得通用的权限可以简单地分组并绑定到与某个指定租户相关的用户。 
e.租间project:租间指的是具有全部服务入口并配有特定成员角色的一个项目。注:一个租间映射到一个Nova的“project-id”,在对象存储中,一个租间可以有多个容器。根据不同的安装方式,一个租间可以代表一个客户、帐号、组织或项目。

  • OpenStack Dashboard界面 (horizon)

      Horizon是一个用以管理、控制OpenStack服务的Web控制面板,它可以管理实例、镜像、创建密匙对,对实例添加卷、操作Swift容器等。除此之外,用户还可以在控制面板中使用终端(console)或VNC直接访问实例。总之,Horizon具有如下一些特点: 
    a.实例管理:创建、终止实例,查看终端日志,VNC连接,添加卷等 
    b.访问与安全管理:创建安全群组,管理密匙对,设置浮动IP等 
    c.偏好设定:对虚拟硬件模板可以进行不同偏好设定 
    d.镜像管理:编辑或删除镜像 
    e.查看服务目录 
    f.管理用户、配额及项目用途 
    g.用户管理:创建用户等 
    h.卷管理:创建卷和快照 
    i.对象存储处理:创建、删除容器和对象 
    j.为项目下载环境变量

  • OpenStack nova

图解nova

API:负责接收和响应外部请求,支持OpenStackAPI,EC2API

nova-api 组件实现了RESTfulAPI功能,是外部访问Nova的唯一途径,接收外部的请求并通过Message Queue将请求发送给其他服务组件,同时也兼容EC2API,所以可以用EC2的管理工具对nova进行日常管理

Cert:负责身份认证
Scheduler:用于云主机调度

Nova Scheduler模块在openstack中的作用是决策虚拟机创建在哪个主机(计算节点),一般会根据过滤计算节点或者通过加权的方法调度计算节点来创建虚拟机。 
1)过滤 
首先得到未经过过滤的主机列表,然后根据过滤属性,选择服务条件的计算节点主机 
 
2)调度 
经过过滤后,需要对主机进行权值的计算,根据策略选择相应的某一台主机(对于每一个要创建的虚拟机而言) 
 
注:Openstack默认不支持指定的计算节点创建虚拟机 
你可以得到更多nova的知识==>>Nova过滤调度器

Conductor:计算节点访问,数据的中间件
Consloeauth:用于控制台的授权认证
Novncproxy:VNC代理
  • OpenStack 对象存储 (swift)

      Swift为OpenStack提供一种分布式、持续虚拟对象存储,它类似于Amazon Web Service的S3简单存储服务。Swift具有跨节点百级对象的存储能力。Swift内建冗余和失效备援管理,也能够处理归档和媒体流,特别是对大数据(千兆字节)和大容量(多对象数量)的测度非常高效。

swift功能及特点
  • 海量对象存储
  • 大文件(对象)存储
  • 数据冗余管理
  • 归档能力—–处理大数据集
  • 为虚拟机和云应用提供数据容器
  • 处理流媒体
  • 对象安全存储
  • 备份与归档
  • 良好的可伸缩性
Swift的组件
  • Swift账户
  • Swift容器
  • Swift对象
  • Swift代理
  • Swift RING
Swift代理服务器

  用户都是通过Swift-API与代理服务器进行交互,代理服务器正是接收外界请求的门卫,它检测合法的实体位置并路由它们的请求。 
此外,代理服务器也同时处理实体失效而转移时,故障切换的实体重复路由请求。

Swift对象服务器

  对象服务器是一种二进制存储,它负责处理本地存储中的对象数据的存储、检索和删除。对象都是文件系统中存放的典型的二进制文件,具有扩展文件属性的元数据(xattr)。注:xattr格式被Linux中的ext3/4,XFS,Btrfs,JFS和ReiserFS所支持,但是并没有有效测试证明在XFS,JFS,ReiserFS,Reiser4和ZFS下也同样能运行良好。不过,XFS被认为是当前最好的选择。

Swift容器服务器

  容器服务器将列出一个容器中的所有对象,默认对象列表将存储为SQLite文件(译者注:也可以修改为MySQL,安装中就是以MySQL为例)。容器服务器也会统计容器中包含的对象数量及容器的存储空间耗费。

Swift账户服务器

  账户服务器与容器服务器类似,将列出容器中的对象。

Ring(索引环)

  Ring容器记录着Swift中物理存储对象的位置信息,它是真实物理存储位置的实体名的虚拟映射,类似于查找及定位不同集群的实体真实物理位置的索引服务。这里所谓的实体指账户、容器、对象,它们都拥有属于自己的不同的Rings。

  • OpenStack 块存储(cinder)

      API service:负责接受和处理Rest请求,并将请求放入RabbitMQ队列。Cinder提供Volume API V2 
      Scheduler service:响应请求,读取或写向块存储数据库为维护状态,通过消息队列机制与其他进程交互,或直接与上层块存储提供的硬件或软件交互,通过driver结构,他可以与中队的存储 
    提供者进行交互 
      Volume service: 该服务运行在存储节点上,管理存储空间。每个存储节点都有一个Volume Service,若干个这样的存储节点联合起来可以构成一个存储资源池。为了支持不同类型和型号的存储

  • OpenStack Image service (glance)

      glance 主要有三个部分构成:glance-api,glance-registry以及image store 
    glance-api:接受云系统镜像的创建,删除,读取请求 
    glance-registry:云系统的镜像注册服务

  • OpenStack 网络 (neutron)

    这里就不详细介绍了,后面会有详细的讲解

二、环境准备

2.1 准备机器

  本次实验使用的是VMvare虚拟机。详情如下

  • 控制节点 
    hostname:linux-node1.oldboyedu.com 
    ip地址:192.168.56.11 网卡NAT eth0 
    系统及硬件:CentOS 7.1 内存2G,硬盘50G
  • 计算节点: 
    hostname:linux-node2.oldboyedu.com 
    ip地址:192.168.56.12 网卡NAT eth0 
    系统及硬件:CentOS 7.1 内存2G,硬盘50G

2.2 OpenStack版本介绍

本文使用的是最新版L(Liberty)版,其他版本如下图 

2.3 安装组件服务

2.3.1 控制节点安装

  • Base
  1. yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -y
  2. yum install centos-release-openstack-liberty -y
  3. yum install python-openstackclient -y
  • MySQL
  1. yum install mariadb mariadb-server MySQL-python -y
  • RabbitMQ
  1. yum install rabbitmq-server -y
  • Keystone
  1. yum install openstack-keystone httpd mod_wsgi memcached python-memcached -y
  • Glance
  1. yum install openstack-glance python-glance python-glanceclient -y
  • Nova
  1. yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y
  • Neutron
  1. yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
  • Dashboard
  1. yum install openstack-dashboard -y

2.3.2 计算节点安装

  • Base
  1. yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
  2. yum install centos-release-openstack-liberty -y
  3. yum install python-openstackclient -y
  • Nova linux-node2.example.com
  1. yum install openstack-nova-compute sysfsutils -y
  • Neutron
  1. yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y

三、实战OpenStack之控制节点

3.1 CentOS7的时间同步服务器chrony

下载chrony

  1. [root@linux-node1 ~]# yum install -y chrony

修改其配置文件

  1. [root@linux-node1 ~]# vim /etc/chrony.conf
  2. allow 192.168/16

chrony开机自启动,并且启动

  1. [root@linux-node1 ~]#systemctl enable chronyd.service
  2. [root@linux-node1 ~]#systemctl start chronyd.service

设置Centos7的时区

  1. [root@linux-node1 ~]# timedatectl set-timezoneb Asia/Shanghai

查看时区和时间

  1. [root@linux-node1 ~]# timedatectl status
  2. Local time: Tue 2015-12-15 12:19:55 CST
  3. Universal time: Tue 2015-12-15 04:19:55 UTC
  4. RTC time: Sun 2015-12-13 15:35:33
  5. Timezone: Asia/Shanghai (CST, +0800)
  6. NTP enabled: yes
  7. NTP synchronized: no
  8. RTC in local TZ: no
  9. DST active: n/a
  10. [root@linux-node1 ~]# date
  11. Tue Dec 15 12:19:57 CST 2015

3.2 入手mysql

Openstack的所有组件除了Horizon,都要用到数据库,本文使用的是mysql,在CentOS7中,默认叫做MariaDB。 
拷贝配置文件

  1. [root[@linux-node1 ~]#cp /usr/share/mysql/my-medium.cnf /etc/my.cnf

修改mysql配置并启动

  1. [root@linux-node1 ~]# vim /etc/my.cnf(在mysqld模块下添加如下内容)
  2. [mysqld]
  3. default-storage-engine = innodb 默认的存储引擎
  4. innodb_file_per_table 使用独享的表空间
  5. collation-server = utf8_general_ci设置校对标准
  6. init-connect = 'SET NAMES utf8' 设置连接的字符集
  7. character-set-server = utf8 设置创建数据库时默认的字符集

开机自启和启动mysql

  1. [root@linux-node1 ~]# systemctl enable mariadb.service
  2. ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
  3. [root@linux-node1 ~]# systemctl start mariadb.service

设置mysql的密码

  1. [root@linux-node1 ~]# mysql_secure_installation

创建所有组件的库并授权

  1. [root@linux-node1 ~]# mysql -uroot -p123456

执行sql

  1. CREATE DATABASE keystone;
  2. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
  3. GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
  4. CREATE DATABASE glance;
  5. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
  6. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
  7. CREATE DATABASE nova;
  8. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
  9. GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
  10. CREATE DATABASE neutron;
  11. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
  12. GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
  13. CREATE DATABASE cinder;
  14. GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
  15. GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
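
As a quick, optional sanity check (my own addition, not in the original), each service account should now be able to log in over the network and see its own database:

```bash
# Each service user should see its database (plus information_schema) and nothing else.
mysql -ukeystone -pkeystone -h 192.168.56.11 -e 'SHOW DATABASES;'
mysql -uglance -pglance -h 192.168.56.11 -e 'SHOW DATABASES;'
mysql -unova -pnova -h 192.168.56.11 -e 'SHOW DATABASES;'
```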

3.3 The RabbitMQ Message Queue

  SOA (service-oriented architecture) is a component model that ties an application's functional units ("services") together through well-defined interfaces and contracts. The interfaces are defined in a neutral way, independent of the hardware platform, operating system, and programming language used to implement the services, so that services built on very different systems can interact in a uniform, generic way. 
OpenStack follows this SOA approach: its loosely coupled components are deployed independently, each component can act as both a consumer and a provider of services, and they communicate through a message queue (OpenStack supports RabbitMQ, ZeroMQ, and Qpid), so that when one service goes down the others are not dragged down with it.
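
To see this loose coupling in practice, you can inspect the broker once RabbitMQ (set up below) and the OpenStack services are running. The exact queue names vary by release, so treat this as a rough sketch of my own rather than expected output:

```bash
# List the queues the OpenStack components have declared on the broker,
# with the number of messages currently waiting in each.
rabbitmqctl list_queues name messages

# Show which components are connected to RabbitMQ and from which hosts.
rabbitmqctl list_connections user peer_host state
```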

Enable RabbitMQ at boot and start it

  1. [root@linux-node1 ~]# systemctl enable rabbitmq-server.service
  2. ln -s '/usr/lib/systemd/system/rabbitmq-server.service' '/etc/systemd/system/multi-user.target.wants/rabbitmq-server.service'
  3. [root@linux-node1 ~]# systemctl start rabbitmq-server.service

Create a RabbitMQ user named openstack and grant it permissions

  1. [root@linux-node1 ~]# rabbitmqctl add_user openstack openstack
  2. [root@linux-node1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Enable the RabbitMQ web management plugin

  1. [root@linux-node1 ~]# rabbitmq-plugins enable rabbitmq_management

Restart RabbitMQ

  1. [root@linux-node1 ~]# systemctl restart rabbitmq-server.service

Check RabbitMQ's listening ports: 5672 is the service port, 15672 is the web management port, and 25672 is used for clustering

  1. [root@linux-node1 ~]# netstat -lntup |grep 5672
  2. tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 52448/beam
  3. tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 52448/beam
  4. tcp6 0 0 :::5672 :::* LISTEN 52448/beam

In the web UI, add the openstack user and set its permissions; for the first login, both the username and the password must be guest. 
 
Set the user's role (tag) to administrator and set the openstack user's password. 
 
If you want to monitor RabbitMQ, you can use the management API shown in the figure below. 
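
For example (a sketch of my own, assuming the openstack user created above has been given the administrator tag), the management API on port 15672 can be queried with curl:

```bash
# Broker-wide overview: versions, message rates, object counts.
curl -s -u openstack:openstack http://192.168.56.11:15672/api/overview

# Per-queue statistics, useful for spotting a backed-up service.
curl -s -u openstack:openstack http://192.168.56.11:15672/api/queues
```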

3.4 The Keystone Component

Edit the Keystone configuration file

  1. [root@linux-node1 opt]# vim /etc/keystone/keystone.conf
  2. admin_token = 863d35676a5632e846d9
  3. (used to bootstrap Keystone before any users exist; generate this value randomly with openssl)
  4. connection = mysql://keystone:keystone@192.168.56.11/keystone
  5. (the database connection string; the three occurrences of "keystone" are the Keystone component, the keystone MySQL user, and the keystone database in MySQL)
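
The guide does not show the exact command used to generate the admin_token above; one common way (an assumption on my part, not taken from the original) is to let openssl produce a random hex string:

```bash
# 10 random bytes -> a 20-character hex token, the same length as the one used above.
openssl rand -hex 10
```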

Switch to the keystone user and import the keystone database schema

  1. [root@linux-node1 opt]# su -s /bin/sh -c "keystone-manage db_sync" keystone
  2. [root@linux-node1 keystone]# cd /var/log/keystone/
  3. [root@linux-node1 keystone]# ll
  4. total 8
  5. -rw-r--r-- 1 keystone keystone 7064 Dec 15 14:43 keystone.log   (because the schema was imported as the keystone user, the service can write to this log when it starts; if you run the import as root, Keystone will fail to start as the keystone user)
  6. 31:verbose = true   (enable verbose output)
  7. 1229:servers = 192.168.57.11:11211   (set the servers option to the memcached address)
  8. 1634:driver = sql   (use the default SQL driver)
  9. 1827:provider = uuid   (use UUID tokens)
  10. 1832:driver = memcache   (tokens generated from user/password logins are cached in memcached for better performance)

Review the changes

  1. [root@linux-node1 keystone]# grep -n "^[a-Z]" /etc/keystone/keystone.conf
  2. 12:admin_token = 863d35676a5632e846d9
  3. 31:verbose = true
  4. 419:connection = mysql://keystone:keystone@192.168.56.11/keystone
  5. 1229:servers = 192.168.57.11:11211
  6. 1634:driver = sql
  7. 1827:provider = uuid
  8. 1832:driver = memcache

Verify the database import

  1. MariaDB [keystone]> show tables;
  2. +------------------------+
  3. | Tables_in_keystone |
  4. +------------------------+
  5. | access_token |
  6. | assignment |
  7. | config_register |
  8. | consumer |
  9. | credential |
  10. | domain |
  11. | endpoint |
  12. | endpoint_group |
  13. | federation_protocol |
  14. | group |
  15. | id_mapping |
  16. | identity_provider |
  17. | idp_remote_ids |
  18. | mapping |
  19. | migrate_version |
  20. | policy |
  21. | policy_association |
  22. | project |
  23. | project_endpoint |
  24. | project_endpoint_group |
  25. | region |
  26. | request_token |
  27. | revocation_event |
  28. | role |
  29. | sensitive_config |
  30. | service |
  31. | service_provider |
  32. | token |
  33. | trust |
  34. | trust_role |
  35. | user |
  36. | user_group_membership |
  37. | whitelisted_config |
  38. +------------------------+
  39. 33 rows in set (0.00 sec)

Add an Apache wsgi-keystone configuration file; port 5000 serves the public API and port 35357 serves the admin API

  1. [root@linux-node1 keystone]# cat /etc/httpd/conf.d/wsgi-keystone.conf
  2. Listen 5000
  3. Listen 35357
  4. <VirtualHost *:5000>
  5. WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  6. WSGIProcessGroup keystone-public
  7. WSGIScriptAlias / /usr/bin/keystone-wsgi-public
  8. WSGIApplicationGroup %{GLOBAL}
  9. WSGIPassAuthorization On
  10. <IfVersion >= 2.4>
  11. ErrorLogFormat "%{cu}t %M"
  12. </IfVersion>
  13. ErrorLog /var/log/httpd/keystone-error.log
  14. CustomLog /var/log/httpd/keystone-access.log combined
  15. <Directory /usr/bin>
  16. <IfVersion >= 2.4>
  17. Require all granted
  18. </IfVersion>
  19. <IfVersion < 2.4>
  20. Order allow,deny
  21. Allow from all
  22. </IfVersion>
  23. </Directory>
  24. </VirtualHost>
  25. <VirtualHost *:35357>
  26. WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
  27. WSGIProcessGroup keystone-admin
  28. WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
  29. WSGIApplicationGroup %{GLOBAL}
  30. WSGIPassAuthorization On
  31. <IfVersion >= 2.4>
  32. ErrorLogFormat "%{cu}t %M"
  33. </IfVersion>
  34. ErrorLog /var/log/httpd/keystone-error.log
  35. CustomLog /var/log/httpd/keystone-access.log combined
  36. <Directory /usr/bin>
  37. <IfVersion >= 2.4>
  38. Require all granted
  39. </IfVersion>
  40. <IfVersion < 2.4>
  41. Order allow,deny
  42. Allow from all
  43. </IfVersion>
  44. </Directory>
  45. </VirtualHost>

Configure Apache's ServerName; if it is not set, the Keystone service will be affected

  1. [root@linux-node1 httpd]# vim conf/httpd.conf
  2. ServerName 192.168.56.11:80
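
Before starting httpd it can help to syntax-check the Apache configuration; this is a generic check of my own, not a step from the original:

```bash
# Validate httpd.conf and the wsgi-keystone.conf added above.
apachectl configtest
```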

Enable and start memcached and httpd (Keystone itself is served by httpd through mod_wsgi)

  1. [root@linux-node1 httpd]# systemctl enable memcached httpd
  2. ln -s '/usr/lib/systemd/system/memcached.service' '/etc/systemd/system/multi-user.target.wants/memcached.service'
  3. ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
  4. [root@linux-node1 httpd]# systemctl start memcached httpd

Check which ports httpd is listening on

  1. [root@linux-node1 httpd]# netstat -lntup|grep httpd
  2. tcp6 0 0 :::5000 :::* LISTEN 70482/httpd
  3. tcp6 0 0 :::80 :::* LISTEN 70482/httpd
  4. tcp6 0 0 :::35357 :::* LISTEN 70482/httpd

Create users and connect to Keystone. There are two ways to authenticate the CLI: pass options on the command line (see the CLI --help) or set environment variables (env). Below we use environment variables, setting the token, the API endpoint, and the identity API version (this style suits an SOA setup well).

  1. [root@linux-node1 ~]# export OS_TOKEN=863d35676a5632e846d9
  2. [root@linux-node1 ~]# export OS_URL=http://192.168.56.11:35357/v3
  3. [root@linux-node1 ~]# export OS_IDENTITY_API_VERSION=3
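
The same requests can also be made without exporting anything, by passing the equivalent command-line flags; a sketch of the first approach mentioned above (flag names as used by python-openstackclient of this era):

```bash
# One-off call that authenticates with the admin token and the admin URL directly.
openstack --os-token 863d35676a5632e846d9 \
          --os-url http://192.168.56.11:35357/v3 \
          --os-identity-api-version 3 \
          project list
```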

Create the admin project

  1. [root@linux-node1 httpd]# openstack project create --domain default --description "Admin Project" admin
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | Admin Project |
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | 45ec9f72892c404897d0f7d6668d7a53 |
  9. | is_domain | False |
  10. | name | admin |
  11. | parent_id | None |
  12. +-------------+----------------------------------+

Create the admin user and set its password (use a strong one in production)

  1. [root@linux-node1 httpd]# openstack user create --domain default --password-prompt admin
  2. User Password:
  3. Repeat User Password:
  4. +-----------+----------------------------------+
  5. | Field | Value |
  6. +-----------+----------------------------------+
  7. | domain_id | default |
  8. | enabled | True |
  9. | id | bb6d73c0b07246fb8f26025bb72c06a1 |
  10. | name | admin |
  11. +-----------+----------------------------------+

Create the admin role

  1. [root@linux-node1 httpd]# openstack role create admin
  2. +-------+----------------------------------+
  3. | Field | Value |
  4. +-------+----------------------------------+
  5. | id | b0bd00e6164243ceaa794db3250f267e |
  6. | name | admin |
  7. +-------+----------------------------------+

Add the admin user to the admin project with the admin role, tying the role, project, and user together

  1. [root@linux-node1 httpd]# openstack role add --project admin --user admin admin

Create an ordinary user demo and a demo project with the user role, and link them together

  1. [root@linux-node1 httpd]# openstack project create --domain default --description "Demo Project" demo
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | Demo Project |
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | 4a213e53e4814685859679ff1dcb559f |
  9. | is_domain | False |
  10. | name | demo |
  11. | parent_id | None |
  12. +-------------+----------------------------------+
  13. [root@linux-node1 httpd]# openstack user create --domain default --password=demo demo
  14. +-----------+----------------------------------+
  15. | Field | Value |
  16. +-----------+----------------------------------+
  17. | domain_id | default |
  18. | enabled | True |
  19. | id | eb29c091e0ec490cbfa5d11dc2388766 |
  20. | name | demo |
  21. +-----------+----------------------------------+
  22. [root@linux-node1 httpd]# openstack role create user
  23. +-------+----------------------------------+
  24. | Field | Value |
  25. +-------+----------------------------------+
  26. | id | 4b36460ef1bd42daaf67feb19a8a55cf |
  27. | name | user |
  28. +-------+----------------------------------+
  29. [root@linux-node1 httpd]# openstack role add --project demo --user demo user

Create a service project; it will hold the service accounts for components such as Nova, Neutron, and Glance

  1. [root@linux-node1 httpd]# openstack project create --domain default --description "Service Project" service
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | Service Project |
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | 0399778f38934986a923c96d8dc92073 |
  9. | is_domain | False |
  10. | name | service |
  11. | parent_id | None |
  12. +-------------+----------------------------------+

List the users, roles, and projects just created

  1. [root@linux-node1 httpd]# openstack user list
  2. +----------------------------------+-------+
  3. | ID | Name |
  4. +----------------------------------+-------+
  5. | bb6d73c0b07246fb8f26025bb72c06a1 | admin |
  6. | eb29c091e0ec490cbfa5d11dc2388766 | demo |
  7. +----------------------------------+-------+
  8. [root@linux-node1 httpd]# openstack project list
  9. +----------------------------------+---------+
  10. | ID | Name |
  11. +----------------------------------+---------+
  12. | 0399778f38934986a923c96d8dc92073 | service |
  13. | 45ec9f72892c404897d0f7d6668d7a53 | admin |
  14. | 4a213e53e4814685859679ff1dcb559f | demo |
  15. +----------------------------------+---------+
  16. [root@linux-node1 httpd]# openstack role list
  17. +----------------------------------+-------+
  18. | ID | Name |
  19. +----------------------------------+-------+
  20. | 4b36460ef1bd42daaf67feb19a8a55cf | user |
  21. | b0bd00e6164243ceaa794db3250f267e | admin |
  22. +----------------------------------+-------+

Register the Keystone service itself; Keystone handles service registration for the other components, but its own service still has to be registered. 
Create the Keystone identity service

  1. [root@linux-node1 httpd]# openstack service create --name keystone --description "OpenStack Identity" identity
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | OpenStack Identity |
  6. | enabled | True |
  7. | id | 46228b6dae2246008990040bbde371c3 |
  8. | name | keystone |
  9. | type | identity |
  10. +-------------+----------------------------------+

Create the three types of endpoint: public (externally visible), internal (for internal use), and admin (for administration)

  1. [root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity public http://192.168.56.11:5000/v2.0
  2. +--------------+----------------------------------+
  3. | Field | Value |
  4. +--------------+----------------------------------+
  5. | enabled | True |
  6. | id | 1143dcd58b6848a1890c3f2b9bf101d5 |
  7. | interface | public |
  8. | region | RegionOne |
  9. | region_id | RegionOne |
  10. | service_id | 46228b6dae2246008990040bbde371c3 |
  11. | service_name | keystone |
  12. | service_type | identity |
  13. | url | http://192.168.56.11:5000/v2.0 |
  14. +--------------+----------------------------------+
  15. [root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity internal http://192.168.56.11:5000/v2.0
  16. +--------------+----------------------------------+
  17. | Field | Value |
  18. +--------------+----------------------------------+
  19. | enabled | True |
  20. | id | 496f648007a04e5fbe99b62ed8a76acd |
  21. | interface | internal |
  22. | region | RegionOne |
  23. | region_id | RegionOne |
  24. | service_id | 46228b6dae2246008990040bbde371c3 |
  25. | service_name | keystone |
  26. | service_type | identity |
  27. | url | http://192.168.56.11:5000/v2.0 |
  28. +--------------+----------------------------------+
  29. [root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity admin http://192.168.56.11:35357/v2.0
  30. +--------------+----------------------------------+
  31. | Field | Value |
  32. +--------------+----------------------------------+
  33. | enabled | True |
  34. | id | 28283cbf90b5434ba7a8780fac9308df |
  35. | interface | admin |
  36. | region | RegionOne |
  37. | region_id | RegionOne |
  38. | service_id | 46228b6dae2246008990040bbde371c3 |
  39. | service_name | keystone |
  40. | service_type | identity |
  41. | url | http://192.168.56.11:35357/v2.0 |
  42. +--------------+----------------------------------+

List the endpoints just created

  1. [root@linux-node1 httpd]# openstack endpoint list
  2. +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
  3. | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
  4. +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
  5. | 1143dcd58b6848a1890c3f2b9bf101d5 | RegionOne | keystone | identity | True | public | http://192.168.56.11:5000/v2.0 |
  6. | 28283cbf90b5434ba7a8780fac9308df | RegionOne | keystone | identity | True | admin | http://192.168.56.11:35357/v2.0 |
  7. | 496f648007a04e5fbe99b62ed8a76acd | RegionOne | keystone | identity | True | internal | http://192.168.56.11:5000/v2.0 |
  8. +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+

Connect to Keystone and request a token. Now that a username and password exist, we no longer authenticate with the admin token, so the token environment variables must be unset.

  1. [root@linux-node1 httpd]# unset OS_TOKEN
  2. [root@linux-node1 httpd]# unset OS_URL
  3. [root@linux-node1 httpd]# openstack --os-auth-url http://192.168.56.11:35357/v3 \
  4. --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
  5. Password:
  6. +------------+----------------------------------+
  7. | Field | Value |
  8. +------------+----------------------------------+
  9. | expires | 2015-12-16T17:45:52.926050Z |
  10. | id | ba1d3c403bf34759b239176594001f8b |
  11. | project_id | 45ec9f72892c404897d0f7d6668d7a53 |
  12. | user_id | bb6d73c0b07246fb8f26025bb72c06a1 |
  13. +------------+----------------------------------+

Create environment-variable scripts for the admin and demo users and make them executable; from now on, just source the appropriate script before running commands

  1. [root@linux-node1 ~]# cat admin-openrc.sh
  2. export OS_PROJECT_DOMAIN_ID=default
  3. export OS_USER_DOMAIN_ID=default
  4. export OS_PROJECT_NAME=admin
  5. export OS_TENANT_NAME=admin
  6. export OS_USERNAME=admin
  7. export OS_PASSWORD=admin
  8. export OS_AUTH_URL=http://192.168.56.11:35357/v3
  9. export OS_IDENTITY_API_VERSION=3
  10. [root@linux-node1 ~]# cat demo-openrc.sh
  11. export OS_PROJECT_DOMAIN_ID=default
  12. export OS_USER_DOMAIN_ID=default
  13. export OS_PROJECT_NAME=demo
  14. export OS_TENANT_NAME=demo
  15. export OS_USERNAME=demo
  16. export OS_PASSWORD=demo
  17. export OS_AUTH_URL=http://192.168.56.11:5000/v3
  18. export OS_IDENTITY_API_VERSION=3
  19. [root@linux-node1 ~]# chmod +x demo-openrc.sh
  20. [root@linux-node1 ~]# chmod +x admin-openrc.sh
  21. [root@linux-node1 ~]# source admin-openrc.sh
  22. [root@linux-node1 ~]# openstack token issue
  23. +------------+----------------------------------+
  24. | Field | Value |
  25. +------------+----------------------------------+
  26. | expires | 2015-12-16T17:54:06.632906Z |
  27. | id | ade4b0c451b94255af1e96736555db75 |
  28. | project_id | 45ec9f72892c404897d0f7d6668d7a53 |
  29. | user_id | bb6d73c0b07246fb8f26025bb72c06a1 |
  30. +------------+----------------------------------+

3.5 Deploying Glance

Edit the glance-api and glance-registry configuration files and sync the database

  1. [root@linux-node1 glance]# vim glance-api.conf
  2. 538 connection=mysql://glance:glance@192.168.56.11/glance
  3. [root@linux-node1 glance]# vim glance-registry.conf
  4. 363 connection=mysql://glance:glance@192.168.56.11/glance
  5. [root@linux-node1 glance]# su -s /bin/sh -c "glance-manage db_sync" glance
  6. No handlers could be found for logger "oslo_config.cfg"   (this warning can be ignored)

Check the tables imported into the glance database

  1. MariaDB [(none)]> use glance;
  2. Database changed
  3. MariaDB [glance]> show tables;
  4. +----------------------------------+
  5. | Tables_in_glance |
  6. +----------------------------------+
  7. | artifact_blob_locations |
  8. | artifact_blobs |
  9. | artifact_dependencies |
  10. | artifact_properties |
  11. | artifact_tags |
  12. | artifacts |
  13. | image_locations |
  14. | image_members |
  15. | image_properties |
  16. | image_tags |
  17. | images |
  18. | metadef_namespace_resource_types |
  19. | metadef_namespaces |
  20. | metadef_objects |
  21. | metadef_properties |
  22. | metadef_resource_types |
  23. | metadef_tags |
  24. | migrate_version |
  25. | task_info |
  26. | tasks |
  27. +----------------------------------+
  28. 20 rows in set (0.00 sec)

Configure Glance to authenticate with Keystone; every service needs its own Keystone user

  1. [root@linux-node1 ~]# source admin-openrc.sh
  2. [root@linux-node1 ~]# openstack user create --domain default --password=glance glance
  3. +-----------+----------------------------------+
  4. | Field | Value |
  5. +-----------+----------------------------------+
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | f4c340ba02bf44bf83d5c3ccfec77359 |
  9. | name | glance |
  10. +-----------+----------------------------------+
  11. [root@linux-node1 ~]# openstack role add --project service --user glance admin

Edit the glance-api configuration file to integrate with Keystone and MySQL

  1. [root@linux-node1 glance]# vim glance-api.conf
  2. 978 auth_uri = http://192.168.56.11:5000
  3. 979 auth_url = http://192.168.56.11:35357
  4. 980 auth_plugin = password
  5. 981 project_domain_id = default
  6. 982 user_domain_id = default
  7. 983 project_name = service
  8. 984 username = glance
  9. 985 password = glance
  10. 1485 flavor=keystone
  11. 491 notification_driver = noop   (the image service does not need the message queue)
  12. 642 default_store=file   (store images as plain files)
  13. 701 filesystem_store_datadir=/var/lib/glance/images/   (where images are stored)
  14. 363 verbose=True   (enable verbose output)

Edit the glance-registry configuration file to integrate with Keystone and MySQL

  19. [root@linux-node1 glance]# vim glance-registry.conf
  20. 188:verbose=True
  21. 316:notification_driver =noop
  22. 767 auth_uri = http://192.168.56.11:5000
  23. 768 auth_url = http://192.168.56.11:35357
  24. 769 auth_plugin = password
  25. 770 project_domain_id = default
  26. 771 user_domain_id = default
  27. 772 project_name = service
  28. 773 username = glance
  29. 774 password = glance
  30. 1256:flavor=keystone

Review the modified Glance configuration

  34. [root@linux-node1 ~]# grep -n '^[a-z]' /etc/glance/glance-api.conf
  35. 363:verbose=True
  36. 491:notification_driver = noop
  37. 538:connection=mysql://glance:glance@192.168.56.11/glance
  38. 642:default_store=file
  39. 701:filesystem_store_datadir=/var/lib/glance/images/
  40. 978:auth_uri = http://192.168.56.11:5000
  41. 979:auth_url = http://192.168.56.11:35357
  42. 980:auth_plugin = password
  43. 981:project_domain_id = default
  44. 982:user_domain_id = default
  45. 983:project_name = service
  46. 984:username = glance
  47. 985:password = glance
  48. 1485:flavor=keystone
  49. [root@linux-node1 ~]# grep -n '^[a-z]' /etc/glance/glance-registry.conf
  50. 188:verbose=True
  51. 316:notification_driver =noop
  52. 363:connection=mysql://glance:glance@192.168.56.11/glance
  53. 767:auth_uri = http://192.168.56.11:5000
  54. 768:auth_url = http://192.168.56.11:35357
  55. 769:auth_plugin = password
  56. 770:project_domain_id = default
  57. 771:user_domain_id = default
  58. 772:project_name = service
  59. 773:username = glance
  60. 774:password = glance
  61. 1256:flavor=keystone

Enable Glance at boot and start its services

  65. [root@linux-node1 ~]# systemctl enable openstack-glance-api
  66. ln -s '/usr/lib/systemd/system/openstack-glance-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-api.service'
  67. [root@linux-node1 ~]# systemctl enable openstack-glance-registry
  68. ln -s '/usr/lib/systemd/system/openstack-glance-registry.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service'
  69. [root@linux-node1 ~]# systemctl start openstack-glance-api
  70. [root@linux-node1 ~]# systemctl start openstack-glance-registry

Check the ports Glance is using: 9191 is the registry port and 9292 is the API port

  1. [root@linux-node1 ~]# netstat -lntup|egrep "9191|9292"
  2. tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 13180/python2
  3. tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 13162/python2

Register the Glance service in Keystone so that other services are allowed to call it

  7. [root@linux-node1 ~]# source admin-openrc.sh
  8. [root@linux-node1 ~]# openstack service create --name glance --description "OpenStack Image service" image
  9. +-------------+----------------------------------+
  10. | Field | Value |
  11. +-------------+----------------------------------+
  12. | description | OpenStack Image service |
  13. | enabled | True |
  14. | id | cc8b4b4c712f47aa86e2d484c20a65c8 |
  15. | name | glance |
  16. | type | image |
  17. +-------------+----------------------------------+
  18. [root@linux-node1 ~]# openstack endpoint create --region RegionOne image public http://192.168.56.11:9292
  19. +--------------+----------------------------------+
  20. | Field | Value |
  21. +--------------+----------------------------------+
  22. | enabled | True |
  23. | id | 56cf6132fef14bfaa01c380338f485a6 |
  24. | interface | public |
  25. | region | RegionOne |
  26. | region_id | RegionOne |
  27. | service_id | cc8b4b4c712f47aa86e2d484c20a65c8 |
  28. | service_name | glance |
  29. | service_type | image |
  30. | url | http://192.168.56.11:9292 |
  31. +--------------+----------------------------------+
  32. [root@linux-node1 ~]# openstack endpoint create --region RegionOne image internal http://192.168.56.11:9292
  33. +--------------+----------------------------------+
  34. | Field | Value |
  35. +--------------+----------------------------------+
  36. | enabled | True |
  37. | id | 8005e8fcd85f4ea281eb9591c294e760 |
  38. | interface | internal |
  39. | region | RegionOne |
  40. | region_id | RegionOne |
  41. | service_id | cc8b4b4c712f47aa86e2d484c20a65c8 |
  42. | service_name | glance |
  43. | service_type | image |
  44. | url | http://192.168.56.11:9292 |
  45. +--------------+----------------------------------+
  46. [root@linux-node1 ~]# openstack endpoint create --region RegionOne image admin http://192.168.56.11:9292
  47. +--------------+----------------------------------+
  48. | Field | Value |
  49. +--------------+----------------------------------+
  50. | enabled | True |
  51. | id | 2b55d6db62eb47e9b8993d23e36111e0 |
  52. | interface | admin |
  53. | region | RegionOne |
  54. | region_id | RegionOne |
  55. | service_id | cc8b4b4c712f47aa86e2d484c20a65c8 |
  56. | service_name | glance |
  57. | service_type | image |
  58. | url | http://192.168.56.11:9292 |
  59. +--------------+----------------------------------+

Add the Glance API version to the admin and demo environment scripts so other tools know which Glance API version to use; be sure to run this in the directory that contains admin-openrc.sh

  1. [root@linux-node1 ~]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
  2. export OS_IMAGE_API_VERSION=2
  3. [root@linux-node1 ~]# tail -1 admin-openrc.sh
  4. export OS_IMAGE_API_VERSION=2
  5. [root@linux-node1 ~]# tail -1 demo-openrc.sh
  6. export OS_IMAGE_API_VERSION=2

If you see the following, Glance is configured correctly; the list is empty because no images have been uploaded yet

  1. [root@linux-node1 ~]# glance image-list
  2. +----+------+
  3. | ID | Name |
  4. +----+------+
  5. +----+------+

Download an image

  1. [root@linux-node1 ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
  2. --2015-12-17 02:12:55-- http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
  3. Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 69.163.241.114
  4. Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|69.163.241.114|:80... connected.
  5. HTTP request sent, awaiting response... 200 OK
  6. Length: 13287936 (13M) [text/plain]
  7. Saving to: cirros-0.3.4-x86_64-disk.img
  8. 100%[======================================>] 13,287,936 127KB/s in 71s
  9. 2015-12-17 02:14:08 (183 KB/s) - cirros-0.3.4-x86_64-disk.img saved [13287936/13287936]

Upload the image to Glance; run this in the directory where the image was downloaded in the previous step

  1. [root@linux-node1 ~]# glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
  2. [=============================>] 100%
  3. +------------------+--------------------------------------+
  4. | Property | Value |
  5. +------------------+--------------------------------------+
  6. | checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
  7. | container_format | bare |
  8. | created_at | 2015-12-16T18:16:46Z |
  9. | disk_format | qcow2 |
  10. | id | 4b36361f-1946-4026-b0cb-0f7073d48ade |
  11. | min_disk | 0 |
  12. | min_ram | 0 |
  13. | name | cirros |
  14. | owner | 45ec9f72892c404897d0f7d6668d7a53 |
  15. | protected | False |
  16. | size | 13287936 |
  17. | status | active |
  18. | tags | [] |
  19. | updated_at | 2015-12-16T18:16:47Z
  20. |
  21. | virtual_size | None |
  22. | visibility | public |
  23. +------------------+--------------------------------------+

Check the uploaded image

  1. [root@linux-node1 ~]# glance image-list
  2. +--------------------------------------+--------+
  3. | ID | Name |
  4. +--------------------------------------+--------+
  5. | 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros |
  6. +--------------------------------------+--------+
  7. [root@linux-node1 ~]# cd /var/lib/glance/images/
  8. [root@linux-node1 images]# ls
  9. 4b36361f-1946-4026-b0cb-0f7073d48ade   (matches the ID shown above)
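
As an extra, optional check of my own: the file Glance stored should match the checksum reported when the image was uploaded (ee1eca47dc88f4879d8a229cc70a07c6 above), since Glance records the MD5 sum of the image data:

```bash
# Compare against the "checksum" field shown by glance image-create / image-show.
md5sum /var/lib/glance/images/4b36361f-1946-4026-b0cb-0f7073d48ade
```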

3.6 Deploying Nova on the Control Node

Create the nova user, add it to the service project, and grant it the admin role

  1. [root@linux-node1 ~]# source admin-openrc.sh
  2. [root@linux-node1 ~]# openstack user create --domain default --password=nova nova
  3. +-----------+----------------------------------+
  4. | Field | Value |
  5. +-----------+----------------------------------+
  6. | domain_id | default |
  7. | enabled | True |
  8. | id | 73659413d2a842dc82971a0fc531e7b9 |
  9. | name | nova |
  10. +-----------+----------------------------------+
  11. [root@linux-node1 ~]# openstack role add --project service --user nova admin

Edit the Nova configuration file; after the changes it should look like this

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/nova/nova.conf
  2. 61:rpc_backend=rabbit   (use the RabbitMQ message queue)
  3. 124:my_ip=192.168.56.11   (a variable that other options can reference)
  4. 268:enabled_apis=osapi_compute,metadata   (the EC2 API is disabled by omitting it)
  5. 425:auth_strategy=keystone   (authenticate through Keystone; note this option lives in the [DEFAULT] section)
  6. 1053:network_api_class=nova.network.neutronv2.api.API   (use Neutron for networking; the dots follow the Python module path)
  7. 1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver   (the class formerly named LinuxBridgeInterfaceDriver is now NeutronLinuxBridgeInterfaceDriver)
  8. 1331:security_group_api=neutron   (use Neutron for the security group API)
  9. 1370:debug=true
  10. 1374:verbose=True
  11. 1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver   (disable Nova's own firewall driver)
  12. 1828:vncserver_listen= $my_ip   (VNC listen address)
  13. 1832:vncserver_proxyclient_address= $my_ip   (VNC proxy client address)
  14. 2213:connection=mysql://nova:nova@192.168.56.11/nova
  15. 2334:host=$my_ip   (the Glance host address)
  16. 2546:auth_uri = http://192.168.56.11:5000
  17. 2547:auth_url = http://192.168.56.11:35357
  18. 2548:auth_plugin = password
  19. 2549:project_domain_id = default
  20. 2550:user_domain_id = default
  21. 2551:project_name = service   (use the service project)
  22. 2552:username = nova
  23. 2553:password = nova
  24. 3807:lock_path=/var/lib/nova/tmp   (lock file path)
  25. 3970:rabbit_host=192.168.56.11   (RabbitMQ host)
  26. 3974:rabbit_port=5672   (RabbitMQ port)
  27. 3986:rabbit_userid=openstack   (RabbitMQ user)
  28. 3990:rabbit_password=openstack   (RabbitMQ password)

Sync the database

  1. [root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
  2. MariaDB [nova]> use nova;
  3. Database changed
  4. MariaDB [nova]> show tables;
  5. +--------------------------------------------+
  6. | Tables_in_nova |
  7. +--------------------------------------------+
  8. | agent_builds |
  9. | aggregate_hosts |
  10. | aggregate_metadata |
  11. | aggregates |
  12. | block_device_mapping |
  13. | bw_usage_cache |
  14. | cells |
  15. | certificates |
  16. | compute_nodes |
  17. | console_pools |
  18. | consoles |
  19. | dns_domains |
  20. | fixed_ips |
  21. | floating_ips |
  22. | instance_actions |
  23. | instance_actions_events |
  24. | instance_extra |
  25. | instance_faults |
  26. | instance_group_member |
  27. | instance_group_policy |
  28. | instance_groups |
  29. | instance_id_mappings |
  30. | instance_info_caches |
  31. | instance_metadata |
  32. | instance_system_metadata |
  33. | instance_type_extra_specs |
  34. | instance_type_projects |
  35. | instance_types |
  36. | instances |
  37. | key_pairs |
  38. | migrate_version |
  39. | migrations |
  40. | networks |
  41. | pci_devices |
  42. | project_user_quotas |
  43. | provider_fw_rules |
  44. | quota_classes |
  45. | quota_usages |
  46. | quotas |
  47. | reservations |
  48. | s3_images |
  49. | security_group_default_rules |
  50. | security_group_instance_association |
  51. | security_group_rules |
  52. | security_groups |
  53. | services |
  54. | shadow_agent_builds |
  55. | shadow_aggregate_hosts |
  56. | shadow_aggregate_metadata |
  57. | shadow_aggregates |
  58. | shadow_block_device_mapping |
  59. | shadow_bw_usage_cache |
  60. | shadow_cells |
  61. | shadow_certificates |
  62. | shadow_compute_nodes |
  63. | shadow_console_pools |
  64. | shadow_consoles |
  65. | shadow_dns_domains |
  66. | shadow_fixed_ips |
  67. | shadow_floating_ips |
  68. | shadow_instance_actions |
  69. | shadow_instance_actions_events |
  70. | shadow_instance_extra |
  71. | shadow_instance_faults |
  72. | shadow_instance_group_member |
  73. | shadow_instance_group_policy |
  74. | shadow_instance_groups |
  75. | shadow_instance_id_mappings |
  76. | shadow_instance_info_caches |
  77. | shadow_instance_metadata |
  78. | shadow_instance_system_metadata |
  79. | shadow_instance_type_extra_specs |
  80. | shadow_instance_type_projects |
  81. | shadow_instance_types |
  82. | shadow_instances |
  83. | shadow_key_pairs |
  84. | shadow_migrate_version |
  85. | shadow_migrations |
  86. | shadow_networks |
  87. | shadow_pci_devices |
  88. | shadow_project_user_quotas |
  89. | shadow_provider_fw_rules |
  90. | shadow_quota_classes |
  91. | shadow_quota_usages |
  92. | shadow_quotas |
  93. | shadow_reservations |
  94. | shadow_s3_images |
  95. | shadow_security_group_default_rules |
  96. | shadow_security_group_instance_association |
  97. | shadow_security_group_rules |
  98. | shadow_security_groups |
  99. | shadow_services |
  100. | shadow_snapshot_id_mappings |
  101. | shadow_snapshots |
  102. | shadow_task_log |
  103. | shadow_virtual_interfaces |
  104. | shadow_volume_id_mappings |
  105. | shadow_volume_usage_cache |
  106. | snapshot_id_mappings |
  107. | snapshots |
  108. | tags |
  109. | task_log |
  110. | virtual_interfaces |
  111. | volume_id_mappings |
  112. | volume_usage_cache |
  113. +--------------------------------------------+
  114. 105 rows in set (0.01 sec)

Enable and start all of the Nova services

  1. [root@linux-node1 ~]# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
  2. [root@linux-node1 ~]# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Register Nova in Keystone and check that the control node's Nova services are configured correctly

  1. [root@linux-node1 ~]# openstack service create --name nova --description "OpenStack Compute" compute
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | OpenStack Compute |
  6. | enabled | True |
  7. | id | f5873e5f21994da882599c9866e28d55 |
  8. | name | nova |
  9. | type | compute |
  10. +-------------+----------------------------------+
  11. [root@linux-node1 ~]# openstack endpoint create --region RegionOne compute public http://192.168.56.11:8774/v2/%\(tenant_id\)s
  12. +--------------+--------------------------------------------+
  13. | Field | Value |
  14. +--------------+--------------------------------------------+
  15. | enabled | True |
  16. | id | 23e9132aeb3a4dcb8689aa1933ad7301 |
  17. | interface | public |
  18. | region | RegionOne |
  19. | region_id | RegionOne |
  20. | service_id | f5873e5f21994da882599c9866e28d55 |
  21. | service_name | nova |
  22. | service_type | compute |
  23. | url | http://192.168.56.11:8774/v2/%(tenant_id)s |
  24. +--------------+--------------------------------------------+
  25. [root@linux-node1 ~]# openstack endpoint create --region RegionOne compute internal http://192.168.56.11:8774/v2/%\(tenant_id\)s
  26. +--------------+--------------------------------------------+
  27. | Field | Value |
  28. +--------------+--------------------------------------------+
  29. | enabled | True |
  30. | id | 1d67f3630a0f413e9d6ff53bcc657fb6 |
  31. | interface | internal |
  32. | region | RegionOne |
  33. | region_id | RegionOne |
  34. | service_id | f5873e5f21994da882599c9866e28d55 |
  35. | service_name | nova |
  36. | service_type | compute |
  37. | url | http://192.168.56.11:8774/v2/%(tenant_id)s |
  38. +--------------+--------------------------------------------+
  39. [root@linux-node1 ~]# openstack endpoint create --region RegionOne compute admin http://192.168.56.11:8774/v2/%\(tenant_id\)s
  40. +--------------+--------------------------------------------+
  41. | Field | Value |
  42. +--------------+--------------------------------------------+
  43. | enabled | True |
  44. | id | b7f7c210becc4e54b76bb454966582e4 |
  45. | interface | admin |
  46. | region | RegionOne |
  47. | region_id | RegionOne |
  48. | service_id | f5873e5f21994da882599c9866e28d55 |
  49. | service_name | nova |
  50. | service_type | compute |
  51. | url | http://192.168.56.11:8774/v2/%(tenant_id)s |
  52. +--------------+--------------------------------------------+
  53. [root@linux-node1 ~]# openstack host list
  54. +---------------------------+-------------+----------+
  55. | Host Name | Service | Zone |
  56. +---------------------------+-------------+----------+
  57. | linux-node1.oldboyedu.com | conductor | internal |
  58. | linux-node1.oldboyedu.com | consoleauth | internal |
  59. | linux-node1.oldboyedu.com | cert | internal |
  60. | linux-node1.oldboyedu.com | scheduler | internal |
  61. +---------------------------+-------------+----------+

3.7 Deploying Nova Compute on the Compute Node

  • Nova compute illustrated 
     
    nova-compute usually runs on the compute nodes; it receives requests over the Message Queue and manages the lifecycle of VMs 
    nova-compute manages KVM through libvirt, Xen through XenAPI, and so on
  • Configure time synchronization 
    Edit its configuration file
  1. [root@linux-node1 ~]# vim /etc/chrony.conf
  2. server 192.168.56.11 iburst   (keep only this single server line, i.e. sync time from the control node)

Enable chrony at boot and start it

  1. [root@linux-node1 ~]#systemctl enable chronyd.service
  2. [root@linux-node1 ~]#systemctl start chronyd.service

Set the CentOS 7 time zone

  1. [root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai

Check the time zone and time

  5. [root@linux-node ~]# timedatectl status
  6. Local time: Fri 2015-12-18 00:12:26 CST
  7. Universal time: Thu 2015-12-17 16:12:26 UTC
  8. RTC time: Sun 2015-12-13 15:32:36
  9. Timezone: Asia/Shanghai (CST, +0800)
  10. NTP enabled: yes
  11. NTP synchronized: no
  12. RTC in local TZ: no
  13. DST active: n/a
  14. [root@linux-node1 ~]# date
  15. Fri Dec 18 00:12:43 CST 2015
  • Start deploying the compute node 
    Modify the compute node's configuration file, starting from a copy of the control node's
  1. [root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/   (run the scp on the control node)

The filtered configuration after the changes

  1. [root@linux-node ~]# grep -n '^[a-Z]' /etc/nova/nova.conf
  2. 61:rpc_backend=rabbit
  3. 124:my_ip=192.168.56.12   (changed to this node's IP)
  4. 268:enabled_apis=osapi_compute,metadata
  5. 425:auth_strategy=keystone
  6. 1053:network_api_class=nova.network.neutronv2.api.API
  7. 1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
  8. 1331:security_group_api=neutron
  9. 1370:debug=true
  10. 1374:verbose=True
  11. 1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
  12. 1820:novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html   (the novncproxy address and port)
  13. 1828:vncserver_listen=0.0.0.0   (VNC listens on 0.0.0.0)
  14. 1832:vncserver_proxyclient_address= $my_ip
  15. 1835:vnc_enabled=true   (enable VNC)
  16. 1838:vnc_keymap=en-us   (US English keymap)
  17. 2213:connection=mysql://nova:nova@192.168.56.11/nova
  18. 2334:host=192.168.56.11
  19. 2546:auth_uri = http://192.168.56.11:5000
  20. 2547:auth_url = http://192.168.56.11:35357
  21. 2548:auth_plugin = password
  22. 2549:project_domain_id = default
  23. 2550:user_domain_id = default
  24. 2551:project_name = service
  25. 2552:username = nova
  26. 2553:password = nova
  27. 2727:virt_type=kvm   (use KVM; requires CPU support, which you can check with grep "vmx" /proc/cpuinfo)
  28. 3807:lock_path=/var/lib/nova/tmp
  29. 3970:rabbit_host=192.168.56.11
  30. 3974:rabbit_port=5672
  31. 3986:rabbit_userid=openstack
  32. 3990:rabbit_password=openstack
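
Since virt_type=kvm requires hardware virtualization support, it is worth confirming it on the compute node before starting the services; a small sketch of my own along the lines of the note above:

```bash
# Count CPU flags for hardware virtualization (vmx = Intel, svm = AMD).
# A result of 0 means KVM cannot be used and virt_type will generally need to be qemu instead.
grep -c -E 'vmx|svm' /proc/cpuinfo

# Confirm the KVM kernel modules are loaded.
lsmod | grep kvm
```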

Enable and start libvirtd and nova-compute on the compute node

  1. [root@linux-node ~]# systemctl enable libvirtd openstack-nova-compute
  2. ln -s '/usr/lib/systemd/system/openstack-nova-compute.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service'
  3. [root@linux-node ~]# systemctl start libvirtd openstack-nova-compute
  • On the control node, list the registered hosts; the last entry, the compute service, is the newly registered compute node
  1. [root@linux-node1 ~]# openstack host list
  2. +---------------------------+-------------+----------+
  3. | Host Name | Service | Zone |
  4. +---------------------------+-------------+----------+
  5. | linux-node1.oldboyedu.com | conductor | internal |
  6. | linux-node1.oldboyedu.com | consoleauth | internal |
  7. | linux-node1.oldboyedu.com | cert | internal |
  8. | linux-node1.oldboyedu.com | scheduler | internal |
  9. | linux-node.oldboyedu.com | compute | nova |
  10. +---------------------------+-------------+----------+

On the control node, test that Nova can reach Glance and that Nova can authenticate with Keystone

  1. [root@linux-node1 ~]# nova image-list
  2. +--------------------------------------+--------+--------+--------+
  3. | ID | Name | Status | Server |
  4. +--------------------------------------+--------+--------+--------+
  5. | 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros | ACTIVE | |
  6. +--------------------------------------+--------+--------+--------+
  7. [root@linux-node1 ~]# nova endpoints
  8. WARNING: keystone has no endpoint in ! Available endpoints for this service:
  9. +-----------+----------------------------------+
  10. | keystone | Value |
  11. +-----------+----------------------------------+
  12. | id | 1143dcd58b6848a1890c3f2b9bf101d5 |
  13. | interface | public |
  14. | region | RegionOne |
  15. | region_id | RegionOne |
  16. | url | http://192.168.56.11:5000/v2.0 |
  17. +-----------+----------------------------------+
  18. +-----------+----------------------------------+
  19. | keystone | Value |
  20. +-----------+----------------------------------+
  21. | id | 28283cbf90b5434ba7a8780fac9308df |
  22. | interface | admin |
  23. | region | RegionOne |
  24. | region_id | RegionOne |
  25. | url | http://192.168.56.11:35357/v2.0 |
  26. +-----------+----------------------------------+
  27. +-----------+----------------------------------+
  28. | keystone | Value |
  29. +-----------+----------------------------------+
  30. | id | 496f648007a04e5fbe99b62ed8a76acd |
  31. | interface | internal |
  32. | region | RegionOne |
  33. | region_id | RegionOne |
  34. | url | http://192.168.56.11:5000/v2.0 |
  35. +-----------+----------------------------------+
  36. WARNING: nova has no endpoint in ! Available endpoints for this service:
  37. +-----------+---------------------------------------------------------------+
  38. | nova | Value |
  39. +-----------+---------------------------------------------------------------+
  40. | id | 1d67f3630a0f413e9d6ff53bcc657fb6 |
  41. | interface | internal |
  42. | region | RegionOne |
  43. | region_id | RegionOne |
  44. | url | http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53 |
  45. +-----------+---------------------------------------------------------------+
  46. +-----------+---------------------------------------------------------------+
  47. | nova | Value |
  48. +-----------+---------------------------------------------------------------+
  49. | id | 23e9132aeb3a4dcb8689aa1933ad7301 |
  50. | interface | public |
  51. | region | RegionOne |
  52. | region_id | RegionOne |
  53. | url | http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53 |
  54. +-----------+---------------------------------------------------------------+
  55. +-----------+---------------------------------------------------------------+
  56. | nova | Value |
  57. +-----------+---------------------------------------------------------------+
  58. | id | b7f7c210becc4e54b76bb454966582e4 |
  59. | interface | admin |
  60. | region | RegionOne |
  61. | region_id | RegionOne |
  62. | url | http://192.168.56.11:8774/v2/45ec9f72892c404897d0f7d6668d7a53 |
  63. +-----------+---------------------------------------------------------------+
  64. WARNING: glance has no endpoint in ! Available endpoints for this service:
  65. +-----------+----------------------------------+
  66. | glance | Value |
  67. +-----------+----------------------------------+
  68. | id | 2b55d6db62eb47e9b8993d23e36111e0 |
  69. | interface | admin |
  70. | region | RegionOne |
  71. | region_id | RegionOne |
  72. | url | http://192.168.56.11:9292 |
  73. +-----------+----------------------------------+
  74. +-----------+----------------------------------+
  75. | glance | Value |
  76. +-----------+----------------------------------+
  77. | id | 56cf6132fef14bfaa01c380338f485a6 |
  78. | interface | public |
  79. | region | RegionOne |
  80. | region_id | RegionOne |
  81. | url | http://192.168.56.11:9292 |
  82. +-----------+----------------------------------+
  83. +-----------+----------------------------------+
  84. | glance | Value |
  85. +-----------+----------------------------------+
  86. | id | 8005e8fcd85f4ea281eb9591c294e760 |
  87. | interface | internal |
  88. | region | RegionOne |
  89. | region_id | RegionOne |
  90. | url | http://192.168.56.11:9292 |
  91. +-----------+----------------------------------+

3.8 Deploying the Neutron Service

Register the Neutron service

  1. [root@linux-node1 ~]# source admin-openrc.sh
  2. [root@linux-node1 ~]# openstack service create --name neutron --description "OpenStack Networking" network
  3. +-------------+----------------------------------+
  4. | Field | Value |
  5. +-------------+----------------------------------+
  6. | description | OpenStack Networking |
  7. | enabled | True |
  8. | id | e698fc8506634b05b250e9fdd8205565 |
  9. | name | neutron |
  10. | type | network |
  11. +-------------+----------------------------------+
  12. [root@linux-node1 ~]# openstack endpoint create --region RegionOne network public http://192.168.56.11:9696
  13. +--------------+----------------------------------+
  14. | Field | Value |
  15. +--------------+----------------------------------+
  16. | enabled | True |
  17. | id | 3cf4a13ec1b94e66a47e27bfccd95318 |
  18. | interface | public |
  19. | region | RegionOne |
  20. | region_id | RegionOne |
  21. | service_id | e698fc8506634b05b250e9fdd8205565 |
  22. | service_name | neutron |
  23. | service_type | network |
  24. | url | http://192.168.56.11:9696 |
  25. +--------------+----------------------------------+
  26. [root@linux-node1 ~]# openstack endpoint create --region RegionOne network internal http://192.168.56.11:9696
  27. +--------------+----------------------------------+
  28. | Field | Value |
  29. +--------------+----------------------------------+
  30. | enabled | True |
  31. | id | 5cd1e54d14f046dda2f7bf45b418f54c |
  32. | interface | internal |
  33. | region | RegionOne |
  34. | region_id | RegionOne |
  35. | service_id | e698fc8506634b05b250e9fdd8205565 |
  36. | service_name | neutron |
  37. | service_type | network |
  38. | url | http://192.168.56.11:9696 |
  39. +--------------+----------------------------------+
  40. [root@linux-node1 ~]# openstack endpoint create --region RegionOne network admin http://192.168.56.11:9696
  41. +--------------+----------------------------------+
  42. | Field | Value |
  43. +--------------+----------------------------------+
  44. | enabled | True |
  45. | id | 2c68cb45730d470691e6a3f0656eff03 |
  46. | interface | admin |
  47. | region | RegionOne |
  48. | region_id | RegionOne |
  49. | service_id | e698fc8506634b05b250e9fdd8205565 |
  50. | service_name | neutron |
  51. | service_type | network |
  52. | url | http://192.168.56.11:9696 |
  53. +--------------+----------------------------------+

Create the neutron user, add it to the service project, and grant it the admin role

  55. [root@linux-node1 config]# openstack user create --domain default --password=neutron neutron
  56. +-----------+----------------------------------+
  57. | Field | Value |
  58. +-----------+----------------------------------+
  59. | domain_id | default |
  60. | enabled | True |
  61. | id | 5143854f317541d68efb8bba8b2539fc |
  62. | name | neutron |
  63. +-----------+----------------------------------+
  64. [root@linux-node1 config]# openstack role add --project service --user neutron admin

Edit the Neutron configuration file

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/neutron.conf
  2. 20:state_path = /var/lib/neutron
  3. 60:core_plugin = ml2   (the core plugin is ML2)
  4. 77:service_plugins = router   (the service plugin is router)
  5. 92:auth_strategy = keystone
  6. 360:notify_nova_on_port_status_changes = True
  7. (Nova must be notified when ports change)
  8. 364:notify_nova_on_port_data_changes = True
  9. 367:nova_url = http://192.168.56.11:8774/v2
  10. 573:rpc_backend=rabbit
  11. 717:auth_uri = http://192.168.56.11:5000
  12. 718:auth_url = http://192.168.56.11:35357
  13. 719:auth_plugin = password
  14. 720:project_domain_id = default
  15. 721:user_domain_id = default
  16. 722:project_name = service
  17. 723:username = neutron
  18. 724:password = neutron
  19. 737:connection = mysql://neutron:neutron@192.168.56.11:3306/neutron
  20. 780:auth_url = http://192.168.56.11:35357
  21. 781:auth_plugin = password
  22. 782:project_domain_id = default
  23. 783:user_domain_id = default
  24. 784:region_name = RegionOne
  25. 785:project_name = service
  26. 786:username = nova
  27. 787:password = nova
  28. 818:lock_path = $state_path/lock
  29. 998:rabbit_host = 192.168.56.11
  30. 1002:rabbit_port = 5672
  31. 1014:rabbit_userid = openstack
  32. 1018:rabbit_password = openstack

Edit the ML2 configuration file; ML2 is explained in more detail later

  1. [root@linux-node1 ~]# grep "^[a-Z]" /etc/neutron/plugins/ml2/ml2_conf.ini
  2. type_drivers = flat,vlan,gre,vxlan,geneve   (the available type drivers)
  3. tenant_network_types = vlan,gre,vxlan,geneve   (tenant network types)
  4. mechanism_drivers = openvswitch,linuxbridge   (supported mechanism drivers)
  5. extension_drivers = port_security   (port security extension)
  6. flat_networks = physnet1   (use a single flat network, the same network as the host)
  7. enable_ipset = True

Edit the Linux bridge agent configuration file

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
  2. 9:physical_interface_mappings = physnet1:eth0   (map physnet1 to the eth0 interface)
  3. 16:enable_vxlan = false   (disable VXLAN)
  4. 51:prevent_arp_spoofing = True
  5. 57:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  6. 61:enable_security_group = True

Edit the DHCP agent configuration file

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/dhcp_agent.ini
  2. 27:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
  3. 31:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq   (use dnsmasq as the DHCP service)
  4. 52:enable_isolated_metadata = true

Edit the metadata_agent.ini configuration file

  1. [root@linux-node1 config]# grep -n "^[a-Z]" /etc/neutron/metadata_agent.ini
  2. 4:auth_uri = http://192.168.56.11:5000
  3. 5:auth_url = http://192.168.56.11:35357
  4. 6:auth_region = RegionOne
  5. 7:auth_plugin = password
  6. 8:project_domain_id = default
  7. 9:user_domain_id = default
  8. 10:project_name = service
  9. 11:username = neutron
  10. 12:password = neutron
  11. 29:nova_metadata_ip = 192.168.56.11
  12. 52:metadata_proxy_shared_secret = neutron

Add the Neutron settings to Nova on the control node; just add the following to the [neutron] section

  1. 3033:url = http://192.168.56.11:9696
  2. 3034:auth_url = http://192.168.56.11:35357
  3. 3035:auth_plugin = password
  4. 3036:project_domain_id = default
  5. 3037:user_domain_id = default
  6. 3038:region_name = RegionOne
  7. 3039:project_name = service
  8. 3040:username = neutron
  9. 3041:password = neutron
  10. 3043:service_metadata_proxy = True
  11. 3044:metadata_proxy_shared_secret = neutron

Create the symlink for the ML2 plugin configuration

  15. [root@linux-node1 config]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the Neutron database and check the result

  1. [root@linux-node1 config]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  2. MariaDB [(none)]> use neutron;
  3. Database changed
  4. MariaDB [neutron]> show tables;
  5. +-----------------------------------------+
  6. | Tables_in_neutron |
  7. +-----------------------------------------+
  8. | address_scopes |
  9. | agents |
  10. | alembic_version |
  11. | allowedaddresspairs |
  12. | arista_provisioned_nets |
  13. | arista_provisioned_tenants |
  14. | arista_provisioned_vms |
  15. | brocadenetworks |
  16. | brocadeports |
  17. | cisco_csr_identifier_map |
  18. | cisco_hosting_devices |
  19. | cisco_ml2_apic_contracts |
  20. | cisco_ml2_apic_host_links |
  21. | cisco_ml2_apic_names |
  22. | cisco_ml2_n1kv_network_bindings |
  23. | cisco_ml2_n1kv_network_profiles |
  24. | cisco_ml2_n1kv_policy_profiles |
  25. | cisco_ml2_n1kv_port_bindings |
  26. | cisco_ml2_n1kv_profile_bindings |
  27. | cisco_ml2_n1kv_vlan_allocations |
  28. | cisco_ml2_n1kv_vxlan_allocations |
  29. | cisco_ml2_nexus_nve |
  30. | cisco_ml2_nexusport_bindings |
  31. | cisco_port_mappings |
  32. | cisco_router_mappings |
  33. | consistencyhashes |
  34. | csnat_l3_agent_bindings |
  35. | default_security_group |
  36. | dnsnameservers |
  37. | dvr_host_macs |
  38. | embrane_pool_port |
  39. | externalnetworks |
  40. | extradhcpopts |
  41. | firewall_policies |
  42. | firewall_rules |
  43. | firewalls |
  44. | flavors |
  45. | flavorserviceprofilebindings |
  46. | floatingips |
  47. | ha_router_agent_port_bindings |
  48. | ha_router_networks |
  49. | ha_router_vrid_allocations |
  50. | healthmonitors |
  51. | ikepolicies |
  52. | ipallocationpools |
  53. | ipallocations |
  54. | ipamallocationpools |
  55. | ipamallocations |
  56. | ipamavailabilityranges |
  57. | ipamsubnets |
  58. | ipavailabilityranges |
  59. | ipsec_site_connections |
  60. | ipsecpeercidrs |
  61. | ipsecpolicies |
  62. | lsn |
  63. | lsn_port |
  64. | maclearningstates |
  65. | members |
  66. | meteringlabelrules |
  67. | meteringlabels |
  68. | ml2_brocadenetworks |
  69. | ml2_brocadeports |
  70. | ml2_dvr_port_bindings |
  71. | ml2_flat_allocations |
  72. | ml2_geneve_allocations |
  73. | ml2_geneve_endpoints |
  74. | ml2_gre_allocations |
  75. | ml2_gre_endpoints |
  76. | ml2_network_segments |
  77. | ml2_nexus_vxlan_allocations |
  78. | ml2_nexus_vxlan_mcast_groups |
  79. | ml2_port_binding_levels |
  80. | ml2_port_bindings |
  81. | ml2_ucsm_port_profiles |
  82. | ml2_vlan_allocations |
  83. | ml2_vxlan_allocations |
  84. | ml2_vxlan_endpoints |
  85. | multi_provider_networks |
  86. | networkconnections |
  87. | networkdhcpagentbindings |
  88. | networkgatewaydevicereferences |
  89. | networkgatewaydevices |
  90. | networkgateways |
  91. | networkqueuemappings |
  92. | networkrbacs |
  93. | networks |
  94. | networksecuritybindings |
  95. | neutron_nsx_network_mappings |
  96. | neutron_nsx_port_mappings |
  97. | neutron_nsx_router_mappings |
  98. | neutron_nsx_security_group_mappings |
  99. | nexthops |
  100. | nsxv_edge_dhcp_static_bindings |
  101. | nsxv_edge_vnic_bindings |
  102. | nsxv_firewall_rule_bindings |
  103. | nsxv_internal_edges |
  104. | nsxv_internal_networks |
  105. | nsxv_port_index_mappings |
  106. | nsxv_port_vnic_mappings |
  107. | nsxv_router_bindings |
  108. | nsxv_router_ext_attributes |
  109. | nsxv_rule_mappings |
  110. | nsxv_security_group_section_mappings |
  111. | nsxv_spoofguard_policy_network_mappings |
  112. | nsxv_tz_network_bindings |
  113. | nsxv_vdr_dhcp_bindings |
  114. | nuage_net_partition_router_mapping |
  115. | nuage_net_partitions |
  116. | nuage_provider_net_bindings |
  117. | nuage_subnet_l2dom_mapping |
  118. | ofcfiltermappings |
  119. | ofcnetworkmappings |
  120. | ofcportmappings |
  121. | ofcroutermappings |
  122. | ofctenantmappings |
  123. | packetfilters |
  124. | poolloadbalanceragentbindings |
  125. | poolmonitorassociations |
  126. | pools |
  127. | poolstatisticss |
  128. | portbindingports |
  129. | portinfos |
  130. | portqueuemappings |
  131. | ports |
  132. | portsecuritybindings |
  133. | providerresourceassociations |
  134. | qos_bandwidth_limit_rules |
  135. | qos_network_policy_bindings |
  136. | qos_policies |
  137. | qos_port_policy_bindings |
  138. | qosqueues |
  139. | quotas |
  140. | quotausages |
  141. | reservations |
  142. | resourcedeltas |
  143. | router_extra_attributes |
  144. | routerl3agentbindings |
  145. | routerports |
  146. | routerproviders |
  147. | routerroutes |
  148. | routerrules |
  149. | routers |
  150. | securitygroupportbindings |
  151. | securitygrouprules |
  152. | securitygroups |
  153. | serviceprofiles |
  154. | sessionpersistences |
  155. | subnetpoolprefixes |
  156. | subnetpools |
  157. | subnetroutes |
  158. | subnets |
  159. | tz_network_bindings |
  160. | vcns_router_bindings |
  161. | vips |
  162. | vpnservices |
  163. +-----------------------------------------+
  164. 155 rows in set (0.00 sec)

Restart nova-api and start the Neutron services

  1. [root@linux-node1 config]# systemctl restart openstack-nova-api
  2. [root@linux-node1 config]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  3. [root@linux-node1 config]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Check the Neutron agents

  1. [root@linux-node1 config]# neutron agent-list
  2. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
  3. | id | agent_type | host | alive | admin_state_up | binary |
  4. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
  5. | 5a9a522f-e2dc-42dc-ab37-b26da0bfe416 | Metadata agent | linux-node1.oldboyedu.com | :-) | True | neutron-metadata-agent |
  6. | 8ba06bd7-896c-47aa-a733-8a9a9822361c | DHCP agent | linux-node1.oldboyedu.com | :-) | True | neutron-dhcp-agent |
  7. | f16eef03-4592-4352-8d5e-c08fb91dc983 | Linux bridge agent | linux-node1.oldboyedu.com | :-) | True | neutron-linuxbridge-agent |
  8. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+

Now deploy Neutron on the compute node; the configuration files can simply be copied over with scp, no changes needed

  1. [root@linux-node1 config]# scp /etc/neutron/neutron.conf 192.168.56.12:/etc/neutron/
  2. [root@linux-node1 config]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/

Modify nova.conf on the compute node by adding the following to its [neutron] section (a scripted alternative using crudini is sketched after this listing)

  1. 3033:url = http://192.168.56.11:9696
  2. 3034:auth_url = http://192.168.56.11:35357
  3. 3035:auth_plugin = password
  4. 3036:project_domain_id = default
  5. 3037:user_domain_id = default
  6. 3038:region_name = RegionOne
  7. 3039:project_name = service
  8. 3040:username = neutron
  9. 3041:password = neutron
  10. 3043:service_metadata_proxy = True
  11. 3044:metadata_proxy_shared_secret = neutron
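
If you prefer to script this step, the same [neutron] values can be written non-interactively. This is only a hedged sketch: it assumes the crudini tool (from openstack-utils) is installed on the compute node, and the addresses and passwords simply mirror the listing above.

```bash
# Sketch: populate the [neutron] section of nova.conf on the compute node with crudini.
# Adjust addresses and passwords to your own environment.
for kv in \
    "url http://192.168.56.11:9696" \
    "auth_url http://192.168.56.11:35357" \
    "auth_plugin password" \
    "project_domain_id default" \
    "user_domain_id default" \
    "region_name RegionOne" \
    "project_name service" \
    "username neutron" \
    "password neutron" \
    "service_metadata_proxy True" \
    "metadata_proxy_shared_secret neutron"; do
  crudini --set /etc/nova/nova.conf neutron ${kv}   # ${kv} expands to "<key> <value>"
done
```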

Copy the linuxbridge_agent.ini file to the compute node (no changes are needed) and create the ml2 plugin symlink:

  1. [root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/
  2. [root@linux-node ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Restart nova-compute on the compute node

  1. [root@linux-node ml2]# systemctl restart openstack-nova-compute.service

Enable and start the linuxbridge agent service on the compute node

  1. [root@linux-node ml2]# systemctl enable neutron-linuxbridge-agent.service
  2. ln -s '/usr/lib/systemd/system/neutron-linuxbridge-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service'
  3. [root@linux-node ml2]# systemctl start neutron-linuxbridge-agent.service

Check the neutron agents again; four entries (three on the controller node plus one Linux bridge agent on the compute node) mean everything registered correctly

  1. [root@linux-node1 config]# neutron agent-list
  2. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
  3. | id | agent_type | host | alive | admin_state_up | binary |
  4. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
  5. | 5a9a522f-e2dc-42dc-ab37-b26da0bfe416 | Metadata agent | linux-node1.oldboyedu.com | :-) | True | neutron-metadata-agent |
  6. | 7d81019e-ca3b-4b32-ae32-c3de9452ef9d | Linux bridge agent | linux-node.oldboyedu.com | :-) | True | neutron-linuxbridge-agent |
  7. | 8ba06bd7-896c-47aa-a733-8a9a9822361c | DHCP agent | linux-node1.oldboyedu.com | :-) | True | neutron-dhcp-agent |
  8. | f16eef03-4592-4352-8d5e-c08fb91dc983 | Linux bridge agent | linux-node1.oldboyedu.com | :-) | True | neutron-linuxbridge-agent |
  9. +--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+

四、创建一台虚拟机

图解网络,并创建一个真实的桥接网络 
 
 
Create a single flat network named flat: the network type is flat, it is shared (--shared), and its provider physical network is physnet1, which is bound to eth0

  1. [root@linux-node1 ~]# source admin-openrc.sh
  2. [root@linux-node1 ~]# neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
  3. Created a new network:
  4. +---------------------------+--------------------------------------+
  5. | Field | Value |
  6. +---------------------------+--------------------------------------+
  7. | admin_state_up | True |
  8. | id | 7a3c7391-cea7-47eb-a0ef-f7b18010c984 |
  9. | mtu | 0 |
  10. | name | flat |
  11. | port_security_enabled | True |
  12. | provider:network_type | flat |
  13. | provider:physical_network | physnet1 |
  14. | provider:segmentation_id | |
  15. | router:external | False |
  16. | shared | True |
  17. | status | ACTIVE |
  18. | subnets | |
  19. | tenant_id | 45ec9f72892c404897d0f7d6668d7a53 |
  20. +---------------------------+--------------------------------------+

Create a subnet named flat-subnet on the network created above, setting its allocation pool, DNS server and gateway

  1. [root@linux-node1 ~]# neutron subnet-create flat 192.168.56.0/24 --name flat-subnet --allocation-pool start=192.168.56.100,end=192.168.56.200 --dns-nameserver 192.168.56.2 --gateway 192.168.56.2
  2. Created a new subnet:
  3. +-------------------+------------------------------------------------------+
  4. | Field | Value |
  5. +-------------------+------------------------------------------------------+
  6. | allocation_pools | {"start": "192.168.56.100", "end": "192.168.56.200"} |
  7. | cidr | 192.168.56.0/24 |
  8. | dns_nameservers | 192.168.56.2 |
  9. | enable_dhcp | True |
  10. | gateway_ip | 192.168.56.2 |
  11. | host_routes | |
  12. | id | 6841c8ae-78f6-44e2-ab74-7411108574c2 |
  13. | ip_version | 4 |
  14. | ipv6_address_mode | |
  15. | ipv6_ra_mode | |
  16. | name | flat-subnet |
  17. | network_id | 7a3c7391-cea7-47eb-a0ef-f7b18010c984 |
  18. | subnetpool_id | |
  19. | tenant_id | 45ec9f72892c404897d0f7d6668d7a53 |
  20. +-------------------+------------------------------------------------------+

查看创建的网络和子网

  1. [root@linux-node1 ~]# neutron net-list
  2. +--------------------------------------+------+------------------------------------------------------+
  3. | id | name | subnets |
  4. +--------------------------------------+------+------------------------------------------------------+
  5. | 7a3c7391-cea7-47eb-a0ef-f7b18010c984 | flat | 6841c8ae-78f6-44e2-ab74-7411108574c2 192.168.56.0/24 |
  6. +--------------------------------------+------+------------------------------------------------------+

注:创建虚拟机之前,由于一个网络下不能存在多个dhcp,所以一定关闭其他的dhcp选项 
下面开始正式创建虚拟机,为了可以连上所创建的虚拟机,在这里要创建一对公钥和私钥,并添加到openstack中

  1. [root@linux-node1 ~]# source demo-openrc.sh
  2. [root@linux-node1 ~]# ssh-keygen -q -N ""
  3. Enter file in which to save the key (/root/.ssh/id_rsa):
  4. [root@linux-node1 ~]# nova keypair-add --pub-key .ssh/id_rsa.pub mykey
  5. [root@linux-node1 ~]# nova keypair-list
  6. +-------+-------------------------------------------------+
  7. | Name | Fingerprint |
  8. +-------+-------------------------------------------------+
  9. | mykey | 9f:25:57:44:45:a3:6d:0d:4b:e7:ca:3a:9c:67:32:6f |
  10. +-------+-------------------------------------------------+
  11. [root@linux-node1 ~]# ls .ssh/
  12. id_rsa id_rsa.pub known_hosts

Add rules to the default security group to allow ICMP and TCP port 22 (an equivalent neutron CLI sketch follows the output below)

  1. [root@linux-node1 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
  2. +-------------+-----------+---------+-----------+--------------+
  3. | IP Protocol | From Port | To Port | IP Range | Source Group |
  4. +-------------+-----------+---------+-----------+--------------+
  5. | icmp | -1 | -1 | 0.0.0.0/0 | |
  6. +-------------+-----------+---------+-----------+--------------+
  7. [root@linux-node1 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
  8. +-------------+-----------+---------+-----------+--------------+
  9. | IP Protocol | From Port | To Port | IP Range | Source Group |
  10. +-------------+-----------+---------+-----------+--------------+
  11. | tcp | 22 | 22 | 0.0.0.0/0 | |
  12. +-------------+-----------+---------+-----------+--------------+
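
For reference, the same two rules can also be added with the neutron CLI instead of the nova wrapper shown above (a sketch; it assumes the demo credentials are still loaded and that "default" resolves to this tenant's default security group):

```bash
# Sketch: allow ICMP and SSH (TCP/22) into the default security group via the neutron CLI.
neutron security-group-rule-create --protocol icmp --direction ingress \
    --remote-ip-prefix 0.0.0.0/0 default
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 \
    --direction ingress --remote-ip-prefix 0.0.0.0/0 default
```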

Before launching an instance, confirm the flavor (the equivalent of an EC2 instance type), the image (an EC2 AMI), the network (an EC2 VPC) and the security group (an EC2 SG)

  1. [root@linux-node1 ~]# nova flavor-list
  2. +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  3. | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  4. +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  5. | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
  6. | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
  7. | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
  8. | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
  9. | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
  10. +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  11. [root@linux-node1 ~]# nova image-list
  12. +--------------------------------------+--------+--------+--------+
  13. | ID | Name | Status | Server |
  14. +--------------------------------------+--------+--------+--------+
  15. | 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros | ACTIVE | |
  16. +--------------------------------------+--------+--------+--------+
  17. [root@linux-node1 ~]# neutron net-list
  18. +--------------------------------------+------+------------------------------------------------------+
  19. | id | name | subnets |
  20. +--------------------------------------+------+------------------------------------------------------+
  21. | 7a3c7391-cea7-47eb-a0ef-f7b18010c984 | flat | 6841c8ae-78f6-44e2-ab74-7411108574c2 192.168.56.0/24 |
  22. +--------------------------------------+------+------------------------------------------------------+
  23. [root@linux-node1 ~]# nova secgroup-list
  24. +--------------------------------------+---------+------------------------+
  25. | Id | Name | Description |
  26. +--------------------------------------+---------+------------------------+
  27. | 2946cecd-0933-45d0-a6e2-0606abe418ee | default | Default security group |
  28. +--------------------------------------+---------+------------------------+

Launch an instance named hello-instance with flavor m1.tiny, the cirros image downloaded earlier, the network ID shown by neutron net-list, the default security group, and the keypair just created

  1. [root@linux-node1 ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=7a3c7391-cea7-47eb-a0ef-f7b18010c984 --security-group default --key-name mykey hello-instance
  2. +--------------------------------------+-----------------------------------------------+
  3. | Property | Value |
  4. +--------------------------------------+-----------------------------------------------+
  5. | OS-DCF:diskConfig | MANUAL |
  6. | OS-EXT-AZ:availability_zone | |
  7. | OS-EXT-STS:power_state | 0 |
  8. | OS-EXT-STS:task_state | scheduling |
  9. | OS-EXT-STS:vm_state | building |
  10. | OS-SRV-USG:launched_at | - |
  11. | OS-SRV-USG:terminated_at | - |
  12. | accessIPv4 | |
  13. | accessIPv6 | |
  14. | adminPass | JPp9rX5UBYcW |
  15. | config_drive | |
  16. | created | 2015-12-17T02:03:38Z |
  17. | flavor | m1.tiny (1) |
  18. | hostId | |
  19. | id | bb71867c-4078-4984-bf5a-f10bd84ba72b |
  20. | image | cirros (4b36361f-1946-4026-b0cb-0f7073d48ade) |
  21. | key_name | mykey |
  22. | metadata | {} |
  23. | name | hello-instance |
  24. | os-extended-volumes:volumes_attached | [] |
  25. | progress | 0 |
  26. | security_groups | default |
  27. | status | BUILD |
  28. | tenant_id | 4a213e53e4814685859679ff1dcb559f |
  29. | updated | 2015-12-17T02:03:41Z |
  30. | user_id | eb29c091e0ec490cbfa5d11dc2388766 |
  31. +--------------------------------------+-----------------------------------------------+

Check the status of the newly created instance

  1. [root@linux-node1 ~]# nova list
  2. +--------------------------------------+----------------+--------+------------+-------------+---------------------+
  3. | ID | Name | Status | Task State | Power State | Networks |
  4. +--------------------------------------+----------------+--------+------------+-------------+---------------------+
  5. | bb71867c-4078-4984-bf5a-f10bd84ba72b | hello-instance | ACTIVE | - | Running | flat=192.168.56.101 |
  6. +--------------------------------------+----------------+--------+------------+-------------+---------------------+

ssh连接到所创建的虚拟机

  1. [root@linux-node1 ~]# ssh cirros@192.168.56.101

Generate a VNC URL so the instance can be reached from a web browser

  1. [root@linux-node1 ~]# nova get-vnc-console hello-instance novnc
  2. +-------+------------------------------------------------------------------------------------+
  3. | Type | Url |
  4. +-------+------------------------------------------------------------------------------------+
  5. | novnc | http://192.168.56.11:6080/vnc_auto.html?token=1af18bea-5a64-490e-8251-29c8bed36125 |
  6. +------

五、深入Neutron讲解

5.1 虚拟机网卡和网桥

  1. [root@linux-node1 ~]# ifconfig
  2. brq7a3c7391-ce: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
  3. inet 192.168.56.11 netmask 255.255.255.0 broadcast 192.168.56.255
  4. inet6 fe80::a812:a1ff:fe7b:b829 prefixlen 64 scopeid 0x20<link>
  5. ether 00:0c:29:34:98:f2 txqueuelen 0 (Ethernet)
  6. RX packets 60177 bytes 17278837 (16.4 MiB)
  7. RX errors 0 dropped 0 overruns 0 frame 0
  8. TX packets 52815 bytes 14671641 (13.9 MiB)
  9. TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
  10. eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
  11. inet6 fe80::20c:29ff:fe34:98f2 prefixlen 64 scopeid 0x20<link>
  12. ether 00:0c:29:34:98:f2 txqueuelen 1000 (Ethernet)
  13. RX packets 67008 bytes 19169606 (18.2 MiB)
  14. RX errors 0 dropped 0 overruns 0 frame 0
  15. TX packets 56855 bytes 17779848 (16.9 MiB)
  16. TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
  17. lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
  18. inet 127.0.0.1 netmask 255.0.0.0
  19. inet6 ::1 prefixlen 128 scopeid 0x10<host>
  20. loop txqueuelen 0 (Local Loopback)
  21. RX packets 432770 bytes 161810178 (154.3 MiB)
  22. RX errors 0 dropped 0 overruns 0 frame 0
  23. TX packets 432770 bytes 161810178 (154.3 MiB)
  24. TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
  25. tap34ea740c-a6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
  26. inet6 fe80::6c67:5fff:fe56:58a4 prefixlen 64 scopeid 0x20<link>
  27. ether 6e:67:5f:56:58:a4 txqueuelen 1000 (Ethernet)
  28. RX packets 75 bytes 8377 (8.1 KiB)
  29. RX errors 0 dropped 0 overruns 0 frame 0
  30. TX packets 1495 bytes 139421 (136.1 KiB)
  31. TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

查看网桥状态

  1. [root@linux-node1 ~]# brctl show
  2. bridge name bridge id STP enabled interfaces
  3. brq7a3c7391-ce 8000.000c293498f2 no eth0
  4. tap34ea740c-a6

The bridge brq7a3c7391-ce can be thought of as a small switch: every device attached to it can reach eth0 at layer 2, and tap34ea740c-a6 is the instance's virtual NIC plugged into that bridge, which is how the instance reaches the physical network.
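
To make the bridge/tap relationship concrete, here is an illustrative sketch of roughly what the Linux bridge agent builds. Do not run it against the OpenStack bridge itself; br-demo, tap-demo and the spare NIC eth1 are made-up names for the example:

```bash
# Illustrative only: hand-build a tiny bridge to mimic what neutron-linuxbridge-agent sets up.
brctl addbr br-demo                   # a software bridge, i.e. a small layer-2 switch
ip tuntap add dev tap-demo mode tap   # a tap device, the kind a VM uses as its NIC
brctl addif br-demo eth1              # uplink: a (spare) physical NIC joins the bridge
brctl addif br-demo tap-demo          # the VM's tap joins the same bridge
ip link set br-demo up
ip link set tap-demo up
brctl show br-demo                    # both interfaces now forward frames at layer 2
```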

5.2 不同场景网络类型和OpenStack网络分层

5.2.1 Openstack网络分类

5.2.2 OpenStack network layering

Networking is of course still built on the OSI seven-layer model, which will not be repeated here; the following only explains how Neutron layers its own network objects (a small CLI sketch follows the list):

  • Network: in the physical world we connect machines with switches or hubs to form a network; in Neutron, a network likewise connects multiple instances together.
  • Subnet: in the physical world a network can be divided into several logical subnets; in Neutron, a subnet likewise belongs to a network.
  • Port: in the physical world every network or subnet exposes many ports (for example switch ports) for machines to plug into; in Neutron, a port belongs to a subnet, and an instance's NIC maps onto a port.
  • Router: in the physical world, traffic between different networks or subnets must be routed; a Neutron router plays the same role, connecting different networks or subnets.
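
A minimal sketch of how these four objects map onto the neutron CLI. The names (demo-net, demo-subnet, demo-router) are made up, and creating a plain tenant network like this assumes a tenant network type is configured; in the flat-only lab above, the flat network created earlier plays the role of the network object:

```bash
# Sketch: the Neutron object hierarchy expressed as CLI calls (illustrative names).
neutron net-create demo-net                                    # network: the virtual switch
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet  # subnet: an IP segment on that network
neutron port-create demo-net --name demo-port                  # port: where an instance NIC plugs in
neutron router-create demo-router                              # router: connects networks/subnets
neutron router-interface-add demo-router demo-subnet           # attach the subnet to the router
```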

5.3 五种neutron常见的模型

  • 单一平面网络(也叫大二层网络,最初的 nova-network 网络模型) 
    单一平面网络的缺点: 
    a.存在单一网络瓶颈,缺乏可伸缩性。 
    b.缺乏合适的多租户隔离。 
    c.容易发生广播风暴,而且不能使用keepalived(vrrp组播) 
  • 多平面网络 
  • 混合平面私有网络 
  • 通过私有网络实现运营商路由功能 
  • 通过私有网络实现每个租户创建自己专属的网络区段 

5.4 图解Neutron服务的几大组件

  • ML2 (The Modular Layer 2) plugin: a framework-style plugin that can drive several layer-2 mechanisms at once, acting as a coordinator; through ML2, Neutron can use linuxbridge, openvswitch and commercial plugins side by side.
  • DHCP agent: hands out IP addresses to instances from the address pool defined when the network was created. The relevant settings are (collected again in the sketch after this list): 
    27 interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver — the DHCP agent needs an interface_driver matching the plugin in use 
    31 dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq — when an instance boots, its IP is written into the dnsmasq config and dnsmasq is started or reloaded; OpenStack normally runs one neutron-dhcp-agent spawning a single dnsmasq per network, so even a large network with all of its subnets is served by one dnsmasq, which in theory and in lab tests can handle about 1000 DHCP requests per second 
    52 enable_isolated_metadata = true — enables isolated metadata, explained later
  • L3 agent (neutron-l3-agent): provides layer-3 forwarding so that instances can reach external networks; it also runs on the network node.
  • LBaaS: Load Balancing as a Service, explained later.
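
For reference, the three dhcp_agent.ini options mentioned in the DHCP agent item above, collected in one place (a sketch; the line numbers depend on your own file):

```bash
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/dhcp_agent.ini
27:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
31:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
52:enable_isolated_metadata = true
```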

六、虚拟机知多少

To the host, a virtual machine is just a process: nova manages it through libvirt, which drives KVM, and you can also inspect it with the virsh tool.
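
A quick way to see this for yourself on the compute node (a sketch; the domain name instance-00000001 is just an example of libvirt's naming, check virsh list for the real one):

```bash
# The instance is just a qemu-kvm process on the host, managed through libvirt.
virsh list --all                        # libvirt's view of the instances on this node
ps -ef | grep [q]emu                    # the same instance as an ordinary Linux process
virsh dumpxml instance-00000001 | less  # the XML nova generated for this domain
```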

Take a look at what an instance actually consists of on disk

切换到虚拟机默认的存放路径

  1. [root@linux-node ~]# cd /var/lib/nova/instances/
  2. [root@linux-node instances]# ls
  3. _base bb71867c-4078-4984-bf5a-f10bd84ba72b compute_nodes locks
  • The directory bb71867c-4078-4984-bf5a-f10bd84ba72b is named after the instance ID (see nova list); its contents are: 
    console.log — the instance's console output is written here 
    disk — the virtual disk, backed by /var/lib/nova/instances/_base/96bfe896f3aaff3091e7e222df51f2545 in copy-on-write mode: the base image is that backing file, and only changed blocks are written to disk (a short qemu-img copy-on-write sketch follows this list)
  1. [root@linux-node bb71867c-4078-4984-bf5a-f10bd84ba72b]# file disk
  2. disk: QEMU QCOW Image (v3), has backing file (path /var/lib/nova/instances/_base/96bfe896f3aaff3091e7e222df51f2545), 1073741824 bytes
  3. [root@linux-node bb71867c-4078-4984-bf5a-f10bd84ba72b]# qemu-img info disk
  4. image: disk
  5. file format: qcow2
  6. virtual size: 1.0G (1073741824 bytes)
  7. disk size: 2.3M
  8. cluster_size: 65536
  9. backing file: /var/lib/nova/instances/_base/96bfe896f3aaff3091e7e222df51f254516fee9c
  10. Format specific information:
  11. compat: 1.1
  12. lazy refcounts: false
  • disk.info disk的详情
  1. [root@linux-node bb71867c-4078-4984-bf5a-f10bd84ba72b]# qemu-img info disk.info
  2. image: disk.info
  3. file format: raw
  4. virtual size: 512 (512 bytes)
  5. disk size: 4.0K

libvirt.xml is the XML that libvirt generates automatically; do not bother editing it, because it is regenerated dynamically every time the instance is started

  • compute_nodes记录了主机名和时间戳
  1. [root@linux-node instances]# cat compute_nodes
  2. {"linux-node.oldboyedu.com": 1450560590.116144}
  • locks目录:类似于写shell脚本时的lock文件
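
The copy-on-write layout described above can be reproduced by hand with qemu-img (a sketch; base.img and overlay.qcow2 are made-up names, not the files nova created):

```bash
# Sketch: a qcow2 overlay on top of a backing file, which is what nova does with _base/ + disk.
qemu-img create -f qcow2 base.img 1G                 # stand-in for the image cached in _base/
qemu-img create -f qcow2 -b base.img overlay.qcow2   # the overlay stores only the blocks that change
                                                     # (newer qemu-img may also want -F qcow2)
qemu-img info overlay.qcow2                          # shows "backing file: base.img"
```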

学习metadata

  • metadata(元数据) 
    在创建虚拟机时可以添加或者修改虚拟机的默认属性,例如主机名,key-pair,ip地址等 
    在新创建的虚拟机上查看metadata的数据,这些都是可以通过metadata生成
  1. $ curl http://169.254.169.254/2009-04-04/meta-data
  2. ami-id
  3. ami-launch-index
  4. ami-manifest-path
  5. block-device-mapping/
  6. hostname
  7. instance-action
  8. instance-id
  9. instance-type
  10. local-hostname
  11. local-ipv4
  12. placement/
  13. public-hostname
  14. public-ipv4
  15. public-keys/
  16. reservation-id
  17. security-groups
  • 查看路由
  1. $ ip ro li
  2. default via 192.168.56.2 dev eth0
  3. 169.254.169.254 via 192.168.56.100 dev eth0
  4. 192.168.56.0/24 dev eth0 src 192.168.56.101
  • On the controller node, list the network namespaces (ip netns)
  1. [root@linux-node1 ~]# ip netns li
  2. qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984
  • 查看上述ns的具体网卡情况,也就是在命名空间中使用ip ad li并查看端口占用情况
  1. [root@linux-node1 ~]# ip netns exec qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984 ip ad li
  2. 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
  3. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  4. inet 127.0.0.1/8 scope host lo
  5. valid_lft forever preferred_lft forever
  6. inet6 ::1/128 scope host
  7. valid_lft forever preferred_lft forever
  8. 2: ns-34ea740c-a6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
  9. link/ether fa:16:3e:93:01:0e brd ff:ff:ff:ff:ff:ff
  10. inet 192.168.56.100/24 brd 192.168.56.255 scope global ns-34ea740c-a6
  11. valid_lft forever preferred_lft forever
  12. inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-34ea740c-a6
  13. valid_lft forever preferred_lft forever
  14. inet6 fe80::f816:3eff:fe93:10e/64 scope link
  15. valid_lft forever preferred_lft forever
  16. valid_lft forever preferred_lft forever
  17. [root@linux-node1 ~]# ip netns exec qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984 netstat -lntup
  18. Active Internet connections (only servers)
  19. Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
  20. tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3875/python2
  21. tcp 0 0 192.168.56.100:53 0.0.0.0:* LISTEN 3885/dnsmasq
  22. tcp 0 0 169.254.169.254:53 0.0.0.0:* LISTEN 3885/dnsmasq
  23. tcp6 0 0 fe80::f816:3eff:fe93:53 :::* LISTEN 3885/dnsmasq
  24. udp 0 0 192.168.56.100:53 0.0.0.0:* 3885/dnsmasq
  25. udp 0 0 169.254.169.254:53 0.0.0.0:* 3885/dnsmasq
  26. udp 0 0 0.0.0.0:67 0.0.0.0:* 3885/dnsmasq
  27. udp6 0 0 fe80::f816:3eff:fe93:53 :::* 3885/dnsmasq
  • Summary 
    The namespace carries both the DHCP-assigned address 192.168.56.100 and the extra address 169.254.169.254, and runs an HTTP service on it (alongside dnsmasq for DHCP/DNS); this is the effect of the metadata options enabled earlier (enable_isolated_metadata in the DHCP agent, service_metadata_proxy / metadata_proxy_shared_secret in nova.conf). 
    The 169.254.169.254 route inside the instance (visible with ip ro li) is therefore pushed by DHCP from this namespace, and the keypair is installed at boot by a small curl script (run from something like /etc/rc.local in the guest) that fetches the public key from the metadata service into the .ssh directory; other metadata attributes are handled the same way, as sketched below.
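
Inside the guest, the key injection described above boils down to a couple of curl calls against the metadata address (a sketch of what the cirros init scripts do; the exact script and paths differ per image):

```bash
# Sketch: fetch the instance's public key from the metadata service and install it.
mkdir -p ~/.ssh
curl -s http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key \
    >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
curl -s http://169.254.169.254/2009-04-04/meta-data/hostname   # other attributes work the same way
```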

七、Dashboard演示

7.1 编辑dashboard的配置文件

  1. [root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings
  2. 29 ALLOWED_HOSTS = ['*', 'localhost'] which hosts may access the dashboard; '*' allows any host
  3. 138 OPENSTACK_HOST = "192.168.56.11"改成keystone的地址
  4. 140 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"keystone之前创建的
  5. 108 CACHES = {
  6. 109 'default': {
  7. 110 'BACKEND': 'django.core.cache.backends.memcached.Memcac
  8. hedCache',
  9. 111 'LOCATION': '192.168.56.11:11211',
  10. 112 }
  11. 113 } 打开使用memcached
  12. 320 TIME_ZONE = "Asia/Shanghai" set the time zone

Restart Apache

  1. [root@linux-node1 ~]# systemctl restart httpd

7.2 操作dashboard

7.2.1 登录dashboard

使用keystone的demo用户登录(只有在管理员admin权限下才能看到所有instance) 

7.2.2 删除之前的虚拟机并重新创建一台虚拟机

了解针对虚拟机的各个状态操作 

  • 绑定浮动ip:Eip
  • Attach/detach interface: attach or detach a network interface (port) on the instance
  • 编辑云主机:修改云主机的参数
  • Edit security groups: modify the security group rules applied to the instance
  • 控制台:novnc控制台
  • 查看日志:查看console.log
  • 中止实例:stop虚拟机
  • 挂起实例:save 状态
  • 废弃实例:将实例暂时留存
  • 调整云主机大小: 调整其type
  • 锁定/解锁实例:锁定/解锁这个云主机
  • 软重启实例:正常重启,先stop后start
  • 硬重启实例:类似于断电重启
  • 关闭实例: shutdown该实例
  • 重建云主机:重新build一个同样的云主机
  • 终止实例: 删除云主机

7.2.3 launch instance

八、cinder

8.1 The three categories of storage

Block storage: hard disks, DAS disk arrays, SAN storage 
File storage: NFS, GlusterFS, Ceph (a PB-scale distributed file system), MooseFS (drawback: if its metadata is lost, the virtual machines stored on it are unrecoverable) 
Object storage: Swift, S3

8.2 cinder控制节点的部署

安装cinder

  1. [root@linux-node1 ~]# yum install openstack-cinder python-cinderclient -y

Edit the cinder configuration file; the result after editing is as follows

  1. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/cinder/cinder.conf
  2. 421:glance_host = 192.168.56.11 host running the glance service
  3. 536:auth_strategy = keystone use keystone for authentication
  4. 2294:rpc_backend = rabbit use the RabbitMQ message queue
  5. 2516:connection = mysql://cinder:cinder@192.168.56.11/cinder MySQL connection string
  6. 2641:auth_uri = http://192.168.56.11:5000
  7. 2642:auth_url = http://192.168.56.11:35357
  8. 2643:auth_plugin = password
  9. 2644:project_domain_id = default
  10. 2645:user_domain_id = default
  11. 2646:project_name = service
  12. 2647:username = cinder
  13. 2648:password = cinder
  14. 2873:lock_path = /var/lib/cinder/tmp 锁路径
  15. 3172:rabbit_host = 192.168.56.11 rabbitmq的主机
  16. 3176:rabbit_port = 5672 rabbitmq的端口
  17. 3188:rabbit_userid = openstack rabbitmq的用户
  18. 3192:rabbit_password = openstack rabbitmq的密码

Edit the nova configuration file

  1. [root@linux-node1 ~]# vim /etc/nova/nova.conf
  2. 2145 os_region_name = RegionOne 通知nova使用cinder

执行同步数据库操作

  1. [root@linux-node1 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

Verify that the tables were created in the cinder database

  1. [root@linux-node1 ~]# mysql -ucinder -pcinder -e "use cinder;show tables;"
  2. +----------------------------+
  3. | Tables_in_cinder |
  4. +----------------------------+
  5. | backups |
  6. | cgsnapshots |
  7. | consistencygroups |
  8. | driver_initiator_data |
  9. | encryption |
  10. | image_volume_cache_entries |
  11. | iscsi_targets |
  12. | migrate_version |
  13. | quality_of_service_specs |
  14. | quota_classes |
  15. | quota_usages |
  16. | quotas |
  17. | reservations |
  18. | services |
  19. | snapshot_metadata |
  20. | snapshots |
  21. | transfers |
  22. | volume_admin_metadata |
  23. | volume_attachment |
  24. | volume_glance_metadata |
  25. | volume_metadata |
  26. | volume_type_extra_specs |
  27. | volume_type_projects |
  28. | volume_types |
  29. | volumes |
  30. +----------------------------+

创建一个cinder用户,加入service项目,给予admin角色

  1. [root@linux-node1 ~]# openstack user create --domain default --password-prompt cinder
  2. User Password:
  3. Repeat User Password:(密码就是配置文件中配置的2648行)
  4. +-----------+----------------------------------+
  5. | Field | Value |
  6. +-----------+----------------------------------+
  7. | domain_id | default |
  8. | enabled | True |
  9. | id | 096964bd44124624ba7da2e13a4ebd92 |
  10. | name | cinder |
  11. +-----------+----------------------------------+
  12. [root@linux-node1 ~]# openstack role add --project service --user cinder admin

重启nova-api服务和启动cinder服务

  1. [root@linux-node1 ~]# systemctl restart openstack-nova-api.service
  2. [root@linux-node1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
  3. Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
  4. Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
  5. [root@linux-node1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

创建服务(包含V1和V2)

  1. [root@linux-node1 ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume
  2. +-------------+----------------------------------+
  3. | Field | Value |
  4. +-------------+----------------------------------+
  5. | description | OpenStack Block Storage |
  6. | enabled | True |
  7. | id | 57d5d78509dd4ed8b9878d312b8be26d |
  8. | name | cinder |
  9. | type | volume |
  10. +-------------+----------------------------------+
  11. [root@linux-node1 ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
  12. +-------------+----------------------------------+
  13. | Field | Value |
  14. +-------------+----------------------------------+
  15. | description | OpenStack Block Storage |
  16. | enabled | True |
  17. | id | bac129a7b6494e73947e83e56145c1c4 |
  18. | name | cinderv2 |
  19. | type | volumev2 |
  20. +-------------+----------------------------------+

分别对V1和V2创建三个环境(admin,internal,public)的endpoint

  1. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volume public http://192.168.56.11:8776/v1/%\(tenant_id\)s
  2. +--------------+--------------------------------------------+
  3. | Field | Value |
  4. +--------------+--------------------------------------------+
  5. | enabled | True |
  6. | id | 151da63772d7444297c3e0321264eabe |
  7. | interface | public |
  8. | region | RegionOne |
  9. | region_id | RegionOne |
  10. | service_id | 57d5d78509dd4ed8b9878d312b8be26d |
  11. | service_name | cinder |
  12. | service_type | volume |
  13. | url | http://192.168.56.11:8776/v1/%(tenant_id)s |
  14. +--------------+--------------------------------------------+
  15. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volume internal http://192.168.56.11:8776/v1/%\(tenant_id\)s
  16. +--------------+--------------------------------------------+
  17. | Field | Value |
  18. +--------------+--------------------------------------------+
  19. | enabled | True |
  20. | id | 67b5a787d6784184a296a46e46c66d7a |
  21. | interface | internal |
  22. | region | RegionOne |
  23. | region_id | RegionOne |
  24. | service_id | 57d5d78509dd4ed8b9878d312b8be26d |
  25. | service_name | cinder |
  26. | service_type | volume |
  27. | url | http://192.168.56.11:8776/v1/%(tenant_id)s |
  28. +--------------+--------------------------------------------+
  29. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volume admin http://192.168.56.11:8776/v1/%\(tenant_id\)s
  30. +--------------+--------------------------------------------+
  31. | Field | Value |
  32. +--------------+--------------------------------------------+
  33. | enabled | True |
  34. | id | 719d5f3b1b034d7fb4fe577ff8f0f9ff |
  35. | interface | admin |
  36. | region | RegionOne |
  37. | region_id | RegionOne |
  38. | service_id | 57d5d78509dd4ed8b9878d312b8be26d |
  39. | service_name | cinder |
  40. | service_type | volume |
  41. | url | http://192.168.56.11:8776/v1/%(tenant_id)s |
  42. +--------------+--------------------------------------------+
  43. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volumev2 public http://192.168.56.11:8776/v2/%\(tenant_id\)s
  44. +--------------+--------------------------------------------+
  45. | Field | Value |
  46. +--------------+--------------------------------------------+
  47. | enabled | True |
  48. | id | 140ea418e1c842c8ba2669d0eda47577 |
  49. | interface | public |
  50. | region | RegionOne |
  51. | region_id | RegionOne |
  52. | service_id | bac129a7b6494e73947e83e56145c1c4 |
  53. | service_name | cinderv2 |
  54. | service_type | volumev2 |
  55. | url | http://192.168.56.11:8776/v2/%(tenant_id)s |
  56. +--------------+--------------------------------------------+
  57. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volumev2 internal http://192.168.56.11:8776/v2/%\(tenant_id\)s
  58. +--------------+--------------------------------------------+
  59. | Field | Value |
  60. +--------------+--------------------------------------------+
  61. | enabled | True |
  62. | id | e1871461053449a0a9ed1dd93e2de002 |
  63. | interface | internal |
  64. | region | RegionOne |
  65. | region_id | RegionOne |
  66. | service_id | bac129a7b6494e73947e83e56145c1c4 |
  67. | service_name | cinderv2 |
  68. | service_type | volumev2 |
  69. | url | http://192.168.56.11:8776/v2/%(tenant_id)s |
  70. +--------------+--------------------------------------------+
  71. [root@linux-node1 ~]# openstack endpoint create --region RegionOne volumev2 admin http://192.168.56.11:8776/v2/%\(tenant_id\)s
  72. +--------------+--------------------------------------------+
  73. | Field | Value |
  74. +--------------+--------------------------------------------+
  75. | enabled | True |
  76. | id | 1b4f7495b4c5423fa8d541e6d917d3b9 |
  77. | interface | admin |
  78. | region | RegionOne |
  79. | region_id | RegionOne |
  80. | service_id | bac129a7b6494e73947e83e56145c1c4 |
  81. | service_name | cinderv2 |
  82. | service_type | volumev2 |
  83. | url | http://192.168.56.11:8776/v2/%(tenant_id)s |
  84. +--------------+--------------------------------------------+

8.3 cinder存储节点的部署(此处使用nova的计算节点)

  In this article cinder uses LVM over iSCSI as its backend (much as nova-compute uses KVM as its hypervisor): every cloud disk created becomes a new LV inside a predefined VG and is exported via iSCSI. 
Add an extra disk to the storage node 
 
查看磁盘添加情况

  1. [root@linux-node ~]# fdisk -l
  2. Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
  3. Units = sectors of 1 * 512 = 512 bytes
  4. Sector size (logical/physical): 512 bytes / 512 bytes
  5. I/O size (minimum/optimal): 512 bytes / 512 bytes
  6. Disk label type: dos
  7. Disk identifier: 0x000bd159
  8. Device Boot Start End Blocks Id System
  9. /dev/sda1 * 2048 2099199 1048576 83 Linux
  10. /dev/sda2 2099200 35653631 16777216 82 Linux swap / Solaris
  11. /dev/sda3 35653632 104857599 34601984 83 Linux
  12. Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
  13. Units = sectors of 1 * 512 = 512 bytes
  14. Sector size (logical/physical): 512 bytes / 512 bytes
  15. I/O size (minimum/optimal): 512 bytes / 512 bytes

创建一个pv和vg(名为cinder-volumes)

  1. [root@linux-node ~]# pvcreate /dev/sdb
  2. Physical volume "/dev/sdb" successfully created
  3. [root@linux-node ~]# vgcreate cinder-volumes /dev/sdb
  4. Volume group "cinder-volumes" successfully created

Add a filter to the LVM configuration so that LVM only scans /dev/sdb (the device backing cinder-volumes) and rejects all other devices

  1. [root@linux-node ~]# vim /etc/lvm/lvm.conf
  2. 107 filter = [ "a/sdb/", "r/.*/"]

存储节点安装

  1. [root@linux-node ~]# yum install openstack-cinder targetcli python-oslo-policy -y

修改存储节点的配置文件,在这里直接拷贝控制节点的文件

  1. [root@linux-node1 ~]# scp /etc/cinder/cinder.conf 192.168.56.12:/etc/cinder/cinder.conf
  2. [root@linux-node ~]# grep -n "^[a-Z]" /etc/cinder/cinder.conf
  3. 421:glance_host = 192.168.56.11
  4. 536:auth_strategy = keystone
  5. 540:enabled_backends = lvm the backend name; it must match the [lvm] section added at the end of the file (any name works as long as the two match)
  6. 2294:rpc_backend = rabbit
  7. 2516:connection = mysql://cinder:cinder@192.168.56.11/cinder
  8. 2641:auth_uri = http://192.168.56.11:5000
  9. 2642:auth_url = http://192.168.56.11:35357
  10. 2643:auth_plugin = password
  11. 2644:project_domain_id = default
  12. 2645:user_domain_id = default
  13. 2646:project_name = service
  14. 2647:username = cinder
  15. 2648:password = cinder
  16. 2873:lock_path = /var/lib/cinder/tmp
  17. 3172:rabbit_host = 192.168.56.11
  18. 3176:rabbit_port = 5672
  19. 3188:rabbit_userid = openstack
  20. 3192:rabbit_password = openstack
  21. 3414:[lvm] this line is not part of the grep output; the section header was appended manually at the end of the file and corresponds to enabled_backends = lvm on line 540
  22. 3415:volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver 使用lvm后端存储
  23. 3416:volume_group = cinder-volumes vg的名称:刚才创建的
  24. 3417:iscsi_protocol = iscsi 使用iscsi协议
  25. 3418:iscsi_helper = lioadm

启动存储节点的cinder

  1. [root@linux-node ~]# systemctl enable openstack-cinder-volume.service target.service
  2. ln -s '/usr/lib/systemd/system/openstack-cinder-volume.service' '/etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service'
  3. ln -s '/usr/lib/systemd/system/target.service' '/etc/systemd/system/multi-user.target.wants/target.service'
  4. [root@linux-node ~]# systemctl start openstack-cinder-volume.service target.service

查看云硬盘服务状态(如果是虚拟机作为宿主机,时间不同步,会产生问题)

  1. [root@linux-node1 ~]# cinder service-list
  2. +------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
  3. | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
  4. +------------------+------------------------------+------+---------+-------+----------------------------+-----------------+
  5. | cinder-scheduler | linux-node1.oldboyedu.com | nova | enabled | up | 2015-12-25T03:17:31.000000 | - |
  6. | cinder-volume | linux-node.oldboyedu.com@lvm | nova | enabled | up | 2015-12-25T03:17:29.000000 | - |
  7. +------------------+------------------------------+------+---------+-------+----------------------------+-----------------+

创建一个云硬盘 

将云硬盘挂载到虚拟机上,在虚拟机实例详情可以查看到 
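
The same create-and-attach flow can also be done from the CLI instead of the dashboard (a sketch; the volume name demo and the 3 GB size are arbitrary, and <volume-id> comes from your own cinder list output):

```bash
# Sketch: create a 3 GB volume and attach it to the instance from the command line.
cinder create --display-name demo 3      # with a volume API v2 client the flag is --name
cinder list                              # wait for the volume to become "available", note its ID
nova volume-attach hello-instance <volume-id> /dev/vdb   # shows up as /dev/vdb inside the guest
```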
 
Partition and format the attached disk inside the instance. If you later decide you no longer need the volume, never delete it while it is still mounted and attached (in production this will put the instance into an error state): first umount it inside the guest, make sure it is unmounted and detached, and only then delete the volume from the dashboard

  1. $ sudo fdisk -l
  2. Disk /dev/vda: 1073 MB, 1073741824 bytes
  3. 255 heads, 63 sectors/track, 130 cylinders, total 2097152 sectors
  4. Units = sectors of 1 * 512 = 512 bytes
  5. Sector size (logical/physical): 512 bytes / 512 bytes
  6. I/O size (minimum/optimal): 512 bytes / 512 bytes
  7. Disk identifier: 0x00000000
  8. Device Boot Start End Blocks Id System
  9. /dev/vda1 * 16065 2088449 1036192+ 83 Linux
  10. Disk /dev/vdb: 3221 MB, 3221225472 bytes
  11. 16 heads, 63 sectors/track, 6241 cylinders, total 6291456 sectors
  12. Units = sectors of 1 * 512 = 512 bytes
  13. Sector size (logical/physical): 512 bytes / 512 bytes
  14. I/O size (minimum/optimal): 512 bytes / 512 bytes
  15. Disk identifier: 0x00000000
  16. Disk /dev/vdb doesn't contain a valid partition table
  17. $ sudo fdisk /dev/vdb
  18. Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
  19. Building a new DOS disklabel with disk identifier 0xfb4dbd94.
  20. Changes will remain in memory only, until you decide to write them.
  21. After that, of course, the previous content won't be recoverable.
  22. Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
  23. Command (m for help): n
  24. Partition type:
  25. p primary (0 primary, 0 extended, 4 free)
  26. e extended
  27. Select (default p): p
  28. Partition number (1-4, default 1):
  29. Using default value 1
  30. First sector (2048-6291455, default 2048):
  31. Using default value 2048
  32. Last sector, +sectors or +size{K,M,G} (2048-6291455, default 6291455):
  33. Using default value 6291455
  34. Command (m for help): w
  35. The partition table has been altered!
  36. Calling ioctl() to re-read partition table.
  37. Syncing disks.
  38. $ sudo mkfs.ext4 /dev/vdb\1
  39. mke2fs 1.42.2 (27-Mar-2012)
  40. Filesystem label=
  41. OS type: Linux
  42. Block size=4096 (log=2)
  43. Fragment size=4096 (log=2)
  44. Stride=0 blocks, Stripe width=0 blocks
  45. 196608 inodes, 786176 blocks
  46. 39308 blocks (5.00%) reserved for the super user
  47. First data block=0
  48. Maximum filesystem blocks=805306368
  49. 24 block groups
  50. 32768 blocks per group, 32768 fragments per group
  51. 8192 inodes per group
  52. Superblock backups stored on blocks:
  53. 32768, 98304, 163840, 229376, 294912
  54. Allocating group tables: done
  55. Writing inode tables: done
  56. Creating journal (16384 blocks): done
  57. Writing superblocks and filesystem accounting information: done
  58. $ sudo mkfs.ext4 /dev/vdbb1
  59. mke2fs 1.42.2 (27-Mar-2012)
  60. Could not stat /dev/vdbb1 --- No such file or directory
  61. The device apparently does not exist; did you specify it correctly?
  62. $ sudo mkfs.ext4 /dev/vdb\1
  63. mke2fs 1.42.2 (27-Mar-2012)
  64. Filesystem label=
  65. OS type: Linux
  66. Block size=4096 (log=2)
  67. Fragment size=4096 (log=2)
  68. Stride=0 blocks, Stripe width=0 blocks
  69. 196608 inodes, 786176 blocks
  70. 39308 blocks (5.00%) reserved for the super user
  71. First data block=0
  72. Maximum filesystem blocks=805306368
  73. 24 block groups
  74. 32768 blocks per group, 32768 fragments per group
  75. 8192 inodes per group
  76. Superblock backups stored on blocks:
  77. 32768, 98304, 163840, 229376, 294912
  78. Allocating group tables: done
  79. Writing inode tables: done
  80. Creating journal (16384 blocks): done
  81. Writing superblocks and filesystem accounting information: done
  82. $ sudo mkdir /data
  83. $ sudo mount /dev/vdb1 /data
  84. $ df -h
  85. Filesystem Size Used Available Use% Mounted on
  86. /dev 242.3M 0 242.3M 0% /dev
  87. /dev/vda1 23.2M 18.0M 4.0M 82% /
  88. tmpfs 245.8M 0 245.8M 0% /dev/shm
  89. tmpfs 200.0K 72.0K 128.0K 36% /run
  90. /dev/vdb1 3.0G 68.5M 2.7G 2% /dat

从云硬盘启动一个虚拟机,先创建一个demo2的云硬盘 
 

九、虚拟机创建流程:


 
Phase 1: user actions 
1) The user connects to keystone via the Dashboard or CLI and sends a user name and password; once keystone validates them it returns an auth token to the dashboard. 
2) The Dashboard sends a create-instance request to nova-api, carrying that auth token. 
3) nova-api has keystone verify the auth token presented by the dashboard. 
Phase 2: interaction among the nova components 
4) nova-api records the requested instance's information in the database. 
5) nova-api sends the request to the message queue as an rpc.call. 
6) nova-scheduler picks the message up from the queue. 
7) nova-scheduler reads the pending instance information and the compute node information from the database and makes a scheduling decision. 
8) nova-scheduler puts the scheduling result back on the message queue. 
9) nova-compute picks up the message nova-scheduler sent to the queue. 
10) nova-compute sends a message to nova-conductor via the queue, asking for the instance information stored in the database. 
11) nova-conductor picks that message up from the queue. 
12) nova-conductor reads the instance information from the database. 
13) nova-conductor returns the information from the database to the message queue. 
14) nova-compute picks up nova-conductor's reply from the queue. 
Phase 3: nova interacts with the other services 
15) nova-compute requests the image from glance, using the auth token and the image ID obtained from the database. 
16) glance validates the token with keystone. 
17) Once validated, glance returns the image to nova-compute. 
18) nova-compute requests the network from neutron, using the auth token and the network ID obtained from the database. 
19) neutron validates the token with keystone. 
20) Once validated, neutron returns the network allocation to nova-compute. 
21) nova-compute requests the volume from cinder, using the auth token and the volume information obtained from the database. 
22) cinder validates the token with keystone. 
23) Once validated, cinder returns the volume allocation to nova-compute. 
Phase 4: nova creates the virtual machine 
24) nova-compute calls KVM through libvirt to create the instance from the gathered information, generating the libvirt XML dynamically. 
25) nova-api keeps polling the database, and the dashboard displays the instance's state. 
Notes for production: 
1. On a newly added compute node the first instance takes a long time to build: the node has no cached image yet, so it must first copy the glance image into the backing-file directory (/var/lib/nova/instances/_base), which can take a while for a large image, and only then create the instance on top of it (copy-on-write). 
2. One common cause of instance-creation failure is a failure to create the bridge: make sure BOOTPROTO in the eth0 network configuration file is set to static rather than dhcp.

十、Load Balancing as a Service (LBaaS)

10.1 使用neutron-lbaas

  1. [root@linux-node1 ~]# yum install openstack-neutron-lbaas python-neutron-lbaas -y

安装haproxy,openstack默认使用haproxy作为代理

  1. [root@linux-node1 ~]# yum install haproxy -y

修改lbaas-agent和neutron配置文件,并重启neutron服务

  1. [root@linux-node1 ~]# vim /etc/neutron/lbaas_agent.ini
  2. 16 interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
  3. 31 device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
  4. [root@linux-node1 ~]# vim /etc/neutron/neutron.conf
  5. 77 service_plugins = router,lbaas
  6. [root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/neutron_lbaas.conf
  7. 64:service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
  8. [root@linux-node1 ~]# systemctl restart neutron-server
  9. [root@linux-node1 ~]# systemctl enable neutron-lbaas-agent.service
  10. [root@linux-node1 ~]# systemctl start neutron-lbaas-agent.service

使用lbaas创建一个http的负载均衡 
 
在此负载均衡下加一个http节点(此节点使用的不是cirros镜像) 
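
The dashboard steps above correspond roughly to the following LBaaS v1 CLI calls (a sketch; the pool and VIP names are made up, <flat-subnet-id> and <monitor-id> come from your own environment, and 192.168.56.108 is the member address that also appears in the generated haproxy configuration further below):

```bash
# Sketch: build the same HTTP load balancer with the neutron LBaaS v1 CLI.
neutron lb-pool-create --name web-pool --protocol HTTP --lb-method ROUND_ROBIN \
    --subnet-id <flat-subnet-id>
neutron lb-member-create --address 192.168.56.108 --protocol-port 80 web-pool
neutron lb-healthmonitor-create --type HTTP --delay 30 --timeout 5 --max-retries 3
neutron lb-healthmonitor-associate <monitor-id> web-pool
neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
    --subnet-id <flat-subnet-id> web-pool
```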
 
List the network namespaces (note the new qlbaas-* namespaces) and check the listening ports inside the DHCP namespace

  1. [root@linux-node1 ~]# ip netns li
  2. qlbaas-1f6d0ac9-32ee-496b-a183-7eaa85aeb2db
  3. qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984
  4. [root@linux-node1 ~]# ip netns li
  5. qlbaas-244327fe-a339-4cfd-a7a8-1be95903d3de
  6. qlbaas-1f6d0ac9-32ee-496b-a183-7eaa85aeb2db
  7. qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984
  8. [root@linux-node1 ~]# ip netns exec qdhcp-7a3c7391-cea7-47eb-a0ef-f7b18010c984 netstat -lntup
  9. Active Internet connections (only servers)
  10. Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
  11. tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3875/python2
  12. tcp 0 0 192.168.56.100:53 0.0.0.0:* LISTEN 31752/dnsmasq
  13. tcp 0 0 169.254.169.254:53 0.0.0.0:* LISTEN 31752/dnsmasq
  14. tcp6 0 0 fe80::f816:3eff:fe93:53 :::* LISTEN 31752/dnsmasq
  15. udp 0 0 192.168.56.100:53 0.0.0.0:* 31752/dnsmasq
  16. udp 0 0 169.254.169.254:53 0.0.0.0:* 31752/dnsmasq
  17. udp 0 0 0.0.0.0:67 0.0.0.0:* 31752/dnsmasq
  18. udp6 0 0 fe80::f816:3eff:fe93:53 :::* 31752/dnsmasq

查看控制节点自动生成的haproxy配置文件

  1. [root@linux-node1 ~]# cat /var/lib/neutron/lbaas/244327fe-a339-4cfd-a7a8-1be95903d3de/conf
  2. global
  3. daemon
  4. user nobody
  5. group haproxy
  6. log /dev/log local0
  7. log /dev/log local1 notice
  8. stats socket /var/lib/neutron/lbaas/244327fe-a339-4cfd-a7a8-1be95903d3de/sock mode 0666 level user
  9. defaults
  10. log global
  11. retries 3
  12. option redispatch
  13. timeout connect 5000
  14. timeout client 50000
  15. timeout server 50000
  16. frontend c16c7cf0-089f-4610-9fe2-724abb1bd145
  17. option tcplog
  18. bind 192.168.56.200:80
  19. mode http
  20. default_backend 244327fe-a339-4cfd-a7a8-1be95903d3de
  21. maxconn 2
  22. option forwardfor
  23. backend 244327fe-a339-4cfd-a7a8-1be95903d3de
  24. mode http
  25. balance roundrobin
  26. option forwardfor
  27. timeout check 30s
  28. option httpchk GET /
  29. http-check expect rstatus 200
  30. stick-table type ip size 10k
  31. stick on src
  32. server b6e8f6cc-9b3c-4936-9932-21330536e2fe 192.168.56.108:80 weight 5 check inter 30s fall 10

添加vip,关联浮动ip,搞定! 
 

十一、扩展

11.1 所加镜像不知道密码,需要修改

修改dashboard的配置文件,重启服务

  1. [root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings
  2. 201 OPENSTACK_HYPERVISOR_FEATURES = {
  3. 202 'can_set_mount_point': True,
  4. 203 'can_set_password': True,
  5. 204 'requires_keypair': True,

修改计算节点的nova配置文件,重启服务

  1. [root@linux-node ~]# vim /etc/nova/nova.conf
  2. 2735 inject_password=true
  3. [root@linux-node ~]# systemctl restart openstack-nova-compute.service

11.2 Choosing an OpenStack network type

1、Flat: limited by the number of hosts (253 in a /24 network); good enough for a small private cloud 
2、VLAN: limited by the 4096 VLAN ID space 
3、GRE: a layer-3 tunnelling protocol that relies on encapsulation and decapsulation; it only works with openvswitch, not linuxbridge. Drawback: carrying layer-2 traffic over layer 3 lowers efficiency 
4、VXLAN: promoted by VMware and others; it overcomes the VLAN ID limit and the poor point-to-point scalability of GRE by encapsulating layer-2 frames in UDP, and it requires the L3 agent mentioned above

11.3 私有云上线

1)开发测试云,用二手机器即可 
2)生产私有云, 
3)实现桌面虚拟化

原文链接:http://www.chuck-blog.com/chuck/294.html


转载自www.cnblogs.com/zgq123456/p/10018174.html