OpenStack components -- Cinder storage service

1. Cinder Introduction

1) Understanding Block Storage

There are two general ways for an operating system to obtain storage space:

(1) Attach a bare disk over a protocol (SAS, SCSI, SAN, iSCSI, etc.), then partition it, format it, and create a file system on it; or use the raw disk directly to store data (as some databases do).

(2) Mount a remote file system via a protocol such as NFS or CIFS.

The first approach is called block storage, and each bare disk is commonly referred to as a volume. The second approach is called file system storage. NAS and NFS servers, as well as various distributed file systems, provide this kind of storage.

 

2) Understanding the Block Storage Service

The Block Storage Service manages the entire life cycle of a volume, from creation to deletion. From an instance's point of view, each attached volume is a hard drive. In OpenStack, the Block Storage Service is provided by Cinder, whose specific functions are:

(1) Provide a REST API so that users can query and manage volumes, volume snapshots, and volume types.

(2) Provide a scheduler that dispatches volume-creation requests and optimizes the allocation of storage resources.

(3) Support a variety of back-end storage through a driver architecture, including LVM, NFS, Ceph, and others, as well as commercial storage products and solutions from vendors such as EMC and IBM.
 

3) Cinder Architecture

The figure below shows Cinder's logical architecture.

 

Cinder consists of the following components:

(1) cinder-api

cinder-api receives API requests and calls cinder-volume and the other sub-services. It is the entry point of the whole Cinder component: all requests are first handled by cinder-api, which exposes a set of HTTP REST API interfaces to the outside world. The cinder-api endpoints can be queried in Keystone.

A client sends its request to one of these endpoint addresses to ask cinder-api to perform an operation. Of course, as end users we rarely send REST API requests directly; the OpenStack CLI, the Dashboard, and the other components that need to talk to Cinder all use these APIs.

When cinder-api receives an HTTP API request, it processes it as follows:

a) Check that the parameters sent by the client are valid.

b) Call the other Cinder sub-services to handle the client's request.

c) Serialize the results returned by the other Cinder sub-services and send them back to the client.

Which requests does cinder-api accept? Simply put, cinder-api can respond to any operation related to the volume life cycle. Most of these operations can be seen on the Dashboard.

 

(2) cinder-volume

cinder-volume manages volumes: it coordinates with the volume provider to manage the volume life cycle. A node running the cinder-volume service is referred to as a storage node.

cinder-volume runs on storage nodes, and every volume operation in OpenStack is ultimately carried out by cinder-volume. cinder-volume does not manage the storage devices itself; the storage devices are managed by the volume provider. cinder-volume and the volume provider together implement volume life-cycle management.

a) Supporting a variety of volume providers through a driver architecture

The next question is: there are so many block storage products and solutions (volume providers) on the market, so how does cinder-volume work with all of them?

The answer is a driver architecture. cinder-volume defines a unified interface for volume providers; a volume provider only needs to implement this interface, and it can then be plugged into the OpenStack system in the form of a driver.
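As a toy illustration of the plug-in idea (these class and method names are invented for the sketch; the real interface lives in Cinder's own driver base classes):

```python
# Sketch of the driver plug-in idea; all names here are illustrative,
# not the real Cinder classes.
from abc import ABC, abstractmethod


class VolumeDriver(ABC):
    """Unified interface every volume provider driver must implement."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> dict: ...

    @abstractmethod
    def delete_volume(self, name: str) -> None: ...


class FakeLVMDriver(VolumeDriver):
    """Toy stand-in for an LVM volume provider driver."""

    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size_gb):
        self.volumes[name] = size_gb
        return {"name": name, "size_gb": size_gb, "backend": "lvm"}

    def delete_volume(self, name):
        self.volumes.pop(name, None)


# cinder-volume only depends on the interface, so any conforming
# driver can be selected via configuration and used interchangeably.
driver: VolumeDriver = FakeLVMDriver()
vol = driver.create_volume("vol1", 10)
print(vol["backend"])  # lvm
```

Because cinder-volume codes only against the interface, swapping LVM for NFS or Ceph is a configuration change, not a code change.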

b) Periodically reporting the storage node's state to OpenStack

cinder-volume periodically reports the free capacity of its storage node to Cinder; the scheduler uses these reports to filter candidate storage nodes when creating volumes.
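A rough sketch of such a report (the field names are illustrative; the real RPC payload carries many more fields):

```python
# Build a capacity report like the one cinder-volume publishes periodically.
# shutil.disk_usage stands in for querying the real backend (e.g. an LVM VG).
import shutil


def build_capacity_report(backend_name: str, path: str = "/") -> dict:
    usage = shutil.disk_usage(path)
    return {
        "volume_backend_name": backend_name,
        "total_capacity_gb": usage.total // 2**30,
        "free_capacity_gb": usage.free // 2**30,
    }


report = build_capacity_report("lvm")
print(sorted(report))
```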

c) Implementing volume life-cycle management

Cinder's management of the volume life cycle is ultimately carried out by cinder-volume, including operations such as create, extend, attach, snapshot, and delete.

 

(3) cinder-scheduler

The scheduler selects the most appropriate storage node for a new volume using a scheduling algorithm. When a volume is created, cinder-scheduler chooses the most suitable storage node based on conditions such as each node's capacity and the volume type, and then lets that node create the volume.
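A highly simplified sketch of this filter-and-weigh idea (the real scheduler uses configurable filter and weigher classes; this is not Cinder code):

```python
# Filter-and-weigh scheduling, greatly simplified:
# filter out nodes without enough free space, then pick the best remainder.
def schedule(nodes, requested_gb):
    candidates = [n for n in nodes if n["free_gb"] >= requested_gb]
    if not candidates:
        raise RuntimeError("No valid host was found")
    return max(candidates, key=lambda n: n["free_gb"])


nodes = [
    {"host": "storage1", "free_gb": 50},
    {"host": "storage2", "free_gb": 200},
    {"host": "storage3", "free_gb": 8},
]
print(schedule(nodes, requested_gb=20)["host"])  # storage2
```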

 

(4) volume provider

The storage device that provides the physical storage space for volumes. cinder-volume supports multiple volume providers, and each volume provider coordinates with cinder-volume through its own driver.

 

(5) Message Queue

Cinder's sub-services communicate and cooperate with one another through the message queue. Because of the message queue, the sub-services are decoupled from each other; this loose coupling is an important characteristic of a distributed system.

 

(6) Database

Cinder has some data that needs to be stored in a database; MySQL is generally used. The database is installed on the control node. In our test environment, for example, you can access the database named "cinder".

 

4) Physical Deployment

Cinder's services are deployed on two types of nodes: control nodes and storage nodes. Let's look at which cinder-* sub-services run on the control node.

 

Deploying cinder-api and cinder-scheduler on the control node is quite reasonable.

The control node may also run cinder-volume, which confuses some readers: shouldn't cinder-volume be deployed on a storage node?

To answer this question, we first have to be clear about one fact: OpenStack is a distributed system, and each sub-service can be deployed anywhere, as long as the network allows them to communicate. Any node that runs cinder-volume is a storage node, and that node can of course run other OpenStack services as well.

cinder-volume is the hat of a storage node, and cinder-api is the hat of a control node. In our environment, devstack-controller wears both hats, so it is both a control node and a storage node. Of course, we could also run cinder-volume on a dedicated node.

This once again demonstrates the deployment flexibility of OpenStack's distributed architecture: you can put all services on a single physical machine and use it as an all-in-one test environment, while in a production environment you can deploy the services across multiple physical machines for better performance and availability.

RabbitMQ and MySQL are generally placed on the control node. In addition, you can use `cinder service list` to see which nodes the cinder-* sub-services are distributed on.

There is one more question: where does the volume provider live?

Generally speaking, the volume provider is independent. cinder-volume uses the driver to communicate and coordinate with the volume provider, so only the driver needs to sit alongside cinder-volume. cinder-volume's source code directory contains many drivers, supporting different volume providers.

Later we will discuss how cinder-volume uses the LVM and NFS volume providers as examples; other volume providers can be examined through their OpenStack configuration files.

 

2. Cinder design philosophy

1) The volume creation process: how the cinder-* sub-services work together

For learning Cinder, volume creation is a very good scenario because it involves all of the cinder-* sub-services. The following is a flow chart:

 

(1) The customer (either an OpenStack end user or another program) sends a request to cinder-api: "Create a volume for me."

(2) After some necessary processing, cinder-api sends a message to RabbitMQ: "Let the scheduler create a volume."

(3) cinder-scheduler fetches the message addressed to it from the queue, runs the scheduling algorithm, and selects storage node A from among the available storage nodes.

(4) The scheduler sends a message to the queue: "Let storage node A create this volume."

(5) cinder-volume on storage node A fetches the scheduler's message from the queue and then creates the volume on the volume provider through the driver.
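The five steps above can be simulated end to end with a plain queue.Queue standing in for RabbitMQ (a toy model; the component names mirror the text, not real Cinder code):

```python
# Toy walkthrough of the create-volume flow; queue.Queue plays RabbitMQ.
import queue

bus = queue.Queue()


def api(request):
    # Steps 1-2: cinder-api validates and posts a message for the scheduler.
    bus.put(("scheduler", {"action": "create", "size_gb": request["size_gb"]}))


def scheduler(nodes):
    # Steps 3-4: fetch the message, pick a storage node, post a new message.
    _, msg = bus.get()
    node = max(nodes, key=lambda n: n["free_gb"])
    bus.put((node["host"], msg))


def volume_service(host):
    # Step 5: cinder-volume on the chosen node creates the volume via driver.
    target, msg = bus.get()
    assert target == host
    return {"status": "available", "host": host, "size_gb": msg["size_gb"]}


api({"size_gb": 1})
scheduler([{"host": "nodeA", "free_gb": 100}, {"host": "nodeB", "free_gb": 10}])
vol = volume_service("nodeA")
print(vol["status"])  # available
```

Note that no component calls another directly; each only reads from and writes to the queue, which is the decoupling the message-queue section described.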

 

2) Cinder design ideas

Cinder continues the design philosophy of Nova and the other components.

(1) API front-end service

cinder-api is Cinder's only external-facing window; it exposes Cinder's capabilities to customers. When a customer needs to perform a volume-related operation, it can only do so by sending a REST request to cinder-api. Customers here include end users, the OpenStack command line, and other components.

The benefits of designing an API front-end service are:

a) It provides a unified external interface and hides implementation details.

b) The API provides a standard REST service, which makes integration with third-party systems easy.

c) High availability is easy to achieve by running multiple instances of the API service, for example multiple cinder-api processes.

 

(2) Scheduler service

Cinder can have multiple storage nodes. When a volume needs to be created, cinder-scheduler selects the most suitable node on which to create it, based on the storage nodes' attributes and resource usage.

The scheduling service is like the project manager of a development team: when a new development task arrives, the project manager assigns it to the most suitable developer based on the difficulty of the task and each team member's current workload and skill level.

 

(3) Worker service

The scheduling service only assigns tasks; what actually executes them is the Worker service.

In Cinder, this Worker is cinder-volume. The division of duties between Scheduler and Worker makes OpenStack very easy to scale: when storage resources run short, add storage nodes (more Workers); when there are too many customer requests to schedule, add Schedulers.

 

(4) Driver framework

As an open Infrastructure-as-a-Service cloud operating system, OpenStack supports the industry's best technologies, whether open source and free or commercial and paid.

This open architecture keeps OpenStack technically advanced and highly competitive, while avoiding vendor lock-in. Where does this openness show up? One important aspect is the driver-based framework.

Take Cinder as an example: storage nodes support many volume providers, including LVM, NFS, Ceph, and GlusterFS, as well as commercial storage systems from EMC, IBM, and others. cinder-volume defines a unified driver interface for these volume providers; a provider only needs to implement the interface to be plugged into OpenStack as a driver. Below is a schematic diagram of the cinder driver architecture:

 

 

In cinder-volume's configuration file /etc/cinder/cinder.conf, the volume_driver option sets which volume provider's driver this storage node uses. The following example shows LVM being used.
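For reference, the LVM case looks like the fragment below, matching the storage-node configuration later in this article (the back-end section name `[lvm]` is whatever `enabled_backends` names it):

```ini
# /etc/cinder/cinder.conf (storage node) - LVM back-end example
[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-vol
```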

 

 

3. Setting up the Cinder service

1) Environment preparation

(1) Database preparation

    create database cinder;
    grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'CINDER_DBPASS';
    grant all privileges on cinder.* to 'cinder'@'%' identified by 'CINDER_DBPASS';

 

(2) Create the user and services

Create the cinder user

    openstack user create --domain default --password=cinder cinder
    +---------------------+----------------------------------+
    | Field               | Value                            |
    +---------------------+----------------------------------+
    | domain_id           | default                          |
    | enabled             | True                             |
    | id                  | b8b3fd44f25341b79da80dcaf5fd8383 |
    | name                | cinder                           |
    | options             | {}                               |
    | password_expires_at | None                             |
    +---------------------+----------------------------------+

Give the cinder user the admin role

    openstack role add --project service --user cinder admin

Create the volume services

    openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    +-------------+----------------------------------+
    | Field       | Value                            |
    +-------------+----------------------------------+
    | description | OpenStack Block Storage          |
    | enabled     | True                             |
    | id          | b193feeee389457cad58453c12f42453 |
    | name        | cinderv2                         |
    | type        | volumev2                         |
    +-------------+----------------------------------+


    openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    +-------------+----------------------------------+
    | Field       | Value                            |
    +-------------+----------------------------------+
    | description | OpenStack Block Storage          |
    | enabled     | True                             |
    | id          | b2438b20776946ce918c1b54a36f3a22 |
    | name        | cinderv3                         |
    | type        | volumev3                         |
    +-------------+----------------------------------+

 

(3) Create the Cinder service endpoints

    openstack endpoint create --region RegionOne volumev2 public http://node1:8776/v2/%\(project_id\)s
    +--------------+-------------------------------------+
    | Field        | Value                               |
    +--------------+-------------------------------------+
    | enabled      | True                                |
    | id           | a84ae2d530d84f1988c478e46290f3df    |
    | interface    | public                              |
    | region       | RegionOne                           |
    | region_id    | RegionOne                           |
    | service_id   | b193feeee389457cad58453c12f42453    |
    | service_name | cinderv2                            |
    | service_type | volumev2                            |
    | url          | http://node1:8776/v2/%(project_id)s |
    +--------------+-------------------------------------+


    openstack endpoint create --region RegionOne volumev2 internal http://node1:8776/v2/%\(project_id\)s
    +--------------+-------------------------------------+
    | Field        | Value                               |
    +--------------+-------------------------------------+
    | enabled      | True                                |
    | id           | c8494d0ec1864f54a263cfe9d00b1167    |
    | interface    | internal                            |
    | region       | RegionOne                           |
    | region_id    | RegionOne                           |
    | service_id   | b193feeee389457cad58453c12f42453    |
    | service_name | cinderv2                            |
    | service_type | volumev2                            |
    | url          | http://node1:8776/v2/%(project_id)s |
    +--------------+-------------------------------------+


    openstack endpoint create --region RegionOne volumev2 admin http://node1:8776/v2/%\(project_id\)s
    +--------------+-------------------------------------+
    | Field        | Value                               |
    +--------------+-------------------------------------+
    | enabled      | True                                |
    | id           | 0cb2154bbcf14f57be74a50fbc86d231    |
    | interface    | admin                               |
    | region       | RegionOne                           |
    | region_id    | RegionOne                           |
    | service_id   | b193feeee389457cad58453c12f42453    |
    | service_name | cinderv2                            |
    | service_type | volumev2                            |
    | url          | http://node1:8776/v2/%(project_id)s |
    +--------------+-------------------------------------+


    openstack endpoint create --region RegionOne volumev3 public http://node1:8776/v3/%\(project_id\)s
    +--------------+-------------------------------------+
    | Field        | Value                               |
    +--------------+-------------------------------------+
    | enabled      | True                                |
    | id           | 3193d7e2959b41158df1a4780065bf34    |
    | interface    | public                              |
    | region       | RegionOne                           |
    | region_id    | RegionOne                           |
    | service_id   | b2438b20776946ce918c1b54a36f3a22    |
    | service_name | cinderv3                            |
    | service_type | volumev3                            |
    | url          | http://node1:8776/v3/%(project_id)s |
    +--------------+-------------------------------------+


    openstack endpoint create --region RegionOne volumev3 internal http://node1:8776/v3/%\(project_id\)s
    +--------------+-------------------------------------+
    | Field        | Value                               |
    +--------------+-------------------------------------+
    | enabled      | True                                |
    | id           | 864b120ab68645f19d286200d7b5be2d    |
    | interface    | internal                            |
    | region       | RegionOne                           |
    | region_id    | RegionOne                           |
    | service_id   | b2438b20776946ce918c1b54a36f3a22    |
    | service_name | cinderv3                            |
    | service_type | volumev3                            |
    | url          | http://node1:8776/v3/%(project_id)s |
    +--------------+-------------------------------------+


    openstack endpoint create --region RegionOne volumev3 admin http://node1:8776/v3/%\(project_id\)s
    +--------------+-------------------------------------+
    | Field        | Value                               |
    +--------------+-------------------------------------+
    | enabled      | True                                |
    | id           | 08499c458d0b43e783def7b0d2a625c3    |
    | interface    | admin                               |
    | region       | RegionOne                           |
    | region_id    | RegionOne                           |
    | service_id   | b2438b20776946ce918c1b54a36f3a22    |
    | service_name | cinderv3                            |
    | service_type | volumev3                            |
    | url          | http://node1:8776/v3/%(project_id)s |
    +--------------+-------------------------------------+

 

2) Configure the control node

(1) Install the software

    yum install openstack-cinder -y

 

(2) Edit the configuration file

vim /etc/cinder/cinder.conf

[DEFAULT]
my_ip = 192.168.52.101
#glance_api_servers = http://node1:9292
auth_strategy = keystone
#enabled_backends = lvm
transport_url = rabbit://openstack:admin@node1
...
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@node1/cinder
...
[keystone_authtoken]
auth_uri = http://node1:5000
auth_url = http://node1:35357
memcached_servers = node1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
...
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
...

 

(3) Sync the database

    su -s /bin/sh -c "cinder-manage db sync" cinder

 

(4) Start the services

    systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

 

3) Prepare the storage node environment

(1) Install the packages

    yum install lvm2 -y

 

(2) Add a disk (attached as a SATA device) and create the LVM volume group

    pvcreate /dev/sdb
      Physical volume "/dev/sdb" successfully created.
      
    vgcreate cinder-vol /dev/sdb
      Volume group "cinder-vol" successfully created

Edit the LVM configuration file so that LVM scans only the disk backing the cinder-vol group (accept sdb, reject everything else):

    vim /etc/lvm/lvm.conf
    devices {
    ...
    filter = [ "a/sdb/", "r/.*/"]
    ...
    }

 

(3) Start the services

    systemctl enable lvm2-lvmetad.service
    systemctl start lvm2-lvmetad.service

 

4) Configure the storage node

(1) Install the packages

    yum install openstack-cinder targetcli python-keystone

 

(2) Edit the configuration file

vim /etc/cinder/cinder.conf

[DEFAULT]
my_ip = 192.168.52.101
glance_api_servers = http://node1:9292
auth_strategy = keystone
enabled_backends = lvm
transport_url = rabbit://openstack:admin@node1
...
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@node1/cinder
...
[keystone_authtoken]
auth_uri = http://node1:5000
auth_url = http://node1:35357
memcached_servers = node1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
...
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
... 
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-vol
volumes_dir = $state_path/volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
iscsi_ip_address = 192.168.52.103

 

(3) Start the services

    systemctl restart openstack-nova-api.service

    systemctl enable openstack-cinder-volume.service target.service
    systemctl start openstack-cinder-volume.service target.service

 


Origin www.cnblogs.com/Agnostida-Trilobita/p/11302152.html