OpenStack components

Keystone components

Collaboration between OpenStack components is done by calling REST APIs. Since the components need to call each other's APIs, security authentication is inevitable, right?

Keystone's main functions are to distribute the endpoints of each component and to provide authentication services for the API calls between components:

  • User : refers to the objects that use OpenStack services

  • Project (Tenant) : a logical partition of the OpenStack resource pool is called a project

  • Role : used to divide permissions; a user is associated with roles, and different roles give the user different permissions

  • Policy : by default, the /etc/keystone/policy.json file defines the permissions included in each role

  • Token : issued after authentication completes

  • Credentials : used to confirm the identity of the user

  • Authentication : the authentication process (the process by which an ordinary user obtains a token)

  • Service : a service provided by one of the various OpenStack components

  • Endpoint : in OpenStack, each service (Nova / Glance / Neutron) has three different API endpoints: admin, public, internal.

    1. admin: used for management purposes, since it can modify users / tenants (projects).
    2. public: open for customers to call; for example, it can be exposed on the Internet so customers can manage their own cloud.
    3. internal: for calls internal to OpenStack.
    # The three kinds of endpoints are generally exposed on the network with different permissions: admin is usually open only to the internal network, public can usually be opened to the external network, and internal is usually open only to machines running OpenStack services.
    admin url -----> for admin users, port 35357
    
    internal url -----> for communication between internal OpenStack components, port 5000
    
    public url -----> the API everyone can access, port 5000 (the same port as the internal url, bound to a different IP)
    # Note: a user's permissions are granted through roles, so whether each of the three APIs above can be called successfully depends on the user's role. With the right permissions any of them can be called, but do not mix them arbitrarily, or the configuration gets messy.
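
As a rough illustration of the three interfaces, here is a minimal keystoneauth1 sketch; the controller URL and the demo credentials are assumed placeholders, not values from this article:

```python
from keystoneauth1.identity import v3
from keystoneauth1 import session

# Assumed demo deployment: URL, user, password and project are placeholders.
auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = session.Session(auth=auth)

# The same service is registered once per interface, with different URLs.
for interface in ('admin', 'internal', 'public'):
    print(interface, sess.get_endpoint(service_type='identity',
                                       interface=interface))
```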

New concepts in the Keystone v3 version

  • Tenant is renamed Project
  • Adds the Domain concept: a domain contains multiple projects
  • Adds the Group concept: N users can join a group to make management easier

Keystone workflow

  1. The user sends Credentials to Keystone; Keystone returns a temporary token and a generic catalog (the Endpoints in OpenStack, i.e. the URLs above)
  2. The user requests Keystone again with the temporary token; Keystone returns a list of all the user's projects;
  3. The user picks the desired project from the list and sends it to Keystone; Keystone issues the user a token for that project
  4. The user accesses the Endpoint directly, holding the token
  5. The service behind the Endpoint asks Keystone to verify that the token is correct and not expired; if it is correct and not expired, the Endpoint provides the service to the user
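
A minimal sketch of steps 1-3 against the Keystone v3 HTTP API; the controller URL and the demo user are assumed placeholders:

```python
import requests

KEYSTONE = 'http://controller:5000/v3'  # assumed placeholder URL

# Step 1: send Credentials, get back a temporary (unscoped) token.
body = {'auth': {'identity': {'methods': ['password'],
        'password': {'user': {'name': 'demo',
                              'domain': {'name': 'Default'},
                              'password': 'secret'}}}}}
r = requests.post(f'{KEYSTONE}/auth/tokens', json=body)
unscoped = r.headers['X-Subject-Token']

# Step 2: use the temporary token to list the user's projects.
r = requests.get(f'{KEYSTONE}/auth/projects',
                 headers={'X-Auth-Token': unscoped})
project_id = r.json()['projects'][0]['id']

# Step 3: exchange the temporary token for a project-scoped token.
body = {'auth': {'identity': {'methods': ['token'],
                              'token': {'id': unscoped}},
                 'scope': {'project': {'id': project_id}}}}
r = requests.post(f'{KEYSTONE}/auth/tokens', json=body)
scoped = r.headers['X-Subject-Token']
# Steps 4-5: call a service Endpoint with scoped in the X-Auth-Token header.
```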

Glance components

The Glance component mainly provides the image service; its architecture is divided into glance-api and glance-registry.

glance-api receives API requests from client components (Nova, etc.) and dispatches them to glance-registry.

glance-registry retrieves the image metadata from MariaDB and returns the image information to glance-api.

With the image metadata returned by glance-registry, glance-api pulls the corresponding image from the backend storage and responds to the client.

PS: communication between Glance's internal components (glance-api, glance-registry) does not go through the message queue, so the message queue settings in the Glance configuration file are superfluous.
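
For example, listing images with openstacksdk (a sketch; the cloud name 'mycloud' is an assumed clouds.yaml entry): glance-api answers the call, fetching the metadata through glance-registry as described above.

```python
import openstack

# 'mycloud' is an assumed clouds.yaml entry, not part of the article.
conn = openstack.connect(cloud='mycloud')

# glance-api serves this request; the metadata comes from the database.
for image in conn.image.images():
    print(image.id, image.name, image.status)
```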

Cinder components

"Cinder block storage function mainly provides"

cinder contains the following components:

  1. cinder-api : provides the REST API interface; receives client requests and dispatches them to cinder-scheduler (receives requests)
  2. cinder-scheduler : uses an algorithm to choose the right cinder-volume; dispatches the tasks received from cinder-api via RPC to the backend cinder-volume (dispatcher)
  3. cinder-volume : handles the actual volume requests; the volume storage is provided by different storage backends. The major storage vendors have actively contributed drivers for their storage products to the Cinder community (does the actual work)

RPC mechanism

Communication between OpenStack components: done by calling the REST API interfaces that the components provide.

Communication within a component: based on the RPC (Remote Procedure Call) mechanism, which is in turn built on the AMQP model.

From the perspective of RPC usage, Nova, Neutron, and Cinder work similarly; Cinder is used below as the example to explain the RPC mechanism:

OpenStack implements the RPC (Remote Procedure Call) mechanism inside its components on top of the AMQP (Advanced Message Queuing Protocol) protocol as the communication model, which keeps the interior of a component loosely coupled.

AMQP is an asynchronous messaging protocol for message-oriented middleware. The AMQP model has four important roles:

  • Exchange : forwards messages to the corresponding Message Queue according to the Routing Key (router)
  • Routing Key : used by the Exchange to determine which Message Queue a message should be sent to (routing table)
  • Publisher : the producer of the message; publishes messages to the Exchange and specifies the Routing Key, so that the right Message Queue receives the message (server)
  • Consumer : the customer / recipient of the message; takes messages out of the Message Queue (client)
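
The four roles can be seen in a minimal pika sketch against a RabbitMQ broker; the broker address, exchange and queue names are assumptions for illustration:

```python
import pika

# Assumed local RabbitMQ broker.
conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()

# Exchange: routes messages to queues according to the Routing Key.
ch.exchange_declare(exchange='demo', exchange_type='direct')
ch.queue_declare(queue='tasks')
ch.queue_bind(queue='tasks', exchange='demo', routing_key='tasks')

# Publisher: sends a message to the Exchange and names the Routing Key.
ch.basic_publish(exchange='demo', routing_key='tasks', body=b'hello')

# Consumer: takes the message out of the Message Queue.
method, props, body = ch.basic_get(queue='tasks', auto_ack=True)
print(body)
conn.close()
```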

Publisher can be divided into four categories:

  • Direct Publisher : sends a message point-to-point (1-to-1 send)
  • Topic Publisher : sends messages using the "publish - subscribe" model (multicast send)
  • Fanout Publisher : sends broadcast messages (broadcast send)
  • Notify Publisher : same as Topic Publisher; used for Notification-related messages.

Exchange can be divided into three categories:

  1. Direct Exchange : exact match on the Routing Key; only the corresponding Message Queue receives the message;
  2. Topic Exchange : pattern match on the Routing Key; every Message Queue whose pattern matches receives the message;
  3. Fanout Exchange : forwards the message to all bound Message Queues.
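
OpenStack wraps all of this in the oslo.messaging library. A rough sketch of the Cinder-style RPC pattern; the config file path, topic and argument names are illustrative assumptions, not Cinder's real interface:

```python
from oslo_config import cfg
import oslo_messaging as messaging

# Assumes a config file whose transport_url points at the AMQP broker.
conf = cfg.CONF
conf(['--config-file', '/etc/cinder/cinder.conf'])
transport = messaging.get_rpc_transport(conf)

# Client side (e.g. cinder-api casting a task towards the scheduler).
client = messaging.RPCClient(transport, messaging.Target(topic='scheduler'))
client.cast({}, 'create_volume', volume_id='vol-1')  # fire-and-forget

# Server side (e.g. cinder-scheduler consuming the same topic).
class SchedulerEndpoint(object):
    def create_volume(self, ctxt, volume_id):
        print('scheduling', volume_id)

server = messaging.get_rpc_server(
    transport, messaging.Target(topic='scheduler', server='node1'),
    [SchedulerEndpoint()], executor='threading')
server.start()
server.wait()
```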

cinder workflow

  • A virtual machine wants storage resources, so Nova calls cinder-api through the RESTful API
  • cinder-api throws the task into the message queue through Exchange 1; cinder-scheduler gets the task from message queue 1
  • cinder-scheduler gets the information about all cinder-volumes from MariaDB and selects the best cinder-volume (storage node) with its algorithm
  • cinder-scheduler in turn acts as a Publisher and throws the task into the message queue through Exchange 2; the best cinder-volume (storage node) gets the storage task from message queue 2
  • cinder-volume on the storage node does not store data itself; it just calls the real storage device behind the storage node to create the device;

PS: when a cloud host is created, the cinder-volume storage node only helps it call the real backend storage to carve out a piece of storage space for the cloud host to use; once the space is allocated, the cloud host mounts these devices.
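
Seen from the client side, this whole chain is triggered by one call; a sketch with openstacksdk, where the cloud name is an assumed clouds.yaml entry:

```python
import openstack

# 'mycloud' is an assumed clouds.yaml entry.
conn = openstack.connect(cloud='mycloud')

# cinder-api receives this request; cinder-scheduler picks a storage
# node, whose cinder-volume asks the real backend to carve out 1 GB.
vol = conn.block_storage.create_volume(name='demo-vol', size=1)
print(vol.id, vol.status)
```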

Nova Components

"There is no virtual machine is no cloud computing, Nova is created by calling the virtual machine hypervisor"

nova main components:

  • nova-api: receives client requests (control node, 1 instance)
  • nova-scheduler: handles the scheduling requests sent by nova-api; an algorithm selects the most suitable compute node on which to create the current virtual machine
  • nova-compute: creates the virtual machine by calling the hypervisor (compute nodes, N instances)
  • nova-conductor: nova-compute connects to MariaDB through nova-conductor

Why have nova-conductor as a "middleman"?

  1. There are many nova-compute instances, and they would all have to go to the database to fetch the virtual machine information that nova-api stored there, which is likely to put pressure on the database (performance consideration)
  2. If a nova-compute were compromised, a hacker could steal virtual machine information directly from MariaDB through that nova-compute (security consideration)

Nova workflow

  1. nova-api receives a request to create a virtual machine; nova-api stores the detailed information of the virtual machine to create in MariaDB, and MariaDB responds to nova-api after the record is created
  2. nova-api sends the create-virtual-machine message into the message queue through the Exchange; nova-scheduler gets the message from the queue
  3. nova-scheduler gets the compute node information (the servers with nova-compute installed) from the database and, based on a specific algorithm, selects the best current compute node to create the virtual machine
  4. nova-scheduler sends the create-virtual-machine task into the message queue through the Exchange (the queue connected to the current best compute node for creating the virtual machine)
  5. The current best compute node receives the create-virtual-machine request message from the message queue and turns to nova-conductor
  6. nova-conductor connects to the database, queries the detailed creation information, and returns it to nova-compute through the message queue
  7. nova-compute calls the hypervisor and starts creating the virtual machine... requesting the network, fetching the image...
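
The same flow, triggered from the client side with openstacksdk (a sketch; the cloud name and the image / flavor / network IDs are placeholders):

```python
import openstack

# 'mycloud' and the IDs below are placeholders, not values from this article.
conn = openstack.connect(cloud='mycloud')

server = conn.compute.create_server(
    name='demo-vm',
    image_id='IMAGE_ID',
    flavor_id='FLAVOR_ID',
    networks=[{'uuid': 'NETWORK_ID'}])

# nova-api stores the request, nova-scheduler picks a compute node,
# nova-compute builds the VM; wait until it becomes ACTIVE.
server = conn.compute.wait_for_server(server)
print(server.status)
```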

Neutron components

neutron components include:

  • neutron-server : receives client RESTful API requests, then dispatches the different RESTful API requests to different neutron-plugins
  • neutron-plugin : Neutron's job is to provide network resources; in the real world there are many network equipment vendors, and neutron-plugin lets these vendors expose the functions of their own network equipment in software, in the form of plugins
  • neutron-agent : corresponds to a neutron-plugin and implements the functions of that neutron-plugin

neutron-plugin Categories

When OpenStack was just emerging, each network equipment vendor (Cisco, Citrix, and so on) developed its own plugin to integrate its network equipment into OpenStack. In fact, the features each vendor developed were similar, but the development standards were not the same, which led to reinventing the wheel and code redundancy. To solve these problems, neutron-plugin was divided into two kinds.

  1. core-plugin : that is, ML2 (Modular Layer 2), which provides the core Layer 2 network functionality in Neutron; other network equipment vendors develop their own neutron-plugins on top of the core-plugin;
  2. service-plugin : that is, all the plugins other than the core-plugin, including router, firewall, loadbalancer, ×××, metering, etc., mainly implementing L3-L7 network services. The resources these plugins operate on are rich, and the operations on these resources are treated by neutron-server as Extension APIs, which vendors need to extend themselves.

core-plugin: the ML2 core plugin

The ML2 plugin includes two parts, Type and Mechanism, each of which is in turn divided into a Manager and Drivers;

So which network type to use is specified in the ML2 configuration file;
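
A typical fragment of that configuration file (ml2_conf.ini); the drivers listed are illustrative choices, not the only possible ones:

```ini
[ml2]
# Type part: which network types can be created.
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
# Mechanism part: which backends implement them.
mechanism_drivers = openvswitch,l2population
```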

  1. neutron-server and the various neutron-plugins are deployed on the control node or the network node (they receive the requests sent by the Nova API)
  2. neutron-agents are deployed on the network node and the compute nodes (they concretely implement the network functions: a pile of agents for switches, routers, VPN, etc.)

neutron workflow

  1. neutron-server receives Nova's request to create a network (issued as part of the virtual-machine creation request), analyzes which neutron-plugin can do the work, and sends the request to that neutron-plugin through the message queue;
  2. The neutron-plugin receives the demand and sends what the neutron-agent needs through the message queue
  3. The neutron-agent does the concrete work of creating the network
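
From the client side, a sketch with openstacksdk (the cloud name and addresses are placeholders) that drives this flow:

```python
import openstack

# 'mycloud' is an assumed clouds.yaml entry.
conn = openstack.connect(cloud='mycloud')

# neutron-server receives these calls and hands the work to the
# configured plugin; its agents create the actual network objects.
net = conn.network.create_network(name='demo-net')
subnet = conn.network.create_subnet(network_id=net.id,
                                    ip_version=4,
                                    cidr='10.0.0.0/24')
print(net.id, subnet.cidr)
```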
