OpenStack core component: Neutron

1. Introduction to Neutron:

1.1. The background of Neutron

        Traditional network management relies heavily on administrators manually configuring and maintaining many different network hardware devices. Networks in cloud environments, however, have become very complex, especially in multi-tenant scenarios: users may need to create, modify, and delete networks at any time, and network connectivity and isolation cannot realistically be guaranteed through manual configuration. The flexibility and automation of software-defined networking (SDN) have made it the mainstream approach to network management in the cloud era. Neutron's design follows the SDN principle of network virtualization, and its implementation makes full use of the various network-related technologies available on Linux. Neutron exposes networking as an SDN-style service, through which network administrators and cloud operators can programmatically and dynamically define virtual network devices.

2. Basic concepts in neutron:

2.1. network (vSwitch)

A network is an isolated Layer 2 broadcast domain. Neutron supports multiple network types, including local, flat, VLAN, VxLAN, and GRE.

2.1.1. local network

A local network is isolated from other networks and nodes. Instances in a local network can only communicate with instances in the same network on the same node; local networks are mainly used for single-node testing.

2.1.2. flat (network without VLAN tagging)

Instances in a flat network can communicate with each other, and a flat network can span multiple nodes.

2.1.3. vlan (network with VLAN tagging)

A VLAN is a Layer 2 broadcast domain. Instances in the same VLAN can communicate directly, while instances in different VLANs can only communicate through a router. VLAN networks can span nodes and are the most widely used network type.

2.1.4. VXLAN (large Layer 2 network)

VxLAN is a network based on tunneling technology. Each VxLAN network is distinguished from other VxLAN networks by a unique VNI (VXLAN Network Identifier). In VxLAN, Layer 2 frames are encapsulated, together with the VNI, into UDP packets for transmission. Because Layer 2 packets travel encapsulated over Layer 3, VxLAN overcomes the limitations of VLANs and the physical network infrastructure.
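To make the encapsulation concrete, here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348; the frame bytes and VNI value are placeholders.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): a flags byte whose
    0x08 bit marks the VNI as valid, 3 reserved bytes, the 24-bit VNI,
    and a final reserved byte."""
    assert 0 <= vni < 2**24          # the VNI is only 24 bits wide
    return struct.pack("!II", 0x08 << 24, vni << 8)

# The header is prepended to the original Layer 2 frame and the result
# is carried in a UDP datagram (destination port 4789 by convention),
# so the frame can cross any routed Layer 3 network.
frame = b"\x00" * 64                 # placeholder Ethernet frame
udp_payload = vxlan_header(vni=5000) + frame
print(len(udp_payload))              # 72
```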

2.1.5. GRE type network

GRE is a tunneled network similar to VxLAN; the main difference is that it encapsulates packets in IP (GRE) packets rather than UDP.

Different networks are isolated from one another at Layer 2. Taking VLAN networks as an example, network A and network B are assigned different VLAN IDs, which guarantees that broadcast packets in network A will not reach network B. This isolation is Layer 2 isolation only; with the help of a router, networks can still communicate at Layer 3.

A network must belong to a Project (Tenant), and multiple networks can be created within a Project, so there is a one-to-many relationship between Project and network.

2.2. subnet

2.2.1. The role of subnet:

A subnet is an IPv4 or IPv6 address range from which instance IPs are assigned. Each subnet must define an IP address range and mask (playing a role similar to a DHCP server's address pool). network and subnet have a one-to-many relationship: a subnet can only belong to one network, while a network can have multiple subnets; these subnets can be different IP segments, but they must not overlap.

2.2.2. Use of subnet:

(1) The following configuration is valid:

        network A subnet A-a: 10.10.1.0/24 {"start": "10.10.1.1", "end": "10.10.1.50"}

        subnet A-b: 10.10.2.0/24 {"start": "10.10.2.1", "end": "10.10.2.50"}

(2) The following configuration is invalid because the subnets overlap

        network A subnet A-a: 10.10.1.0/24 {"start": "10.10.1.1", "end": "10.10.1.50"}

        subnet A-b: 10.10.1.0/24 {"start": "10.10.1.51", "end": "10.10.1.100"}

Basis for the judgment: Neutron checks not whether the allocation-pool IPs overlap, but whether the subnets' CIDRs overlap (here both are 10.10.1.0/24).

If the subnets are in different networks, their CIDRs and IPs can overlap, for example:

        network A  subnet A-a: 10.10.1.0/24 {"start": "10.10.1.1", "end": "10.10.1.50"}

        network B  subnet B-a: 10.10.1.0/24 {"start": "10.10.1.1", "end": "10.10.1.50"}
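As a sketch of this valid overlapping case, the snippet below uses the openstacksdk client; the cloud name and network names are assumptions and must match your environment (a clouds.yaml entry is required).

```python
import openstack

# "mycloud" is an assumed clouds.yaml entry; adjust to your environment.
conn = openstack.connect(cloud="mycloud")

net_a = conn.network.create_network(name="network-A")
net_b = conn.network.create_network(name="network-B")

# The same CIDR and allocation pool can be reused because the two
# subnets belong to different networks.
for net in (net_a, net_b):
    conn.network.create_subnet(
        network_id=net.id,
        ip_version=4,
        cidr="10.10.1.0/24",
        allocation_pools=[{"start": "10.10.1.1", "end": "10.10.1.50"}],
    )
```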

You may wonder: if IP addresses can overlap like this, there may end up being two instances with the same IP address; will this conflict? The simple answer is: no!

The specific reason is that Neutron's routers are implemented with Linux network namespaces. A network namespace is a network isolation mechanism; through it, each router gets its own independent routing table. The configuration above therefore has two possible outcomes:

1. If the two subnets are attached to the same router, then according to that router's configuration, only one designated subnet can be routed.

2. If the two subnets are attached to different routers, then because each router's routing table is independent, both subnets can be routed.
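You can observe this isolation directly on a network node; the router UUID below is a placeholder (list the real namespaces with `ip netns list`).

```python
import subprocess

# Placeholder namespace name; each Neutron router gets a qrouter-<uuid>
# namespace with its own interfaces, routes, and iptables rules.
router_ns = "qrouter-<router-uuid>"

# Print the routing table that exists only inside this router's namespace.
subprocess.run(["ip", "netns", "exec", router_ns, "ip", "route"], check=True)
```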

2.3. port

2.3.1. Port functions

A port can be regarded as a port on a virtual switch. A MAC address and an IP address are defined on the port. When an instance's virtual network card, the VIF (Virtual Interface), is bound to a port, the port assigns its MAC and IP to the VIF. subnet and port have a one-to-many relationship: a port must belong to one subnet, while a subnet can have multiple ports.
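A minimal openstacksdk sketch of creating a port with a fixed IP; the UUIDs, the IP, and the cloud name are placeholders.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry

# Neutron reserves the requested IP on the subnet and generates a MAC;
# both live on the port until a VIF is bound to it.
port = conn.network.create_port(
    network_id="<network-uuid>",
    fixed_ips=[{"subnet_id": "<subnet-uuid>", "ip_address": "10.10.1.25"}],
)
print(port.mac_address, port.fixed_ips)
```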

2.3.2. Relationship between Project, Network, Subnet, Port and VIF
Project 1 : m Network
Network 1 : m Subnet
Subnet 1 : m Port
Port 1 : 1 VIF
VIF m : 1 Instance
2.4. Neutron functions

Neutron provides network support for the entire OpenStack environment, including Layer 2 switching, Layer 3 routing, load balancing, firewalls, VPNs, and so on. Neutron provides a flexible framework that, through configuration, can implement these functions with either open-source or commercial software.

2.4.1. Layer 2 switching and Layer 3 routing

Nova instances connect to virtual Layer 2 networks through virtual switches. Neutron supports a variety of virtual switches, including the Linux-native Linux Bridge and Open vSwitch (OVS), an open-source virtual switch that supports standard management interfaces and protocols. Using Linux Bridge and OVS, Neutron can create not only traditional VLAN networks but also overlay networks based on tunneling technology, such as VxLAN and GRE. Instances can be configured with IPs from different network segments, and Neutron's virtual router enables communication between instances across segments. The router implements routing and NAT through technologies such as IP forwarding and iptables.
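As a rough illustration of what "IP forwarding and iptables" means here, the commands below approximate what a router namespace needs. This is a simplified sketch, not the exact rules the L3 agent installs; the interface name and CIDR are assumptions.

```python
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# Enable kernel packet forwarding so traffic can cross the router.
run(["sysctl", "-w", "net.ipv4.ip_forward=1"])

# Source-NAT tenant traffic (10.10.1.0/24 is an assumed subnet) as it
# leaves toward the external bridge br-ex.
run(["iptables", "-t", "nat", "-A", "POSTROUTING",
     "-s", "10.10.1.0/24", "-o", "br-ex", "-j", "MASQUERADE"])
```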

3. Neutron network architecture:

Like other OpenStack services, Neutron also adopts a distributed architecture, with multiple components jointly providing network services to the outside world. Neutron is composed of the following components:

(1) Neutron Server

        Provides the OpenStack network API to the outside world, receives requests, and calls the Plugin to process them.

(2) Plugin

        Processes requests from the Neutron Server, maintains the logical network state of OpenStack, and calls the Agent to process requests.

(3) Agent

        Processes requests from the Plugin and is responsible for actually implementing the various network functions on the network provider.

(4) Network provider

        A virtual or physical network device that provides network services, such as Linux Bridge, Open vSwitch, or a Neutron-compatible physical switch.

(5) Messaging Queue

        Neutron Server, Plugin, and Agent communicate with and call one another through the Messaging Queue.

(6) Database

        Stores OpenStack network state, including Network, Subnet, Port, Router, and so on.

4. Collaboration between components

4.1. Workflow between components

Take creating a VLAN 100 network as an example, and assume the network provider is Linux Bridge. The process is as follows:

1. Neutron Server receives the request to create a network and notifies the registered Linux Bridge Plugin through Message Queue (RabbitMQ).

2. Plugin saves the information of the network to be created (such as name, VLAN ID, etc.) into the database, and notifies the Agent running on each node through Message Queue.

3. After receiving the message, the Agent creates a VLAN device (such as eth2.100) on the node's physical network card (such as eth2), creates a bridge, and attaches the VLAN device to it (a sketch of the equivalent commands follows).
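A sketch of what step 3 amounts to on the node; the device and bridge names are illustrative (the real agent derives the bridge name from the network UUID, e.g. brqXXXXXXXX).

```python
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# Create the VLAN sub-interface eth2.100 tagged with VLAN ID 100.
run(["ip", "link", "add", "link", "eth2", "name", "eth2.100",
     "type", "vlan", "id", "100"])

# Create a Linux bridge and attach the VLAN device to it; instance
# taps for this network will later be attached to the same bridge.
run(["ip", "link", "add", "brq-vlan100", "type", "bridge"])
run(["ip", "link", "set", "eth2.100", "master", "brq-vlan100"])
run(["ip", "link", "set", "eth2.100", "up"])
run(["ip", "link", "set", "brq-vlan100", "up"])
```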

4.2. Responsibilities of components

1. The plugin solves the problem of What: what should the network be configured to? The work of How, that is, how to actually carry out the configuration, is left to the agent.

2. The plugin, agent, and network provider are used together. In the example above the network provider is Linux Bridge, so the Linux Bridge plugin and agent must be used; if the network provider is replaced by OVS or a physical switch, the plugin and agent must be replaced accordingly.

3. One of the plugin's main responsibilities is maintaining the state of the Neutron network in the database. This creates a problem: every network provider's plugin must write a very similar set of database access code. To solve this problem, Neutron introduced ML2 (Modular Layer 2) in the Havana release. ML2 abstracts and encapsulates the functions of plugins: with the ML2 plugin, network providers no longer need to develop their own plugin, only a corresponding driver for ML2, which greatly reduces the workload and difficulty.

Plugins are divided into two categories by function: core plugins and service plugins. The core plugin maintains information about Neutron's network, subnet, and port resources; the agents corresponding to the core plugin include the Linux Bridge and OVS agents. Service plugins provide services such as routing, firewall, and load balancing, and also have corresponding agents.

5. Detailed explanation of components

5.1. Neutron Server

5.1.1. The layered model of Neutron Server

Core API: provides a RESTful API for externally managing network, subnet, and port resources.

Extension API: provides a RESTful API for externally managing router, load balance, firewall, and other resources.

Common Service: authentication and validation of API requests.

Neutron Core: The core handler of Neutron server, which handles requests by calling the corresponding Plugin.

Core Plugin API: defines the abstract function set of the Core Plugin; Neutron Core calls the corresponding Core Plugin through this API.

Extension Plugin API: defines the abstract function set of the Service Plugin; Neutron Core calls the corresponding Service Plugin through this API.

Core Plugin: implements the Core Plugin API, maintains the state of network, subnet, and port in the database, and is responsible for calling the corresponding agent to perform the related operations on the network provider, such as creating a network.

Service Plugin: implements the Extension Plugin API, maintains the state of router, load balance, security group, and other resources in the database, and is responsible for calling the corresponding agent to perform the related operations on the network provider, such as creating a router.

Neutron Server consists of two parts: (1) providing API services and (2) running plugins. That is, Neutron Server = API + Plugins.

5.2. Modular Layer 2 (ML2)

5.2.1. Problems with traditional core plugins

1. Only one core plugin can be used in an OpenStack deployment, so multiple network providers cannot coexist. Using a single core plugin is not a problem in itself; the problem is that the traditional core plugin and its agent have a one-to-one correspondence. That is, if you select the Open vSwitch plugin, only Open vSwitch can be used on every node, and no other network provider can be used.

2. There is a large amount of duplicated code between plugins, and developing a new plugin is a heavy workload. Every traditional core plugin has to write a large amount of repetitive, similar database access code, which greatly increases the work of plugin development and maintenance.

5.2.2. Problems solved by the ML2 core plugin

Modular Layer 2 (ML2) is a new core plugin introduced by Neutron in the Havana release to replace the original Linux Bridge and Open vSwitch plugins. As the new-generation core plugin, it provides a framework that allows multiple Layer 2 networking technologies to be used simultaneously in an OpenStack network, with different nodes free to use different network implementation mechanisms.

5.2.3. Advantages of the ML2 core plugin:

(1) The Linux Bridge agent, Open vSwitch agent, or other third-party agents can each be deployed on different nodes (see the configuration sketch after this list).

(2) ML2 not only supports heterogeneous deployment solutions, but can also be seamlessly integrated with existing agents: the previously used agents do not need to be changed, and only the traditional core plugin on the Neutron server needs to be replaced with ML2.

(3) It becomes much simpler to support new network providers: there is no need to develop the core plugin from scratch, only the corresponding driver needs to be developed, which greatly reduces the code to be written and maintained.
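The heterogeneous deployment in (1) is driven purely by configuration. Below is a representative ml2_conf.ini fragment, embedded in a small Python snippet for readability; the values are illustrative, not a recommendation.

```python
import configparser

# Representative fragment of /etc/neutron/plugins/ml2/ml2_conf.ini.
ML2_CONF = """
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,openvswitch

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200

[ml2_type_vxlan]
vni_ranges = 1:10000
"""

cfg = configparser.ConfigParser()
cfg.read_string(ML2_CONF)
print(cfg["ml2"]["mechanism_drivers"])  # linuxbridge,openvswitch
```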

5.2.4. ML2 core plugin architecture

(1) Type Driver

Each network type supported by Neutron has a corresponding ML2 type driver. The type driver is responsible for maintaining the state of its network type, performing validation, creating networks, and so on. The network types supported by ML2 include local, flat, vlan, vxlan, and gre.
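A deliberately simplified, hypothetical skeleton of what a VLAN type driver tracks; the real interface lives in neutron_lib and has more methods. Note that it manages IDs and state only, never touching a device.

```python
class VlanTypeDriverSketch:
    """Hypothetical sketch: a type driver validates and allocates
    segmentation IDs; it never configures any network device."""

    network_type = "vlan"

    def __init__(self, vlan_range=(100, 200)):
        self._free = set(range(vlan_range[0], vlan_range[1] + 1))

    def validate(self, vlan_id: int) -> None:
        if not 1 <= vlan_id <= 4094:
            raise ValueError(f"invalid VLAN ID {vlan_id}")

    def allocate(self) -> int:
        # Reserve a free VLAN ID for a new tenant network.
        return self._free.pop()

    def release(self, vlan_id: int) -> None:
        self._free.add(vlan_id)
```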

(2) Mechanism Driver

Each networking mechanism supported by Neutron has a corresponding ML2 mechanism driver. The mechanism driver is responsible for taking the network state maintained by the type driver and ensuring that it is correctly realized on the corresponding network device (physical or virtual); a simplified skeleton follows the list below.

There are three types of mechanism drivers:

1. Agent-based: Linux Bridge, Open vSwitch, and so on.

2. Controller-based: OpenDaylight, VMware NSX, and so on.

3. Physical-switch-based: Cisco Nexus, Arista, Mellanox, and so on.
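A hypothetical skeleton of an agent-based mechanism driver, mirroring ML2's precommit/postcommit pattern in highly simplified form; the dict-based "segment" is an assumption, not the real driver context object.

```python
class BridgeMechanismDriverSketch:
    """Hypothetical sketch: a mechanism driver turns the state the type
    driver maintains into actions on a concrete network backend."""

    def create_network_precommit(self, segment: dict) -> None:
        # Runs inside the DB transaction; raising here rolls it back.
        if segment.get("network_type") not in ("flat", "vlan"):
            raise ValueError("unsupported network type for this driver")

    def create_network_postcommit(self, segment: dict) -> None:
        # Runs after the DB commit; a real driver would notify the
        # per-node agents over RPC to realize the network.
        print(f"realize {segment['network_type']} "
              f"segment {segment.get('segmentation_id')}")

driver = BridgeMechanismDriverSketch()
segment = {"network_type": "vlan", "segmentation_id": 100}
driver.create_network_precommit(segment)
driver.create_network_postcommit(segment)
```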

5.2.5. Case demonstration:

The type driver is vlan, the mechanism driver is Linux Bridge, and the operation is creating the network vlan100:

(1) The vlan type driver ensures that the information about vlan100, such as the network name and the VLAN ID, is saved to the Neutron database.

(2) The Linux Bridge mechanism driver ensures that the Linux Bridge agent on each node creates a VLAN device with ID 100 on the physical network card, creates a bridge device, and bridges the two.

5.3. Service Plugin/Agent detailed explanation:

The Core Plugin/Agent is responsible for managing the core entities: net, subnet, and port. More advanced network services are managed by Service Plugins/Agents, which provide richer extension functions, including routing, load balance, firewall, and so on.

5.3.1. DHCP

The dhcp agent provides dhcp services for instances through dnsmasq.
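Roughly, the DHCP agent launches one dnsmasq process per network inside that network's qdhcp namespace. The invocation below is a simplified sketch; the UUID, interface name, and file path are placeholders, and the real flags vary by release.

```python
import subprocess

# Placeholders: the qdhcp namespace is named after the network UUID,
# and the tap interface/hosts file are managed by the agent.
subprocess.run([
    "ip", "netns", "exec", "qdhcp-<network-uuid>",
    "dnsmasq", "--no-hosts", "--no-resolv",
    "--interface=tap-demo",
    "--dhcp-range=set:subnet1,10.10.1.0,static",
    "--dhcp-hostsfile=/var/lib/neutron/dhcp/<network-uuid>/host",
], check=True)
```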

5.3.2. Routing

The L3 agent can create routers for projects (tenants) and provide routing services between Neutron subnets. The routing function is implemented through iptables by default.

5.3.3. Firewall

The L3 agent can configure firewall policies on the router to provide network security protection. Another security-related feature, the Security Group, is also implemented through iptables.

The difference between Firewall and Security Group is:

A Firewall security policy is located on the router and protects all networks of a given project.

A Security Group security policy is located on the instance and protects a single instance.

5.3.4. Load Balance

Neutron provides load balancing services for multiple instances in a project through HAProxy by default.

5.3.5. metadata-agent

        When an instance starts, it needs to access the nova-metadata-api service to obtain metadata and userdata: customized information for the instance, such as the hostname, IP, and public key. However, the instance does not yet have a usable IP address when it starts. The neutron-metadata-agent allows the instance to communicate with nova-metadata-api via the dhcp-agent or the L3-agent.
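From inside a booted instance, the metadata service is reachable at the well-known link-local address; a minimal sketch:

```python
import json
import urllib.request

# 169.254.169.254 is the standard metadata address; the metadata agent
# proxies the request to nova's metadata API on the instance's behalf.
URL = "http://169.254.169.254/openstack/latest/meta_data.json"

with urllib.request.urlopen(URL, timeout=5) as resp:
    meta = json.load(resp)

print(meta.get("hostname"), meta.get("uuid"))
```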

6. Summary of Neutron functions:

1. Neutron provides network services through plugins and agents.

2. Plugins reside in the Neutron server and include core plugins and service plugins.

3. Agents reside on each node and are responsible for actually implementing the network services.

4. The core plugin provides L2 functionality, and ML2 is the recommended plugin.

5. The most widely used L2 agents are linux bridge and open vswitch.

6. Service plugin and agent provide extended functions, including dhcp, routing, load balance, firewall, vpn, etc.

7. In-depth exploration of Neutron architecture

(1) Restful API: Provides API services to clients, including Core API and Extension API.

(2) Common Service: Common service, verifying and authenticating API requests from the upper layer.

(3) Neutron Core: Responsible for calling Plugin to handle requests from the upper layer.

(4) Plugin API: Provides an API for calling Plugin.

(5) Core Plugin is configured as ML2 Plugin by default, which is responsible for providing basic network functions and managing and maintaining the network. 

(6) The ML2 Plugin has two main types of drivers. Type Drivers: manage network types and maintain network state; the supported network types include Local, Flat, GRE, VLAN, VxLAN, and Geneve. Mechanism Drivers: manage the underlying network and implement isolation on the underlying network.

(7) Service Plugins are responsible for providing Layer 3 and above network services, such as routing, load balancing, firewall, and VPN services. L3 Service Plugin: provides routing and floating IP services. Load Balance Plugin: provides load balancing services. Firewall Plugin: provides firewall services. VPN Plugin: provides VPN services.

(8) Agents provide Layer 2 and Layer 3 network connectivity to virtual machines, perform the conversion between virtual and physical networks, and provide extension services. L2 Agent: responsible for connecting ports and devices. L3 Agent: responsible for connecting tenant networks to the data center or the Internet. DHCP Agent: automatically configures virtual machine IP addresses. Metadata Agent: provides the metadata service.

8. Neutron network types

Features of the Flat network model:

(1) Does not support virtual LAN and belongs to a flat network model.

(2) Linux Bridge directly binds the physical network card and connects all virtual machines in the Flat network.

(3) Each Flat network occupies an exclusive physical network card, and the physical network card cannot be configured with an IP address.

Features of the VLAN network model:

(1) Network isolation between multi-tenants can be achieved.

(2) Virtual machines in the same VLAN network can communicate, but virtual machines in different VLAN networks can only communicate through routers.

Features of the VxLAN network model:

(1) VxLAN network uses tunnel technology.

(2) Greatly expands the number of Layer 2 segments: VxLAN uses a 24-bit VNI, providing 2^24 = 16,777,216 (more than 16 million) possible segments, compared with 4,096 VLAN IDs.

(3) Tenant internal communications can span any IP network and support arbitrary migration of virtual machines.

9. Neutron network implementation model

9.1. Network devices in the Neutron architecture:

br-ex: Bridge connecting to the external network.

br-int: Integrated network bridge to which all instance's virtual network cards and other virtual network devices will be connected.

br-tun: Tunnel bridge. VxLAN and GRE networks based on tunnel technology will use this bridge for communication.

Tap interface: named tapXXXX; connects to virtual machines or to different namespaces, for example to the dhcp namespace on one side and to br-int (in the root namespace) on the other.

linux bridge: named qbrXXXX.

veth pair: named qvbXXXX, qvoXXXX

OVS patch ports: named int-br-ethX and phy-br-ethX (X is the serial number of the interface).

ethX: physical interface, where X is the interface number (the commands below show how to inspect these devices).
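Read-only commands for seeing these devices on a node (run as root; brctl comes from bridge-utils and may be absent on newer systems).

```python
import subprocess

# Read-only inspection of the virtual devices listed above.
for cmd in (
    ["ovs-vsctl", "show"],     # br-int, br-tun, br-ex and their ports
    ["brctl", "show"],         # qbrXXXX Linux bridges
    ["ip", "-brief", "link"],  # tapXXXX, qvbXXXX/qvoXXXX veth ends
):
    subprocess.run(cmd, check=False)
```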

9.2. Management network and data network in Neutron:

In OpenStack, the Management Network and Data Network have different roles and functions:

  1. Management Network:

    • Purpose: Used for management communication between various components of OpenStack, such as communication between API services, database services, message queue services and other components.

    • Function: Provides a network for managing and maintaining the OpenStack infrastructure, including management operations such as instance creation, deletion, status monitoring, log collection, and communication between services.

    • Security: Typically this network is designed as an internal network, allowing only specific management traffic, and may be protected by firewalls or security groups.

  2. Data Network:

    • Purpose: Used for communication between instances (virtual machines, containers, etc.) or between instances and external networks.

    • Features: Provides the network infrastructure for communication between instances or for instances to communicate with external networks.

    • Security: Data networks generally require broader network access, since they carry normal communication between instances as well as their connections to external networks.

Summary: the management network usually communicates over ordinary Ethernet, while the data network communicates over VxLAN tunnels, because a variety of virtual switches are created inside each host.
