Comparing Kubernetes CNI Networks: Flannel, Calico, Canal, and Weave

Introduction

Networking is one of the more complex aspects of Kubernetes and a frequent source of headaches for users. The Kubernetes network model demands certain network features while leaving some flexibility in how they are implemented. As a result, the industry has produced many different network solutions aimed at specific environments and requirements.

CNI stands for Container Network Interface, a standard designed to make it easy to configure container networking when containers are created or destroyed. In this article we will explore and compare the most popular CNI plugins: Flannel, Calico, Weave, and Canal (technically a combination of multiple plugins). These plugins both satisfy the requirements of the Kubernetes network model and provide the additional network features that Kubernetes cluster administrators need.

Background

Container networking is the mechanism by which containers connect to other containers, to the host, and to external networks such as the Internet. Container runtimes offer a variety of network modes, each of which produces a different experience. For example, Docker can configure the following networks for a container by default:

  • none: adds the container to a container-specific network stack with no external connectivity.
  • host: adds the container to the host's network stack, with no isolation.
  • default bridge: the default networking mode; containers can reach one another by IP address.
  • custom bridge: user-defined bridge networks offering more flexibility, isolation, and other convenience features.

Docker also lets users configure more advanced networking, including multi-host overlay networks, through additional drivers and plugins.

CNI was created as a framework for dynamic network configuration: it provisions the appropriate resources when containers are configured and releases them when containers are destroyed. The specification linked below defines the plugin interface used to set up container networking, allowing plugins to coordinate with the container runtime:

https://github.com/containernetworking/cni/blob/master/SPEC.md

Plugins are responsible for configuring and managing IP addresses, and typically provide IP management, per-container IP allocation, and multi-host connectivity. The container runtime calls the network plugin to allocate an IP address and configure the network when a container starts, and calls it again to clean up those resources when the container is removed.
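For concreteness, a network configuration of the kind described in the specification might look like the sketch below. The bridge plugin with host-local IPAM is used purely as an illustration, and the network name and subnet are assumptions, not values mandated by CNI.

```python
import json

# A minimal CNI network configuration of the kind a runtime reads from
# /etc/cni/net.d. "bridge" and "host-local" ship with the reference CNI
# plugins; the name and subnet below are illustrative.
net_conf = {
    "cniVersion": "0.3.1",
    "name": "example-net",          # network name chosen by the operator
    "type": "bridge",               # main plugin binary to invoke
    "bridge": "cni0",               # Linux bridge the plugin manages
    "isGateway": True,
    "ipMasq": True,
    "ipam": {
        "type": "host-local",       # delegate address allocation to host-local
        "subnet": "10.22.0.0/16",   # assumed pod subnet for this example
        "routes": [{"dst": "0.0.0.0/0"}],
    },
}

print(json.dumps(net_conf, indent=2))
```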

The runtime or orchestrator decides which network a container should join and which plugins it needs to call. The plugin then adds an interface into the container's network namespace as one side of a veth pair, and makes the corresponding changes on the host, such as connecting the other end of the veth pair to a bridge. After that, it assigns an IP address and sets up routes by calling a separate IPAM (IP Address Management) plugin.
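The specification defines this hand-off as the runtime executing the plugin binary with the command and container details passed through environment variables and the network configuration passed on stdin. The following is a minimal sketch of such an ADD call; the container ID, namespace path, and file locations are assumptions for illustration, not values any particular runtime uses.

```python
import json
import os
import subprocess

# Environment variables defined by the CNI spec for an ADD operation.
# The container ID, netns path, and config filename are illustrative values.
env = dict(
    os.environ,
    CNI_COMMAND="ADD",                      # ADD / DEL / CHECK / VERSION
    CNI_CONTAINERID="example-container-id",
    CNI_NETNS="/var/run/netns/example",     # container network namespace
    CNI_IFNAME="eth0",                      # interface name inside the container
    CNI_PATH="/opt/cni/bin",                # where plugin binaries live
)

with open("/etc/cni/net.d/10-example.conf") as f:
    net_conf = f.read()                     # network configuration JSON

# The runtime executes the plugin named in the config's "type" field and
# passes the configuration on stdin; the plugin prints its result as JSON.
result = subprocess.run(
    ["/opt/cni/bin/bridge"],
    input=net_conf,
    env=env,
    capture_output=True,
    text=True,
    check=True,
)
print(json.loads(result.stdout))   # IPs, routes, and interfaces the plugin set up
```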

In Kubernetes, the kubelet calls the plugins it finds at the appropriate times, so that networking is configured automatically for the pods it starts.

Terminology

Before comparing the CNI plugins, it is worth getting an overall picture of the networking terminology that will come up. Whether you are reading this article or digging into CNI-related material later, understanding a few common terms is always useful.

Some of the most common terms include:

  • Layer 2 networking: the "data link" layer of the OSI (Open Systems Interconnection) network model. A layer 2 network handles the delivery of frames between two adjacent nodes. A notable example of layer 2 networking is Ethernet, with MAC represented as a sublayer.
  • Layer 3 networking: the "network" layer of the OSI model. The main concern of layer 3 networking is routing packets between hosts on top of layer 2 connections. IPv4, IPv6, and ICMP are examples of layer 3 network protocols.
  • VXLAN: short for "Virtual Extensible LAN". VXLAN helps large cloud deployments scale by encapsulating layer 2 Ethernet frames in UDP datagrams. VXLAN virtualization is similar to VLAN but offers more flexibility and capability (VLANs are limited to 4,096 network IDs). VXLAN is an encapsulation and overlay protocol that runs on top of an existing network.
  • Overlay network: an overlay network is a virtual, logical network built on top of an existing network. Overlay networks are often used to provide useful abstractions on top of existing networks and to separate and secure different logical networks.
  • Encapsulation: encapsulation is the process of wrapping network packets in an additional layer to provide extra context and information. In overlay networks, encapsulation is used to translate from the virtual network to the underlying address space so that packets can be routed to a different location (where they are decapsulated and continue to their destination); the sketch after this list illustrates the per-packet cost this adds.
  • Mesh network: a mesh network is one in which each node connects to many other nodes and cooperates on routing, achieving greater connectivity. A mesh allows routing over multiple paths and therefore provides a more reliable network. The drawback of a mesh is that each additional node adds extra overhead.
  • BGP: short for "Border Gateway Protocol", which manages how packets are routed between edge routers. BGP figures out how to send packets from one network to another by taking into account available paths, routing rules, and specific network policies. BGP is sometimes used as the routing mechanism in CNI plugins instead of an encapsulated overlay network.
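Encapsulation is not free: each VXLAN packet carries an extra outer Ethernet, IP, UDP, and VXLAN header, which is why overlay interfaces typically advertise a smaller MTU than the underlying link. A quick back-of-the-envelope calculation:

```python
# Per-packet overhead added by VXLAN encapsulation over IPv4 (no VLAN tag).
OUTER_ETHERNET = 14   # outer Ethernet header
OUTER_IPV4 = 20       # outer IPv4 header
OUTER_UDP = 8         # outer UDP header
VXLAN_HEADER = 8      # VXLAN header carrying the 24-bit network ID

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
link_mtu = 1500       # typical Ethernet MTU on the underlay

print(f"VXLAN overhead: {overhead} bytes")            # 50 bytes
print(f"Usable overlay MTU: {link_mtu - overhead}")   # 1450 bytes
```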

With these terms in hand and an idea of the technologies the various plugins rely on, we can start exploring some of the most popular CNI plugins.

CNI Comparison

Flannel

Link: https://github.com/coreos/flannel

Flannel, a project developed by CoreOS, is probably the most straightforward and most popular CNI plugin available. It is one of the most mature examples of a networking fabric for container orchestration systems, designed to provide better networking between containers and between hosts. As the CNI concept took off, Flannel was one of the earliest entrants.

Compared with other solutions, Flannel is relatively easy to install and configure. It ships as a single binary, flanneld, and many common Kubernetes deployment tools and distributions can install Flannel by default. Flannel can use the Kubernetes cluster's existing etcd cluster to store its state information via the API, so it does not need a dedicated data store.

Flannel configures a layer 3 IPv4 overlay network. It creates a large internal network that spans every node in the cluster. Within this overlay, each node is given a subnet from which it allocates IP addresses internally. As pods are provisioned, the Docker bridge interface on each node assigns an address to each new container. Pods on the same host communicate over the Docker bridge, while traffic between pods on different hosts is encapsulated by flanneld in UDP packets and routed to the appropriate destination.
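To make the "one subnet per node" model concrete, the sketch below carves an assumed cluster-wide pod CIDR into per-node /24 subnets; the 10.244.0.0/16 range is only an example matching a common default, not something Flannel requires.

```python
from ipaddress import ip_network

# Assumed cluster-wide pod network; each node leases one /24 slice of it
# and hands out individual pod IPs from that slice.
cluster_cidr = ip_network("10.244.0.0/16")
nodes = ["node-1", "node-2", "node-3"]

for node, subnet in zip(nodes, cluster_cidr.subnets(new_prefix=24)):
    usable = subnet.num_addresses - 2            # minus network/broadcast addresses
    print(f"{node}: {subnet} ({usable} usable pod addresses)")
```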

Flannel supports several backends for encapsulation and routing. The default and recommended backend is VXLAN, because it performs better and requires less manual intervention.
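Flannel reads its pod network and backend choice from a small JSON document, the net-conf.json carried in the standard kube-flannel manifest's ConfigMap. A sketch of its shape, again with an assumed pod CIDR, looks like this:

```python
import json

# Shape of Flannel's net-conf.json: the cluster pod network, the per-node
# subnet length, and the backend used to move traffic between hosts.
# The CIDR is an assumption; "vxlan" is the default and recommended backend.
flannel_net_conf = {
    "Network": "10.244.0.0/16",   # cluster-wide pod network
    "SubnetLen": 24,              # size of each node's subnet lease
    "Backend": {
        "Type": "vxlan",          # other backends include "host-gw" and "udp"
    },
}

print(json.dumps(flannel_net_conf, indent=2))
```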

Overall, Flannel is a good choice for most users. From an administrative standpoint it offers a simple networking model, and with only some basic knowledge you can set up an environment suited to most use cases. In general, starting with Flannel is a safe bet, at least until you need something it cannot provide.

Calico

Link: https://github.com/projectcalico/cni-plugin

Calico is another popular networking choice in the Kubernetes ecosystem. While Flannel is recognized as the simplest option, Calico is known for its performance and flexibility. Calico is more full-featured: beyond providing network connectivity between hosts and pods, it also covers network security and policy management. The Calico CNI plugin wraps Calico's functionality within the CNI framework.

On a freshly configured Kubernetes cluster that meets the system requirements, Calico can be deployed quickly by applying a single manifest file. If you are interested in Calico's optional network policy features, you can enable them by applying additional manifests to the cluster.

Although the steps needed to deploy Calico look fairly simple, the network environment it creates has both simple and complex characteristics. Unlike Flannel, Calico does not use an overlay network. Instead, it configures a layer 3 network that uses the BGP routing protocol to route packets between hosts. This means packets do not need to be wrapped in an extra layer of encapsulation as they move between hosts; the BGP routing mechanism can direct them natively, without the additional step of packing traffic into another layer.

Besides the performance advantage, this also means that when network problems occur, users can troubleshoot them with more conventional methods. While encapsulation techniques such as VXLAN work well, the process manipulates packets in a way that makes them harder to trace. With Calico, standard debugging tools see the same information they would in a simpler environment, making it easier for developers and administrators to understand what the network is doing.

In addition to network connectivity, Calico is well known for its advanced network features. Network policy is one of its most sought-after capabilities. Calico can also integrate with the Istio service mesh to interpret and enforce policy for workloads in the cluster, both at the service mesh layer and at the network infrastructure layer. This means users can configure powerful rules describing how pods may send and receive traffic, improving security and control over the network environment.
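As a flavor of what such rules look like, the sketch below builds a standard Kubernetes NetworkPolicy that only lets pods labeled role=frontend reach pods labeled app=web on port 80. The labels and port are illustrative choices, and the policy is emitted as JSON, which kubectl accepts just like YAML.

```python
import json

# A minimal Kubernetes NetworkPolicy: allow ingress to pods labeled app=web
# on TCP port 80, but only from pods labeled role=frontend in the same
# namespace. The label values and port are assumptions for this example.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "web-allow-frontend"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "web"}},   # pods the policy protects
        "policyTypes": ["Ingress"],
        "ingress": [
            {
                "from": [{"podSelector": {"matchLabels": {"role": "frontend"}}}],
                "ports": [{"protocol": "TCP", "port": 80}],
            }
        ],
    },
}

# Written to a file, this could be applied with: kubectl apply -f <file>.json
print(json.dumps(policy, indent=2))
```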

If network policy support is important for your environment and you also want additional capabilities and features, Calico is an ideal choice. Furthermore, if you need, or may someday want, commercial support, Calico offers it. In general, Calico is a good choice when you want ongoing control over your network, rather than configuring it once and forgetting about it.

Canal

Link: https://github.com/projectcalico/canal

Canal is also an interesting choice for many reasons.

First, Canal was the name of a project that attempted to integrate the networking layer provided by Flannel with Calico's network policy capabilities. As the contributors worked through the details, however, it became clear that a full integration was not really necessary as long as both projects maintained standardization and flexibility. As a result, the official project became somewhat defunct, but it achieved its intended goal of making it possible to deploy the two technologies together. For that reason, even though the project no longer exists, the industry still habitually refers to the combination of Flannel and Calico as "Canal".

Because Canal is a combination of Flannel and Calico, its advantages lie at the intersection of the two technologies. The networking layer is the simple overlay provided by Flannel, which works across many different deployment environments without extra configuration. Layered on top are Calico's powerful network policy capabilities and rule evaluation, which supplement the base network with additional security and control.

After making sure the cluster meets the necessary system requirements (https://docs.projectcalico.org/v3.6/getting-started/kubernetes/requirements), users need to apply two manifests to deploy Canal, which makes it no more difficult to configure than either project on its own. Canal is a good choice for IT teams that plan to change their networking solution and want to experiment with network policy and build up experience before committing to the change.

Generally speaking, if you like the network model Flannel provides but find some of Calico's features appealing, Canal is worth trying. From a security standpoint, the ability to define network policy rules is a huge advantage and is in many ways Calico's killer feature. Being able to apply that technology on a familiar networking layer means you get a more capable environment while skipping most of the transition work.

Weave

Link: https://www.weave.works/oss/net/

Weave is a CNI networking option for Kubernetes provided by Weaveworks, and the model it offers differs from all the network solutions we have discussed so far. Weave creates a mesh overlay network between every node in the cluster, allowing flexible routing between participants. This, combined with several other unique features, lets Weave route intelligently in situations that would cause problems for other solutions.

To create the network, Weave relies on a routing component installed on every host. These routers exchange topology information to maintain an up-to-date view of the available network environment. When traffic needs to reach a pod on a different node, the Weave router automatically decides whether to send it over the "fast datapath" or to fall back to the "sleeve" packet-forwarding method.

The fast datapath relies on the kernel's native Open vSwitch datapath module to forward packets to the appropriate pod without moving in and out of userspace repeatedly. The Weave router updates the Open vSwitch configuration so that the kernel layer has accurate information about how to route incoming packets. Sleeve, by contrast, serves as the backup when the network topology is not suitable for fast datapath routing: it is a slower encapsulation mode that can route packets when the fast datapath lacks the necessary routing information or connectivity. As traffic flows through the routers, they learn which peers are associated with which MAC addresses, allowing them to route subsequent traffic more intelligently and with fewer hops. The same mechanism helps each node correct itself when changes in the network alter the available routes.

Like Calico, Weave also provides network policy capabilities for a Kubernetes cluster. Network policy is installed and configured automatically when you set up Weave, so apart from adding the network rules themselves, no extra configuration is needed. One thing Weave offers that the other options do not is simple encryption of the entire network. Although this adds a fair amount of network overhead, Weave can automatically encrypt all routed traffic: it uses NaCl (http://nacl.cr.yp.to) for sleeve traffic, and because VXLAN traffic for the fast datapath would need to be encrypted in the kernel, Weave uses IPsec ESP to encrypt fast datapath traffic.

For those who want a feature-rich network without added management complexity, Weave is a good choice. It is relatively easy to set up, offers many built-in capabilities with automatic configuration, and can route intelligently in scenarios where other solutions might fail. The mesh topology does put a limit on how large the network can reasonably grow, but for most users this is not a problem. Weaveworks also offers paid support, with troubleshooting and other services for enterprise customers.

Conclusion

Kubernetes' adoption of the CNI standard has allowed many network solutions to flourish within the ecosystem. The greater variety of options means most users can find a CNI plugin that suits their current needs and deployment environment, and can find new solutions as those circumstances change.

Operational requirements vary widely between businesses, so having a set of mature solutions at different levels of complexity and feature richness greatly helps Kubernetes meet unique needs while still providing a fairly consistent user experience.

Source: www.cnblogs.com/jinanxiaolaohu/p/11271653.html