[K8S series] In-depth analysis of k8s network plug-in - Flannel

 

Preamble

Doing something is not difficult; the difficulty lies in persisting. Persisting is not hard either; the hard part is persisting to the very end.


Kubernetes (k8s) is a container orchestration platform for running applications and services in containers. Today we take a look at k8s network plugins.

I hope this article not only teaches you something useful, but is also enjoyable to read. If you have any suggestions, feel free to leave a comment and discuss them with me.

 Column introduction

This article belongs to the [In-depth analysis of k8s] column; you are welcome to subscribe.

A brief introduction to what this column covers:

Kubernetes is a distributed system for managing and orchestrating containerized applications. Monitoring is a very important aspect of it, helping users understand the health, performance, and availability of the cluster.

This article introduces Flannel, one of the Kubernetes network plugins, in detail.

 

1 Basic introduction 

In Kubernetes, network plugins are also called Container Network Interface (CNI) plugins; they implement communication and network connectivity between containers. Here are some common Kubernetes network plugins:

  1. Flannel: Flannel is a popular CNI plugin that uses a virtual overlay network to connect containers on different nodes. It supports a variety of backend drivers, such as VXLAN, UDP, and Host-GW.

  2. Calico: Calico is an open source networking and security solution that uses the BGP protocol to route traffic between containers. It supports flexible network policies and security rules, and is suitable for large-scale deployments.

  3. Weave Net: Weave Net is a lightweight CNI plugin that connects containers on different nodes by creating virtual network devices and network proxies. It supports both an overlay mode and a direct connection mode, making it flexible.

  4. Cilium: Cilium is a high-performance networking and security solution for Kubernetes that uses eBPF (extended Berkeley Packet Filter) technology to provide fast inter-container communication and network policy enforcement.

  5. Canal: Canal is a combined CNI plugin that brings together the functionality of Calico and Flannel. It can use Flannel to provide the overlay network while using Calico's network policy and security features.

  6. Antrea: Antrea is an Open vSwitch-based CNI plugin designed for Kubernetes networking and security. It provides high-performance network connectivity and network policy functions.

  7. kube-router: kube-router is an open source CNI plugin that combines networking and service proxy functionality. It supports the BGP and IPIP protocols and has load balancing features.

These are some of the common Kubernetes networking plugins; each has its own strengths and use cases. Choosing the right one depends on factors such as your requirements, network topology, and performance needs. The Kubernetes community also keeps evolving and introducing new networking plugins to meet changing needs.
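Whichever plugin you pick, kubelet discovers it through CNI configuration files on each node. As a quick, hedged sketch of how to see which plugin a node is using: this assumes the default configuration directory /etc/cni/net.d and the 10-flannel.conflist file name that Flannel's standard manifest installs; the exact paths and names can differ between distributions.

    # List the CNI configurations kubelet reads on this node (default directory)
    ls /etc/cni/net.d/

    # Inspect Flannel's CNI configuration (file name as installed by the standard manifest)
    cat /etc/cni/net.d/10-flannel.conflist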

2 Flannel plugins

Flannel is a container network solution widely used in Kubernetes clusters. It aims to simplify communication between containers on different nodes and make them reachable from one another. Flannel uses an overlay network (a virtual network layered on top of the existing node network) to connect containers across nodes, providing a lightweight and easy-to-deploy solution.

2.1 How Flannel works

Flannel works as follows:

  1. Node registration: Each node in the Kubernetes cluster is registered in etcd (a distributed key-value store) and assigned a unique subnet (Subnet) for its containers.

  2. Routing table management: Flannel runs an agent (flanneld) on each node that maintains the routing information between nodes. This routing information is also stored in etcd.

  3. Virtual network: Flannel uses overlay technology to create a virtual network layered on top of the underlying node network. When containers need to communicate across nodes, their traffic is carried over this overlay.

  4. Container communication: When container A needs to talk to container B on another node, the packet travels over the overlay network to the node hosting container B and is then forwarded to container B on that node.
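To make this concrete, the commands below can be run on a node that uses the default VXLAN backend to inspect what flanneld has set up. The interface name flannel.1, the bridge name cni0, and the file /run/flannel/subnet.env are the conventional defaults; they may differ in your configuration.

    # Subnet assigned to this node by Flannel (written by flanneld)
    cat /run/flannel/subnet.env

    # The VXLAN device Flannel creates for the overlay (default backend)
    ip -d link show flannel.1

    # Routes that steer other nodes' pod subnets onto the overlay
    ip route | grep -E 'flannel|cni0'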

2.2 Common Flannel backend drivers

Flannel supports a variety of backend drivers to implement the overlay network. Here are some common Flannel backends:

  1. VXLAN: Encapsulates packets with VXLAN to provide Layer 2 connectivity across nodes. It is Flannel's default backend.

  2. UDP: Encapsulates packets in UDP and tunnels them between nodes to enable communication between containers.

  3. Host-GW: Uses the host routing table to send traffic for a target subnet directly to the node hosting it, without encapsulation (this requires Layer 2 connectivity between the nodes).
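The backend is selected through the "Backend" section of Flannel's net-conf.json (typically stored in the kube-flannel-cfg ConfigMap, as shown in section 2.6). A minimal sketch of switching from the default VXLAN to Host-GW, using the default 10.244.0.0/16 pod network:

    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }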

2.3 Advantages & Disadvantages

2.3.1 Advantages

Advantages of Flannel include:

  • Ease of use: Flannel is relatively simple to deploy and places few requirements on the Kubernetes cluster's configuration, making it easy to use and manage.

  • Lightweight: Flannel is designed to stay lightweight, with relatively low demands on the underlying network infrastructure.

  • Scalability: Flannel supports large-scale container deployments and can adapt to a growing number of containers.

2.3.2 Disadvantages

However, Flannel also has some limitations:

  • Performance: Flannel's performance is usually sufficient, but in certain scenarios the extra encapsulation of the overlay network can introduce some performance loss.

  • Reliability: Flannel's reliability depends on the stability of the underlying network and of etcd. If etcd has problems, Flannel's functionality may be affected.

2.4 Steps to use

The following walks through how Flannel is typically used:

  1. Install Flannel: Flannel is usually installed as one of the networking plugins of a Kubernetes cluster. It can be deployed with the kubectl command-line tool or a Kubernetes manifest. The installation process varies with the Kubernetes version and deployment method, so refer to Flannel's official documentation for installation and configuration details (a minimal install-and-verify sketch follows this list).

  2. Configure etcd: Flannel uses etcd (a distributed key-value store) to store node registration information and routing information. Before deploying Flannel, make sure your etcd cluster is properly set up and running.

  3. Configure Flannel: On each node of the Kubernetes cluster, the Flannel agent (flanneld) must be configured. The agent uses the subnet information stored in etcd to assign a unique subnet to each node, so every node knows how to reach containers on other nodes.

  4. Choose a backend driver: Flannel supports several backend drivers for the overlay network. VXLAN is the default, but you can choose others such as UDP or Host-GW as needed. The choice of backend can affect performance and network topology, so pick one that suits your situation.

  5. Verify the network: After installation, use kubectl or other network testing tools to verify that the Flannel network works. Make sure containers can communicate across nodes and that connections are stable and reliable.

  6. Network policy: By default, all containers in a Flannel network can reach each other. If you need more fine-grained control, you can use Kubernetes' NetworkPolicy feature to restrict access between containers.
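As referenced in step 1, here is a minimal install-and-verify sketch. The manifest URL shown is the one published in the flannel-io/flannel repository; confirm the current URL, and that the manifest's pod CIDR matches your cluster, in the official documentation before applying it.

    # Install Flannel from the published manifest (verify the current URL first)
    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

    # Verify that the Flannel agent is running on every node (namespace varies by manifest version)
    kubectl get pods -A -o wide | grep flannel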

It's important to note that Flannel is a simple networking solution that covers most basic Kubernetes networking needs. If your scenario has higher requirements for network performance or security, you may need to consider more advanced solutions such as Calico or Cilium.

 

2.5 Example demonstration

Here is a simple example showing how to use Flannel to create an overlay network in a Kubernetes cluster.

2.5.1 Cluster description 

Suppose we have a 3-node Kubernetes cluster, where the node IP addresses are as follows:

  • Node1: 192.168.1.101
  • Node2: 192.168.1.102
  • Node3: 192.168.1.103

2.5.2 Configuration explanation

Here is an example of a simple Flannel configuration:

  1. Deploy the etcd cluster: First, make sure the etcd cluster has been deployed and that all nodes can connect to it.

  2. Install Flannel: Use kubectl to create a Flannel DaemonSet and ensure that each node is running the Flannel agent (flanneld).

    # flannel.yaml
    
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds
      namespace: kube-system
      labels:
        app: kube-flannel
    spec:
      selector:
        matchLabels:
          app: kube-flannel
      template:
        metadata:
          labels:
            app: kube-flannel
        spec:
          hostNetwork: true
          containers:
            - name: kube-flannel
              image: quay.io/coreos/flannel:v0.14.0
              command:
                - /opt/bin/flanneld
                - --ip-masq
                - --kube-subnet-mgr
              env:
                - name: POD_IP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
              securityContext:
                privileged: true
    

  3. Make sure all nodes are registered with etcd: the Flannel agent automatically registers each node with etcd and assigns it a subnet. You can use etcdctl to verify that a node has registered successfully (a verification sketch follows this example).

  4. Validate the network: run a test container and check that it can reach workloads on other nodes.

    # Create a test container
    kubectl run test-pod --image=busybox --restart=Never --rm -it -- sh
    
    # Inside the test container, test network communication
    # For example, ping another node (or the pod IP of a container on that node)
    ping 192.168.1.102
    

The above is only a simple Flannel configuration example; an actual deployment may involve more details and parameters. In a production environment, be sure to review Flannel's official documentation carefully and configure everything according to your actual needs.
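As a sketch of the check mentioned in step 3 above: the first command assumes flanneld stores its state in etcd under its conventional /coreos.com/network prefix; when flanneld runs with --kube-subnet-mgr (as in the DaemonSet above), the per-node subnets come from the Kubernetes API instead.

    # etcd-backed flanneld: list the subnets registered by each node (conventional prefix, adjust if customized)
    etcdctl get /coreos.com/network/subnets --prefix --keys-only

    # kube-subnet-mgr mode: the subnets come from each node's spec.podCIDR
    kubectl get nodes -o custom-columns=NODE:.metadata.name,POD_CIDR:.spec.podCIDR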

2.6 Other explanations

Beyond the basic configuration and usage covered above, a few other points about Flannel are worth noting:

  •  Subnet selection: By default, Flannel uses 10.244.0.0/16 as the subnet for the container network. If your cluster already uses this range, or another range that conflicts with it, explicitly specify a different subnet in the Flannel configuration to avoid conflicts. In the example below, replace 192.168.0.0/16 in the "Network" field with the subnet you want (it must not overlap with your node network):
    # flannel-config.yaml
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
    data:
      cni-conf.json: |
        {
          "cniVersion": "0.3.1",
          "name": "cbr0",
          "type": "flannel",
          "delegate": {
            "forceAddress": false,
            "isDefaultGateway": true
          }
        }
      net-conf.json: |
        {
          "Network": "192.168.0.0/16",  # 替换为你想要的子网段
          "Backend": {
            "Type": "vxlan"
          }
        }
    
  •  Container network policy: By default, Flannel allows all containers in the cluster to communicate with each other, with no access control. If you need fine-grained access control between containers, you can use Kubernetes' NetworkPolicy feature (a minimal example follows this list). Note that Flannel itself does not enforce NetworkPolicy; enforcement requires a policy-capable plugin such as Calico (or the Canal combination).

  • Multi-network plugin support: In some complex scenarios you may need to use multiple network plugins together, such as the Calico and Flannel combination (Canal). In that case, make sure they work together correctly and avoid conflicts.

  • High availability and fault recovery: Flannel depends on the underlying network and on etcd, so in a production environment you need high availability and fault recovery mechanisms for both to keep the network stable.
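As referenced in the network policy note above, here is a minimal NetworkPolicy sketch that only allows pods labeled app=frontend to reach pods labeled app=web in the default namespace. The labels and namespace are illustrative, and enforcement requires a policy-capable plugin (for example Canal, which pairs Flannel with Calico's policy engine):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-web
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: web            # policy applies to these pods (illustrative label)
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend   # only these pods may connect (illustrative label)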

3 Summary 

Overall, Flannel is a simple, lightweight and reliable container network solution, especially suitable for initial deployment or scenarios that do not have particularly demanding network requirements.

It is widely used and well supported in the Kubernetes community, is compatible with other network solutions (such as Calico, Weave, etc.), and can be chosen flexibly according to your needs.
