[K8S Series] In-depth analysis of k8s network plug-in—Antrea

Preface

Doing something is not hard; what is hard is persisting. Persisting for a while is not hard; what is hard is persisting to the end.

In the world of modern containerized applications, Kubernetes has become the standard container orchestration platform. Kubernetes is a distributed system, and to support complex applications and microservice architectures, the network is an integral part of a Kubernetes cluster.

Kubernetes is able to manage and orchestrate containerized applications, and monitoring is an important part of operating it, helping users understand the health, performance, and availability of the cluster.

In this article, we will take a detailed look at Antrea, one of the Kubernetes network plug-ins.

I hope this article not only gives you something to take away, but also makes learning enjoyable. If you have any suggestions, feel free to leave a comment and discuss them with me.

Column introduction

This article is part of the 【In-depth analysis of k8s】 column; you are welcome to subscribe.

1 Basic introduction 

In Kubernetes, the network plug-in is also called the Container Network Interface (CNI) plug-in, which is used to implement communication and network connections between containers. Here are some common Kubernetes network plugins:

  1. Flannel: Flannel is a popular CNI plug-in that uses virtual network overlay technology (overlay network) to connect containers on different nodes. Flannel supports a variety of back-end drivers, such as VXLAN, UDP, Host-GW, etc.

  2. Calico: Calico is an open source network and security solution that uses the BGP protocol to implement routing between containers. Calico supports flexible network policies and security rules and can be used for large-scale deployment.

  3. Weave Net: Weave Net is a lightweight CNI plug-in that connects containers on different nodes by creating virtual network devices and network proxies. Weave Net supports overlay mode and direct connection mode, providing flexibility.

  4. Cilium: Cilium is a high-performance network and security solution for Kubernetes, using eBPF (Extended Berkeley Packet Filter) technology to provide fast inter-container communication and network policy implementation.

  5. Canal: Canal is a comprehensive CNI plug-in that combines the functions of Calico and Flannel. It can use Flannel to provide an overlay network, while using Calico's network policy and security features.

  6. Antrea: Antrea is a CNI plug-in based on Open vSwitch, designed for Kubernetes networking and security. It provides high-performance network connectivity and network policy capabilities.

  7. kube-router: kube-router is an open source CNI plug-in that combines network and service proxy functions. It supports BGP and IPIP protocols and has load balancing features.

These are some common options among Kubernetes network plugins, each with its own specific advantages and applicable scenarios. Choosing the right network plug-in depends on factors such as your needs, network topology, and performance requirements.

At the same time, the Kubernetes community is constantly evolving and launching new network plug-ins to meet changing needs.
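
As a quick way to see which CNI plug-in a cluster is actually using, you can look at the standard CNI configuration directory on a node. This is a general CNI convention rather than anything plug-in specific, and the path may differ in some distributions:

# On a cluster node: list the CNI configuration files the container runtime will use
ls /etc/cni/net.d/
# Each plug-in (Antrea, Calico, Flannel, ...) installs its own .conf or .conflist file here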

2 Introduction to Antrea

Antrea is a powerful K8s network plug-in with the advantages of high performance, network policy and observability, and is suitable for K8s clusters of various sizes and needs.

By gaining an in-depth understanding of Antrea's core concepts, advantages and disadvantages, usage scenarios, and installation steps, you can make better use of it to manage and protect your containerized applications.

2.1 Concept introduction

Antrea is an open source K8s network plug-in that aims to provide high-performance, secure, and scalable network connectivity and network policies. The following are the core concepts of Antrea:

  1. CNI plug-in: Antrea is a CNI (Container Network Interface) plug-in responsible for managing the network interfaces and communication of containers in a K8s cluster. It implements the K8s network model, enabling containers to communicate with each other transparently.

  2. Open vSwitch (OVS): Antrea uses OVS as its data plane. OVS is a high-performance virtual switch that handles the forwarding of network packets, and its programmable data plane enables Antrea to implement advanced network functions (see the short example after this list).

  3. Network Policy: Antrea supports K8s network policy, allowing administrators to define which containers may communicate with which other containers and how that traffic is secured. This helps ensure network security and isolation within the cluster.

  4. Service proxy: Antrea also provides a service proxy function, which lets K8s Services communicate transparently with their backend Pods without exposing the Pods' IP addresses.
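
To make the OVS data plane more concrete, here is a rough sketch of how you might inspect the bridge that Antrea programs on a node, assuming an upstream-style installation where the agent DaemonSet runs in kube-system with an antrea-ovs container; the label, Pod, and container names below may differ across releases:

# List the Antrea Pods (the agent runs as a DaemonSet on every node)
kubectl get pods -n kube-system -l app=antrea -o wide

# Dump the OVS bridge and ports managed by the agent on one node
# (replace antrea-agent-xxxxx with a real Pod name from the command above)
kubectl exec -n kube-system antrea-agent-xxxxx -c antrea-ovs -- ovs-vsctl show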

2.2 Advantages and Disadvantages

Advantages:

  1. Lightweight: Antrea is designed to be very lightweight, occupying few resources and having little impact on system performance.
  2. Easy to configure: Antrea provides a simple and easy-to-use configuration file to facilitate users to get started quickly.
  3. High performance: Antrea uses efficient data structures and algorithms to ensure good performance.
  4. Supports multiple protocols: Antrea supports multiple protocols such as TCP and UDP to meet the needs of different scenarios.
  5. Scalability: Antrea provides a rich API that makes it easy to extend and customize.
  6. Observability: Built on Open vSwitch, Antrea provides rich network observability features that help administrators better understand network conditions.

Disadvantages:

  1. Limited functionality: Compared with other mature K8s network plug-ins, Antrea has relatively few features and may not meet the needs of some complex scenarios.
  2. Limited community support: Because Antrea is relatively new, its community support and documentation may not be as rich as those of more mature plug-ins.
  3. Complexity: Antrea can be somewhat complex to set up and configure for beginners, especially when advanced network policies are required.

  4. OVS dependency: Antrea relies on OVS as the data plane, which may introduce additional complexity in some environments.

2.3 Usage scenarios

Antrea is suitable for the following scenarios:

  1. Microservice architecture: In a microservice architecture, communication and load balancing between services are very important. Antrea can help realize automatic discovery and load balancing of services, improving system scalability and availability.
  2. Containerized deployment: In the scenario of containerized deployment, network plug-ins are an essential component. Antrea can help containers communicate with each other and connect to the external network.
  3. Edge computing: In edge computing scenarios, services are widely distributed, requiring efficient communication and load balancing. Antrea can meet these needs and improve the utilization of edge nodes.
  4. Large-scale clusters: When you need to achieve high-performance container communication in a large-scale K8s cluster, Antrea is a good choice.

  5. Network policy requirements: In multi-tenant environments that require precise network policy control, security, and isolation, Antrea's network policy features are very useful.

  6. Observability requirements: If detailed network monitoring and logging are required for troubleshooting and performance optimization, Antrea provides these capabilities (see the Traceflow sketch below).
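
As an illustration of the observability point, Antrea ships a Traceflow CRD that injects a trace packet and reports the path and the rules it hits. The sketch below follows the upstream Traceflow guide, but the API group/version and field names vary between Antrea releases, and the Pod names are placeholders, so treat it as an outline and check the documentation for your version:

apiVersion: crd.antrea.io/v1alpha1
kind: Traceflow
metadata:
  name: tf-test
spec:
  source:
    namespace: default
    pod: pod-a          # placeholder source Pod
  destination:
    namespace: default
    pod: pod-b          # placeholder destination Pod
  packet:
    ipHeader:
      protocol: 6       # TCP
    transportHeader:
      tcp:
        dstPort: 80

After applying it, kubectl get traceflow tf-test -o yaml shows the observations recorded along the path.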

3 Installation and use

To install the Antrea plugin, you can follow these steps:

  1. Download Antrea YAML file

  2. Edit YAML file

  3. Apply YAML file

  4. Wait for installation to complete

  5. Configure network policy

  6. Test

3.1: Download Antrea YAML file

Execute the following command on a machine in the K8s cluster to download Antrea's YAML file. The latest version of the YAML file can be obtained from Antrea's GitHub repository.

curl -O https://raw.githubusercontent.com/vmware-tanzu/antrea/main/build/yamls/antrea.yml

3.2: Editing YAML files

Open the downloaded Antrea YAML file (usually named antrea.yml) and edit it according to the cluster requirements. The file can be opened using a text editor and configured as needed. Here's an example:

apiVersion: operator.antrea.io/v1alpha1
kind: AntreaCluster
metadata:
  name: antrea-cluster
spec:
  defaultAntreaAgent: {}
  controller:
    # Configuration options for the Antrea controller
    service:
      type: LoadBalancer  # Choose a Service type suitable for your cluster
    networkPolicy:
      enable: true  # Enable network policy
  agent:
    # Configuration options for the Antrea agent
    logLevel: info  # Set the log level
    ovs:
      bridgeName: br-int  # Name of the OVS bridge
    podCIDR: 192.168.0.0/16  # CIDR range for Pods

Ensure that the configuration in the file is consistent with the K8s cluster topology and network policy requirements. Save and close the file.
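
Note that the configuration model depends on how Antrea is installed: the sample above follows an operator-style CRD, while the stock antrea.yml manifest keeps its settings in a ConfigMap (commonly named antrea-config, sometimes with a hash suffix) containing antrea-agent.conf and antrea-controller.conf. The sketch below is only illustrative; the available keys and their defaults vary by release, so check the comments inside your downloaded manifest:

apiVersion: v1
kind: ConfigMap
metadata:
  name: antrea-config
  namespace: kube-system
data:
  antrea-agent.conf: |
    # Example agent settings (check your release for the supported keys)
    trafficEncapMode: encap
    featureGates:
      AntreaProxy: true
  antrea-controller.conf: |
    # Example controller settings
    featureGates:
      AntreaPolicy: true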

3.3: Apply YAML file

Use kubectl or other K8s cluster management tools to apply the edited YAML file to your K8s cluster. Execute the following command:

kubectl apply -f antrea.yml

This will start the Antrea plugin installation and configuration process.

3.4: Wait for the installation to complete

Wait for a while until the Antrea plugin is automatically installed and configured in the K8s cluster. You can use the following command to check whether Antrea-related Pods are running:

kubectl get pods -n kube-system | grep antrea

When all related Antrea Pods are in the "Running" state, the installation is complete.

antrea-agent-74d2s               1/1     Running     4m
antrea-controller-9x6z2          1/1     Running     4m
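
Once the agent is running on every node, the nodes themselves should report Ready, since kubelet keeps a node NotReady while no CNI network is configured. A quick sanity check:

kubectl get nodes
# All nodes should show STATUS "Ready" once the Antrea agent has set up the CNI on them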

3.5: Configure network policy

Based on specific needs, use K8s network policies to define communication rules between containers. Network policy objects can be created and applied to control traffic between containers.
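
A common starting point is a default-deny ingress policy for a namespace, on top of which you then allow only the traffic you need; Antrea enforces it like any other K8s NetworkPolicy (the namespace name below is just a placeholder):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace
spec:
  podSelector: {}      # selects every Pod in the namespace
  policyTypes:
  - Ingress            # no ingress rules are listed, so all ingress traffic is denied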

3.6: Testing

Finally, ensure that containers in the K8s cluster can communicate according to your network policies while meeting security and isolation requirements. You can deploy some test applications and ensure that they adhere to the defined network policies.

This example deploys two Nginx Pods as the test application and restricts which Pods are allowed to communicate with them.

Step 1: Create a namespace

First, create a new namespace to isolate our test application:

kubectl create namespace test-namespace

Step 2: Deploy two Nginx Pods

Create two Nginx Pods and deploy them into the namespace you just created:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  namespace: test-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  namespace: test-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

Save the above YAML as nginx-deployment.yaml and then deploy it using the following command:

kubectl apply -f nginx-deployment.yaml
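
Before applying any policy, you can confirm that both Pods are up:

kubectl get pods -n test-namespace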

Step 3: Define network policy

Create a network policy that restricts which Pods may send traffic to the Nginx Pods.

In this example, Pods labeled app: nginx accept ingress traffic only from other Pods that also carry the app: nginx label; traffic from any other Pod (such as the test Pod created in Step 4) is denied:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-nginx-communication
  namespace: test-namespace
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx

Save the above YAML file as network-policy.yaml and then create the network policy using the following command:

kubectl apply -f network-policy.yaml
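
You can confirm that the policy object was created and inspect how it is interpreted:

kubectl get networkpolicy -n test-namespace
kubectl describe networkpolicy deny-nginx-communication -n test-namespace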

Step 4: Test network policy

Now we have defined a network policy under which the Nginx Pods only accept traffic from Pods labeled app: nginx. We can test this from a temporary Pod that does not carry that label:

# Create a temporary Pod for testing connectivity (it does not carry the app: nginx label)
kubectl run -i --tty --rm debug --image=nginx --namespace=test-namespace -- bash

# From inside the temporary Pod, try to reach the other Pod's IP address
# (if curl is not available in the image, use another client image instead)
curl <IP_OF_NGINX_DEPLOYMENT_2_POD>
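
The placeholder above stands for the Pod IP of the second Nginx Pod; it is intentionally left as a placeholder, and you can look the address up with, for example:

# Run this outside the temporary Pod, in another terminal
kubectl get pods -n test-namespace -o wide | grep nginx-deployment-2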

If the network policy is in effect, you will see a connection timeout or a similar error, indicating that the temporary Pod cannot reach the Nginx Pod:

curl: (7) Failed to connect to <IP_OF_NGINX_DEPLOYMENT_2_POD> port 80: Connection timed out

Through this example, you can see how to use Kubernetes network policies to ensure that communication between containers meets security and isolation requirements.

Depending on the actual needs, more complex network policies can be defined to meet specific application and security requirements.
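
When you are done testing, you can remove everything created in this example by deleting the namespace:

kubectl delete namespace test-namespace
# This removes the Deployments, Pods, and the NetworkPolicy created above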
