Kubernetes Network Application Scenarios

Network Application Scenarios
Now that we have a Kubernetes cluster, let's stop and think for a moment. Besides keeping the cluster itself running, and letting those Pending CoreDNS Pods finally start after installing Kubernetes (just being a bit cheeky -_-), what other usage scenarios does the container network actually have?
[Figure: the seven main network usage scenarios]
Here I summarize the seven main usage scenarios in one picture; they should cover most of the network requirements that operations engineers run into.

Fixed IP. When an existing virtualized/bare-metal workload or monolithic application is migrated into the container environment, inter-service calls are made by IP rather than by domain name. The CNI plug-in therefore needs to support fixed IPs, covering Pods, Deployments, and StatefulSets.
Network isolation. Pods of different tenants or different applications should not be able to call or reach each other (a baseline sketch using the standard NetworkPolicy API follows this list).
Multi-cluster network interconnection. When microservices in different Kubernetes clusters need to call each other, the cluster networks must be interconnected. This scenario is generally split into IP reachability and Service intercommunication, to cover the different ways microservice instances call one another.
Outbound restrictions. For databases/middleware outside the container cluster, only container applications with specific attributes should be allowed to access them; all other connection requests are rejected.
Ingress restrictions. Restrict which applications outside the cluster can access specific container applications.
Bandwidth limitation. Network traffic between container applications is rate-limited.
Egress gateway access. Containers that access specific applications outside the cluster are routed through an egress gateway that performs SNAT on their traffic, to satisfy auditing and security requirements for a unified egress address.

After sorting out the requirements and application scenarios, let's look at how different CNI plug-ins address these pain points.
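Before diving into plug-in specifics, note that the isolation and ingress/egress-restriction scenarios above map directly onto the standard Kubernetes NetworkPolicy API, which all of the CNIs discussed here implement. A minimal sketch (the namespace, labels, CIDR, and port are hypothetical placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-restrictions        # hypothetical name
  namespace: tenant-a               # hypothetical tenant namespace
spec:
  podSelector:
    matchLabels:
      app: backend                  # the Pods being protected
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:                           # only the same tenant's frontend may call in
    - podSelector:
        matchLabels:
          app: frontend
  egress:
  - to:                             # only the external database subnet may be reached
    - ipBlock:
        cidr: 10.10.0.0/24          # placeholder subnet of a database outside the cluster
    ports:
    - protocol: TCP
      port: 3306
```

Bandwidth limiting and egress gateways, on the other hand, are not part of the NetworkPolicy API and rely on CNI-specific features.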
Network plug-in function implementation
Fixed IP
Basically all the mainstream CNI plug-ins ship their own IPAM mechanism and support assigning fixed IPs and IP pools, and each of them specifies the fixed IP through annotations. A Pod is assigned a single fixed IP, while a Deployment uses an IP pool. For a stateful StatefulSet, once an IP pool is assigned, the plug-in remembers each Pod's IP according to the allocation order of the pool, so a Pod gets the same IP back after it restarts.

Calico
cni.projectcalico.org/ipAddrs: '["192.168.0.1"]'
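For context, the annotation goes into the Pod's metadata. A minimal sketch, where the Pod name and image are illustrative and the address must belong to a Calico IP pool that allows it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fixed-ip-pod                                     # illustrative name
  annotations:
    cni.projectcalico.org/ipAddrs: '["192.168.0.1"]'     # fixed IP from the example above
spec:
  containers:
  - name: app
    image: nginx:alpine                                  # illustrative image
```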
Kube-OVN
ovn.kubernetes.io/ip_address: 192.168.100.100
ovn.kubernetes.io/ip_pool: 192.168.100.201,192.168.100.202
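For the Deployment/IP-pool case, the ip_pool annotation goes on the Pod template so that each replica takes one address from the pool. A minimal sketch (Deployment name, labels, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                           # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        ovn.kubernetes.io/ip_pool: 192.168.100.201,192.168.100.202   # pool from the example above
    spec:
      containers:
      - name: web
        image: nginx:alpine           # illustrative image
```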
Antrea
Antrea IPAM can only be used in Bridge mode, so with the help of Multus the primary NIC is allocated by NodeIPAM while the secondary NIC uses Antrea IPAM to allocate VLAN-type network addresses.

ipam.antrea.io/ippools: 'pod-ip-pool1'
ipam.antrea.io/pod-ips: '<ip-in-pod-ip-pool1>'

Cilium
Not Yet!
Multi-cluster network interconnection
For multi-cluster network interconnection, assume there are several existing clusters and different microservices run in different clusters: App01 in cluster 1 needs to communicate with App02 in cluster 2. Since both are registered by IP in a VM-based registry outside the clusters, App01 and App02 can only talk to each other via IP. In this scenario, the Pods of multiple clusters need to be interconnected.

Calico
For a CNI plug-in like Calico, which has good native BGP support, this is easy to achieve: the two clusters establish BGP neighbors and announce their routes to each other, and dynamic routing is in place. With more clusters, BGP route reflectors (RR) can be used as well. However, this solution may not be the most ideal, because it has to be planned and jointly debugged with the physical network, which means network engineers and container operators must build the multi-cluster network together, and later operation, maintenance, and management become less convenient and agile.
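As a rough sketch of what the peering looks like on the Calico side, each cluster declares a BGPPeer resource pointing at the other side's node or router; the peer address and AS number below are placeholders:

```yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-to-cluster2          # illustrative name
spec:
  peerIP: 192.0.2.10              # placeholder: a node or router in the other cluster
  asNumber: 64513                 # placeholder: AS number of the peer
```

Each side also needs to announce its own Pod CIDR, and with many clusters the peers can point at a pair of route reflectors instead of forming a full mesh.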

What about Calico VxLAN mode?

Since VxLAN has come up, Calico can be looked at together with Kube-OVN, Antrea, and Cilium: all four CNIs support the overlay network model and can establish VxLAN/GENEVE tunnel networks to carry container-to-container traffic. This gives operators a lot of flexibility: container network adjustments, IPAM allocation, network monitoring and observability, and network policy changes are all handled by the container cluster operators, while the network engineers only need to plan a sufficiently large physical network segment in advance to guarantee connectivity between the Nodes of the container cluster.

So how do we achieve multi-cluster interconnection for overlay networks?

Submariner
CNCF has a sandbox project called Submariner, which achieves multi-cluster communication by setting up dedicated gateway nodes in each cluster and opening tunnels between them. Let's walk through the official architecture diagram:
[Figure: Submariner architecture]
To put it simply, Submariner uses a cluster-metadata broker service to keep track of the information of the different clusters (Pod/Service CIDRs), uses the Route Agent to steer Pod traffic from the Node to the gateway node (Gateway Engine), and the gateway node then sends the traffic through a tunnel to the other cluster. Conceptually this is the same as using VxLAN for container communication across different hosts. Connecting clusters is also very simple: deploy the Broker in one of the clusters, then register each cluster with it via its kubeconfig or context.
Cilium
Cilium Cluster Mesh is similar to Submariner and can interconnect clusters through tunnels or native routing.

[Figure: Cilium Cluster Mesh]

Turning on multi-cluster network connectivity with Cilium is also very simple:
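A minimal sketch of what this involves on the Helm side, assuming the cluster.name / cluster.id and clustermesh values of the Cilium chart (check the chart documentation for your version; the names and IDs are placeholders):

```yaml
# Helm values for cluster 1 (cluster 2 uses a different name and id)
cluster:
  name: cluster1      # must be unique within the mesh
  id: 1               # numeric id, unique per cluster
clustermesh:
  useAPIServer: true  # deploys the clustermesh-apiserver that other clusters connect to
```

Once both clusters run with unique names and IDs, the cilium CLI's clustermesh subcommands connect them and Pod IPs become reachable across the mesh.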
Kube-OVN
Kube-OVN provides an OVN-IC component: it runs a routing relay in an OVN-IC Docker container and uses a gateway node in each of the two clusters to connect the Pod networks of the different clusters.
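A sketch of the configuration, assuming the ovn-ic-config ConfigMap layout from the Kube-OVN cluster-interconnection guide (key names and values are placeholders and may differ between Kube-OVN versions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ovn-ic-config
  namespace: kube-system
data:
  enable-ic: "true"
  az-name: "az1"              # availability-zone name of this cluster, unique per cluster
  ic-db-host: "192.0.2.20"    # placeholder: address of the shared OVN-IC database
  ic-nb-port: "6645"
  ic-sb-port: "6646"
  gw-nodes: "node1"           # placeholder: node(s) acting as the interconnection gateway
  auto-route: "true"          # automatically exchange routes between clusters
```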

Multi-cluster Service mutual access
Besides Pod IP interconnection, a multi-cluster network can also provide cross-cluster Service access, which Submariner, Cilium, and Antrea can all implement. Submariner and Antrea both build on the Kubernetes community's MultiCluster Service API combined with their own components to achieve multi-cluster Service access. MultiCluster Service uses the ServiceExport and ServiceImport CRDs: ServiceExport exports the Service that needs to be exposed, and ServiceImport then imports that Service into another cluster.
[Figure: MultiCluster Service with ServiceExport and ServiceImport]
Submariner
Take Submariner's implementation as an example. There are two clusters, ks1 and ks2; ks1 has a Service named nginx in the test namespace. The nginx Service is exported with a ServiceExport, and Submariner makes the nginx.test.svc.cluster.local service discoverable as nginx.test.svc.clusterset.local. The CoreDNS of both clusters gets a new stub domain, clusterset.local, and forwards all requests matching clusterset.local to Submariner's service discovery component. Meanwhile, a ServiceImport brings the Service into the ks2 cluster, and Pods in ks2 can resolve nginx.test.svc.clusterset.local to the nginx Service of the ks1 cluster. If both clusters have an nginx Service with the same name, Submariner prefers local access and only falls back to the nginx Service of the other cluster if the local endpoints fail. Shall we start building an active-active service? Haha.
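The export step in that example is just a ServiceExport object whose name and namespace match the Service; Submariner builds on the Kubernetes Multi-Cluster Services API for this (nginx and test are the names from the example above):

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx        # must match the Service name
  namespace: test    # must match the Service namespace
```

Once exported in ks1 and imported in ks2, Pods in ks2 reach it at nginx.test.svc.clusterset.local.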

Antrea
Antrea's implementation is similar: it also builds on ServiceExport and ServiceImport, wrapping them as ResourceExport and ResourceImport to construct multi-cluster services. In each cluster, one node is chosen as the gateway, and tunnels between the cluster gateways carry the multi-cluster Service traffic.
[Figure: Antrea multi-cluster architecture]
Cilium
Cilium does not use the MultiCluster Service concept. Instead, Cilium builds multi-cluster Service access around the concept of a Global Service.
[Figure: Cilium Global Service]
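A Global Service is an ordinary Service carrying a Cilium annotation; Services with the same name and namespace in the connected clusters are then load-balanced across the mesh. A minimal sketch (name, namespace, selector, and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                            # the same name/namespace exists in each cluster
  namespace: test
  annotations:
    service.cilium.io/global: "true"     # marks this as a Cluster Mesh global service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
```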
Create a Kubernetes cluster and install KubeSphere to experience how these different CNI features behave in Kubernetes. After all, what operator doesn't like a container management platform with a friendly UI?



Origin blog.csdn.net/xiuqingzhouyang/article/details/131401434