Cloud Native Technology Open Class study notes: Kubernetes networking concepts and policy control; Kubernetes Service

10. Kubernetes networking concepts and policy control

1. Kubernetes basic network model


The complexity of container networking comes from the fact that it is parasitic on the host network. From this perspective, container network solutions can be roughly divided into two camps, Underlay and Overlay:

  • Underlay sits at the same layer as the host network. Its externally visible marks: the containers use the same network segment as the host network, share its underlying input/output devices, and the container IP addresses must be coordinated with the host network (handed out by the same central authority, or taken from a unified division of the address space)
  • Overlay differs in that it does not need to request IPs from the host network's IPAM component. Generally it only has to avoid conflicting with the host network, and the IPs can be allocated freely

2. Exploring Netns


The Network Namespace (netns) is the kernel mechanism underlying network isolation. Strictly speaking, runC container technology does not depend on any hardware; its execution basis is the kernel. The kernel's representation of a process is the task structure. If a task needs no isolation, it simply uses the host's namespaces; namespaces that need isolation are set up through a dedicated data structure (nsproxy, the namespace proxy)

Conversely, if a task has an independent network namespace (or mount namespace), its nsproxy must be filled with real private data of its own

In practical terms, an isolated network namespace has its own network interfaces, virtual or physical, its own IP addresses, routing tables, and protocol stack state. This refers specifically to the TCP/IP stack, with its own state and its own iptables and IPVS rules

Overall, this is equivalent to a completely independent network, isolated from the host network. The protocol stack code is of course still shared; only the data structures differ
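
To make this concrete, here is a minimal command-line sketch of netns isolation (it assumes the iproute2 tools and root privileges; all names and addresses are illustrative):

# Create an isolated network namespace: it starts with only a down loopback
ip netns add demo
ip netns exec demo ip link
ip netns exec demo ip route

# Connect it to the host with a veth pair, the same building block most
# container networks use
ip link add veth-host type veth peer name veth-demo
ip link set veth-demo netns demo
ip addr add 10.200.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo ip addr add 10.200.0.2/24 dev veth-demo
ip netns exec demo ip link set veth-demo up
ip netns exec demo ip link set lo up

# The namespace is now reachable from the host
ping -c1 10.200.0.2

# Clean up (deleting the netns also removes the veth pair)
ip netns del demo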


The relationship of netns to pods: each pod has an independent network namespace, and all containers in the pod share it. K8s generally recommends that containers in the same pod communicate over the loopback interface, while all containers serve external traffic through the pod's IP. In addition, the root netns on the host can be regarded as a special network namespace: it is simply the namespace of the init process (PID 1).
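
To see this sharing directly, a two-container pod like the sketch below can be used (the pod name and images are illustrative): both containers share one network namespace, so the sidecar reaches nginx over loopback.

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
  - name: sidecar          # same netns as the nginx container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]

Once the pod is running, kubectl exec shared-netns-demo -c sidecar -- wget -qO- http://localhost:80 returns the nginx welcome page over the shared loopback interface.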

3. Introduction to mainstream network solutions


Flannel is currently the most commonly used solution, and it is representative of a typical container network design. The first problem it solves is how a container's packets reach the host, which it does by adding a bridge. Its backend is pluggable: how packets leave the host, which encapsulation is used, or whether encapsulation is needed at all, are all selectable

Now let’s introduce the three main backends:

  • User-space UDP, the earliest implementation
  • Kernel VXLAN; both this and UDP count as overlay solutions. VXLAN performs better, but it requires a kernel version that supports the VXLAN feature
  • If the cluster is not large and all hosts sit in the same layer-2 domain, host-gw is also an option. This backend works by installing direct routing rules toward each host, and its performance is relatively high (a flannel config sketch follows this list)
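
For reference, the backend is normally chosen in flannel's net-conf.json, commonly stored in the kube-flannel-cfg ConfigMap; a sketch, with the pod network CIDR as an assumption:

{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}

Setting "Type" to "udp" or "host-gw" selects the other two backends.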

4. Usefulness of Network Policy


As just mentioned, the basic Kubernetes network model requires full interconnection between pods. This can cause problems: in a K8s cluster, some call chains should not be allowed. For example, if two departments share a cluster, we may want department A to be unable to access department B's services. This is where the concept of a policy comes in.

The basic idea: use various selectors (labels or namespaces) to find a group of pods, or to find the two ends of a communication, and then use a description of the traffic's characteristics to decide whether they may connect. It can be understood as a whitelist mechanism

Before using Network Policy, note two things. First, the apiserver needs the corresponding switches enabled. Second, and more important, the network plugin you choose must actually implement Network Policy: Network Policy is only an API object provided by K8s, with no built-in component that enforces it. Whether it works, and how completely, depends on the container network solution you choose. If you choose Flannel or a similar plugin that does not really implement the policy, configuring one will have no effect


Next, let's look at a configuration example: what needs to be decided when designing a Network Policy?

  • First, the object to control, like the spec part of this instance. In spec, through a podSelector or a namespace selector, you pick the specific group of pods that the policy will control
  • Second, be clear about the traffic direction: do you need to control ingress, egress, or both?
  • Third, and most important: in the chosen direction, describe which flows may be let in or out. Analogous to a five-tuple, you can use selectors to decide which remote ends are acceptable, use the ipBlock mechanism to decide which IP ranges are allowed, and finally specify which protocols and ports. Together these flow characteristics pick out the specific traffic to accept (see the sketch after this list)
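
Putting the three parts together, here is a minimal sketch of a policy (the namespace, labels, CIDR, and port are illustrative, not from the course):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-dept-a
  namespace: dept-b
spec:
  podSelector:              # part 1: the pods under control
    matchLabels:
      app: web
  policyTypes:              # part 2: the direction(s) to control
  - Ingress
  ingress:                  # part 3: the whitelisted flows
  - from:
    - namespaceSelector:    # remote ends chosen by selector...
        matchLabels:
          dept: a
    - ipBlock:              # ...or by IP range
        cidr: 10.0.0.0/24
    ports:                  # finally, protocol and port
    - protocol: TCP
      port: 80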

11. Kubernetes Service

1. Source of demand

1) Why service discovery is needed


In a K8s cluster, applications are deployed as pods. This differs from traditional deployment, where an application runs on a given machine and we know how to call the other machines' IP addresses. In K8s, pods are ephemeral: over a pod's life cycle, events such as creation and destruction change its IP address, so the traditional method of reaching an application at a fixed, specified IP no longer works.

In addition, although we saw earlier how a Deployment creates a group of pods, that group still needs a unified access entry, along with a way to load-balance traffic across it. Also, environments such as test, staging, and production should keep the same deployment template and access method, so that the same set of application templates can be published directly in each environment

Finally, applications need to be exposed for external users to call. A pod's network and the machines outside the cluster are not on the same segment, so how can the pod network be exposed for external access? This is where service discovery comes in

2) Service: Service discovery and load balancing in Kubernetes


In K8s, service discovery and load balancing are provided by the K8s Service. The structure is: a Service provides access from the external network as well as from the pod network, i.e. external clients can reach applications through a Service, and pods can access each other through it as well.

Downward, the Service connects to a group of pods, load-balancing across them. It thus provides a unified entry point for service discovery, for external access, and for access between pods, giving them one stable address

2. Use case interpretation

1) Service syntax


First, look at the syntax of a K8s Service. Its declaration structure has much in common with the standard K8s objects introduced earlier, for example a selector for making choices and labels declaring its own labels

The new element here is the protocol and port defined for service discovery. Looking further at the template: it declares a K8s Service named my-service, which carries the label app: my-service and selects pods labeled app: MyApp as its backend

Last come the declared protocol and ports. In this example the protocol is TCP, the port is 80, and the targetPort is 9376. The effect: access to the Service's port 80 is routed to the backend targetPort, i.e. any access to port 80 of this Service is load-balanced to port 9376 of the pods labeled app: MyApp
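
Reconstructed from this description, the declared template looks like:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  selector:
    app: MyApp          # pods carrying this label become backends
  ports:
  - protocol: TCP
    port: 80            # port exposed by the Service
    targetPort: 9376    # port on the backend pods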

2) Creating and viewing a Service


Create the Service: kubectl apply -f service.yaml or kubectl create -f service.yaml

View the result after creation: kubectl describe service

After the Service is created, you can see that its name is my-service, and its Namespace, Labels, and Selector match the declaration. After the declaration an IP address is generated: this is the Service's IP address, which other pods in the cluster can access. Through this IP address the Service provides unified access to the pods and serves as the basis for service discovery

There is also an Endpoints attribute, which shows which pods the declared selector picked and what state they are in: for example, the selected pods' IP addresses and the targetPort they declared
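
The same information can be listed directly from the endpoints resource:

kubectl get endpoints my-service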


The actual architecture: after a Service is created, a virtual IP address and port come into existence in the cluster, and every pod and node in the cluster can reach the Service through them. The Service mounts its selected pods and their IP addresses as backends, so access through the Service's IP address is load-balanced across those backend pods.

When a pod's life cycle changes, for example one of the pods is destroyed, the Service automatically removes that pod from the backend. This achieves the guarantee that even as pod life cycles change, the endpoint clients access does not change

3) Accessing a Service inside the cluster


In the cluster, how do other pods access the service we created? There are three ways:

  • First, through the Service's virtual IP. For the my-service Service just created, kubectl get svc or kubectl describe service shows its virtual IP address, 172.29.3.27, and port 80. A pod can then use this virtual IP and port to access the Service's address directly

  • The second way is to access the Service name directly, relying on DNS resolution: pods in the same namespace can reach the Service just declared by its name alone, while pods in other namespaces append a dot plus the Service's namespace, i.e. {service name}.{namespace}. For example, with curl, my-service:80 accesses the Service

  • The third way is through environment variables. When a pod in the same namespace starts, K8s injects the Service's IP address, port, and some simple configuration into the pod as environment variables. After the container starts, it can read these variables to obtain the addresses and ports of the other Services in its namespace. For example, MY_SERVICE_SERVICE_HOST holds the IP address of the MY_SERVICE Service we declared, and MY_SERVICE_SERVICE_PORT holds its port number, so curl $MY_SERVICE_SERVICE_HOST requests the MY_SERVICE Service from inside the cluster

4) Headless Service


A special form of Service is the Headless Service. When creating the Service, you can set clusterIP: None to tell K8s that no cluster IP is needed, and K8s then assigns no virtual IP address to the Service. Without a virtual IP, how does it still achieve load balancing and unified access?

A pod can resolve the IP addresses of all backend pods directly from the service name via DNS: the Service's DNS A records resolve to the addresses of all backend pods, and the client picks one. The A record list changes as pod life cycles change, so the client application is required to fetch all the A records returned by DNS and choose an appropriate backend IP address from the list itself
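
A minimal headless variant of the nginx Service used in the demonstration below (the name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None       # headless: no virtual IP is allocated
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80

Inside the cluster, nslookup nginx-headless then returns one A record per backend pod instead of a single virtual IP.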

5) Exposing a Service outside the cluster


So far we have covered nodes and pods inside the cluster accessing a Service. How can a Service be exposed to the outside, so that an application can actually be reached from the public network? There are two Service types that solve this: NodePort and LoadBalancer

  • NodePort exposes a port on each node of the cluster (that is, on the node's host). Access to that node port is forwarded one layer further, onto the Service's virtual IP on the host (a NodePort sketch follows this list)

  • LoadBalancer adds another layer of conversion on top of NodePort. The NodePort just mentioned is a port on every node in the cluster; LoadBalancer puts a load balancer in front of all the nodes. For example, on Alibaba Cloud an SLB can be attached: this load balancer provides a unified entry and balances all the traffic it receives across the node ports of the cluster's nodes, which then convert it to the ClusterIP and finally to the actual pods
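
A sketch of the NodePort variant (the name and port number are illustrative; the demonstration below uses type: LoadBalancer instead):

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80            # ClusterIP port
    targetPort: 80      # pod port
    nodePort: 30080     # exposed on every node; default range is 30000-32767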

3. Operation demonstration

1) Accessing a Service inside the cluster

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx    

server.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9

hanxiantaodeMBP:yamls hanxiantao$ kubectl create -f server.yaml 
deployment.apps/nginx created
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-79699b7df9-jn5p4   1/1     Running   0          46s
nginx-79699b7df9-th5hj   1/1     Running   0          46s
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pod -o wide -l run=nginx
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
nginx-79699b7df9-jn5p4   1/1     Running   0          77s   10.1.0.139   docker-desktop   <none>           <none>
nginx-79699b7df9-th5hj   1/1     Running   0          77s   10.1.0.140   docker-desktop   <none>           <none>

First create the group of pods by creating the K8s Deployment. After the Deployment is created, check whether the pods are up: kubectl get pod -o wide shows the IP addresses, and -l, i.e. label, filters on run=nginx. The two pods have the IP addresses 10.1.0.139 and 10.1.0.140, and both carry the label run=nginx

hanxiantaodeMBP:yamls hanxiantao$ kubectl create -f service.yaml 
service/nginx created
hanxiantaodeMBP:yamls hanxiantao$ kubectl describe svc
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.65.3:6443
Session Affinity:  None
Events:            <none>


Name:              nginx
Namespace:         default
Labels:            run=nginx
Annotations:       <none>
Selector:          run=nginx
Type:              ClusterIP
IP:                10.108.96.80
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.1.0.139:80,10.1.0.140:80
Session Affinity:  None
Events:            <none>

Now create the K8s Service. kubectl describe svc shows the Service's actual state: the newly created nginx Service has the selector run=nginx, and through it the backend pod addresses are selected, namely the two pod IPs just seen, 10.1.0.139 and 10.1.0.140. You can also see that K8s generated a cluster virtual IP address for the Service; through this virtual IP it load-balances to the two pods.

pod1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  namespace: default
  labels:
    env: dev
    tie: front  
spec:
  containers:
  - name: nginx
    image: nginx:1.8
    ports:
    - containerPort: 80      

Create a client pod to test how to access the service

hanxiantaodeMBP:yamls hanxiantao$ kubectl create -f pod1.yaml 
pod/nginx1 created
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-79699b7df9-jn5p4   1/1     Running   0          10m
nginx-79699b7df9-th5hj   1/1     Running   0          10m
nginx1                   1/1     Running   0          39s

Use kubectl exec -it nginx1 sh to enter this pod and install curl

First add the 163 mirror source:
tee /etc/apt/sources.list << EOF
deb http://mirrors.163.com/debian/ jessie main non-free contrib
deb http://mirrors.163.com/debian/ jessie-updates main non-free contrib
EOF

hanxiantaodeMBP:yamls hanxiantao$ kubectl exec -it nginx1 sh
# apt-get update && apt-get install -y curl

Access via ClusterIP

# curl http://10.108.96.80:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Access via the service name

# curl http://nginx     
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Access via {service name}.{namespace}

# curl http://nginx.default
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Access through environment variables

# env
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=nginx1
HOME=/root
NGINX_PORT_80_TCP=tcp://10.108.96.80:80
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
NGINX_VERSION=1.8.1-1~jessie
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_SERVICE_HOST=10.108.96.80
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
NGINX_SERVICE_PORT=80
NGINX_PORT=tcp://10.108.96.80:80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
NGINX_PORT_80_TCP_ADDR=10.108.96.80
NGINX_PORT_80_TCP_PORT=80
NGINX_PORT_80_TCP_PROTO=tcp
# curl $NGINX_SERVICE_HOST
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

2) Exposing the Service outside the cluster

Add type: LoadBalancer to service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: LoadBalancer  

hanxiantaodeMBP:yamls hanxiantao$ kubectl apply -f service.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/nginx configured
hanxiantaodeMBP:yamls hanxiantao$ kubectl get svc -o wide
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        29d   <none>
nginx        LoadBalancer   10.108.96.80   localhost     80:31943/TCP   29m   run=nginx

An EXTERNAL-IP now appears (localhost on Docker Desktop), and the service can be accessed via http://localhost/

3) The Service's access address is independent of the pod life cycle

hanxiantaodeMBP:yamls hanxiantao$ kubectl describe svc nginx
Name:                     nginx
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     LoadBalancer
IP:                       10.108.96.80
LoadBalancer Ingress:     localhost
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31943/TCP
Endpoints:                10.1.0.139:80,10.1.0.140:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age    From                Message
  ----    ------  ----   ----                -------
  Normal  Type    4m49s  service-controller  ClusterIP -> LoadBalancer

At this point the backend IP addresses the Service maps to are 10.1.0.139 and 10.1.0.140

hanxiantaodeMBP:yamls hanxiantao$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-79699b7df9-jn5p4   1/1     Running   0          35m
nginx-79699b7df9-th5hj   1/1     Running   0          35m
nginx1                   1/1     Running   0          25m
hanxiantaodeMBP:yamls hanxiantao$ kubectl delete pod nginx-79699b7df9-jn5p4
pod "nginx-79699b7df9-jn5p4" deleted
hanxiantaodeMBP:yamls hanxiantao$ kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
nginx-79699b7df9-bb95z   1/1     Running   0          37s   10.1.0.142   docker-desktop   <none>           <none>
nginx-79699b7df9-th5hj   1/1     Running   0          36m   10.1.0.140   docker-desktop   <none>           <none>
nginx1                   1/1     Running   0          26m   10.1.0.141   docker-desktop   <none>           <none>
hanxiantaodeMBP:yamls hanxiantao$ kubectl describe svc nginx
Name:                     nginx
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     LoadBalancer
IP:                       10.108.96.80
LoadBalancer Ingress:     localhost
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31943/TCP
Endpoints:                10.1.0.140:80,10.1.0.142:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  Type    6m2s  service-controller  ClusterIP -> LoadBalancer

After deleting one of the pods, the pod IPs become 10.1.0.140 and 10.1.0.142, and the backend IPs the Service maps to change to 10.1.0.140 and 10.1.0.142 as well, while the Service's own address stays unchanged

4. Architecture design


The overall architecture of K8s service discovery and the K8s Service looks as follows.

K8s is divided into master node and worker node:

  • The master carries the K8s control plane
  • The worker node is where the user application actually runs

The K8s master runs the APIServer, the unified management point for all K8s objects. All components register with the APIServer to watch for changes to these objects, for example pod life-cycle events

There are three key components:

  • Cloud Controller Manager, which is responsible for configuring the LoadBalancer, an external load balancer, for outside access
  • CoreDNS, which watches the APIServer for changes to Services and their backend pods and configures DNS resolution for each Service, so that the Service's virtual IP can be accessed directly by its name, or, for a headless Service, the name resolves to the list of backend pod IPs
  • kube-proxy, one per node, which watches Service and pod changes and then configures iptables or IPVS rules on the node, giving pods and nodes in the cluster an actual path to the Service's virtual IP address

What does the actual access link look like? For example, a Client Pod3 inside the cluster accesses the Service, much like the effect just demonstrated. Client Pod3 first resolves the Service IP through CoreDNS, which returns the Service IP corresponding to the ServiceName. Client Pod3 then sends its request to this Service IP. When the request reaches the host network, it is intercepted by the iptables or IPVS rules that kube-proxy configured and load-balanced to one of the actual backend pods, thereby achieving load balancing and service discovery
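
Assuming kube-proxy runs in iptables mode (as on the Docker Desktop cluster used above), this interception can be observed on a node; KUBE-SERVICES, KUBE-SVC-* and KUBE-SEP-* are kube-proxy's standard chain names:

# The Service's virtual IP is matched in the KUBE-SERVICES chain...
sudo iptables -t nat -nL KUBE-SERVICES | grep 10.108.96.80
# ...which jumps to a KUBE-SVC-* chain that load-balances across
# KUBE-SEP-* chains, one per backend pod endpoint
sudo iptables -t nat -nL | grep KUBE-SEP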

For external traffic, such as the request just made from the public network: the Cloud Controller Manager watches Service changes and configures an external load balancer, which forwards the traffic to a NodePort on a node. The NodePort is likewise configured by kube-proxy, whose iptables rules convert NodePort traffic to the ClusterIP and then to the IP address of a backend pod, achieving load balancing and service discovery. This is the overall architecture of K8s service discovery and the K8s Service

Course address: https://edu.aliyun.com/roadmap/cloudnative?spm=5176.11399608.aliyun-edu-index-014.4.dc2c4679O3eIId#suit

Origin blog.csdn.net/qq_40378034/article/details/112212177