[Kubernetes Resources] A Practical Guide to the Service Layer-4 Proxy

1. The Concept and Principle of the Service Layer-4 Proxy

Reference: the official Kubernetes documentation on Services.

1. The Service layer-4 proxy concept

In Kubernetes, Pods have a life cycle. When a Pod is restarted or rebuilt, its IP address is likely to change. If a Pod's address is hard-coded into our programs, then every time that Pod is upgraded, the other services in the cluster that depend on it will no longer be able to find it.

To solve this problem, Kubernetes defines the Service resource object (svc for short). A Service's IP address does not change arbitrarily, so it effectively defines a stable service entry point: clients and other components in the cluster connect to the Service, and the Service proxies requests to the corresponding backend instances. A Service fronts a set of Pods, usually associated with them through labels.

Summary:

1. Pod IP addresses change frequently. The Service proxies the Pods: our clients only need to reach the Service, and the Service forwards traffic to the backend Pods.

2. Pod IPs are not reachable from outside the Kubernetes cluster, so a Service is created that can be exposed outside the cluster.

2. How a Service works

  • When Kubernetes creates a Service, it finds the matching Pods according to the label selector and creates an Endpoints object with the same name as the Service.

  • When Pod addresses change, the Endpoints object changes accordingly. When the Service receives a request from a client, it uses the Endpoints to determine which Pod the request is forwarded to. (Which node the request lands on is determined by the load balancing performed by kube-proxy.)

Summary:

Specifically, when a Pod is created and added to a Service, Kubernetes records the Pod's IP address and port number in the Service's Endpoints object. When a request arrives at the Service's ClusterIP address, the ipvs (or iptables) rules forward it to the corresponding endpoint, which is what implements the layer-4 proxy.
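As a rough sketch, the Endpoints object that Kubernetes maintains for a Service looks like the fragment below. This is illustrative only: the Pod IPs and the name are hypothetical, and a real object rendered by `kubectl get endpoints -o yaml` carries extra fields such as `targetRef`.

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: web-clusterip      # same name as the Service it belongs to
subsets:
- addresses:               # one entry per ready backend Pod (IPs illustrative)
  - ip: 10.244.1.12
  - ip: 10.244.2.7
  ports:
  - port: 80
    protocol: TCP
```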

3. Service principles in detail


  • A Service is a fixed access layer: clients reach the backend Pods associated with the Service through ServiceIP:port. Service name resolution depends on the DNS service in the Kubernetes cluster, and different cluster versions use different DNS implementations: versions before 1.11 use kube-dns, and newer versions use CoreDNS.

  • Service domain-name resolution relies on the DNS service, and the DNS service in turn relies on a network plug-in (flannel, calico, etc.), so a network plug-in must also be deployed after the Kubernetes cluster itself is set up.

  • The kube-proxy component on each Kubernetes node continuously monitors the apiserver for changes to Service resources. It stays connected to the apiserver on the master to obtain any state changes related to Services, using Kubernetes' built-in watch (monitoring) request mechanism. As soon as a Service resource changes (for example, it is created or deleted), kube-proxy converts it into forwarding rules on the current node. These rules dispatch requests to specific backend Pods, and may be iptables or ipvs rules, depending on how the Service proxy is implemented.
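Which rule type kube-proxy generates is selected by its `mode` field. A minimal configuration fragment, assuming the standard KubeProxyConfiguration API (the value shown is illustrative, not taken from the cluster in this article):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"    # "iptables" (the default on Linux) or "ipvs"
```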

4. The four Service types

  • ClusterIP: the default type, reachable only inside the cluster. A service exposed through a ClusterIP can be accessed from other Pods within the cluster.
  • NodePort: builds on ClusterIP and additionally exposes the service outside the cluster; it can be accessed via NodeIP:NodePort.
  • LoadBalancer: builds on NodePort and provisions an external load balancer (typically through a cloud provider) that routes to the NodePorts.
  • ExternalName: maps the service name to a DNS name outside the cluster (or to a Service in another namespace), so applications can access the external service directly through the service name.
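For completeness, a minimal LoadBalancer-type manifest might look like the sketch below. This is an assumption-laden sketch, not part of the examples in this article: it needs a cloud provider (or an add-on such as MetalLB) to actually allocate the external address, and the names are hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb            # hypothetical name
spec:
  type: LoadBalancer
  ports:
  - port: 80              # port on the provisioned load balancer and ClusterIP
    targetPort: 80        # port on the backend Pods
  selector:
    app: web              # hypothetical label
```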

2. Layer-4 Proxy Examples for Three Service Types

1. Create a ClusterIP type Service

The default type; it can only be accessed within the cluster. A service exposed through a ClusterIP can be accessed from other Pods in the cluster.

cat clusterip-deploy.yaml 

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-cluster
  namespace: default
  labels:
    type: web-cluster
    env: uat
spec:
  selector:
    matchLabels:
      type: web-cluster
      env: uat
  replicas: 3 
  template:
    metadata:
      namespace: default
      labels:
        type: web-cluster
        env: uat
    spec:
      containers:
      - name: web-cluster
        image: nginx
        imagePullPolicy: IfNotPresent 
        startupProbe:
          tcpSocket:
            port: 80
        readinessProbe:
          httpGet:
            port: 80
            path: "/index.html"
        livenessProbe:
          httpGet:
            port: 80
            path: "/index.html"

The Service resource manifest is as follows:

cat clusterip-svc.yaml 

---
apiVersion: v1
kind: Service
metadata:
  name: web-clusterip
  namespace: default
  labels:
    env: uat
spec:
  type: ClusterIP     # ClusterIP type
  ports:
  - port: 80          # Service port
    protocol: TCP     # TCP protocol
    targetPort: 80    # port exposed by the Pod
  selector:           # select Pods that carry the following labels
    env: uat
    type: web-cluster

Apply the YAML manifests and view the resources:

kubectl apply -f clusterip-deploy.yaml -f clusterip-svc.yaml 
kubectl get pods,svc

Check the Endpoints to confirm that the Service is associated with the Pods:

kubectl describe svc web-clusterip


Obtain the Service IP address and access the backend Pods through the Service proxy:

kubectl get svc web-clusterip | awk 'NR==2{print $3}'
10.103.211.187
curl 10.103.211.187
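The awk filter (row 2, column 3 of the table) can be sanity-checked locally against sample `kubectl get svc` output; the table below is illustrative, not captured from a live cluster:

```shell
# Simulated `kubectl get svc` output; the second row, third column is the ClusterIP.
sample='NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
web-clusterip   ClusterIP   10.103.211.187   <none>        80/TCP    5m'

echo "$sample" | awk 'NR==2{print $3}'   # prints 10.103.211.187
```

On a real cluster, `kubectl get svc web-clusterip -o jsonpath='{.spec.clusterIP}'` is a more robust alternative, since it does not depend on column layout.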


2. Create NodePort type Service

Builds on ClusterIP and exposes the service outside the cluster through a NodePort; the service can then be accessed via NodeIP:NodePort.

Create a NodePort-type Service that selects Pods with the labels app=web-nodeport and env=uat.

The Deployment resource list is as follows:

cat nodeport-deploy.yaml 

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-nodeport
  namespace: default
  labels:
    app: web-nodeport
    env: uat
spec:
  selector:
    matchLabels:
      app: web-nodeport
      env: uat
  replicas: 3
  template:
    metadata:
      namespace: default
      labels:
        app: web-nodeport
        env: uat
    spec:
      containers:
      - name: uat-nginx
        image: nginx
        imagePullPolicy: IfNotPresent 
        startupProbe:
          tcpSocket:
            port: 80
        readinessProbe:
          httpGet:
            port: 80
            path: "/index.html"
        livenessProbe:
          httpGet:
            port: 80
            path: "/index.html"

The Service resource list is as follows:

cat nodeport-svc.yaml 

---
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc
  namespace: default
  labels:
    env: uat
spec:
  type: NodePort
  ports:
  - port: 80               # Service port, reachable by services inside the cluster
    protocol: TCP          # protocol
    targetPort: 80         # port on the Pod
    nodePort: 30303        # port exposed on each node (default range 30000-32767)
  selector:                # select Pods with app=web-nodeport && env=uat
    app: web-nodeport
    env: uat

Apply the YAML manifests and view the status:

kubectl apply -f nodeport-deploy.yaml 
kubectl apply -f nodeport-svc.yaml 
kubectl get pods,svc

Check whether the Service's Endpoints are associated with the Pods:

kubectl describe svc nodeport-svc|grep Endpoints


Access NodeIP:30303 through a browser.


3. Create an ExternalName type Service

Map the service to a DNS record outside the cluster, and you can directly access the external service through the service name.

Application scenario: cross-namespace association. An ExternalName Service can be understood as a symbolic link to another Service.

Requirement: the client service in the default namespace must be able to access the nginx-svc service in the nginx-ns namespace.


Create the server-side resources in the nginx-ns namespace:

cat nginx-server.yaml 

---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-server
  namespace: nginx-ns
  labels:
    app: nginx-server
    env: uat
spec:
  selector:
    matchLabels:
      app: nginx-server
      env: uat
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-server
        env: uat
    spec:
      containers:
      - name: nginx-server
        image: nginx
        imagePullPolicy: IfNotPresent 
        startupProbe:
          tcpSocket:
            port: 80
        readinessProbe:
          httpGet:
            port: 80
            path: "/index.html"
        livenessProbe:
          httpGet:
            port: 80
            path: "/index.html"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
spec:
  ports:
  - name: http
    port: 80              
    protocol: TCP        
  selector:           
    app: nginx-server 
    env: uat
kubectl apply -f nginx-server.yaml 

Create a client Pod resource:

cat nginx-client.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-client
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-client
  template:
    metadata:
      labels:
        app: nginx-client
    spec:
      containers:
      - name: nginx-client
        image: busybox:1.28
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "-c", "sleep 3600000"]
kubectl apply -f nginx-client.yaml

Create an ExternalName-type Service that links to the nginx-svc Service in the nginx-ns namespace:

cat nginx-client-svc.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: client-nginx
spec:
  type: ExternalName
  externalName: nginx-svc.nginx-ns.svc.cluster.local # accessing this Service links to the nginx-svc Service in nginx-ns
  ports:
  - name: http
    port: 80
    targetPort: 80
kubectl apply -f nginx-client-svc.yaml

View Service details:

kubectl describe svc client-nginx


Test: exec into the client Pod and access the nginx-svc service through the ExternalName Service:

kubectl exec -it nginx-client-784fd7bfc7-2d892 -- /bin/sh
wget -q -O - client-nginx

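ExternalName is not limited to cross-namespace links; it can also point at a DNS name entirely outside the cluster. A hypothetical sketch (the service name and domain are illustrative, not part of this experiment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db              # hypothetical name
spec:
  type: ExternalName
  externalName: db.example.com   # DNS name outside the cluster (illustrative)
```

When a Pod resolves external-db, the cluster DNS answers with a CNAME record for db.example.com, so no proxying or Endpoints are involved.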

3. Extensions

1. Service domain name resolution

Once a Service is created successfully, it can be accessed directly through its domain name (the domain name works inside Pods, but not on the nodes themselves). Every time a service is created, a resource record is dynamically added to the cluster DNS, and the service can be resolved as soon as the record exists. The format of the resource record is:

SVC_NAME.NS_NAME.DOMAIN.LTD
<service-name>.<namespace>.<domain-suffix>
The cluster's default domain suffix is svc.cluster.local.
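As a quick illustration, the record format above can be assembled in shell (using the names from the ClusterIP example earlier in this article):

```shell
# Build the fully qualified Service domain name from its three parts.
SVC_NAME=web-clusterip
NS_NAME=default
DOMAIN=svc.cluster.local
echo "${SVC_NAME}.${NS_NAME}.${DOMAIN}"   # prints web-clusterip.default.svc.cluster.local
```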

Using the ClusterIP-type Service created above, access it by domain name from inside a Pod:

kubectl exec -it web-cluster-5db6bc847b-226k9 -- /bin/bash
curl web-clusterip.default.svc.cluster.local


2. Customizing Endpoints resources

Scenario: create a Service resource that proxies port 3306 on the host machine (a service running outside the cluster).

First, install and start the MySQL (MariaDB) service on the host:

yum install mariadb-server.x86_64 -y
systemctl start mariadb
systemctl enable mariadb

The YAML for the Service that proxies port 3306 is as follows:

cat 3306-svc.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: svc-3306
spec:
  type: ClusterIP
  ports:
  - port: 3306
kubectl apply -f 3306-svc.yaml	

Because no Pod selector is defined in this Service, its Endpoints field is <none>:

kubectl describe svc svc-3306


Create an Endpoints resource and associate it with the Service above:

cat 3306-ep.yaml 
---
apiVersion: v1
kind: Endpoints
metadata:
  name: svc-3306 # the Endpoints name must match the Service name; they are associated by name
subsets:
- addresses:
  - ip: 172.21.0.13
  ports:
  - port: 3306
kubectl apply -f 3306-ep.yaml

At this point, the Service's Endpoints field shows the endpoint created above, and workloads in the Kubernetes cluster can reach the MySQL service directly through the Service's IP and port:

kubectl describe svc svc-3306



Origin: blog.csdn.net/weixin_45310323/article/details/131132091