[Cloud Native Kubernetes] Functions and Applications of Service

1. Service introduction

        In Kubernetes, the Pod is the carrier of the application. An application can be accessed through the Pod's IP, but Pod IP addresses are not fixed, which makes it inconvenient to access a service directly through Pod IPs. To solve this problem, Kubernetes provides the Service resource. A Service aggregates multiple Pods that provide the same service and exposes a unified entry address; by accessing the Service's entry address you reach the Pods behind it.

        In many cases a Service is only a concept; what actually does the work is the kube-proxy process. A kube-proxy process runs on every Node. When a Service is created, the Service information is written to etcd through the api-server; kube-proxy detects changes to the Service through its watch mechanism and converts the latest Service information into the corresponding access rules.

2. Three working modes of kube-proxy 

 (1) userspace mode
        In userspace mode, kube-proxy creates a listening port for each Service. Requests sent to the Cluster IP are redirected to the port kube-proxy listens on; kube-proxy then selects a Pod that provides the service according to the LB algorithm, establishes a connection with it, and forwards the request to that Pod. In this mode kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in user space, forwarding adds extra data copies between kernel space and user space; this mode is relatively stable but comparatively inefficient.

 (2) iptables mode
        In iptables mode, kube-proxy creates corresponding iptables rules for each Pod behind the Service, redirecting requests sent to the Cluster IP directly to a Pod IP. In this mode kube-proxy does not act as a layer-4 load balancer; it is only responsible for creating the iptables rules. This mode is more efficient than userspace mode, but it cannot provide a flexible LB policy and cannot retry when a backend Pod is unavailable.
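As a reference point, on a node of a cluster running in this mode the generated rules can be inspected directly; the KUBE-SVC chains filtered for below are created by kube-proxy itself:

# Dump the NAT table and filter for the per-Service chains created by kube-proxy
iptables-save -t nat | grep KUBE-SVC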

(3) ipvs mode

The ipvs mode is similar to iptables mode: kube-proxy watches for Pod changes and creates the corresponding ipvs rules. ipvs forwards traffic more efficiently than iptables, and it also supports more LB algorithms.
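As a hedged sketch, on kubeadm-based clusters ipvs mode is usually enabled by editing the kube-proxy ConfigMap and recreating the kube-proxy Pods; this assumes the ipvs kernel modules are already loaded on every node:

# Edit the kube-proxy configuration and set mode: "ipvs"
kubectl edit configmap kube-proxy -n kube-system
# Recreate the kube-proxy Pods so they pick up the new mode (label used by kubeadm)
kubectl delete pod -l k8s-app=kube-proxy -n kube-system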

 3. Service resource manifest

kind: Service  # resource type
apiVersion: v1  # resource version
metadata: # metadata
  name: service # resource name
  namespace: dev # namespace
spec: # specification
  selector: # label selector; determines which Pods this Service proxies
    app: nginx
  type: # Service type; specifies how the Service is accessed
  clusterIP:  # IP address of the virtual service
  sessionAffinity: # session affinity; supports two options, ClientIP and None
  ports: # port information
    - protocol: TCP 
      port: 3017  # Service port
      targetPort: 5003 # Pod port
      nodePort: 31122 # node (host) port

The type field in the manifest supports four access types:

 (1) ClusterIP: the default value. A virtual IP automatically assigned by the Kubernetes system; it can only be accessed from within the cluster.

(2) NodePort: exposes the Service through a port on the Node, so the service can be accessed from outside the cluster.

(3) LoadBalancer: uses an external load balancer to distribute load to the service. This mode requires support from the external cloud environment.

(4) ExternalName: brings a service outside the cluster into the cluster so it can be used directly.

4. Functions of Endpoint

4.1. The role of Endpoint

        Endpoints is a resource object in Kubernetes, stored in etcd, that records the access addresses of all Pods corresponding to a Service. It is generated based on the selector described in the Service configuration file. A Service is backed by a group of Pods, and these Pods are exposed through the Endpoints object, which is a collection of the endpoints that actually implement the service. In other words, the link between a Service and its Pods is made through Endpoints. If no selector is configured in the Service, the corresponding Endpoints object is not generated by default.
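The Endpoints object that backs a Service can be listed directly; the namespace and Service name below follow the examples used later in this article:

# List the Endpoints objects in the dev namespace
kubectl get endpoints -n dev
# Show the Pod addresses behind a specific Service
kubectl describe endpoints service-clusterip -n dev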

 4.2. Load distribution strategy

Requests to a Service are distributed to the backend Pods. Currently, Kubernetes provides two load distribution strategies:

(1) If not defined, kube-proxy's default strategy is used, such as random or round-robin.

(2) Session persistence based on client address: all requests from the same client are forwarded to a fixed Pod. This mode is enabled by adding the sessionAffinity: ClientIP option to the spec, as sketched below.
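A minimal sketch of the relevant spec fields, following the manifest format in section 3; the sessionAffinityConfig block is optional and is only shown here as an assumed way to tune the affinity timeout:

spec:
  sessionAffinity: ClientIP # forward all requests from the same client IP to one fixed Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800 # optional; the default affinity timeout is 3 hours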
 

4.3. View the mapping rules of ipvs:
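Assuming kube-proxy runs in ipvs mode and ipvsadm is installed on the node, the mapping can be listed like this; each Service IP appears as a virtual server with the Pod IPs as its real servers:

# Show the ipvs virtual servers and their backend (real) servers
ipvsadm -Ln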

4.4. Loop access for testing:
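A simple loop, run from any node in the cluster, assuming the ClusterIP 10.97.97.97 configured in the experiment in section 6:

# Repeatedly access the Service IP to observe how requests are spread across the Pods
while true; do curl 10.97.97.97:80; sleep 2; done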
 4.5. Modify the distribution strategy and test it:
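One possible way to do this, as a sketch: patch the Service to use client-IP affinity, rerun the loop above, and revert afterwards if desired:

# Switch the Service to client-IP session affinity
kubectl patch svc service-clusterip -n dev -p '{"spec":{"sessionAffinity":"ClientIP"}}'
# Rerun the loop above: all responses should now come from a single Pod
# Revert to the default distribution strategy
kubectl patch svc service-clusterip -n dev -p '{"spec":{"sessionAffinity":"None"}}'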

 5. Types of Service

5.1. Headless type Service

        In some scenarios, developers may not want to use the load balancing provided by the Service and instead want to control the load balancing strategy themselves. For this situation Kubernetes provides the Headless Service. This type of Service is not allocated a Cluster IP; to reach it, you can only query the Service's domain name.

Create service-headliness.yaml

apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None # setting clusterIP to None creates a headless Service
  type: ClusterIP
  ports:
  - port: 80    
    targetPort: 80

 Create the corresponding Service and query its information:
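For example, using the manifest above:

# Create the headless Service and confirm that no ClusterIP is assigned
kubectl create -f service-headliness.yaml
kubectl get svc service-headliness -n dev -o wide
kubectl describe svc service-headliness -n dev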

Specify the DNS server address and resolve the domain name to obtain the IP:
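As a sketch: read the cluster DNS address from a Pod's /etc/resolv.conf and then resolve the Service name with dig. The Pod name and the DNS server IP 10.96.0.10 below are assumptions; adjust them to your cluster:

# Check which DNS server the Pods use (the Pod name is illustrative)
kubectl exec <pod-name> -n dev -- cat /etc/resolv.conf
# Resolve the headless Service name; it should return the individual Pod IPs
dig @10.96.0.10 service-headliness.dev.svc.cluster.local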

5.2. NodePort type Service

In the previous example, the IP of the created Service can only be accessed from within the cluster. If you want to expose the Service outside the cluster, you must use another type of Service, called NodePort. NodePort works by mapping the Service's port to a port on the Node; the service can then be accessed through NodeIP:NodePort.

Create service-nodeport.yaml 

apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort # Service type
  ports:
  - port: 80
    nodePort: 30002 # the Node port to bind (default range: 30000-32767); if not specified, one is allocated automatically
    targetPort: 80

Create a Service and view the details of the Service: 
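For example, assuming the manifest above and a node IP that is reachable from outside the cluster:

# Create the Service and check the assigned node port
kubectl create -f service-nodeport.yaml
kubectl get svc service-nodeport -n dev -o wide
# Access the service from outside the cluster (replace <node-ip> with a real node IP)
curl <node-ip>:30002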

 5.3. LoadBalancer type Service

        LoadBalancer is very similar to NodePort; both aim to expose a port to the outside. The difference is that LoadBalancer places a load balancing device outside the cluster, which requires support from the external environment. Requests sent by external clients to this device are load-balanced by it and then forwarded into the cluster.
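A minimal sketch of such a manifest, assuming a cloud environment that can provision the external load balancer; the resource name below is illustrative and not part of the earlier examples:

apiVersion: v1
kind: Service
metadata:
  name: service-loadbalancer # illustrative name
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer # the cloud provider allocates an external IP for this Service
  ports:
  - port: 80
    targetPort: 80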

 5.4. ExternalName type Service

        A Service of type ExternalName is used to bring a service from outside the cluster into the cluster. It specifies the address of an external service through the externalName attribute; accessing this Service from within the cluster then reaches the external service. In essence, it uses a Service to proxy an external service.

Create a yaml file that proxies the service of an external application: 

apiVersion: v1
kind: Service
metadata:
  name: service-externalname
  namespace: dev
spec:
  type: ExternalName # Service type
  externalName: www.baidu.com  # an IP address can also be used here

 Create the Service and verify it with domain name resolution:
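For example, assuming the manifest above is saved as service-externalname.yaml and the cluster DNS server is at 10.96.0.10:

kubectl create -f service-externalname.yaml
# Resolving the Service name should return a CNAME record pointing at www.baidu.com
dig @10.96.0.10 service-externalname.dev.svc.cluster.local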

6. Experimental application

6.1. Use a Deployment to create 3 Pods and set the Pod label app=nginx-pod

apiVersion: apps/v1
kind: Deployment      
metadata:
  name: pc-deployment
  namespace: dev
spec: 
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80

 6.2. Check the Pod details and modify the nginx page in each of the three Pods to show its own IP address:
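As a sketch (the Pod name and IP are placeholders; repeat the second command for each of the three Pods with its own IP):

# List the Pods together with their IPs
kubectl get pods -n dev -o wide
# Write each Pod's IP into its nginx index page so responses can be told apart
kubectl exec -it <pod-name> -n dev -- sh -c "echo <pod-ip> > /usr/share/nginx/html/index.html"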

 6.3. Create a ClusterIP type Service: the service-clusterip.yaml file

apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97 # the Service's IP address; if omitted, one is generated automatically
  type: ClusterIP
  ports:
  - port: 80  # Service port
    targetPort: 80 # Pod port

Use the yaml file to create the Service:
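For example:

# Create the Service from the manifest and confirm the fixed ClusterIP
kubectl create -f service-clusterip.yaml
kubectl get svc service-clusterip -n dev -o wide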

Use curl to access the service IP: 
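For example, from any node in the cluster:

# Requests to the Service IP are forwarded to one of the three nginx Pods
curl 10.97.97.97:80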
