K8S Service Introduction

1. Service definition

A Kubernetes Service defines an abstraction: a logical set of Pods together with a policy for accessing them, a pattern sometimes called a microservice. The set of Pods targeted by a Service is usually determined by a Label Selector.
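
For illustration, a minimal Service of this kind might look as follows (a sketch; the name my-service and the label app: my-app are placeholder values):

apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  selector:
    app: my-app             # Pods carrying this label are grouped by the Service
  ports:
    - protocol: TCP
      port: 80              # port exposed on the Service's virtual IP
      targetPort: 8080      # port the selected Pods listen on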

A Service can provide load balancing, but with the following limitation:

  • It only offers Layer 4 load balancing and has no Layer 7 capabilities. Sometimes we need richer matching rules to forward requests, and Layer 4 load balancing cannot support that.

1.1 Types of Service

A Service in Kubernetes can be one of the following four types:

  • ClusterIP: the default type; automatically assigns a virtual IP that is reachable only from within the cluster
  • NodePort: on top of ClusterIP, binds a port for the Service on every node, so the Service can be reached via NodeIP:NodePort
  • LoadBalancer: on top of NodePort, creates an external load balancer with the help of the cloud provider and forwards requests to NodeIP:NodePort
  • ExternalName: brings a service that lives outside the cluster into the cluster so it can be used directly from within it. No proxy of any kind is created. This requires kube-dns from Kubernetes 1.7 or later


1.1.1 ClusterIP

ClusterIP mainly relies on iptables (or ipvs) on each node to forward data sent to the ClusterIP's port to kube-proxy. kube-proxy then applies its own internal load-balancing method: it looks up the address and port of a Pod backing this Service and forwards the data to that Pod.

To implement this, the following components mainly need to work together:

  • apiserver: the user sends a command to create a Service (svc) to the apiserver via kubectl; after receiving the request, the apiserver stores the data in etcd
  • kube-proxy: every Kubernetes node runs a process called kube-proxy, which is responsible for detecting changes to Services and Pods and writing the changed information into the local iptables rules
  • iptables: uses NAT and related techniques to forward VIP traffic to the endpoints

1.1.2 Headless Service

Sometimes you neither need nor want load balancing and a single Service IP. In that case, you can create a headless Service by setting the ClusterIP (spec.clusterIP) to "None". Such a Service is not allocated a ClusterIP, kube-proxy does not handle it, and the platform performs no load balancing or routing for it.
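
As a sketch, a headless Service could be declared like this (the name and selector are assumed for illustration):

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # hypothetical name
spec:
  clusterIP: None             # "None" makes the Service headless
  selector:
    app: my-app               # assumed Pod label
  ports:
    - port: 80
      targetPort: 8080

For a headless Service that has a selector, the cluster DNS returns the IPs of the backing Pods directly instead of a single Service IP.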

1.1.3 NodePort

NodePort works by opening a port on each node and directing traffic that arrives at that port to kube-proxy, which then forwards it to the corresponding Pod.
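
A NodePort Service might be declared as follows (a sketch with assumed names; the nodePort value must fall within the cluster's configured range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service   # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app               # assumed Pod label
  ports:
    - port: 80                # port on the ClusterIP
      targetPort: 8080        # port the Pods listen on
      nodePort: 30080         # port opened on every node

If nodePort is omitted, Kubernetes picks a free port from the range automatically.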

1.1.4 LoadBalancer

LoadBalancer works the same way as NodePort; the difference is that LoadBalancer goes one step further and asks the cloud provider to create a load balancer (LB) that directs traffic to NodeIP:NodePort.
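
A minimal LoadBalancer Service might look like this (a sketch; the name and selector are assumed, and a working cloud provider integration is required for the external load balancer to actually be provisioned):

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service         # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app               # assumed Pod label
  ports:
    - port: 80
      targetPort: 8080

Once the cloud provider has created the load balancer, its address is published in the Service's status.loadBalancer.ingress field.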

1.1.5 ExternalName

This type of Service maps the Service to the contents of the externalName field by returning a CNAME (alias) record with that value. An ExternalName Service is a special case of Service: it has no selector and defines no ports or endpoints. Instead, for a service running outside the cluster, it provides access from inside the cluster by returning an alias for the external service.

For example:

kind: Service
apiVersion: v1
metadata:
  name: my-service-1
  namespace: default
spec:
  type: ExternalName
  externalName: www.baidu.com

When the host my-service-1.default.svc.cluster.local (SVC_NAME.NAMESPACE.svc.cluster.local) is queried, the cluster's DNS service returns a CNAME record with the value www.baidu.com. Accessing this Service works the same way as accessing any other Service; the only difference is that the redirection happens at the DNS layer, and no proxying or forwarding takes place.

2. Proxy modes

2.1 VIP (virtual IP) and Service proxy

In a Kubernetes cluster, every node runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP for Services of every type other than ExternalName. In Kubernetes v1.0, the proxy ran entirely in userspace. Kubernetes v1.1 added the iptables proxy, though it was not the default operating mode. Starting with Kubernetes v1.2, the iptables proxy became the default. Kubernetes v1.8.0-beta.0 added the ipvs proxy.

The ipvs proxy graduated to general availability in Kubernetes v1.11; iptables remains the default mode unless ipvs is explicitly enabled.

In Kubernetes v1.0, Service was a Layer 4 concept (TCP/UDP over IP). Kubernetes v1.1 added the Ingress API (beta) to represent Layer 7 (HTTP) services.

Why round-robin DNS is not used for load balancing:
Because of DNS caching, repeated accesses resolve to the same cached address, so the desired load-balancing effect cannot be achieved.

2.2 Proxy mode classification

2.2.1 Userspace proxy mode

In userspace mode, iptables redirects traffic destined for the Service VIP to a port that kube-proxy itself listens on, and kube-proxy then forwards it to a backend Pod. Every packet passes through a userspace process, which makes this the slowest of the three modes.

2.2.2 iptables proxy mode

In iptables mode, kube-proxy only maintains iptables rules; traffic destined for the Service VIP is redirected to a backend Pod entirely within the kernel, without passing through a userspace process.

2.2.3 ipvs proxy mode


In this mode, kube-proxy watches Kubernetes Service objects and Endpoints, calls the netlink interface to create ipvs rules accordingly, and periodically synchronizes the ipvs rules with the Service objects and Endpoints to ensure that the ipvs state matches the desired state. When the Service is accessed, traffic is redirected to one of the backend Pods.

Similar to iptables, ipvs is built on the netfilter hook function, but it uses a hash table as its underlying data structure and works in kernel space. This means that ipvs can redirect traffic faster and has better performance when synchronizing proxy rules. In addition, ipvs provides more options for load-balancing algorithms (see the configuration sketch after this list), such as:

  • rr: round robin
  • lc: least connections
  • dh: destination hashing
  • sh: source hashing
  • sed: shortest expected delay
  • nq: never queue
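
As a sketch, the scheduler can be selected through the kube-proxy configuration file (the "rr" value is just an example; any of the algorithms above can be used):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # ipvs load-balancing algorithm, e.g. rr, lc, sh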

⚠️ Note: ipvs mode assumes that the ipvs kernel modules are installed on the node before kube-proxy runs. When kube-proxy starts in ipvs proxy mode, it verifies whether the ipvs modules are installed on the node; if they are not, kube-proxy falls back to iptables proxy mode.
