Kubernetes Basics (5)-Service

1 Introduction

A Service mainly provides network access to applications. Through the definition of a Service, client applications get a stable access address (domain name or IP address) and load balancing, while changes in the backend Endpoints are shielded from them. Service is the core resource with which Kubernetes implements microservices.

This article explains in detail the related concepts and principles of Service.

2 Service introduction

2.1 Concept of Service

The following demonstrates how the services provided by a multi-replica application's container group are accessed before a Service exists.

Taking the Tomcat container as an example, its Deployment resource file is defined as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: kubeguide/tomcat-app:v1 
        ports:
        - containerPort: 8080

After creation, check the IP address of each pod:
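
For example (Pod names are illustrative; the IP addresses match those used in the rest of this section, and columns are abridged):

$ kubectl get pods -l app=webapp -o wide
NAME                      READY   STATUS    IP           NODE
webapp-5d7c9f8b6d-abcde   1/1     Running   10.0.95.22   node1
webapp-5d7c9f8b6d-fghij   1/1     Running   10.0.95.23   node2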

A client application can access the web service directly through the IP address and port 8080 of either of these two Pods, for example: curl 10.0.95.22:8080

However, a containerized application is usually distributed and serves traffic through multiple Pod replicas (scaling in and out also needs to be considered). If client systems had to dynamically track changes in the backend service instances themselves, their implementation complexity would increase greatly. To solve this problem, Kubernetes introduces the Service resource type.

2.2 Concept

Service implements several core functions of a microservice architecture: fully automatic service registration, service discovery, and service load balancing.

2.2.1 How to create a Service

2.2.1.1 Create using kubectl expose command

The command is as follows:

$ kubectl expose deployment webapp 
service/webapp exposed

Looking at the newly created Service, you can see that the system has assigned it a virtual IP address (the ClusterIP), and that the Service's port number is copied from the containerPort of the Pod:
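
A minimal sketch of what this looks like (the ClusterIP shown matches the address used below; your cluster will assign a different one):

$ kubectl get svc webapp
NAME     TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE
webapp   ClusterIP   169.169.140.242   <none>        8080/TCP   5s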

The Service is also accessible through curl 169.169.140.242:8080. Each request is automatically load-balanced to one of the two backend Pods: 10.0.95.22:8080 or 10.0.95.23:8080.

2.2.1.2 Resource file creation
In addition to using the command, you can also create the Service from a YAML resource file:
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: webapp

The key fields in the Service definition are ports and selector:

  • port: specifies the Service's own port number, 8080 here;
  • targetPort: specifies the container port number of the backend Pod;
  • selector: selects the backend Pods by their label app=webapp.

After creating it with the kubectl create command, you can see the same effect as creating the Service with kubectl expose.
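
For example (assuming the manifest above is saved as webapp-svc.yaml):

$ kubectl create -f webapp-svc.yaml
service/webapp created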

2.2.2 Endpoint

The "backend" corresponding to a Service consists of the Pod's IP and container port number, which is called Endpoint in the k8s system.

You can view the Endpoint list with the kubectl describe svc command, for example:

Kubernetes automatically creates an Endpoints resource object associated with the Service, which can be viewed by querying the Endpoints object:
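
For example (using the two Pod addresses shown above):

$ kubectl get endpoints webapp
NAME     ENDPOINTS                         AGE
webapp   10.0.95.22:8080,10.0.95.23:8080   1m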

3 Load balancing mechanism

When a Service object is defined in a Kubernetes cluster, client applications in the cluster can access the services provided by the specific Pod container through the service IP.

The load balancing from the Service IP to the backend Pods is implemented by kube-proxy on each Node. Through the Service's load balancing mechanism, Kubernetes provides a unified entry point for distributed applications, sparing client applications the complexity of tracking the list of backend service instances and their changes.

3.1 Proxy mode of kube-proxy

kube-proxy proxy mode reference: Kubernetes basics (4)-Kube-proxy_kubectl proxy-CSDN blog

3.2 Session persistence mode

Service supports client-IP-based session affinity via the sessionAffinity setting: the first request from a given client source IP is forwarded to some backend Pod, and subsequent requests from the same client IP are forwarded to the same backend Pod.

The configuration parameter is service.spec.sessionAffinity. You can also set the maximum session retention time (service.spec.sessionAffinityConfig.clientIP.timeoutSeconds). For example, the following Service sets the session retention time to 10800 s (3 h):

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  sessionAffinity: ClientIP 
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: webapp
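
A minimal way to observe the behavior, assuming the ClusterIP from the earlier example: all requests issued from the same client IP within the timeout window should be answered by the same backend Pod:

$ for i in 1 2 3; do curl http://169.169.140.242:8080/; done   # each response should come from the same Pod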

4 Multi-port settings for Service

A container application can provide multiple port services, and multiple port numbers can be set accordingly in the Service definition.

In the following example, Service sets two port numbers to provide different services, such as web service and management service (each port number is named below for easy differentiation):

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: web
  - port: 8005
    targetPort: 8005
    name: management
  selector:
    app: webapp

Another case is the same port number being used with different protocols, such as TCP and UDP; this also needs to be defined as multiple ports:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system 
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true" 
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.0.100
  ports:
  - name: dns 
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

5 Defining an external service as a Service

An ordinary Service abstracts a backend Endpoint list through its Label Selector. If the backend Endpoints are not provided by a set of Pod replicas, a Service can also abstract any other kind of backend: a known service outside the Kubernetes cluster can be defined as a Service inside the cluster, accessible to other in-cluster applications.

5.1 Scenario

Common application scenarios include:

  • a service already deployed outside the cluster, such as a database or cache service;
  • a service in another Kubernetes cluster;
  • a service being migrated, to verify that it works with the Kubernetes service-name access mechanism.

In this setup, the Service simply forwards in-cluster traffic to the external address.

5.2 Definition

For this scenario, the user creates a Service resource object without a Label Selector (no backend Pods exist either) and separately defines an Endpoints resource object of the same name associated with the Service, setting the IP address and port number of the external service in that Endpoints object, for example:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

-----------
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 1.2.3.4
  ports:
  - port: 80
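
Once both objects exist, in-cluster clients can reach the external service through the Service name. A quick check (using the placeholder address from the example above):

$ kubectl get endpoints my-service
NAME         ENDPOINTS    AGE
my-service   1.2.3.4:80   10s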

6 Service types

The ClusterIP that Kubernetes assigns to a Service is an abstraction over the backend Pod list and has no meaning outside the cluster. However, many Services need to serve clients outside the cluster, so Kubernetes provides several mechanisms to expose a Service to external clients.

This is set via the type field of the Service resource object.

There are four types of Kubernetes Services: ClusterIP, NodePort, LoadBalancer, and ExternalName. The type attribute in the Service spec determines how the Service is exposed to the network.

6.1 ClusterIP (cluster IP)

  • ClusterIP is the default and most common service type.
  • Kubernetes assigns a cluster-internal IP address to the ClusterIP service. This makes the service only accessible within the cluster.
  • Requests to services (pods) cannot be made from outside the cluster.
  • You can optionally set the cluster IP in the service definition file.

6.1.1 Usage scenarios

Inter-service communication within the cluster. For example, communication between the front-end and back-end components of an application.

Example

apiVersion: v1
kind: Service
metadata:
  name: my-backend-service
spec:
  type: ClusterIP # Optional field (default)
  clusterIP: 10.10.0.1 # within service cluster ip range
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080

6.2 NodePort (node port)

  • The NodePort service is an extension of the ClusterIP service. The ClusterIP service to which the NodePort service is routed is automatically created.
  • It exposes services outside the cluster by adding a cluster-wide port on top of ClusterIP.
  • NodePort exposes services on each node IP on a static port (NodePort). Each node proxies this port to the backend service. Therefore, external traffic can access fixed ports on each node. This means that any requests to the cluster on this port will be forwarded to this service.
  • Users can contact the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>.
  • Node ports must be in the range 30000-32767 (the default range). Assigning a port manually is optional; if none is defined, Kubernetes assigns one automatically.
  • If you select a node port explicitly, make sure the port is not already in use by another service.

6.2.1 Usage scenarios

  • When the user wants to enable external connections to the backend service.
  • Using NodePort gives users the freedom to set up their own load balancing solutions, to configure environments not fully supported by Kubernetes, and even to expose one or more node IPs directly.
  • It is better to place a load balancer in front of the nodes to guard against node failures.

Example

apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30000 # 30000-32767, Optional field
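
With this definition, the Service would be reachable from outside the cluster via any node's address on the chosen node port (the node IP below is illustrative):

$ curl http://192.168.1.10:30000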

6.3 LoadBalancer

  • The LoadBalancer service is an extension of the NodePort service. The NodePort and ClusterIP services that the external load balancer routes to are created automatically.
  • It integrates NodePort with cloud-based load balancer.
  • It exposes services externally using the cloud vendor's load balancer.
  • Each cloud vendor (AWS, Azure, GCP, Alibaba Cloud, Tencent Cloud, etc.) has its own native load balancer implementation. The cloud vendor will create a load balancer, which will then automatically route requests to your Kubernetes service.
  • Traffic from the external load balancer is directed to the backend Pods. The cloud provider decides how to perform load balancing.
  • The actual creation of the load balancer happens asynchronously.
  • Every time you want to expose a service to the outside world, you must create a new LoadBalancer and obtain an IP address.

6.3.1 Usage scenarios

When users use cloud vendors to host Kubernetes clusters.

Example

apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
spec:
  type: LoadBalancer
  clusterIP: 10.0.171.123
  loadBalancerIP: 123.123.123.123
  selector:
    app: web
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080

6.4 ExternalName

  • Services of type ExternalName map a Service to a DNS name instead of a typical selector such as my-service.
  • Users can specify these services using the `spec.externalName` parameter.
  • It maps the Service to the contents of the externalName field (for example, foo.bar.example.com) by returning a CNAME record with that value.
  • No proxy of any kind is established.

6.4.1 Usage scenarios

  • This is typically used to create services within Kubernetes to represent external data stores, such as databases running outside of Kubernetes.
  • Users can use the ExternalName service (as a local service) when a Pod from one namespace communicates with a service in another namespace.

Example

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: my.database.example.com
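
From inside the cluster, resolving the Service name then yields a CNAME pointing at the external host; a sketch (default namespace assumed, output abridged):

$ nslookup my-service.default.svc.cluster.local
my-service.default.svc.cluster.local    canonical name = my.database.example.com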

7 Exposing a Service outside the cluster

As noted above, the ClusterIP has no meaning outside the cluster; for Services that must serve external clients, Kubernetes provides the exposure mechanisms described in the previous section.

External access method reference: Kubernetes Basics (3)-Service External Network Access Method-CSDN Blog

8 Network protocols supported by Service

The network protocols currently supported by Service are as follows.

  • TCP: the default protocol for a Service; usable with all Service types.
  • UDP: usable with most Service types; for the LoadBalancer type it depends on the cloud provider's UDP support.
  • HTTP: depends on whether the cloud provider supports HTTP and on the implementation mechanism.
  • PROXY: depends on whether the cloud provider supports the PROXY protocol and on the implementation mechanism.
  • SCTP: introduced in Kubernetes 1.12 and in Beta since 1.19, where it is enabled by default. To turn it off, set the kube-apiserver startup parameter --feature-gates=SCTPSupport=false.

Starting with Kubernetes 1.17, a new field, appProtocol, can be set on Service and Endpoints resource objects to identify the application-layer protocol served by the backend on a given port, such as HTTP, HTTPS, SSL, or DNS.

To use appProtocol, enable it with the kube-apiserver startup parameter --feature-gates=ServiceAppProtocol=true, then set the appProtocol field in the Service or Endpoints definition to specify the application-layer protocol type, for example:

apiVersion: v1
kind: Service
metadata: 
  name: webapp 
spec:
  ports:
  - port: 8080
    targetPort: 8080
    appProtocol: HTTP
  selector:
    app: webapp

9 Kubernetes service discovery mechanism

The service discovery mechanism refers to how the client application learns the access address of the back-end service in a Kubernetes cluster. There are two ways.

9.1 Environment variable method

When a Pod is running, the system will automatically inject the information of all valid services in the cluster into its container running environment.

The Service-related information includes the Service IP, the Service port number, and the protocol of each port, and is injected through environment variables following the {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT naming format.

The naming rule for SVCNAME is: convert the Service name to all uppercase and replace hyphens "-" with underscores "_". Taking the webapp Service as an example:

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec: 
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: webapp

In a newly created Pod (client application), you can see that the environment variables automatically set by the system are as follows:

WEBAPP_SERVICE_HOST=169.169.81.175
WEBAPP_SERVICE_PORT=8080
WEBAPP_PORT=tcp://169.169.81.175:8080
WEBAPP_PORT_8080_TCP=tcp://169.169.81.175:8080
WEBAPP_PORT_8080_TCP_PROTO=tcp
WEBAPP_PORT_8080_TCP_PORT=8080
WEBAPP_PORT_8080_TCP_ADDR=169.169.81.175

Then the client application can obtain the address of the target service that needs to be accessed from the environment variable according to the naming rules of Service-related environment variables, for example:

curl http://${WEBAPP_SERVICE_HOST}:${WEBAPP_SERVICE_PORT}

9.2 DNS method

Services follow the DNS naming convention of the Kubernetes system. The DNS domain name of a Service takes the form <servicename>.<namespace>.svc.<clusterdomain>, where:

  • servicename: the name of the service;
  • namespace: the name of the namespace in which it is located;
  • clusterdomain: The domain name suffix set for the Kubernetes cluster (for example, cluster.local). The naming rules of the service name follow the requirements of the RFC 1123 specification.

In addition, if a port in the Service definition is given a name, that port also gets a DNS domain name, saved in the DNS server as an SRV record of the form _<portname>._<protocol>.<servicename>.<namespace>.svc.<clusterdomain>, whose value includes the port number.

For a Service to be accessed by DNS name, a DNS server must exist in the Kubernetes cluster to resolve the domain name to the ClusterIP. After years of development, CoreDNS is now the default DNS server of Kubernetes clusters, providing domain name resolution.
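
For example, resolving the webapp Service in the default namespace from a Pod (the address returned is the Service's ClusterIP and will vary by cluster):

$ nslookup webapp.default.svc.cluster.local
Name:    webapp.default.svc.cluster.local
Address: 169.169.81.175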

Taking the webapp Service as an example, name its port "http":

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: http
  selector:
    app: webapp

Querying the SRV record _http._tcp.webapp.default.svc.cluster.local for the "http" port returns the port number 8080.
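
A sketch of such an SRV lookup from a Pod, assuming CoreDNS and the cluster domain cluster.local (priority and weight values will vary):

$ dig +short SRV _http._tcp.webapp.default.svc.cluster.local
0 100 8080 webapp.default.svc.cluster.local.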

10 Concepts and applications of Headless Services

In some scenarios, the client application does not need the load balancing provided by the built-in Service, or needs to perform service discovery for each backend instance itself, or wants to implement load balancing on its own. In these cases, a special Service called a "Headless Service" can be created. For an introduction to Headless Services, see: Kubernetes Basics (2)-Headless Service_alden_ygq's blog-CSDN blog

11 Endpoint Sharding and Service Topology

The backend of a Service is a list of Endpoints, which is very convenient for client applications. However, as the cluster grows and the number of Services, and especially of backend Endpoints, increases, the number of load distribution rules (such as iptables or ipvs rules) that kube-proxy must maintain grows sharply, so the cost of update operations such as adding and deleting backend Endpoints also rises sharply.

Suppose a Kubernetes cluster has 10,000 Endpoints spread over roughly 5,000 Nodes; an update to a single Pod then requires about 5 GB of data transmission in total, which not only wastes a great deal of network bandwidth in the cluster but also puts heavy load on the Master, affecting the overall performance of the cluster. This is especially true while rolling upgrades are in progress. Kubernetes therefore designed the EndpointSlice mechanism to solve this problem.

EndpointSlice reduces the amount of data transferred between the Master and the Nodes, and improves overall performance, by managing Endpoints in slices. For a rolling upgrade of a Deployment, only the Endpoint information relevant to some Nodes needs to be updated, reducing the data transferred between Master and Nodes by up to roughly 100 times and greatly improving management efficiency.

The second goal of EndpointSlice is to support service routing based on Node topology, which is implemented together with the Service Topology mechanism.

11.1 Endpoint Sharding

Before Kubernetes 1.19, the EndpointSlice mechanism had to be enabled by setting the startup parameter --feature-gates="EndpointSlice=true" on the kube-apiserver and kube-proxy services; starting with version 1.19, EndpointSlice and EndpointSliceProxying are enabled by default. kube-proxy still uses Endpoints objects by default; to improve performance, you can set the kube-proxy startup parameter --feature-gates="EndpointSliceProxying=true" so that kube-proxy uses EndpointSlices, which reduces network traffic between kube-proxy and the Master.

Take a webapp Service with three Pod replicas as an example.

The Service and its Endpoints can be inspected with kubectl get svc webapp and kubectl get endpoints webapp.

Looking at the EndpointSlices, you can see that the system automatically created an EndpointSlice whose name is prefixed with "webapp-":

Viewing its details, you can see the IP addresses and port information of the three Endpoints, along with topology information set for each Endpoint:
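
The inspection would look roughly like this (the EndpointSlice name suffix and the third Pod address are illustrative):

$ kubectl get endpointslices
NAME           ADDRESSTYPE   PORTS   ENDPOINTS                          AGE
webapp-7l2vb   IPv4          8080    10.0.95.22,10.0.95.23,10.0.95.24   2m

$ kubectl describe endpointslice webapp-7l2vb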

11.1.1 Parameters

By default, an EndpointSlice created by the EndpointSlice controller contains at most 100 Endpoints. To change this, set the kube-controller-manager startup parameter --max-endpoints-per-slice; the upper limit cannot exceed 1,000.

The key information of EndpointSlice is as follows:

  • Associated Service name: the association between an EndpointSlice and its Service is set as a label, kubernetes.io/service-name=webapp, which indicates the Service name.
  • Address type (addressType): one of three values:
    • IPv4: IP address in IPv4 format;
    • IPv6: IP address in IPv6 format;
    • FQDN: fully qualified domain name.
  • Information about each Endpoint in the Endpoints list:
    • Addresses: the Endpoint's IP address;
    • Conditions: Endpoint status information, used as query conditions on the EndpointSlice;
    • Hostname: the hostname set on the Endpoint;
    • TargetRef: the name of the Pod corresponding to the Endpoint;
    • Topology: topology information, providing data for topology-aware service routing.

The current topology information automatically set by the EndpointSlice controller is as follows:

  • kubernetes.io/hostname: the name of the Node where the Endpoint is located;
  • topology.kubernetes.io/zone: the zone of the Endpoint, using the value of the Node label topology.kubernetes.io/zone. For example, the Node in the example above has the label topology.kubernetes.io/zone: north;
  • topology.kubernetes.io/region: the region of the Endpoint, using the value of the Node label topology.kubernetes.io/region.

In large clusters, administrators should set these topology labels on Nodes in different zones and regions so that Nodes carry topology information.
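
A sketch of how an administrator might set these labels (node name and region value are illustrative; the zone value matches the "north" example above):

$ kubectl label node node1 topology.kubernetes.io/zone=north topology.kubernetes.io/region=cn-north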

  • EndpointSlice management controller: recorded in the endpointslice.kubernetes.io/managed-by label, which is intended for scenarios with multiple management controllers. For example, a Service Mesh tool may also manage EndpointSlices. So that multiple tools can manage EndpointSlices at the same time without interfering with each other, the name of the managing controller is set in the endpointslice.kubernetes.io/managed-by label. Kubernetes' built-in EndpointSlice controller sets this label to endpointslice-controller.k8s.io; other management controllers should set their own unique names for identification.

11.1.2 Mirroring

EndpointSlice mirroring: applications sometimes create custom Endpoints resources. So that such applications do not also have to create EndpointSlice resources, the Kubernetes control plane automatically mirrors each Endpoints resource into an EndpointSlice resource.

Automatic mirroring is not performed in the following cases:

  • the Endpoints resource has the label endpointslice.kubernetes.io/skip-mirror=true;
  • the Endpoints resource has the annotation control-plane.alpha.kubernetes.io/leader;
  • the Service corresponding to the Endpoints resource does not exist;
  • the Service corresponding to the Endpoints resource has a non-empty selector.

When an Endpoints resource contains both IPv4 and IPv6 addresses, it is mirrored into multiple EndpointSlice resources, with at most 1,000 EndpointSlices per address type.

11.1.3 Data distribution management mechanism

As the example above shows, each EndpointSlice resource contains a set of ports (Ports) that applies to all Endpoints in it. If the Service definition uses named ports, the target Pods may use different targetPort numbers for the same port name, with the result that the EndpointSlice resources will differ. This is the same logic by which Endpoints resources organize their subsets.

The Kubernetes control plane fills each EndpointSlice as full as possible, but does not actively rebalance when data is unevenly distributed across EndpointSlices. The underlying logic is simple:

  1. Iterate over all current EndpointSlice resources, removing Endpoints that are no longer needed and updating matching Endpoints that have changed;
  2. Iterate over the EndpointSlice resources updated in step 1 and fill them with any new Endpoints that need to be added;
  3. If new Endpoints still remain to be added, try to place them into EndpointSlices that were not updated before, or create new EndpointSlices for them.

Importantly, step 3 prefers creating a new EndpointSlice over updating existing ones. For example, if 10 new Endpoints need to be added and two existing EndpointSlices each have room for 5 more, the system still creates a new EndpointSlice to hold the 10 new Endpoints. In other words, creating a single new EndpointSlice is preferred over updating multiple existing ones.

This is mainly because kube-proxy, running on every Node, continuously watches EndpointSlices for changes, and each EndpointSlice update is costly: every update requires the Master to send the updated data to every kube-proxy.

The management mechanism above aims to limit the amount of update data that must be sent to each Node, even if it can leave many EndpointSlice resources less than full. In practice, such suboptimal data distribution should be rare.

Most updates handled by the Master's EndpointSlice controller carry little enough data that filling existing EndpointSlices (which still have free space) is not a problem; when filling is truly impossible, new EndpointSlice resources have to be created anyway.

In addition, when a rolling upgrade is performed on a Deployment, the entire backend Pod list and the related Endpoint list change, so the contents of the EndpointSlice resources are naturally rewritten in full.

11.2 Service Topology

By default, traffic sent to a Service is forwarded evenly to every backend Endpoint, and no routing policy can be defined based on richer topology information. The service topology mechanism was introduced to implement service routing based on Node topology, allowing the creator of a Service to define traffic routing policies based on the labels of the source Node and the destination Node.

By matching labels of the source and destination Nodes, users can group Nodes according to business needs and assign label values that meaningfully mark Nodes as "closer" or "farther":

For example, public cloud environments are usually divided into zones or regions, and cloud platforms tend to keep service traffic within one zone, usually because cross-zone network traffic incurs extra charges. Another example is routing traffic to the Pod managed by a DaemonSet on the current Node. Yet another is keeping traffic within Nodes in the same rack to obtain lower network latency.

11.2.1 Configuration

The service topology mechanism is enabled by setting the startup parameter --feature-gates="ServiceTopology=true,EndpointSlice=true" on the kube-apiserver and kube-proxy services (the EndpointSlice feature must be enabled at the same time). Traffic routing for a Service can then be controlled by defining the topologyKeys field on the Service resource object.

For a cluster that uses the service topology mechanism, the administrator must set the corresponding topology labels on the Nodes: kubernetes.io/hostname, topology.kubernetes.io/zone, and topology.kubernetes.io/region.

Setting topologyKeys on the Service then enables routing policies such as:

  • ["kubernetes.io/hostname"]: traffic is routed only to Endpoints on the same Node; if the Node has no Endpoint, the request is dropped.
  • ["kubernetes.io/hostname", "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]: traffic is preferentially routed to Endpoints on the same Node; if the Node has no Endpoint, traffic is routed to Endpoints in the same zone; if there is none in the zone, traffic is routed to Endpoints in the same region.
  • ["topology.kubernetes.io/zone", "*"]: traffic is preferentially routed to Endpoints in the same zone; if the zone has no available Endpoint, traffic is routed to any available Endpoint.

There are currently several constraints on using service topology:

  • Service topology and externalTrafficPolicy=Local are incompatible, so a single Service cannot use both features at the same time; however, Services that enable service topology and Services that set externalTrafficPolicy=Local can coexist in the same Kubernetes cluster.
  • topologyKeys currently supports only three labels: kubernetes.io/hostname, topology.kubernetes.io/zone, and topology.kubernetes.io/region. More labels may be added in the future.
  • topologyKeys must be valid label keys, and at most 16 may be defined.
  • If the wildcard "*" is used, it must be the last value.

11.2.2 Examples

The following Service YAML examples illustrate several common service topology routing policies.

1) Only route traffic to the Endpoint of the same Node. If the Node does not have an available Endpoint, the request will be discarded:

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 8080
  topologyKeys:
  - "kubernetes.io/hostname"

2) Route traffic to the Endpoint of the same Node first. If the Node does not have an available Endpoint, the request will be routed to any available Endpoint:

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 8080
  topologyKeys:
  - "kubernetes.io/hostname"
  - "*"

3) Only route traffic to Endpoints in the same zone or region. If there is no available Endpoint, the request will be discarded:

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 8080
  topologyKeys:
  - "topology.kubernetes.io/zone" 
  - "topology.kubernetes.io/region"

4) Route traffic according to the priority order of the same Node, the same zone, and the same region. If there is no available Endpoint in the Node, zone, or region, the request will be routed to any available Endpoint in the cluster:

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 8080
  topologyKeys:
  - "kubernetes.io/hostname"
  - "topology.kubernetes.io/zone"
  - "topology.kubernetes.io/region"
  - "*"

