[Cloud native | Learning Kubernetes from scratch] 17. Kubernetes core technology Service

This article is part of the column "Learn k8s from scratch".
Previous article: Container probes and restart policy of k8s Pods.


A quick overview of Service

Earlier we learned that a Deployment only guarantees the number of Pods backing a microservice; it does not solve the problem of how to access those Pods. A Pod is just one running instance of a service: it may be stopped on one node at any moment and replaced by a new Pod with a new IP on another node, so Pods by themselves cannot offer a service on a stable IP and port.

Providing a service stably requires service discovery and load balancing. Service discovery finds the right back-end instances for the service a client wants to reach. In a Kubernetes cluster, the object the client accesses is the Service. Each Service corresponds to a virtual IP that is valid inside the cluster, and the service is accessed within the cluster through that virtual IP.

In a Kubernetes cluster, load balancing for microservices is implemented by kube-proxy. kube-proxy is the cluster's internal load balancer: it is a distributed proxy, with one instance running on every node. This design scales naturally, because the more nodes there are that need to reach a service, the more kube-proxy instances there are providing load balancing, and availability rises with the number of nodes. By contrast, when we use a reverse proxy for server-side load balancing, we still have to solve the high-availability problem of the reverse proxy itself.
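
To see this layout on a real cluster, you can list the kube-proxy instances. The commands below are a sketch that assumes a kubeadm-style cluster, where kube-proxy runs as a DaemonSet in the kube-system namespace with the label k8s-app=kube-proxy:

kubectl get daemonset kube-proxy -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide

The second command should show one kube-proxy Pod per node.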

Why Service exists

Preventing loss of contact with Pods [Service Discovery]

Every time a Pod is created it gets an IP address, and that IP is short-lived: it changes whenever the Pod is recreated. Suppose the front end has several Pods and the back end has several Pods as well. When they access each other, they need to obtain the Pods' IP addresses from a registry and then access the corresponding Pod; the Service plays the role of that registry.
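
As a concrete sketch of name-based discovery (assuming a Service named web exists in the default namespace and the cluster DNS add-on is running), a Pod can look up the Service by name instead of remembering any Pod IP:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup web.default.svc.cluster.local

The lookup returns the Service's virtual IP, which stays stable even as the Pods behind it come and go.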

Defining a Pod access policy [Load Balancing]

The front-end Pods access the back-end Pods through the Service layer, and at this layer the Service can also perform load balancing. There are many load-balancing strategies, for example (a kube-proxy configuration sketch follows the list):

  • random
  • round robin
  • response ratio (weighting by response time)
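
Which strategy is actually used depends on kube-proxy's proxy mode. In ipvs mode the scheduling algorithm is configurable; the fragment below is a minimal sketch of the relevant KubeProxyConfiguration fields (in a kubeadm cluster they live in the kube-proxy ConfigMap in kube-system), assuming the ipvs kernel modules are available on the nodes:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # rr = round robin; other schedulers such as lc (least connection) also exist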

The relationship between Pod and Service

A Pod and a Service are associated through labels and a selector [the same mechanism a Controller uses].

When we access a Service, we still need an IP address. This IP is definitely not a Pod's IP address, but a virtual IP (VIP).
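
The fragment below is a minimal sketch of that association, assuming an nginx Deployment named web: the Pod template carries the label app: web, and the Service selects Pods with exactly that label.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web          # Pods created by this Deployment carry this label
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web              # the Service picks up every Pod carrying this label
  ports:
  - port: 80
    targetPort: 80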

Common types of Service

There are three common types of Service

  • ClusterIP: access from within the cluster only (the default)
  • NodePort: exposes the application for access from outside the cluster through a port opened on each node
  • LoadBalancer: exposes the application for external access through a load balancer, typically on a public cloud
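
For reference, the type can also be chosen directly on the command line when exposing a Deployment; a quick sketch, assuming a Deployment named web already exists:

kubectl expose deployment web --port=80 --target-port=80 --type=NodePort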

Example

We can generate a manifest containing the Service configuration without actually creating the Service:

kubectl expose deployment web --port=80 --target-port=80 --dry-run=client -o yaml > service.yaml

(On kubectl versions older than 1.18 the flag is written as a bare --dry-run.)

service.yaml looks like this

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
status:
  loadBalancer: {}

If we don't set a type, the first kind, ClusterIP, is used by default, meaning the Service can only be accessed from inside the cluster. We can add a type field to choose a different Service type.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  type: NodePort
status:
  loadBalancer: {}

After modifying the manifest, we create the Service with

kubectl apply -f service.yaml

You can then see that the Service has been changed to the NodePort type. The last remaining type is LoadBalancer: exposing the application externally, typically on a public cloud.
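
To confirm, list the Service. The output below is only illustrative: by default Kubernetes picks the node port from the 30000-32767 range, so your value will differ.

kubectl get svc web
NAME   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
web    NodePort   10.96.118.25   <none>        80:31234/TCP   8s

The application is then reachable from outside the cluster at http://<any-node-ip>:31234.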

Nodes are generally deployed on an internal network and cannot be reached from the public internet, so how can the application be accessed from outside?

  • Find a machine that is reachable from the public network, install nginx on it, and use it as a reverse proxy
  • Manually add the reachable nodes to the nginx configuration

If we use a LoadBalancer Service instead, a load-balancer controller plays a role similar to nginx, and we do not have to maintain the nginx configuration ourselves.
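
A minimal sketch of such a Service, assuming the same web Deployment: on a public cloud the cloud controller provisions an external load balancer and fills in EXTERNAL-IP, while on bare metal it stays <pending> unless an add-on such as MetalLB is installed.

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80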

Layer 4 Load Balancing with Service: Concepts and Principles

Why do we need a Service?

In Kubernetes, Pods have a life cycle: if a Pod restarts, its IP is very likely to change. If our services hard-code Pod IP addresses, then when a Pod dies or restarts, the other services that depend on it can no longer find it. To solve this problem, Kubernetes defines the Service resource object. A Service defines an access entry point; through this entry point a client can reach the group of application Pod instances behind the Service. A Service is a logical collection of Pods, and the grouping is usually implemented with a label selector.
1. Pod IPs change frequently. The Service acts as a proxy for the Pods: clients only need to access the Service, and the request is proxied to a Pod.

2. Pod IPs cannot be reached from outside the Kubernetes cluster, so a Service is needed to make the application accessible from outside the cluster.

Service overview

A Service is a fixed access layer: a client reaches the back-end Pods associated with a Service by accessing the Service's IP and port. This relies on add-ons deployed on the Kubernetes cluster. The first is the cluster DNS service (the default differs by version: releases before Kubernetes 1.11 used kube-dns, newer releases use CoreDNS); name resolution for Services depends on this DNS add-on, so it has to be deployed after the cluster itself. Kubernetes also relies on a third-party network plug-in (flannel, calico, and so on) to provide networking functions such as assigning IPs to Pods.

Every Kubernetes node runs a component called kube-proxy. kube-proxy constantly watches the apiserver on the master for changes to Service resources, using the watch mechanism built into Kubernetes, so it learns about any Service-related change as it happens. When a Service resource changes (for example it is created or deleted), the change is persisted in etcd, and kube-proxy turns it into rules on each node that schedule our requests onto the specific back-end Pods. These rules may be iptables or ipvs rules, depending on how Service proxying is implemented (this is configurable). For example, when a new Service is created it gets an IP from the service network segment, which is configured when the cluster is created (a 10.x range by default).
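
A few commands can make these pieces visible on a live cluster. This is a sketch that assumes a kubeadm-built cluster (static Pod manifests under /etc/kubernetes/manifests) and root access on a node; the exact values will differ per cluster.

# Service virtual-IP range handed to the apiserver when the cluster was created
grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml

# In iptables mode, kube-proxy programs Service rules under the KUBE-SERVICES chain of the nat table
iptables -t nat -L KUBE-SERVICES -n | head

# In ipvs mode, the virtual servers and their back-end Pods can be listed with ipvsadm instead
ipvsadm -Ln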

How Service Works

When Kubernetes creates a Service, it looks up Pods according to the label selector and creates an Endpoints object with the same name as the Service. When Pod addresses change, the Endpoints object changes with them. When the Service receives a request from a front-end client, it uses the Endpoints to find which Pod address the request should be forwarded to. (Which Pod the request ends up on is decided by kube-proxy's load balancing.)

[root@k8smaster node]# kubectl get endpoints
NAME         ENDPOINTS             AGE
kubernetes   192.168.11.139:6443   15d
[root@k8smaster node]# kubectl get pods -n kube-system -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE     
kube-apiserver-k8smaster            1/1     Running   4          15d   192.168.11.139   k8smaster 
The apiserver is bound to the host's network IP. The built-in kubernetes Service that wraps the apiserver exposes port 443; the Pod associated with this Service is the apiserver, so the endpoints list records the apiserver Pod's IP and port.
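
To watch the Endpoints follow the Pods, scale the earlier web Deployment and check again; a sketch assuming that Deployment and its Service exist:

kubectl scale deployment web --replicas=3
kubectl get endpoints web

The ENDPOINTS column should now list three Pod IP:port pairs; scaling back down shrinks the list again.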

There are three types of IP addresses in a kubernetes cluster

1. Node Network: the network of the physical or virtual node, for example the address on the ens33 interface
[root@k8smaster node]# ip addr
ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:94:59:0f brd ff:ff:ff:ff:ff:ff
    inet 192.168.11.139/24 brd 192.168.11.255 scope global noprefixroute dynamic ens3

2. Pod network: the IP addresses that the created Pods have
[root@k8smaster node]#  kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE  
frontend-d5tr9              1/1     Running   0          23h   10.244.2.30   k8snode    <none>          
The Node network and Pod network addresses are addresses we actually configure: node addresses are configured on node interfaces, while Pod addresses are configured on Pod resources. These addresses therefore exist on some device, which may be hardware or software-emulated.
 
3. Cluster Network (cluster addresses, also called the service network): these addresses are virtual (virtual IPs); they are not configured on any interface and appear only in the Service rules.
[root@k8smaster node]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15d

Written at the end

Creating these articles is not easy. If you find the content helpful, please like, bookmark, and follow to support me! If there are any mistakes, please point them out in the comments and I will fix them promptly.
The series currently being updated: Learn k8s from scratch.
Thank you for reading. The article includes my personal understanding, so if anything is wrong, please contact me and point it out~
