Kubernetes notes (09) - Service concept, defining a Service in YAML, creating a Service, testing Service load balancing, using a Service by domain name, exposing a service externally

Deployment and DaemonSet, the two API objects covered earlier, are both for online (long-running) workloads, but they deploy applications with different strategies:

  • Deployment creates any number of instances;
  • DaemonSet creates one instance on each node.

1. Service concept

Service is the load-balancing mechanism inside the cluster; it exists to solve the key problem of service discovery.

In a Kubernetes cluster, the life cycle of a Pod is relatively "short". Although Deployment and DaemonSet keep the overall number of Pods stable, Pods are inevitably destroyed and recreated during operation, so the set of Pods is constantly changing.

This kind of "dynamic stability" is very fatal to the current popular microservice architecture. Just imagine, Podthe IPaddress of the background is always changing, how should the client access it? If you don't deal with this problem well, it's worthless to manage it well .DeploymentDaemonSetPod

In fact, this problem is not hard. The industry has long had a solution for such "unstable" backend services: load balancing, with typical implementations such as LVS and Nginx. They add a "middle layer" between the frontend and the backend to shield the frontend from backend changes and provide a stable service.

However, LVS and Nginx are, after all, not cloud-native technologies, so Kubernetes follows the same idea and defines a new API object: Service.

The working principle of Service is similar to that of LVS and Nginx: Kubernetes assigns it a static IP address, and the Service then automatically manages and maintains the dynamically changing set of Pods behind it. When a client accesses the Service, the traffic is forwarded to one of the backend Pods according to some policy.

(Figure: a Service forwarding client traffic to backend Pods; original image and source link omitted)

Service is implemented here with iptables. The kube-proxy component automatically maintains the iptables rules, so clients no longer need to care about the specific addresses of the Pods. As long as they access the fixed IP address of the Service, the request is forwarded to one of the Pods the Service manages according to the iptables rules. This is a typical load-balancing architecture.

However, Service is not only implemented with iptables; it has two other implementation techniques: userspace, which performs worse, and ipvs, which performs better. These are low-level details, though, and we don't need to pay much attention to them.
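
That said, if you are curious which mode your cluster is actually using, on a kubeadm-installed cluster it can usually be read from the kube-proxy ConfigMap. This is just a sketch; the ConfigMap name and namespace below are the kubeadm defaults, and other installers may configure kube-proxy differently:

# Sketch: inspect the kube-proxy mode on a kubeadm cluster.
# An empty value or "iptables" means iptables mode; "ipvs" means IPVS mode.
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'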

2. Use YAML to describe Service

We can still use the command kubectl api-resources to view its basic information. From the output we can see that its abbreviation is svc and its apiVersion is v1. Note that this means it is a core Kubernetes object on the same level as Pod, not tied to business applications, which makes it different from Job and Deployment.

$ kubectl api-resources | grep services
services                          svc          v1                                     true         Service

It is easy to write the YAML file header of a Service:


apiVersion: v1
kind: Service
metadata:
  name: xxx-svc

We can also use the command kubectl expose to have Kubernetes automatically create a Service YAML template for us. Perhaps the word "expose" was chosen because it better conveys the meaning of "exposing" a service address.

Because services in Kubernetes can be deployed with Pod, Deployment, and DaemonSet objects, kubectl expose supports creating a Service from any of these objects.

When using kubectl expose, you also need the parameters --port and --target-port to specify the mapped port and the container port respectively; the Service's own IP address and the backend Pod IP addresses are generated automatically. The usage is very similar to Docker's -p command-line parameter, just a little more verbose.

For example, to generate a Service for the ngx-dep object from the previous notes, the command would be written like this:


export out="--dry-run=client -o yaml"
kubectl expose deploy ngx-dep --port=80 --target-port=80 $out

The resulting Service YAML looks roughly like this:


apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
  
spec:
  selector:
    app: ngx-dep
    
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP

There are only two key fields in spec: selector and ports.

  • selector has the same function as in Deployment/DaemonSet: it filters out the Pods to be proxied. Because we specified that we want to proxy a Deployment, Kubernetes automatically fills in the label of ngx-dep for us, selecting all the Pods deployed by that Deployment object. The labeling mechanism of Kubernetes is simple but powerful and effective, and it makes it easy to associate a Service with a Deployment's Pods.
  • The three fields inside ports represent the external port, the internal port, and the protocol used. Here both the internal and external ports are 80 and the protocol is TCP (a variation with different ports is sketched after this list).
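
For example, if the Service were meant to listen on port 8080 while the Pods' containers still listen on 80, the ports section would look like the hypothetical fragment below (this variation is not part of the ngx-svc we actually create later):

# Hypothetical variation: the Service listens on 8080, the Pods listen on 80.
ports:
- port: 8080        # port exposed by the Service itself
  targetPort: 80    # port the container inside the Pod listens on
  protocol: TCP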

In order to show clearly the relationship between the Service and the Pods it references, I drew these two YAML objects in the figure below; the key things to pay attention to are selector and the relationship between targetPort and the Pod:
(Figure: relationship between the Service's selector/targetPort and the Pods; image omitted)

3. Using Service in K8s

First, we create a ConfigMap that defines an Nginx configuration fragment, which will output basic information such as the server address, host name, and request URI. The file ngx-conf.yml is as follows:


apiVersion: v1
kind: ConfigMap
metadata:
  name: ngx-conf

data:
  default.conf: |
    server {
      listen 80;
      location / {
        default_type text/plain;
        return 200
          'srv : $server_addr:$server_port\nhost: $hostname\nuri : $request_method $host $request_uri\ndate: $time_iso8601\n';
      }
    }

Create the ConfigMap object:

kubectl apply -f ngx-conf.yml

Then, in the Deployment's template.volumes, we define the storage volume and use volumeMounts to load the configuration file into the Nginx container. The file ngx-dep.yml is as follows:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep

spec:
  replicas: 2
  selector:
    matchLabels:
      app: ngx-dep

  template:
    metadata:
      labels:
        app: ngx-dep
    spec:
      volumes:
      - name: ngx-conf-vol
        configMap:
          name: ngx-conf

      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80

        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: ngx-conf-vol

Create the Deployment object:

kubectl apply -f ngx-dep.yml

Check whether the deployed ConfigMap and Deployment objects are normal.
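
A quick check, using the object names from the YAML files above:

# Both objects should be listed; the Deployment should show 2/2 ready replicas.
kubectl get cm ngx-conf
kubectl get deploy ngx-dep
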
(screenshot: kubectl get cm / kubectl get deploy output)
As before, we want to generate a Service for this ngx-dep object, so the command is written as follows:

export out="--dry-run=client -o yaml"
kubectl expose deploy ngx-dep --port=80 --target-port=80 $out

The generated Service YAML looks roughly like this; we save it as svc.yml:

apiVersion: v1
kind: Service
metadata:
  name: ngx-svc
  
spec:
  selector:
    app: ngx-dep
    
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP

With that, we can create the Service object using kubectl apply:

kubectl apply -f svc.yml

After creation, you can check it with kubectl get:
(screenshot: kubectl get svc output)

It can be seen that Kubernetes automatically assigned the Service object an IP address, "10.108.141.157", and this address segment is independent of the Pod address segment (10.10.xx.xx in the previous notes). Moreover, the Service object's IP address has another characteristic: it is a "virtual address" with no entity behind it, and it can only be used to forward traffic.

To see which backend Pods the Service proxies, you can use the kubectl describe command:

kubectl describe svc ngx-svc

(screenshot: kubectl describe svc ngx-svc output)

The screenshot shows that the Service object manages two endpoints, "10.10.1.2:80" and "10.10.1.3:80". This preliminarily matches the definitions of the Service and the Deployment, but are these two IP addresses really the actual addresses of the Nginx Pods?

Let's check with kubectl get pod, adding the parameter -o wide:

kubectl get pod -o wide

(screenshot: kubectl get pod -o wide output)
Comparing the Pod addresses with the Service information, we can verify that the Service does indeed proxy the two dynamic Pod IP addresses with a single static IP address.

4. Test the load balancing effect of Service

Because the IP addresses of the Service and the Pods are all on internal network segments of the Kubernetes cluster, we need to use kubectl exec to get inside a Pod (or ssh into a cluster node), and then use a tool such as curl to access the Service:


kubectl exec -ti ngx-dep-6796688696-v97ns -- sh

(screenshot: shell opened inside the Pod)

Inside the Pod, use curl to access the Service's IP address and you will see that it forwards the request to a backend Pod; the output shows which Pod responded, which means the Service has indeed done the load-balancing work for the Pods.
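
For example, repeating the request a few times from inside the Pod should show the host name alternating between the two Pods. A minimal sketch (the IP is the ClusterIP of ngx-svc in this cluster; substitute your own):

# Run inside the shell opened with kubectl exec above.
curl -s 10.108.141.157
curl -s 10.108.141.157
# The "host:" line in the responses should show different Pod host names.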

Let's try deleting a Pod to see whether the Service updates its backend Pod information and realizes automatic service discovery:
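
The steps are roughly as follows (the Pod name is whatever kubectl get pod shows in your cluster; the one below is from the earlier screenshot):

# Delete one of the Pods managed by the Deployment.
kubectl delete pod ngx-dep-6796688696-v97ns

# Check the Service again; the Endpoints list should now contain a new Pod IP.
kubectl describe svc ngx-svc
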
(screenshot: kubectl delete pod output)

Since the Pod is managed by the Deployment object, it is automatically rebuilt after deletion, and the controller-manager watches for Pod changes on behalf of the Service, so the Service immediately updates the IP addresses it proxies. From the screenshot you can see that the IP address "10.10.1.2" has disappeared and been replaced by a new one, "10.10.1.4", which belongs to the newly created Pod.

You can also try using ping to test the Service's IP address:
(screenshot: ping output)

You will find that ping does not work, because the Service's IP address is "virtual" and only used for forwarding traffic, so ping gets no response packet and fails.

5. Use the Service as a domain name

The IP address of a Service object is static and stable, which is really important in microservices, but an IP address in numeric form is still inconvenient to use. This is where the Kubernetes DNS plug-in comes in handy: it can create easy-to-write, easy-to-remember domain names for Services, making them even easier to use.

The shorthand for namespace is ns. You can use the command kubectl get ns to see which namespaces exist in the current cluster, that is, which groups of API objects there are:
(screenshot: kubectl get ns output)

Kubernetes has a default namespace called default; unless explicitly specified otherwise, API objects live in this namespace. Other namespaces have their own purposes; for example, kube-system contains the Pods of core components such as apiserver and etcd.
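
For example, listing the Pods in the kube-system namespace shows those core components (the exact names below assume a kubeadm-installed cluster; output varies by installer):

# kube-apiserver, etcd, kube-proxy, the DNS plug-in, etc. run here as Pods.
kubectl get pod -n kube-system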

Because DNS is a hierarchical structure, Kubernetes takes the namespace as part of the domain name in order to avoid conflicts caused by too many domain names, reducing the chance of duplicate names.

The full form of a Service object's domain name is object-name.namespace.svc.cluster.local, but in many cases the latter part can be omitted: it is enough to write object-name.namespace or even just object-name, in which case the namespace the object lives in (here, default) is used by default.

Now let's test the DNS domain names. Again, first use kubectl exec to enter the Pod, and then use curl to access the domain names ngx-svc, ngx-svc.default, and so on:
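
The commands, run from inside the Pod shell opened with kubectl exec, are roughly:

# All of these names resolve to the same ClusterIP of ngx-svc.
curl ngx-svc
curl ngx-svc.default
curl ngx-svc.default.svc.cluster.local
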
(screenshot: curl ngx-svc output)

It can be seen that we no longer need to care about the Service object's IP address; knowing its name is enough to access the backend service via DNS.

By the way, Kubernetes also assigns domain names to Pods, in the form ip-address.namespace.pod.cluster.local, but you need to change the dots in the IP address to dashes. For example, the address 10.10.1.87 corresponds to the domain name 10-10-1-87.default.pod.
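
A quick sketch, using one of the Pod IPs seen earlier (10.10.1.3; substitute an IP from your own cluster, and note this assumes the cluster DNS has Pod records enabled, which is the default in common setups):

# Inside the cluster, this name resolves to the Pod at 10.10.1.3.
curl 10-10-1-3.default.pod.cluster.local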

6. Let the Service expose the service to the outside world

Because Service is a load-balancing technology, it can not only manage services inside the Kubernetes cluster, but also take on the important task of exposing services outside the cluster.

The Service object has a key field, type, which indicates what type of load balancing the Service performs. The usage we have seen so far is load balancing for Pods inside the cluster, so the value of this field is the default, ClusterIP, and the Service's static IP address can only be accessed from within the cluster.

In addition to ClusterIP, Service supports three other types: ExternalName, LoadBalancer, and NodePort. The first two are generally provided by cloud service providers and are not available in our experimental environment, so we will focus on NodePort.

If we add the parameter --type=NodePort to kubectl expose, or add the field type: NodePort in the YAML, then in addition to load-balancing the backend Pods, the Service will also open a dedicated port on every node in the cluster and provide the service to the outside through that port, which is where the name NodePort comes from.
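
For example, regenerating the template with kubectl expose would just add one more parameter (a sketch, reusing the $out variable defined earlier):

export out="--dry-run=client -o yaml"
kubectl expose deploy ngx-dep --port=80 --target-port=80 --type=NodePort $out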

First execute the command to delete the previously created service:

kubectl delete svc ngx-svc

Let's modify the Service's YAML file to add the type field:


apiVersion: v1
...
spec:
  ...
  type: NodePort

Then create the object and check its status:
(screenshot: kubectl get svc output showing the NodePort)

You will see that TYPE has changed from ClusterIP to NodePort, and the port information in the PORT column is also different: in addition to port 80 used inside the cluster, there is an additional port, 31356, which is the dedicated mapped port Kubernetes created for the Service on the nodes.

Because this port number belongs to the nodes, it can be accessed directly from outside. Now, without logging in to a cluster node or entering a Pod, we can use the IP address of any node from outside the cluster to access the Service and the backend services it proxies.

For example, my current machine is 172.16.19.54. From this host, using curl to access the two Kubernetes cluster nodes 10.116.62.162 and 10.116.62.54, you can get the response data from the Nginx Pods:
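
The commands are roughly as follows (the node IPs and the NodePort 31356 are the ones from this cluster and will differ in yours):

curl 10.116.62.162:31356
curl 10.116.62.54:31356
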
(screenshot: curl responses from the Nginx Pods)

I drew the relationship among NodePort, Service, and Deployment in the figure below; it should help you better understand how it works:
(Figure: relationship among NodePort, Service, and Deployment. Image source: https://time.geekbang.org/column/article/536829)

However, NodePort also has some disadvantages:

  • The number of ports is limited. To avoid port conflicts, Kubernetes by default only assigns ports randomly in the range 30000-32767, which is only a bit more than 2000 ports, and none of them are standard port numbers; for systems with many business applications this is not enough.

  • It opens a port on every node and relies on kube-proxy to route traffic to the real backend Service, which adds network communication overhead for large clusters with many compute nodes, so it is not particularly economical.

  • It requires exposing the nodes' IP addresses to the outside world, which is often not feasible; for security, a reverse proxy usually has to be set up outside the cluster, which increases the complexity of the solution.

Despite these shortcomings, NodePort is still a simple and easy way for Kubernetes to provide services externally, and we can only use it until other, better ways appear.

7. Summary

  1. The life cycle of a Pod is very short; Pods are constantly created and destroyed, so a Service is needed to achieve load balancing. Kubernetes assigns the Service a fixed IP address, which shields clients from backend Pod changes.
  2. The Service object uses a selector field, just like Deployment and DaemonSet, to select the backend Pods to proxy; this is a loosely coupled relationship.
  3. Based on the DNS plug-in, we can access a Service by domain name, which is more convenient than a static IP address.
  4. A namespace is Kubernetes' way of isolating API objects and grouping them logically, and a Service's domain name includes the namespace as a qualifier.
  5. The default Service type is ClusterIP, which can only be accessed within the cluster. If it is changed to NodePort, a random port is opened on the nodes so that the outside world can also access internal services.

If the mapped port and the target port of the Service object are the same, for example both 80, you can omit --target-port when using kubectl expose and specify only --port, which is more convenient when creating a template.
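
For example, for ngx-dep the shorter form would look like this (a sketch, reusing the $out variable from before; the target port defaults to the same value as the port, so the result should match the earlier Service):

kubectl expose deploy ngx-dep --port=80 $out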

In fact, a Service does not manage Pods directly; it manages Endpoint objects that represent the Pod IP addresses. We generally don't use Endpoints directly, though, except when troubleshooting.
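
If you do want to look at them, commands like these show the Endpoints object that ngx-svc maintains (the address list should match the Pod IPs seen earlier):

kubectl get endpoints ngx-svc
kubectl describe endpoints ngx-svc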

The default NodePort range of 30000-32767 can also be changed via apiserver configuration, but doing so increases the risk of node port conflicts.
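
For example, on a kubeadm cluster the range is typically widened by adding a flag to the kube-apiserver static Pod manifest (this is an assumption about the setup; other installers configure the apiserver differently, and the range below is just an illustration):

# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml on a kubeadm control-plane node
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=20000-40000   # widen the default 30000-32767 range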

Question: After the Service proxies the Pods, we use kubectl exec to enter a Pod and then use curl to access the Service. Why do we enter a Pod when it is the Service that proxies the Pods? Logically, shouldn't we be "entering" the Service?
Answer: Because the Service's domain name and IP address only exist inside the Kubernetes cluster and cannot be reached from outside, entering a Pod is simply a convenient way to get inside the cluster. Besides, a Service is a virtual address with nothing to enter; only Pods are real entities.

Origin blog.csdn.net/wohu1104/article/details/128905506