Summary of kubedns in Kubernetes

Internal Service Discovery

We can reach the Pods behind a Service through the ClusterIP (VIP) that the Service is assigned, but one problem remains: how does an application learn the VIP of another application? Suppose we have two applications, an api application and a db application, both managed by Deployments and both exposed through Services. The api needs to connect to the db, but all it knows is the db application's name and the name of its Service; it does not know the VIP address. We learned in the earlier Service lesson that we can reach the backend Pods through the ClusterIP, so if we could somehow obtain that VIP, the problem would be solved.

apiserver

We know that a Service's backend Endpoints can be queried directly from the apiserver, so the most straightforward approach is to query the apiserver. For the occasional special application this is fine, but if every application has to query its dependencies from the apiserver at startup, it not only increases the application's complexity, it also couples our applications tightly to Kubernetes, so it is not a general solution.
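For instance, a quick way to see what such a query returns is to read a Service's Endpoints object through the apiserver (here using the nginx-service we create later in this article as an example):

$ kubectl get endpoints nginx-service -o yaml
$ kubectl get --raw /api/v1/namespaces/default/endpoints/nginx-service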

environment variable

To solve this problem, earlier versions of Kubernetes adopted an environment-variable approach. When each Pod starts, the IP and port of every existing Service are injected as environment variables, so the application inside the Pod can read these variables to obtain the addresses of the services it depends on. This approach is simple to use, but it has a big drawback: the services a Pod depends on must already exist before the Pod starts, otherwise they will not be injected into the environment variables. For example, let's first create an Nginx service: (test-nginx.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    k8s-app: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    name: nginx-service
spec:
  ports:
  - port: 5000
    targetPort: 80
  selector:
    app: nginx

Create the above service:

$ kubectl create -f test-nginx.yaml
deployment.apps "nginx-deploy" created
service "nginx-service" created
$ kubectl get pods
NAME                                      READY     STATUS    RESTARTS   AGE
...
nginx-deploy-75675f5897-47h4t             1/1       Running   0          53s
nginx-deploy-75675f5897-mmm8w             1/1       Running   0          53s
...
$ kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
...
nginx-service   ClusterIP   10.107.225.42    <none>        5000/TCP         1m
...

We can see that two Pods and a Service named nginx-service have been created successfully. The Service listens on port 5000 and forwards traffic to the Pods it proxies (here, the two Pods carrying the app: nginx label).

Now let's create an ordinary Pod and check whether its environment variables contain the service information of the nginx-service above: (test-pod.yaml)

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-service-pod
    image: busybox
    command: ["/bin/sh", "-c", "env"]

Then create this test Pod:

$ kubectl create -f test-pod.yaml
pod "test-pod" created

After the Pod is created, we check the log information:

$ kubectl logs test-pod
...
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=test-pod
HOME=/root
NGINX_SERVICE_PORT_5000_TCP_ADDR=10.107.225.42
NGINX_SERVICE_PORT_5000_TCP_PORT=5000
NGINX_SERVICE_PORT_5000_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_SERVICE_SERVICE_HOST=10.107.225.42
NGINX_SERVICE_PORT_5000_TCP=tcp://10.107.225.42:5000
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
NGINX_SERVICE_SERVICE_PORT=5000
NGINX_SERVICE_PORT=tcp://10.107.225.42:5000
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
...

We can see that many environment variables have been printed, including entries for the nginx-service we just created (HOST, PORT, PROTO, ADDR, and so on) as well as variables for other existing Services. If we need to access the nginx-service service from this Pod, we can use NGINX_SERVICE_SERVICE_HOST and NGINX_SERVICE_SERVICE_PORT directly. But as noted above, if nginx-service has not been created when the Pod starts, this information will simply not appear in its environment variables. We could of course use something like an initContainer to ensure that nginx-service exists before the main container starts, but that adds complexity to Pod startup, so it is not the optimal approach.
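As an illustration of that workaround (not the recommended approach), a minimal sketch of an initContainer that blocks until nginx-service is resolvable might look like the following; it relies on the cluster DNS discussed in the next section, and the Pod name and wait loop are assumptions for illustration: (test-pod-wait.yaml)

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-wait
spec:
  initContainers:
  # Block until the nginx-service name resolves, i.e. until the Service exists
  - name: wait-for-nginx-service
    image: busybox
    command: ["sh", "-c", "until nslookup nginx-service; do echo waiting for nginx-service; sleep 2; done"]
  containers:
  - name: test-service-pod
    image: busybox
    command: ["/bin/sh", "-c", "env"]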

kubedns

Because of the limitations of environment variables, we need a smarter solution. An ideal one is easy to imagine: use the Service's name directly, because the name does not change, and stop caring about the allocated ClusterIP, which is not fixed. It would be ideal if we could simply use the Service name and have the conversion to the corresponding ClusterIP happen automatically. This name-to-IP conversion is exactly what happens with the websites we visit every day: DNS solves it. Likewise, Kubernetes provides a DNS-based solution to the service discovery problem described above.

Introduction to kubedns

The DNS service is not an independent system service; it exists as an addon, which means it is not mandatory in a Kubernetes cluster, although we strongly recommend installing it. It can be regarded as just another application running on the cluster, albeit a rather special one. There are currently two recommended plugins: kube-dns and CoreDNS. When we built the cluster with kubeadm, the kube-dns plugin was installed automatically; if you don't remember, you can go back and check. Using CoreDNS instead is also very convenient, just execute the following command:

$ kubeadm init --feature-gates=CoreDNS=true

The Kubernetes DNS pod includes 3 containers, which can be viewed through the kubectl tool:

$ kubectl get pods -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
...
kube-dns-5868f69869-zp5kz               3/3       Running   0          19d
...

The READY column shows 3/3, and the three containers that make up kube-dns can be seen clearly with the following command:

$ kubectl describe pod kube-dns-5868f69869-zp5kz -n kube-system

What are the functions of the three containers kube-dns, dnsmasq-nanny, and sidecar?

  • kubedns: based on the SkyDNS library, kubedns watches Service and Endpoints change events through the apiserver and synchronizes them into a local cache, providing real-time DNS-based discovery of Services and Pods in the Kubernetes cluster
  • dnsmasq: the dnsmasq container provides DNS caching (it reserves a region of memory, 1 GB by default, to hold the most frequently used DNS query records; if a record is not found in the cache, it queries kubedns and caches the result) and dynamically generates its configuration by watching a ConfigMap
  • sidecar: the sidecar container performs configurable DNS health checks and collects the corresponding monitoring metrics, exposing them for Prometheus to consume

Impact on Pods

The DNS Pod is assigned a static ClusterIP and exposed as a Kubernetes Service. Once this static IP is assigned, the kubelet passes the DNS server to each container through the --cluster-dns=<dns-service-ip> flag. DNS names also need a domain; the local domain is configured in the kubelet with the --cluster-domain=<default-local-domain> flag.
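For reference, on a node these two kubelet flags typically look something like the following (the exact values are assumptions that depend on your cluster's service CIDR and domain):

--cluster-dns=10.96.0.10
--cluster-domain=cluster.local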

We said that the dnsmasq container dynamically generates its configuration by watching a ConfigMap, and through it we can customize stub domains and upstream nameservers.

For example, the following ConfigMap establishes a DNS configuration with a single stub domain and two upstream nameservers:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]

With this configuration, DNS requests with the .acme.local suffix are forwarded to the DNS server at 1.2.3.4, while Google's public DNS servers handle all other upstream queries. The following table describes how queries for specific domain names are mapped to their target DNS servers:

Domain name                                Server that answers the query
kubernetes.default.svc.cluster.local       kube-dns
foo.acme.local                             Custom DNS (1.2.3.4)
widget.com                                 Upstream DNS (one of 8.8.8.8, 8.8.4.4)

In addition, we can set a DNS policy for each Pod. Kubernetes currently supports two Pod-specific DNS policies: "Default" and "ClusterFirst", specified through the Pod's dnsPolicy field (see the example after this list).

Note: Default is not the default DNS policy. If no dnsPolicy is explicitly specified, ClusterFirst is used

  • If dnsPolicy is set to "Default", the name resolution configuration is inherited from the node the Pod runs on. Custom stub domains and upstream nameservers cannot be used with this policy
  • If dnsPolicy is set to "ClusterFirst", behavior depends on whether stub domains and upstream DNS servers are configured
    • No custom configuration: any request that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node.
    • Custom configuration: if stub domains and upstream DNS servers are configured (as in the previous example), DNS queries are routed as follows:
      • Queries are first sent to the DNS caching layer in kube-dns.
      • From the cache layer, the suffix of the request is inspected and the query is forwarded to the appropriate DNS server:
        • Names with the cluster suffix (e.g. ".cluster.local"): the request is sent to kubedns.
        • Names with a stub domain suffix (e.g. ".acme.local"): the request is sent to the configured custom DNS resolver (e.g. the one listening on 1.2.3.4).
        • Names matching no configured suffix (e.g. "widget.com"): the request is forwarded to the upstream DNS (e.g. Google's public DNS servers, 8.8.8.8 and 8.8.4.4).
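As a minimal sketch, a Pod that opts into the "Default" policy (the Pod name is just for illustration) would look like this:

apiVersion: v1
kind: Pod
metadata:
  name: dns-policy-demo
spec:
  dnsPolicy: Default   # inherit the node's resolv.conf instead of using the cluster DNS
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/resolv.conf"]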

domain name format

We said earlier that if the Services we create can be resolved by domain name, our service discovery problem is solved. So what kind of DNS records does kubedns generate for a Service?

  • Ordinary Service: a domain name of the form servicename.namespace.svc.cluster.local is generated and resolves to the Service's ClusterIP. When called from another Pod it can be abbreviated to servicename.namespace, and within the same namespace plain servicename is enough
  • Headless Service: a headless service, i.e. one whose clusterIP is set to None, resolves to the list of IPs of the Pods it selects; a specific Pod can also be reached via podname.servicename.namespace.svc.cluster.local (a minimal example follows below)
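For illustration, a minimal headless Service for the nginx Deployment used earlier might look like this (the name nginx-headless is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None      # headless: DNS returns the Pod IPs instead of a single VIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx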

CoreDNS implements the same functionality as KubeDNS, but all of it is integrated into a single container. As of Kubernetes 1.11, CoreDNS is officially recommended, so you can also install CoreDNS instead of KubeDNS; everything else works the same way: https://coredns.io/

test

Now let's use a simple Pod to test the domain name access of the Service:

$ kubectl run --rm -i --tty test-dns --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ #

Entering the Pod and checking the contents of /etc/resolv.conf, we can see the nameserver address 10.96.0.10. This IP is the fixed, static address assigned by the cluster when the kubedns plugin was installed, and we can confirm it with the following command:

$ kubectl get svc kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   62d

That is to say, our Pod's default nameserver is the kubedns address. Now let's access the nginx-service we created earlier:

/ # wget -q -O- nginx-service.default.svc.cluster.local

When we use wget to access the nginx-service domain name, the command hangs and we don't get the expected result. This is because the Service we created above exposes port 5000:

/ # wget -q -O- nginx-service.default.svc.cluster.local:5000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Adding port 5000, the service can be accessed normally. Try accessing nginx-service.default.svc, nginx-service.default, and plain nginx-service as well; unsurprisingly, all of these names return the expected result.
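For example, from the same busybox Pod these shortened names resolve through the search domains listed in /etc/resolv.conf (commands only, output omitted):

/ # wget -q -O- nginx-service.default:5000
/ # wget -q -O- nginx-service:5000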

At this point we have achieved communication between applications within the cluster through Service domain names. As an exercise, try accessing a Service in a different namespace.

