Use case
Running NodeLocal DNS Cache as a DaemonSet on the cluster nodes greatly improves in-cluster DNS resolution performance and effectively avoids the five-second DNS delays caused by conntrack races.
Principle of Operation
A DaemonSet deploys a hostNetwork Pod, node-cache, on each node of the cluster. It caches DNS requests from the Pods on that node; on a cache miss, it queries the upstream kube-dns service over TCP. The schematic diagram is as follows:
NodeLocal DNS Cache has no built-in high availability (HA), so the node-local cache is a single point of failure (Pod evicted, OOMKilled, ConfigMap error, DaemonSet upgrade). This, however, is a failure mode shared by any per-node proxy, such as kube-proxy or a CNI Pod.
<!--more-->
Prerequisites
A cluster running Kubernetes 1.15 or later, with at least one node.
Steps
Download the official deployment manifest:
wget -O nodelocaldns.yaml "https://github.com/kubernetes/kubernetes/raw/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml"
The resource manifest file contains several variables:
- __PILLAR__DNS__SERVER__: the ClusterIP of the kube-dns Service, which you can get with kubectl get svc -n kube-system | grep kube-dns | awk '{ print $3 }'
- __PILLAR__LOCAL__DNS__: the local listen IP of the DNS cache, 169.254.20.10 by default
- __PILLAR__DNS__DOMAIN__: the cluster domain, cluster.local by default
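The ClusterIP extraction described above can be sketched as follows; the sample `kubectl get svc` output is illustrative (a stand-in for the live command), only the grep/awk pipeline itself matters:

```shell
# Illustrative sample of `kubectl get svc -n kube-system` output;
# in a real cluster, pipe the live command output instead.
sample_svc_output='NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   40d'

# Same filter as in the text: keep the kube-dns row, print column 3.
dns_server=$(printf '%s\n' "$sample_svc_output" | grep kube-dns | awk '{ print $3 }')
echo "$dns_server"   # → 10.96.0.10
```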
Substitute the variables:
sed -i 's/k8s.gcr.io/harbor.emarbox.com/g' nodelocaldns.yaml && \
sed -i 's/__PILLAR__DNS__SERVER__/10.96.0.10/g' nodelocaldns.yaml && \
sed -i 's/__PILLAR__LOCAL__DNS__/169.254.20.10/g' nodelocaldns.yaml && \
sed -i 's/__PILLAR__DNS__DOMAIN__/cluster.local/g' nodelocaldns.yaml
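As a sanity check, no `__PILLAR__` placeholder should remain after the sed pass. Sketched here on a single inline sample line rather than the real manifest:

```shell
# One placeholder-bearing line, standing in for the real nodelocaldns.yaml.
sample='args: [ "-localip", "__PILLAR__LOCAL__DNS__,__PILLAR__DNS__SERVER__" ]'

substituted=$(printf '%s\n' "$sample" \
  | sed 's/__PILLAR__DNS__SERVER__/10.96.0.10/g' \
  | sed 's/__PILLAR__LOCAL__DNS__/169.254.20.10/g')

echo "$substituted"

# Against the real file, `grep -c '__PILLAR__' nodelocaldns.yaml` should print 0.
case "$substituted" in
  *__PILLAR__*) echo "placeholder left behind" ;;
  *)            echo "all placeholders replaced" ;;
esac
```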
Check that the installation succeeded:
$ kubectl get pods -n kube-system | grep node-local-dns
node-local-dns-658t4 1/1 Running 0 19s
node-local-dns-6bsjv 1/1 Running 0 19s
node-local-dns-wcxpw 1/1 Running 0 19s
Note that because the node-local-dns DaemonSet runs with hostNetwork=true, it binds port 8080 on each host for health checks, so you need to make sure that port is free. Alternatively, you can modify the health 169.254.20.10:8080 directive in its configuration.
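If 8080 is already taken, the `health` directive in the node-local-dns ConfigMap's Corefile can be moved to another port (8081 below is an arbitrary free port, not a required value); the DaemonSet's livenessProbe port must be updated to match. A fragment, with unrelated directives elided:

```
cluster.local:53 {
    ...
    health 169.254.20.10:8081   # was 169.254.20.10:8080
    ...
}
```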
Modify the configuration and enable the DNS cache
This article provides the following two configuration methods; choose according to your actual situation:
If the kube-proxy component runs in ipvs mode, we also need to change the kubelet --cluster-dns parameter to point to 169.254.20.10. The DaemonSet creates a dummy network interface on each node bound to that IP; Pods send DNS requests to it, and on a cache miss the query is proxied to the upstream cluster DNS. In iptables mode, Pods can keep sending DNS requests to the original cluster DNS ClusterIP: node-local-dns also listens on that IP on each node and intercepts the requests locally, forwarding cache misses upstream, so there is no need to change --cluster-dns.
- Execute the following commands in sequence to modify the kubelet startup parameters, then restart kubelet.
sed -i 's/10.96.0.10/169.254.20.10/g' /var/lib/kubelet/config.yaml
systemctl restart kubelet
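The effect of that sed command, sketched against a temporary stand-in for the kubelet config (the surrounding fields are illustrative, only the clusterDNS substitution matters):

```shell
# Minimal stand-in for /var/lib/kubelet/config.yaml.
tmpcfg=$(mktemp)
cat > "$tmpcfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
EOF

# Same substitution as run against the real file.
sed -i 's/10.96.0.10/169.254.20.10/g' "$tmpcfg"

result=$(grep -A1 '^clusterDNS' "$tmpcfg")
echo "$result"
rm -f "$tmpcfg"
```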
- Configure the dnsConfig of individual Pods as needed and recreate them. The core part of the YAML is as follows:
- The nameservers entry needs to be set to 169.254.20.10.
- To ensure that cluster-internal domain names still resolve, searches must be configured.
- Appropriately lowering the ndots value helps speed up access to domain names outside the cluster.
- When the Pod does not use cluster-internal domain names containing multiple dots, a value of 2 is recommended.
dnsConfig:
  nameservers: ["169.254.20.10"]
  searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
  options:
    - name: ndots
      value: "2"
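To make the ndots effect concrete, consider a lookup for the hypothetical external name www.example.com (2 dots) under resolv.conf semantics:

```
# With ndots:5, www.example.com has fewer than 5 dots, so every search
# domain is tried first, producing wasted queries such as:
#   www.example.com.default.svc.cluster.local.
#   www.example.com.svc.cluster.local.
#   www.example.com.cluster.local.
# before www.example.com. itself is resolved.
# With ndots:2, the name has >= 2 dots, so the absolute name is tried
# first, saving those round trips.
options ndots:2
```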
Verification
After node-local-dns is installed and configured, we can deploy a new Pod to verify it (test-node-local-dns.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: test-node-local-dns
spec:
  containers:
    - name: local-dns
      image: busybox
      command: ["/bin/sh", "-c", "sleep 60m"]
  dnsConfig:
    nameservers: ["169.254.20.10"]
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    options:
      - name: ndots
        value: "2"
Deploy it directly:
$ kubectl apply -f test-node-local-dns.yaml
$ kubectl exec -it test-node-local-dns -- /bin/sh
/ # cat /etc/resolv.conf
nameserver 169.254.20.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:2
We can see that the nameserver has become 169.254.20.10. Of course, Pods created before node-local-dns was deployed need to be recreated to pick it up. And if you want to trace the DNS resolution process, you can capture packets on the node and observe it.