Highly Available Kubernetes Cluster (Part 11): Deploy kube-dns

Reference documentation:

  1. GitHub introduction: https://github.com/kubernetes/dns
  2. GitHub yaml files and examples: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
  3. DNS for Services and Pods: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
  4. https://segmentfault.com/a/1190000007342180
  5. Configure stub domains and upstream DNS servers: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/

 

Within the cluster, Kube-DNS resolves a Service name to its ClusterIP so that Pods can access Services by name; this provides the basic service-discovery mechanism.
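For example (with an illustrative Service name), a Service named web in the namespace default gets a record of the form:

```text
web.default.svc.cluster.local  ->  ClusterIP of Service "web"
```

Thanks to the search list kube-dns causes to be written into each Pod's /etc/resolv.conf, the short forms web.default (from any namespace) and plain web (from within namespace default) resolve as well.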

1. Environment

1.1 Basic environment

Component     Version     Remark
Kubernetes    v1.9.2
KubeDNS       v1.14.8     service discovery; same mechanism as SkyDNS

1.2 Principle

  1. Kube-DNS is deployed into the Kubernetes cluster as a Pod;
  2. Kube-DNS repackages and optimizes SkyDNS, going from 4 containers down to 3:
  3. kubedns container: based on SkyDNS; watches Kubernetes Service resources and updates DNS records; replaces etcd with a TreeCache data structure that stores the DNS records and implements the SkyDNS backend interface; serves DNS queries from dnsmasq;
  4. dnsmasq container: provides the DNS query service for the cluster, i.e. a simple DNS server; uses kubedns as its upstream; provides a DNS cache, reducing the load on kubedns and improving performance;
  5. sidecar container: periodically probes the health of kubedns and dnsmasq (see the --probe arguments in the Deployment below) and exposes metrics on port 10054.
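Put together, a sketch of the resolution path for a DNS query from a Pod (ports taken from the Deployment used in this document):

```text
Pod (nameserver = clusterIP of the kube-dns Service)
  -> dnsmasq container :53        (cache; forwards cluster zones to kubedns)
     -> kubedns container :10053  (records built from watched Service objects)
        -> kube-apiserver         (source of Service/Endpoints data)
```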
2. Deploy Kube-DNS

Kubernetes supports running kube-dns as a cluster add-on: Kubernetes schedules a DNS Pod and Service in the cluster.

2.1 Prepare images

When Kubernetes deploys Pods, pulling images on the fly can time out; it is recommended to pull the relevant images to all relevant nodes in advance, or to build a local image registry.

  1. Image acceleration for the basic environment is already set up; see: http://www.cnblogs.com/netonline/p/7420188.html
  2. Images that would need to be pulled from gcr.io have been rebuilt using Docker Hub's "Create Auto-Build GitHub" feature (Docker Hub builds the image from a Dockerfile on GitHub); the successful builds are in a personal Docker Hub account and can be pulled directly.

 

# The pause image provides the namespaces shared by the containers in a Pod;
# the pause image name is specified in kubelet's startup parameters, so retag it after pulling it locally
[root@kubenode1 ~]# docker pull netonline/pause-amd64:3.0
[root@kubenode1 ~]# docker tag netonline/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
[root@kubenode1 ~]# docker images

 

# kubedns

[root@kubenode1 ~]# docker pull netonline/k8s-dns-kube-dns-amd64:1.14.8

 

# dnsmasq-nanny

[root@kubenode1 ~]# docker pull netonline/k8s-dns-dnsmasq-nanny-amd64:1.14.8

 

# sidecar

[root@kubenode1 ~]# docker pull netonline/k8s-dns-sidecar-amd64:1.14.8

2.2 Download the kube-dns template

# https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
[root@kubenode1 ~]# mkdir -p /usr/local/src/yaml/kubedns
[root@kubenode1 ~]# cd /usr/local/src/yaml/kubedns
[root@kubenode1 kubedns]# wget -O kube-dns.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kube-dns.yaml.base

2.3 Configure kube-dns Service

# kube-dns defines four objects (Service, ServiceAccount, ConfigMap, and Deployment) in one yaml file; the following sections go through the modifications to each object (in the original post the changed parts are marked in red bold);
# Writing Pod yaml files is not covered here; refer to other material, such as "The Kubernetes Authoritative Guide";
# The modified kube-dns.yaml: https://github.com/Netonline2016/kubernetes/blob/master/addons/kubedns/kube-dns.yaml

# clusterIP must be consistent with the kubelet startup parameter --cluster-dns: pre-select an address in the service CIDR as the DNS address
[root@kubenode1 kubedns]# vim kube-dns.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.0.11
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
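The clusterIP chosen here must also be handed to Pods by kubelet; with the values used in this document, the corresponding kubelet startup parameters would be:

```text
--cluster-dns=169.169.0.11
--cluster-domain=cluster.local
```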

2.4 Configure kube-dns ServiceAccount

The kube-dns ServiceAccount needs no modification. The cluster's predefined ClusterRoleBinding system:kube-dns already binds the ServiceAccount kube-dns in the kube-system namespace (where system services are generally deployed) to the predefined ClusterRole system:kube-dns, and that ClusterRole grants the API permissions kube-dns needs against kube-apiserver.

See RBAC authorization: https://blog.frognew.com/2017/04/kubernetes-1.6-rbac.html

[root@kubenode1 ~]# kubectl get clusterrolebinding system:kube-dns -o yaml
[root@kubenode1 ~]# kubectl get clusterrole system:kube-dns -o yaml

2.5 Configure kube-dns ConfigMap

Typical uses of a ConfigMap are:

  1. generating environment variables inside the container;
  2. setting startup parameters for the container's command (via environment variables);
  3. mounting as a volume, i.e. as files or a directory inside the container.

No modification is required to verify basic kube-dns function. To customize stub domains and upstream DNS servers, modify the ConfigMap as described in Chapter 4.
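As a generic sketch of the three usage patterns (hypothetical names, unrelated to kube-dns itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo                 # hypothetical Pod
spec:
  containers:
  - name: app
    image: busybox
    # 1. environment variable generated from a ConfigMap key
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config      # hypothetical ConfigMap
          key: log.level
    # 2. startup parameter set via the environment variable
    command: ["sh", "-c", "app --log-level=$(LOG_LEVEL)"]
    # 3. mounted as a volume: each key becomes a file under /etc/app
    volumeMounts:
    - name: cfg
      mountPath: /etc/app
  volumes:
  - name: cfg
    configMap:
      name: app-config
```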

2.6 Configure kube-dns Deployment

# Lines 97, 148 and 187 of the template: the startup images of the three containers;
# Lines 127, 200 and 201: the cluster domain, which must match "--cluster-domain" in the kubelet startup parameters; note the trailing "." in "cluster.local."
[root@kubenode1 kubedns]# vim kube-dns.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: netonline/k8s-dns-kube-dns-amd64:1.14.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: netonline/k8s-dns-dnsmasq-nanny-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: netonline/k8s-dns-sidecar-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

2.7 Start kube-dns

[root@kubenode1 ~]# cd /usr/local/src/yaml/kubedns/
[root@kubenode1 kubedns]# kubectl create -f kube-dns.yaml

3. Verify Kube-DNS

3.1 kube-dns Deployment, Service & Pod

# All 3 containers of the kube-dns Pod are "Ready"; the Service, Deployment, etc. have also started normally

[root@kubenode1 kubedns]# kubectl get pod -n kube-system -o wide

[root@kubenode1 kubedns]# kubectl get service -n kube-system -o wide

[root@kubenode1 kubedns]# kubectl get deployment -n kube-system -o wide

3.2 kube-dns query

# pull test image

[root@kubenode1 ~]# docker pull radial/busyboxplus:curl

 

# Start the test Pod and enter the Pod container

[root@kubenode1 ~]# kubectl run curl --image=radial/busyboxplus:curl -i --tty

 

# View /etc/resolv.conf in the Pod container: the cluster DNS server and search domains have been written into the file;
# nslookup can resolve the Service IPs of the Kubernetes cluster system

[ root@curl-545bbf5f9c-hxml9:/ ]$ cat /etc/resolv.conf

[ root@curl-545bbf5f9c-hxml9:/ ]$ nslookup kubernetes.default

4. Custom DNS and Upstream DNS Servers

Starting with Kubernetes v1.6, users can configure private DNS zones (commonly called stub domains) and external upstream nameservers for the cluster.

4.1 Principle

  1. Two DNS policies are relevant in a Pod definition: Default and ClusterFirst, with ClusterFirst being the default; if dnsPolicy is set to Default, the Pod inherits its name-resolution configuration entirely from the node it runs on (/etc/resolv.conf);
  2. when a Pod's dnsPolicy is set to ClusterFirst, DNS queries are first sent to the DNS cache layer of kube-dns (dnsmasq);
  3. the cache layer checks the domain suffix and forwards the query accordingly: to the cluster's own DNS server (kubedns), to a custom stub domain, or to an upstream nameserver.
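Concretely, under dnsPolicy: ClusterFirst a Pod's /etc/resolv.conf looks like the following (nameserver from this document's cluster; the search list shown is the standard form for a Pod in namespace default):

```text
nameserver 169.169.0.11
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```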
4.2 Custom DNS method

# Cluster administrators can use a ConfigMap to specify custom stub domains and upstream DNS servers;
[root@kubenode1 ~]# cd /usr/local/src/yaml/kubedns/

# Directly modify the ConfigMap part of the kube-dns.yaml template;
# stubDomains: optional; defines stub domains in json format; the key is a DNS suffix and the value is a json array of DNS server addresses; the target nameserver may itself be a Kubernetes Service name;
# upstreamNameservers: a json array of up to 3 DNS server IP addresses; if specified, it overrides the nameservers inherited from the node's domain-name settings (/etc/resolv.conf)
[root@kubenode1 kubedns]# vim kube-dns.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  stubDomains: |
    {"out.kubernetes": ["172.20.1.201"]}
  upstreamNameservers: |
    ["114.114.114.114", "223.5.5.5"]

4.3 Rebuild the kube-dns ConfigMap

# Delete the original kube-dns first, then create a new one;
# alternatively, delete only the ConfigMap of the original kube-dns and create a new ConfigMap separately
[root@kubenode1 kubedns]# kubectl delete -f kube-dns.yaml
[root@kubenode1 kubedns]# kubectl create -f kube-dns.yaml

 

# Check the dnsmasq log: the stub domain and upstream nameservers have taken effect;
# the kubedns and sidecar logs also show the stub domain and upstream nameservers taking effect

[root@kubenode1 kubedns]# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq

4.4 Custom DNS server

# Install the dnsmasq service on 172.20.1.201, the stub-domain nameserver configured in the ConfigMap

[root@hanode01 ~]# yum install dnsmasq -y

 

# Generate custom DNS record file

[root@hanode01 ~]# echo "192.168.100.11 server.out.kubernetes" > /tmp/hosts

 

# Start the DNS service;
# -q: log query records;
# -d: debug mode, run in the foreground to observe the output log;
# -h: do not read /etc/hosts;
# -R: do not read /etc/resolv.conf;
# -H: read an additional custom DNS record file;
# the warning in the startup log indicates that no upstream DNS server is set; the custom DNS record file is read at startup
[root@hanode01 ~]# dnsmasq -q -d -h -R -H /tmp/hosts
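The one-off flags above can also be written as a persistent configuration file; a sketch using the dnsmasq.conf option names corresponding to each flag:

```text
# /etc/dnsmasq.conf
log-queries              # -q: log query records
no-hosts                 # -h: do not read /etc/hosts
no-resolv                # -R: do not read /etc/resolv.conf
addn-hosts=/tmp/hosts    # -H: additional custom DNS record file
# -d (debug/foreground) is a runtime flag with no config-file equivalent
```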

 

# Allow inbound udp port 53 through iptables

[root@hanode01 ~]# iptables -I INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT

4.5 Start the Pod

# pull the test image
[root@kubenode1 ~]# docker pull busybox

 

# Write the Pod yaml file;
# dnsPolicy is set to ClusterFirst, which is also the default
[root@kubenode1 ~]# touch dnstest.yaml
[root@kubenode1 ~]# vim dnstest.yaml

apiVersion: v1
kind: Pod
metadata:
  name: dnstest
  namespace: default
spec:
  dnsPolicy: ClusterFirst
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

 

# Create Pod

[root@kubenode1 ~]# kubectl create -f dnstest.yaml

4.6 Verify the custom DNS configuration

# nslookup for server.out.kubernetes returns the defined IP address

[root@kubenode1 ~]# kubectl exec -it dnstest -- nslookup server.out.kubernetes

 

Observe the output of the dnsmasq service on the stub-domain server 172.20.1.201: the kube node 172.30.200.23 (the node where the Pod runs; with the flannel network the query leaves the node via SNAT) queries server.out.kubernetes, and dnsmasq returns the predefined host address.
