(2) Deploying a Kubernetes Cluster with kubeadm

Introduction to kubeadm

kubeadm is a cluster-building tool that ships with the Kubernetes project. It performs the minimum steps necessary to bootstrap a viable cluster and start it, and as a cluster lifecycle management tool it covers deployment, upgrade, downgrade, and removal. A cluster deployed with kubeadm runs most components as pods: for example kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, and flannel all run in pod form.

kubeadm is concerned only with initializing and starting the cluster. Everything else, such as installing the Kubernetes Dashboard, monitoring systems, logging systems, and other add-ons, is outside its scope and must be deployed by the administrator.

kubeadm integrates tools such as kubeadm init and kubeadm join. kubeadm init quickly initializes a cluster, its core function being to deploy the components of the Master node, while kubeadm join quickly adds a node to the specified cluster. Together they are the "fast path" best practice for creating a Kubernetes cluster. In addition, kubeadm token manages the authentication tokens used when nodes join the cluster, and kubeadm reset removes the files generated while building the cluster, restoring the host to its initial state.

kubeadm project address

kubeadm official documentation

Deploying a Kubernetes cluster with kubeadm

Architecture diagram (image not reproduced here)

Environment planning

Operating system    IP              CPU/Mem    Hostname      Role
CentOS7.4-86_x64    192.168.1.31    2/2G       k8s-master    Master
CentOS7.4-86_x64    192.168.1.32    2/2G       k8s-node1     Node
CentOS7.4-86_x64    192.168.1.33    2/2G       k8s-node2     Node

Software    Version
Docker      18.09.7
kubeadm     1.15.2
kubelet     1.15.2
kubectl     1.15.2

Note: the following environment initialization steps must be performed on the master node and on all node machines.

1) Turn off the firewall

# systemctl stop firewalld
# systemctl disable firewalld

2) Disable SELinux

# sed -i 's/enforcing/disabled/' /etc/selinux/config
# setenforce 0

3) Disable swap. (If the server is low on resources and swap cannot be turned off, the swap check can be ignored later in the deployment process.)

# swapoff -a        # temporary
# vim /etc/fstab    # permanent (comment out the swap line)
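The "permanent" step above means commenting out the swap entry in fstab. A minimal sketch of doing that non-interactively; it is demonstrated on a sample file here, while on a real node you would point it at /etc/fstab (and also run `swapoff -a` for the running system):

```shell
# Demo fstab with one regular mount and one swap entry
FSTAB=$(mktemp)
printf '%s\n' \
  'UUID=abcd-1234 /     ext4 defaults 0 1' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"

# Comment out every uncommented line whose filesystem type field is "swap"
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"
cat "$FSTAB"
```

This survives reboots because the swap entry is never mounted again, which is exactly what the manual `vim /etc/fstab` edit achieves.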

4) Time synchronization

# ntpdate 0.rhel.pool.ntp.org

5) Host binding (/etc/hosts)

# vim /etc/hosts
192.168.1.31    k8s-master
192.168.1.32    k8s-node1
192.168.1.33    k8s-node2
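Since these entries must be added on all three machines, an idempotent sketch can replace the manual edit. HOSTS_FILE is a demo copy here; on the real machines it would be /etc/hosts:

```shell
# Append each entry only if the hostname is not already present,
# so the snippet can be re-run safely on every node.
HOSTS_FILE=$(mktemp)

while read -r ip name; do
  grep -q " $name\$" "$HOSTS_FILE" || echo "$ip    $name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.1.31 k8s-master
192.168.1.32 k8s-node1
192.168.1.33 k8s-node2
EOF
cat "$HOSTS_FILE"
```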

Installing Docker

Perform on the master node and on all node machines.

1) Configure the Docker yum repository (the Aliyun mirror is used here)

# yum -y install yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2) Install Docker

# yum -y install docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io

3) Change the Docker cgroup driver to systemd

According to the CRI installation section of the documentation, on Linux distributions that use systemd as the init system, using systemd as the Docker cgroup driver keeps the node more stable when resources are tight. Therefore the Docker cgroup driver is changed to systemd on every node.
# mkdir /etc/docker               # the directory does not exist before Docker is started
# vim /etc/docker/daemon.json     # create the file if it does not exist
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
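The daemon.json above can also be written in one non-interactive step, with a validity check so that a typo fails immediately instead of making dockerd refuse to start. DOCKER_DIR is parameterized for demonstration (on a real node it is /etc/docker), and python3 is assumed to be available for the JSON check:

```shell
# Write daemon.json idempotently and verify it is well-formed JSON
DOCKER_DIR=${DOCKER_DIR:-$(mktemp -d)}
mkdir -p "$DOCKER_DIR"

cat > "$DOCKER_DIR/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Fail early on malformed JSON rather than at dockerd startup
python3 -m json.tool < "$DOCKER_DIR/daemon.json" > /dev/null && echo "daemon.json OK"
```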

4) Start Docker

# systemctl restart docker        # start Docker
# systemctl enable docker         # start Docker on boot

# docker info | grep Cgroup
Cgroup Driver: systemd

Installing kubeadm

Perform on the master node and on all node machines.

1) Configure the Kubernetes yum repository (the Aliyun mirror is used here)

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# yum makecache

2) Install kubelet, kubectl, and kubeadm

# yum -y install kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2

# rpm -qa kubelet kubectl kubeadm
kubectl-1.15.2-0.x86_64
kubelet-1.15.2-0.x86_64
kubeadm-1.15.2-0.x86_64

3) Add kubelet to the services started at boot. It has only been installed at this point and cannot be started directly, because no cluster exists yet.

# systemctl enable kubelet

Initializing the Master

Note: perform on the master node.

The kubeadm --help output below shows the available subcommands: kubeadm init initializes a master node, and kubeadm join then joins a node to the cluster.

[root@k8s-master ~]# kubeadm --help
Usage:
  kubeadm [command]

Available Commands:
  alpha       Kubeadm experimental sub-commands
  completion  Output shell completion code for the specified shell (bash or zsh)
  config      Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
  help        Help about any command
  init        Run this command in order to set up the Kubernetes control plane
  join        Run this on any machine you wish to join an existing cluster
  reset       Run this to revert any changes made to this host by 'kubeadm init' or 'kubeadm join'
  token       Manage bootstrap tokens
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     Print the version of kubeadm

Flags:
  -h, --help                     help for kubeadm
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm [command] --help" for more information about a command.

1) Ignore the swap configuration error

[root@k8s-master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

2) Initialize the master

--kubernetes-version              # specify the Kubernetes version
--image-repository                # kubeadm pulls the required images from k8s.gcr.io by default, which is unreachable from mainland China, so the Aliyun image repository address is specified here instead
--pod-network-cidr                # specify the pod network segment
--service-cidr                    # specify the service network segment
--ignore-preflight-errors=Swap    # ignore the swap error message
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.15.2 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.31:6443 --token a4pjca.ubxvfcsry1je626j \
    --discovery-token-ca-cert-hash sha256:784922b9100d1ecbba01800e7493f4cba7ae5c414df68234c5da7bca4ef0c581
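If the join command printed above is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate; this recipe appears in the kubeadm documentation. It is demonstrated here on a throwaway self-signed certificate, while on the master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA cert purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Extract the public key, convert it to DER, and take its sha256 digest;
# the result is the value expected after --discovery-token-ca-cert-hash
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```

A fresh token to go with it can be issued on the master with `kubeadm token create`.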

3) Create the kubectl configuration file as prompted by the successful initialization output

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# docker image ls    # after initialization the required images have been pulled down
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-scheduler            v1.15.2             88fa9cb27bd2        2 weeks ago         81.1MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.15.2             167bbf6c9338        2 weeks ago         82.4MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.15.2             34a53be6c9a7        2 weeks ago         207MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.15.2             9f5df470155d        2 weeks ago         159MB
registry.aliyuncs.com/google_containers/coredns                   1.3.1               eb516548c180        7 months ago        40.3MB
registry.aliyuncs.com/google_containers/etcd                      3.3.10              2c4adeb21b4f        8 months ago        258MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        20 months ago       742kB

4) Install the flannel network add-on (flannel project address)

Method 1
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# kubectl get pods -n kube-system | grep flannel    # verify the flannel pods deployed successfully (Running means success)

# Because flannel pulls its images from abroad by default, the pull often fails, so the following method can be used instead

Method 2
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# sed -i 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yml    # replace the image repository address
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
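The sed step in Method 2 rewrites every quay.io image reference in the manifest to a domestic mirror. A minimal demonstration on a sample manifest fragment (the mirror host quay-mirror.qiniu.com is the one used above and may change over time):

```shell
# Sample fragment standing in for kube-flannel.yml
YML=$(mktemp)
cat > "$YML" <<'EOF'
image: quay.io/coreos/flannel:v0.11.0-amd64
EOF

# Same substitution as in Method 2; '#' is used as the sed delimiter
# so the slashes in the image path need no escaping
sed -i 's#quay.io#quay-mirror.qiniu.com#g' "$YML"
cat "$YML"
```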

Join Node node

To add a new node to the cluster, run the kubeadm join command from the kubeadm init output, appending the same parameter to ignore the swap error.

1) Ignore the swap configuration error

[root@k8s-node1 ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

[root@k8s-node2 ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

2) Join node1

[root@k8s-node1 ~]# kubeadm join 192.168.1.31:6443 --token a4pjca.ubxvfcsry1je626j --discovery-token-ca-cert-hash sha256:784922b9100d1ecbba01800e7493f4cba7ae5c414df68234c5da7bca4ef0c581 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3) Join node2

[root@k8s-node2 ~]# kubeadm join 192.168.1.31:6443 --token a4pjca.ubxvfcsry1je626j --discovery-token-ca-cert-hash sha256:784922b9100d1ecbba01800e7493f4cba7ae5c414df68234c5da7bca4ef0c581 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster status

1) Run the following command on the master node to check the state of the cluster; output like the following means the cluster is working.

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   Ready      master   9m40s   v1.15.2
k8s-node1    NotReady   <none>   28s     v1.15.2
k8s-node2    NotReady   <none>   13s     v1.15.2

Focus on the STATUS column: the cluster is healthy once every node shows Ready. (node1 and node2 show NotReady above because they joined only seconds ago and their flannel pods are still starting.)
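A small helper can count the nodes that are not yet Ready, which is handy for waiting in a loop until flannel has brought every node up. It is demonstrated here against the captured output above rather than a live cluster:

```shell
# Count rows (skipping the header) whose STATUS column is not "Ready"
not_ready_count() {
  awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }'
}

SAMPLE='NAME         STATUS     ROLES    AGE     VERSION
k8s-master   Ready      master   9m40s   v1.15.2
k8s-node1    NotReady   <none>   28s     v1.15.2
k8s-node2    NotReady   <none>   13s     v1.15.2'

printf '%s\n' "$SAMPLE" | not_ready_count    # prints 2
```

On a live cluster the same function would be fed directly: `kubectl get nodes | not_ready_count`.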

2) Check the cluster client and server version information

[root@k8s-master ~]# kubectl version --short=true
Client Version: v1.15.2
Server Version: v1.15.2

3) View cluster information

[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.31:6443
KubeDNS is running at https://192.168.1.31:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

4) Check the images downloaded on each node

master node:
[root@k8s-master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.15.2             34a53be6c9a7        2 weeks ago         207MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.15.2             9f5df470155d        2 weeks ago         159MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.15.2             88fa9cb27bd2        2 weeks ago         81.1MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.15.2             167bbf6c9338        2 weeks ago         82.4MB
quay-mirror.qiniu.com/coreos/flannel                              v0.11.0-amd64       ff281650a721        6 months ago        52.6MB
registry.aliyuncs.com/google_containers/coredns                   1.3.1               eb516548c180        7 months ago        40.3MB
registry.aliyuncs.com/google_containers/etcd                      3.3.10              2c4adeb21b4f        8 months ago        258MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        20 months ago       742kB

node1:
[root@k8s-node1 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.15.2             167bbf6c9338        2 weeks ago         82.4MB
quay-mirror.qiniu.com/coreos/flannel                 v0.11.0-amd64       ff281650a721        6 months ago        52.6MB
registry.aliyuncs.com/google_containers/coredns      1.3.1               eb516548c180        7 months ago        40.3MB
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        20 months ago       742kB

node2:
[root@k8s-node2 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.15.2             167bbf6c9338        2 weeks ago         82.4MB
quay-mirror.qiniu.com/coreos/flannel                 v0.11.0-amd64       ff281650a721        6 months ago        52.6MB
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        20 months ago       742kB

Delete Node

Sometimes a node fails and needs to be removed from the cluster, as follows.

1) Execute on the master node

# kubectl drain <NODE-NAME> --delete-local-data --force --ignore-daemonsets
# kubectl delete node <NODE-NAME>

2) On the node to be removed

# kubeadm reset

 


Origin www.cnblogs.com/yanjieli/p/11793073.html