Kubernetes container cluster management - removing and adding Node nodes

 

First, how to remove a Node from the Kubernetes cluster
For example, to remove the node k8s-node03 from the cluster, proceed as follows:

1) First, check the node status on the master node
[root@k8s-master01 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-node01   Ready    <none>   47d   v1.14.2
k8s-node02   Ready    <none>   47d   v1.14.2
k8s-node03   Ready    <none>   47d   v1.14.2

2) Next, check the pod status
[root@k8s-master01 ~]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
dnsutils-ds-5sc4z           1/1     Running   963        40d   172.30.56.3    k8s-node02   <none>           <none>
dnsutils-ds-h546r           1/1     Running   963        40d   172.30.72.5    k8s-node03   <none>           <none>
dnsutils-ds-jx5kx           1/1     Running   963        40d   172.30.88.4    k8s-node01   <none>           <none>
kevin-nginx                 1/1     Running   0          27d   172.30.72.11   k8s-node03   <none>           <none>
my-nginx-5dd67b97fb-69gvm   1/1     Running   0          40d   172.30.72.4    k8s-node03   <none>           <none>
my-nginx-5dd67b97fb-8j4k6   1/1     Running   0          40d   172.30.88.3    k8s-node01   <none>           <none>
nginx-7db9fccd9b-dkdzf      1/1     Running   0          27d   172.30.88.8    k8s-node01   <none>           <none>
nginx-7db9fccd9b-t8njb      1/1     Running   0          27d   172.30.72.10   k8s-node03   <none>           <none>
nginx-7db9fccd9b-vrp9f      1/1     Running   0          27d   172.30.56.6    k8s-node02   <none>           <none>
nginx-ds-4lf8z              1/1     Running   0          41d   172.30.56.2    k8s-node02   <none>           <none>
nginx-ds-6kfsw              1/1     Running   0          41d   172.30.72.2    k8s-node03   <none>           <none>
nginx-ds-xqdgw              1/1     Running   0          41d   172.30.88.2    k8s-node01   <none>           <none>

3) Cordon the k8s-node03 node and drain the pods running on it
[root@k8s-master01 ~]# kubectl drain k8s-node03 --delete-local-data --force --ignore-daemonsets
node/k8s-node03 cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/kevin-nginx; ignoring DaemonSet-managed Pods: default/dnsutils-ds-h546r, default/nginx-ds-6kfsw, kube-system/node-exporter-zmb68
evicting pod "metrics-server-54997795d9-rczmc"
evicting pod "kevin-nginx"
evicting pod "nginx-7db9fccd9b-t8njb"
evicting pod "coredns-5b969f4c88-pd5js"
evicting pod "kubernetes-dashboard-7976c5cb9c-4jpzb"
evicting pod "my-nginx-5dd67b97fb-69gvm"
pod/my-nginx-5dd67b97fb-69gvm evicted
pod/coredns-5b969f4c88-pd5js evicted
pod/nginx-7db9fccd9b-t8njb evicted
pod/kubernetes-dashboard-7976c5cb9c-4jpzb evicted
pod/kevin-nginx evicted
pod/metrics-server-54997795d9-rczmc evicted
node/k8s-node03 evicted
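
Note that drain first cordons the node (marks it SchedulingDisabled) and then evicts its pods. If the goal is only to take a node out of scheduling temporarily rather than to remove it, cordon/uncordon can be used on their own; a minimal sketch:
[root@k8s-master01 ~]# kubectl cordon k8s-node03        # mark the node unschedulable, existing pods keep running
[root@k8s-master01 ~]# kubectl get nodes k8s-node03     # STATUS should show Ready,SchedulingDisabled
[root@k8s-master01 ~]# kubectl uncordon k8s-node03      # make the node schedulable again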

4) Then delete the k8s-node03 node
[root@k8s-master01 ~]# kubectl delete node k8s-node03
node "k8s-node03" deleted

5) Check the pods again; the pods previously running on k8s-node03 have been rescheduled onto the remaining node nodes
[root@k8s-master01 ~]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
dnsutils-ds-5sc4z           1/1     Running   963        40d   172.30.56.3   k8s-node02   <none>           <none>
dnsutils-ds-jx5kx           1/1     Running   963        40d   172.30.88.4   k8s-node01   <none>           <none>
my-nginx-5dd67b97fb-8j4k6   1/1     Running   0          40d   172.30.88.3   k8s-node01   <none>           <none>
my-nginx-5dd67b97fb-kx2pc   1/1     Running   0          98s   172.30.56.7   k8s-node02   <none>           <none>
nginx-7db9fccd9b-7vbhq      1/1     Running   0          98s   172.30.88.7   k8s-node01   <none>           <none>
nginx-7db9fccd9b-dkdzf      1/1     Running   0          27d   172.30.88.8   k8s-node01   <none>           <none>
nginx-7db9fccd9b-vrp9f      1/1     Running   0          27d   172.30.56.6   k8s-node02   <none>           <none>
nginx-ds-4lf8z              1/1     Running   0          41d   172.30.56.2   k8s-node02   <none>           <none>
nginx-ds-xqdgw              1/1     Running   0          41d   172.30.88.2   k8s-node01   <none>           <none>

[root@k8s-master01 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-node01   Ready    <none>   47d   v1.14.2
k8s-node02   Ready    <none>   47d   v1.14.2 

Stop the kubernetes-related services on the k8s-node03 node:
[root@k8s-node03 ~]# systemctl stop kubelet kube-proxy flanneld docker kube-nginx

6) Finally perform cleanup operations on the node k8s-node03:
 
[root@k8s-node03 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-node03 ~]# mount | grep "${K8S_DIR}" | awk '{print $3}'|xargs sudo umount
[root@k8s-node03 ~]# rm -rf ${K8S_DIR}/kubelet
[root@k8s-node03 ~]# rm -rf ${DOCKER_DIR}
[root@k8s-node03 ~]# rm -rf /var/run/flannel/
[root@k8s-node03 ~]# rm -rf /var/run/docker/
[root@k8s-node03 ~]# rm -rf /etc/systemd/system/{kubelet,docker,flanneld,kube-nginx}.service
[root@k8s-node03 ~]# rm -rf /opt/k8s/bin/*
[root@k8s-node03 ~]# rm -rf /etc/flanneld/cert /etc/kubernetes/cert
 
[root@k8s-node03 ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
[root@k8s-node03 ~]# ip link del flannel.1
[root@k8s-node03 ~]# ip link del docker0
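
If the same cleanup has to be run on more than one node, the commands above can be collected into a small script. A rough sketch (the file name clean-node.sh is just an example; it assumes environment.sh is still present on the node so that K8S_DIR and DOCKER_DIR resolve):
[root@k8s-node03 ~]# cat > /tmp/clean-node.sh <<'EOF'
#!/bin/bash
# stop the kubernetes-related services, then remove their data, unit files and certificates
source /opt/k8s/bin/environment.sh
systemctl stop kubelet kube-proxy flanneld docker kube-nginx
mount | grep "${K8S_DIR}" | awk '{print $3}' | xargs -r umount
rm -rf ${K8S_DIR}/kubelet ${DOCKER_DIR} /var/run/flannel/ /var/run/docker/
rm -rf /etc/systemd/system/{kubelet,docker,flanneld,kube-nginx}.service
rm -rf /opt/k8s/bin/* /etc/flanneld/cert /etc/kubernetes/cert
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
ip link del flannel.1 2>/dev/null
ip link del docker0 2>/dev/null
EOF
[root@k8s-node03 ~]# bash /tmp/clean-node.sh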

Second, how to add a Node node to the Kubernetes cluster
For example, to rejoin the previously removed k8s-node03 node to the k8s cluster (the following operations are performed on the k8s-master01 node):

1) Modify the node variables in the /opt/k8s/bin/environment.sh script so that they only contain the k8s-node03 node, then distribute the script.
[root@k8s-master01 ~]# cp /opt/k8s/bin/environment.sh /opt/k8s/bin/environment.sh.bak1
[root@k8s-master01 ~]# vim /opt/k8s/bin/environment.sh
........
# IP array of all node nodes in the cluster
export NODE_NODE_IPS=(172.16.60.246)
# hostname array corresponding to the node node IPs above
export NODE_NODE_NAMES=(k8s-node03)

[root@k8s-master01 ~]# diff /opt/k8s/bin/environment.sh /opt/k8s/bin/environment.sh.bak1
17c17
< export NODE_NODE_IPS=(172.16.60.246)
---
> export NODE_NODE_IPS=(172.16.60.244 172.16.60.245 172.16.60.246)
19c19
< export NODE_NODE_NAMES=(k8s-node03)
---
> export NODE_NODE_NAMES=(k8s-node01 k8s-node02 k8s-node03)
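
Before distributing anything, it may be worth confirming that the new variables resolve correctly and that the master can reach the new node over ssh (this assumes the passwordless ssh trust to k8s-node03 is already in place); a quick check:
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "hostname && uname -r"
  done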

2) Distribute the certificate files previously generated on the k8s-master01 node to the newly added node node
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "mkdir -p /etc/kubernetes/cert"
    scp ca*.pem ca-config.json root@${node_node_ip}:/etc/kubernetes/cert
  done
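
A quick way to confirm the CA files actually landed on the new node, following the same loop pattern (just a verification sketch):
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "ls -l /etc/kubernetes/cert"
  done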

3) Deploy the flannel container network
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    scp flannel/{flanneld,mk-docker-opts.sh} root@${node_node_ip}:/opt/k8s/bin/
    ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "mkdir -p /etc/flanneld/cert"
    scp flanneld*.pem root@${node_node_ip}:/etc/flanneld/cert
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    scp flanneld.service root@${node_node_ip}:/etc/systemd/system/
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "systemctl status flanneld|grep Active"
  done
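
Optionally, the flannel interface and the subnet assigned to the new node can be inspected as well. A sketch only: the flannel.1 name assumes the default VXLAN backend, and /run/flannel/docker assumes the flanneld unit calls mk-docker-opts.sh as in the original deployment:
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "/usr/sbin/ip addr show flannel.1 && cat /run/flannel/docker"
  done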

4) Deploy the node node runtime components

->  Install dependency packages
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "yum install -y epel-release"
    ssh root@${node_node_ip} "yum install -y conntrack ipvsadm ntp ntpdate ipset jq iptables curl sysstat libseccomp && modprobe ip_vs "
  done
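
The modprobe above only loads ip_vs for the current boot. If the IPVS modules should persist across reboots, they can also be written to a modules-load file; a sketch (the file path and module list are common defaults, not something taken from this article):
[root@k8s-master01 ~]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "printf 'ip_vs\nip_vs_rr\nip_vs_wrr\nip_vs_sh\nnf_conntrack_ipv4\n' > /etc/modules-load.d/ipvs.conf"
  done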

->  Deploy the docker component
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    scp docker/*  root@${node_node_ip}:/opt/k8s/bin/
    ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    scp docker.service root@${node_node_ip}:/etc/systemd/system/
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "mkdir -p  /etc/docker/ ${DOCKER_DIR}/{data,exec}"
    scp docker-daemon.json root@${node_node_ip}:/etc/docker/daemon.json
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "systemctl status docker|grep Active"
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
  done
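
In the output above, the docker0 address on each node should fall inside that node's flannel.1 subnet; if it does not, docker most likely started before flanneld or is not reading the flannel-generated options. To show just the addresses, a short sketch:
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "/usr/sbin/ip -4 addr show flannel.1 | grep inet; /usr/sbin/ip -4 addr show docker0 | grep inet"
  done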

->  Deploy the kubelet component
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    scp kubernetes/server/bin/kubelet root@${node_node_ip}:/opt/k8s/bin/
    ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
  done

->  Create a bootstrap token (the token created earlier has expired; a token is only valid for 24h, i.e. one day)
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
  do
    echo ">>> ${node_node_name}"

    # create a token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_node_name} \
      --kubeconfig ~/.kube/config)

    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig

    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig

    # set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig

    # set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
  done

Check the token kubeadm created for the new node:
[root@k8s-master01 work]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
sdwq5g.llzr9ytm32h1mnh1   23h       2019-08-06T11:47:47+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node03

[root@k8s-master01 work]# kubectl get secrets  -n kube-system|grep bootstrap-token
bootstrap-token-sdwq5g                           bootstrap.kubernetes.io/token         7      77s

->  Distribute the bootstrap kubeconfig file to the newly added node node
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
  do
    echo ">>> ${node_node_name}"
    scp kubelet-bootstrap-${node_node_name}.kubeconfig root@${node_node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done

->  Distribute the kubelet parameter configuration file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    sed -e "s/##NODE_NODE_IP##/${node_node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_node_ip}.yaml.template
    scp kubelet-config-${node_node_ip}.yaml.template root@${node_node_ip}:/etc/kubernetes/kubelet-config.yaml
  done

->  Distribute the kubelet systemd unit file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
  do
    echo ">>> ${node_node_name}"
    sed -e "s/##NODE_NODE_NAME##/${node_node_name}/" kubelet.service.template > kubelet-${node_node_name}.service
    scp kubelet-${node_node_name}.service root@${node_node_name}:/etc/systemd/system/kubelet.service
  done

->  Start the kubelet service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${node_node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done
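
As with the other components, it may help to confirm that kubelet is actually running on the new node before moving on; a quick check following the same pattern:
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "systemctl status kubelet | grep Active"
  done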

->  Deploy the kube-proxy component
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    scp kubernetes/server/bin/kube-proxy root@${node_node_ip}:/opt/k8s/bin/
    ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
  done

[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
  do
    echo ">>> ${node_node_name}"
    scp kube-proxy.kubeconfig root@${node_node_name}:/etc/kubernetes/
  done

[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
  do
    echo ">>> ${node_node_name}"
    scp kube-proxy.service root@${node_node_name}:/etc/systemd/system/
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_node_ip} "modprobe ip_vs_rr"
    ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done

[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "systemctl status kube-proxy|grep Active"
  done
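
If kube-proxy is configured for IPVS mode (the ip_vs modules loaded earlier suggest it is), the virtual server rules it created can also be inspected on the new node; a sketch, assuming ipvsadm was installed with the dependency packages:
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "/usr/sbin/ipvsadm -ln"
  done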

->  Manually approve the kubelet server cert CSR
[root@k8s-master01 work]# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-5fwlh   3m34s   system:bootstrap:sdwq5g   Approved,Issued
csr-t547p   3m21s   system:node:k8s-node03    Pending

[root@k8s-master01 work]# kubectl certificate approve csr-t547p
certificatesigningrequest.certificates.k8s.io/csr-t547p approved

[root@k8s-master01 work]# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-5fwlh   3m53s   system:bootstrap:sdwq5g   Approved,Issued
csr-t547p   3m40s   system:node:k8s-node03    Approved,Issued

->  Check the cluster status; the k8s-node03 node has rejoined the cluster and pods have already been scheduled onto it.
[root@k8s-master01 work]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-node01   Ready    <none>   47d   v1.14.2
k8s-node02   Ready    <none>   47d   v1.14.2
k8s-node03   Ready    <none>   1s    v1.14.2

[root@k8s-master01 work]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP            NODE         NOMINATED NODE   READINESS GATES
dnsutils-ds-5sc4z           1/1     Running   965        40d    172.30.56.3   k8s-node02   <none>           <none>
dnsutils-ds-gc8sb           1/1     Running   1          94m    172.30.72.2   k8s-node03   <none>           <none>
dnsutils-ds-jx5kx           1/1     Running   966        40d    172.30.88.4   k8s-node01   <none>           <none>
my-nginx-5dd67b97fb-8j4k6   1/1     Running   0          40d    172.30.88.3   k8s-node01   <none>           <none>
my-nginx-5dd67b97fb-kx2pc   1/1     Running   0          174m   172.30.56.7   k8s-node02   <none>           <none>
nginx-7db9fccd9b-7vbhq      1/1     Running   0          174m   172.30.88.7   k8s-node01   <none>           <none>
nginx-7db9fccd9b-dkdzf      1/1     Running   0          27d    172.30.88.8   k8s-node01   <none>           <none>
nginx-7db9fccd9b-vrp9f      1/1     Running   0          27d    172.30.56.6   k8s-node02   <none>           <none>
nginx-ds-4lf8z              1/1     Running   0          41d    172.30.56.2   k8s-node02   <none>           <none>
nginx-ds-jn759              1/1     Running   0          94m    172.30.72.3   k8s-node03   <none>           <none>
nginx-ds-xqdgw              1/1     Running   0          41d    172.30.88.2   k8s-node01   <none>           <none>

[root@k8s-master01 work]# kubectl top node
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-node01   96m          2%     2123Mi          55%
k8s-node02   133m         3%     1772Mi          46%
k8s-node03   46m          1%     4859Mi          61%

====================================================================================================
Note 1
If you want to add a brand-new node to the k8s cluster above, the practice is as follows:
1) Complete the environment initialization on the new node node: set up passwordless ssh login from k8s-master01 to the new node, add the /etc/hosts bindings, turn off the firewall, and so on.
2) In the /opt/k8s/bin/environment.sh script, put the new node's information into the NODE_NODE_IPS and NODE_NODE_NAMES variables.
3) Repeat all of the k8s-node03 addition steps above for each new node.

====================================================================================================
Note 2
The above applies to a k8s cluster deployed in binary mode. If the k8s cluster was created with the kubeadm tool, the operation to re-add a node to the cluster is as follows:

Command format for a node to join the cluster (run on the node node, as root):
# kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

If the token has been forgotten, it can be viewed with the following command (run on the master node):
# kubeadm token list

By default a token is valid for 24 hours. If the token has expired, a new one can be generated with the following command (run on the master node):
# kubeadm token create

If the --discovery-token-ca-cert-hash value cannot be found, it can be generated with the following command (run on the master node):
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

After adding the node, wait a moment and then check on the master node that it has joined the cluster.
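
On recent kubeadm versions the token and hash steps can be combined: kubeadm can print a ready-to-use join command directly (run on the master node):
# kubeadm token create --print-join-command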

Origin www.cnblogs.com/kevingrace/p/11302555.html