K8s operation and maintenance guide

One, Node isolation and recovery
During hardware upgrades, maintenance, and similar operations, we need to isolate certain Nodes from the Kubernetes cluster's scheduling scope. Kubernetes provides a mechanism for both removing a Node from the scheduling scope and bringing it back in.
Create a configuration file unschedule_node.yaml and set unschedulable to true in the spec section:

[root@master node]# cat unschedule_node.yaml 
apiVersion: v1
kind: Node
metadata:
  name: 192.168.0.222
  labels:
    kubernetes.io/hostname: k8s-node-1
spec: 
  unschedulable: true

Then apply the change to the Node's state with the kubectl replace command:

[root@master node]# kubectl replace -f unschedule_node.yaml 
node/192.168.0.222 replaced
[root@master node]# kubectl get nodes
NAME            STATUS                     ROLES     AGE       VERSION
192.168.0.144   Ready                      <none>    25d       v1.11.6
192.168.0.148   Ready                      <none>    25d       v1.11.6
192.168.0.222   Ready,SchedulingDisabled   <none>    5d        v1.11.6

Checking the Node status, you can see that SchedulingDisabled has been added to it; the system will no longer schedule subsequently created Pods to this Node. You can also skip the configuration file and complete the operation directly with the kubectl patch command:
kubectl patch node k8s-node-1 -p '{"spec": {"unschedulable": true}}'
Note that taking a Node out of the scheduling scope does not automatically stop the Pods already running on it; the administrator must stop the Pods running on that Node manually.
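If you would rather not delete those Pods one by one, the kubectl drain command combines cordoning a Node with evicting the Pods on it; a minimal sketch, reusing the node name from the example above (--ignore-daemonsets is needed when DaemonSet-managed Pods are present, because they cannot be evicted):

kubectl drain 192.168.0.222 --ignore-daemonsets

Because drain cordons the Node first, a separate isolation step is not required.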
Similarly, to bring a Node back into the cluster's scheduling scope, set unschedulable to false and run kubectl replace or kubectl patch again to restore scheduling to that Node.
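For example, the patch form of the recovery operation would be (using the same node name as the patch example above):

kubectl patch node k8s-node-1 -p '{"spec": {"unschedulable": false}}'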

A third method:
Use kubectl cordon <node_name> to disable scheduling on a Node:
[root@master node]# kubectl cordon 192.168.0.148
node/192.168.0.148 cordoned
[root@master node]# kubectl get nodes
NAME            STATUS                     ROLES     AGE       VERSION
192.168.0.144   Ready                      <none>    25d       v1.11.6
192.168.0.148   Ready,SchedulingDisabled   <none>    25d       v1.11.6
192.168.0.222   Ready,SchedulingDisabled   <none>    5d        v1.11.6

To restore scheduling, use kubectl uncordon:
[root@master node]# kubectl uncordon 192.168.0.222
node/192.168.0.222 uncordoned
[root@master node]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.0.144   Ready     <none>    25d       v1.11.6
192.168.0.148   Ready     <none>    25d       v1.11.6
192.168.0.222   Ready     <none>    5d        v1.11.6

Two, Node expansion
Adding a new Node to a Kubernetes cluster is very simple. Install the Docker, kubelet, and kube-proxy services on the new node, configure the startup parameters of kubelet and kube-proxy, copy the certificates, and finally start the services. Relying on kubelet's default automatic registration mechanism, the new node is then automatically added to the existing Kubernetes cluster.
After the Kubernetes Master accepts the new Node's registration, the Node is automatically included in the current cluster's scheduling scope, and containers created afterwards can be scheduled to the new Node.
Through this mechanism, the cluster's set of Nodes can be expanded.
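As a rough sketch of the registration step, the kubelet startup parameters on the new node might look like the following; the kubeconfig path and the node IP are assumptions that depend on how the cluster was deployed:

# Minimal sketch; paths and the node IP are illustrative assumptions.
# --register-node=true (the kubelet default) enables the automatic
# registration with the API server described above.
kubelet --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
        --hostname-override=192.168.0.150 \
        --register-node=true \
        --logtostderr=true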
Three, Namespace: sharing the cluster environment with isolation
1. Create a namespace

   namespace-development.yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     name: development

 # kubectl create -f namespace-development.yaml
 namespace/development created
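Pods and other resources can then be placed into this namespace by setting metadata.namespace, and queried with the --namespace flag. A minimal sketch, in which the Pod name, image, and file name are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: busybox          # illustrative Pod name
  namespace: development # place the Pod in the new namespace
spec:
  containers:
  - name: busybox
    image: busybox       # illustrative image
    command: ["sleep", "3600"]

# kubectl create -f busybox-pod.yaml
# kubectl get pods --namespace=development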


Origin blog.csdn.net/zhutongcloud/article/details/92796312