1. Add a node
Show the current nodes (on the master):
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 153m v1.14.0
k8s-node1 Ready <none> 117m v1.14.0
As before, the new node is a cloned VM: change its hostname (and IP address, if needed) before joining, so it does not conflict with the machine it was cloned from, which has already been through the kubeadm setup.
Operate on node2:
hostnamectl set-hostname k8s-node2
reboot
Add the k8s-node2 node (run on node2):
[root@k8s-node2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@k8s-node2 ~]# kubeadm join 192.168.111.130:6443 --token ymtl8s.933t59qfezi9gjcq --discovery-token-ca-cert-hash sha256:7816d0b2572e6c569ed8e63ece15a7a08d06ed3fc89698245bf2aaa6acc345d7
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
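Note that the `echo 1 > /proc/sys/net/ipv4/ip_forward` step above only lasts until the next reboot. A minimal sketch of making the setting persistent (the file name `k8s.conf` under `/etc/sysctl.d/` is an arbitrary choice; requires root):

```shell
# Persist IP forwarding across reboots; writing to /proc directly
# is lost when the node restarts.
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
sysctl --system   # reload all sysctl configuration files
```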
Check on the master whether the node was added successfully:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 178m v1.14.0
k8s-node1 Ready <none> 142m v1.14.0
k8s-node2 Ready <none> 27s v1.14.0
[root@k8s-master ~]# kubectl get po --all-namespaces -o wide   ## -o wide shows which node each pod runs on; --namespace=kube-system shows only system pods
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-fb8b8dccf-ltlx4 1/1 Running 0 3h12m 10.244.0.4 k8s-master <none> <none>
kube-system coredns-fb8b8dccf-q949f 1/1 Running 0 3h12m 10.244.0.5 k8s-master <none> <none>
kube-system etcd-k8s-master 1/1 Running 2 3h11m 192.168.111.130 k8s-master <none> <none>
kube-system kube-apiserver-k8s-master 1/1 Running 2 3h12m 192.168.111.130 k8s-master <none> <none>
kube-system kube-controller-manager-k8s-master 1/1 Running 2 3h11m 192.168.111.130 k8s-master <none> <none>
kube-system kube-flannel-ds-amd64-2gr2v 1/1 Running 0 177m 192.168.111.130 k8s-master <none> <none>
kube-system kube-flannel-ds-amd64-bsrkx 1/1 Running 0 157m 192.168.111.131 k8s-node1 <none> <none>
kube-system kube-flannel-ds-amd64-xdg5p 1/1 Running 0 15m 192.168.111.132 k8s-node2 <none> <none>
kube-system kube-proxy-2mj4q 1/1 Running 0 157m 192.168.111.131 k8s-node1 <none> <none>
kube-system kube-proxy-ffd8s 1/1 Running 0 15m 192.168.111.132 k8s-node2 <none> <none>
kube-system kube-proxy-qp5k7 1/1 Running 0 3h12m 192.168.111.130 k8s-master <none> <none>
kube-system kube-scheduler-k8s-master 1/1 Running 2 3h12m 192.168.111.130 k8s-master <none> <none>
[root@k8s-master ~]#
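Once several nodes have joined, the `kubectl get nodes` output above can also be checked from a script. A small sketch that flags any node whose STATUS is not `Ready` (here fed with sample output via a here-document; on a live cluster you would pipe in `kubectl get nodes` instead):

```shell
#!/bin/sh
# Print the name of every node that is not Ready.
# NR > 1 skips the NAME/STATUS header line; $2 is the STATUS column.
not_ready=$(awk 'NR > 1 && $2 != "Ready" { print $1 }' <<'EOF'
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   178m   v1.14.0
k8s-node1    Ready      <none>   142m   v1.14.0
k8s-node2    NotReady   <none>   27s    v1.14.0
EOF
)
echo "Not ready: $not_ready"
```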
2. Remove a node
Run on the master:
kubectl drain k8s-node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node2
Run on node2:
kubeadm reset
If you need to add the node back later, reboot the node machine and re-run the 'kubeadm join ...' command.
3. Restart a node
No special steps are required; just make sure the kubelet on that node comes up at boot:
systemctl enable kubelet
systemctl restart kubelet   # or: service kubelet start
################################################
Removing a K8s node for maintenance
# Mark the node as unschedulable; existing pods are unaffected. Note that DaemonSet pods are not affected
kubectl cordon node-name
# Evict the pods on that node
kubectl drain node-name
# Maintenance finished; put the node back into service
kubectl uncordon node-name
# Delete the node
kubectl delete node node-name
# To join a new node, run this on the master, then run its output on the new node
kubeadm token create --print-join-command
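The cordon/drain/uncordon cycle above can be wrapped in one small script. A minimal sketch (the `maintain_node` function name is my own; setting `KUBECTL=echo` turns it into a dry run that only prints the commands, which is what the demonstration at the bottom does):

```shell
#!/bin/sh
# KUBECTL defaults to the real kubectl; override with KUBECTL=echo for a dry run.
KUBECTL="${KUBECTL:-kubectl}"

maintain_node() {
    node="$1"
    $KUBECTL cordon "$node"                                          # stop new pods being scheduled
    $KUBECTL drain "$node" --ignore-daemonsets --delete-local-data   # evict existing pods
    # ... perform maintenance on the node here ...
    $KUBECTL uncordon "$node"                                        # put the node back into service
}

# Dry-run demonstration: print the commands instead of running them.
KUBECTL=echo
out=$(maintain_node k8s-node2)
echo "$out"
```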