Kubernetes certification exam self-study series | Manually specifying where a pod runs

Book source: "CKA/CKAD Exam Guide: A Complete Walkthrough from Docker to Kubernetes"

These reading notes were compiled while studying and are shared here for reference. If any copyright is infringed, the post will be deleted. Thank you for your support!

See also the summary post: Kubernetes certification exam self-study series | Summary_COCOgsta's Blog-CSDN Blog


When we run a pod, the master schedules it onto a node according to its own algorithm; we only find out which node it landed on after the pod has been created.

5.6.1 Set labels for nodes

We can manually control which node a pod runs on by setting labels on the nodes and then telling the pod to run only on a node with a specific label.

A label has the format key=value. The key may contain the characters "/" or ".", and when a resource has multiple labels they are displayed separated by commas.
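
For example, a label key may carry an optional "/"-separated prefix. The keys below are illustrative; the first two are made up for this example, while kubernetes.io/arch is a well-known label that Kubernetes sets automatically.

disktype=ssd                # simple key
example.com/disktype=ssd    # hypothetical key with a prefix containing "/" and "."
kubernetes.io/arch=amd64    # a well-known, automatically set label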

Step 1: View the labels of all nodes.

[root@vms10 ~]# kubectl get nodes --show-labels
NAME           STATUS   ROLES                   AGE    VERSION     LABELS 
vms10.rhce.cc   Ready   control-plane,master    30h    v1.21.1     ... omitted ...
vms11.rhce.cc   Ready   <none>                  30h    v1.21.1     ... omitted ...
vms12.rhce.cc   Ready   <none>                  30h    v1.21.1     ... omitted ...
[root@vms10 ~]#

Step 2: View the labels for a particular node.

[root@vms10 ~]# kubectl get nodes vms12.rhce.cc --show-labels 
NAME          STATUS    ROLES    AGE    VERSION     LABELS 
vms12.rhce.cc  Ready    <none>   30h    v1.21.1     ... omitted ...
[root@vms10 ~]#

The syntax for setting a label to a node is as follows.

kubectl label node <node-name> key=value

Step 3: Set a label diskxx=ssdxx for the vms12 node.

[root@vms10 ~]# kubectl label node vms12.rhce.cc diskxx=ssdxx 
node/vms12.rhce.cc labeled 
[root@vms10 ~]#

Step 4: Check whether the label took effect.

[root@vms10 ~]# kubectl get nodes vms12.rhce.cc --show-labels
NAME            STATUS     ROLES      AGE     VERSION      LABELS 
vms12.rhce.cc    Ready     <none>     30h     v1.21.1      ...,diskxx=ssdxx,...
[root@vms10 ~]# 
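
Once a label is in place, it can also be used as a filter: the -l (--selector) option of kubectl get lists only matching resources. A quick check using the label set above:

kubectl get nodes -l diskxx=ssdxx    # list only nodes carrying diskxx=ssdxx
kubectl get nodes -l diskxx          # match on the key alone, regardless of value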

If you want to remove a label from a node, the syntax is as follows.

kubectl label node <node-name> key-

Note: append "-" directly after the key, with no space before the "-".
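
A related detail: kubectl refuses to change the value of an existing key unless you pass --overwrite. A minimal sketch, using a hypothetical new value (not part of this walkthrough):

kubectl label node vms12.rhce.cc diskxx=nvmexx             # rejected, diskxx already has a value
kubectl label node vms12.rhce.cc diskxx=nvmexx --overwrite # replaces the old value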

Step 5: Now remove the diskxx=ssdxx label from vms12.

[root@vms10 ~]# kubectl label node vms12.rhce.cc diskxx-
node/vms12.rhce.cc unlabeled
[root@vms10 ~]#

Step 6: Look at the label of vms12 again.

[root@vms10 ~]# kubectl get nodes vms12.rhce.cc --show-labels
NAME            STATUS     ROLES      AGE     VERSION      LABELS
vms12.rhce.cc    Ready     <none>     30h      v1.21.1    ... omitted ...
[root@vms10 ~]#

You can see that the label diskxx no longer exists.

If you want to set labels for all nodes, the syntax is as follows.

kubectl label node --all key=value
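
For instance, to put a hypothetical env=dev label on every node in the cluster, and then remove it again:

kubectl label node --all env=dev    # label every node
kubectl label node --all env-       # remove the label from every node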

There is one special label here, whose format is node-role.kubernetes.io/<name>.

This label controls what appears in the ROLES column of kubectl get nodes. For example, control-plane and master are displayed for the master node, while the other nodes show <none>.

[root@vms10 pod]# kubectl get nodes
NAME           STATUS    ROLES                     AGE     VERSION 
vms10.rhce.cc  Ready     control-plane,master      30h     v1.21.1
vms11.rhce.cc  Ready     <none>                    30h     v1.21.1
vms12.rhce.cc  Ready     <none>                    30h     v1.21.1
[root@vms10 pod]#

Here, control-plane and master are displayed for vms10 because the system automatically sets the labels node-role.kubernetes.io/control-plane and node-role.kubernetes.io/master; the part after node-role.kubernetes.io/ is what is shown under ROLES.

It does not matter whether this key has a value; if you do not want to set one, simply use "" as the value. Suppose we want the ROLES column of vms11 to show worker1 and that of vms12 to show worker2.

Step 7: Set node-role.kubernetes.io labels for the two workers, and remove the control-plane label from the master.

[root@vms10 pod]# kubectl label nodes vms11.rhce.cc node-role.kubernetes.io/worker1="" # add the worker1 role to vms11
node/vms11.rhce.cc labeled
[root@vms10 pod]# kubectl label nodes vms12.rhce.cc node-role.kubernetes.io/worker2="" # add the worker2 role to vms12
node/vms12.rhce.cc labeled
[root@vms10 pod]# kubectl label nodes vms10.rhce.cc node-role.kubernetes.io/control-plane- # remove the control-plane role from the master
node/vms10.rhce.cc unlabeled
[root@vms10 pod]#

Step 8: View the results.

[root@vms10 pod]# kubectl get nodes
NAME                STATUS   ROLES    AGE    VERSION
vms10.rhce.cc       Ready    master   30h    v1.21.1 
vms11.rhce.cc       Ready    worker1  30h    v1.21.1
vms12.rhce.cc       Ready    worker2  30h    v1.21.1
[root@vms10 pod]#

Step 9: To remove the role name, proceed exactly as when removing a normal label.

[root@vms10 pod]# kubectl label nodes vms11.rhce.cc node-role.kubernetes.io/worker1-
node/vms11.rhce.cc unlabeled
[root@vms10 pod]# kubectl label nodes vms12.rhce.cc node-role.kubernetes.io/worker2-
node/vms12.rhce.cc unlabeled
[root@vms10 pod]#

Step 10: Set the label diskxx=ssdxx on vms12 again.

[root@vms10 ~]# kubectl label node vms12.rhce.cc diskxx=ssdxx
node/vms12.rhce.cc labeled
[root@vms10 ~]#

5.6.2 Create a pod to run on a specific node

The nodeSelector field in a pod's spec makes the pod run only on nodes that carry a specific label.

Create a new pod and let it run on the vms12 node.

Step 1: Create the YAML file podlabel.yaml for the pod, with the following content.

[root@vms10 pod]# cat podlabel.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: web1
  labels:
    role: myrole 
spec:
  nodeSelector:
    diskxx: ssdxx 
  containers:
  - name: web 
    image: nginx
    imagePullPolicy: IfNotPresent 
[root@vms10 pod]#

In this way, web1 will run only on nodes that have the label diskxx=ssdxx. If multiple nodes carry the label diskxx=ssdxx, k8s will schedule the pod onto one of them.

Note that the indentation of nodeSelector is at the same level as containers.
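
Also note that nodeSelector is a hard requirement: if no node carries the diskxx=ssdxx label, the pod is not scheduled at all and stays in the Pending state. You can confirm this from the pod's events (the exact message wording varies by version):

kubectl get pod web1         # STATUS shows Pending when no node matches
kubectl describe pod web1    # Events show a FailedScheduling message about the node selector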

Step 2: Create a pod.

[root@vms10 pod]# kubectl apply -f podlabel.yaml 
pod/web1 created 
[root@vms10 pod]#

Step 3: View the node where the pod is running.

[root@vms10 pod]# kubectl get pods -o wide 
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE            ...
web1   1/1     Running   0          29s   10.244.3.9   vms12.rhce.cc   ...
[root@vms10 pod]#
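
Since the pod itself was given the label role=myrole in its metadata, the same -l selector syntax works for pods as well:

kubectl get pods -l role=myrole -o wide    # list only pods carrying role=myrole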

Step 4: Delete this pod yourself.
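
Either of the following will do it:

kubectl delete pod web1            # delete by name
kubectl delete -f podlabel.yaml    # or delete via the manifest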

5.6.3 Annotations settings

Nodes and pods, as well as other objects described later (such as deployments), also have an Annotations attribute, which can be understood as notes attached to the object.

Step 1: Now view the Annotations property of vms12.rhce.cc.

[root@vms10 pod]# kubectl describe nodes vms12.rhce.cc 
Name:           vms12.rhce.cc 
Roles:          <none>
...
Annotations:    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                node.alpha.kubernetes.io/ttl: 0
                projectcalico.org/IPv4Address: 192.168.26.12/24
                projectcalico.org/IPv4IPIPTunnelAddr: 192.168.14.0
                volumes.kubernetes.io/controller-managed-attach-detach: true 
[root@vms10 pod]#

Step 2: You can set an annotation on this node with the following command.

[root@vms10 pod]# kubectl annotate nodes vms12.rhce.cc aa=123
node/vms12.rhce.cc annotated
[root@vms10 pod]# 
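
To read a single annotation back without scanning the full describe output, a jsonpath query works; this checks the aa annotation just set:

kubectl get node vms12.rhce.cc -o jsonpath='{.metadata.annotations.aa}'    # prints: 123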

Step 3: View the properties of the node vms12.rhce.cc.

[root@vms10 pod]# kubectl describe nodes vms12.rhce.cc
Name:             vms12.rhce.cc
Roles:            <none>
...
Annotations:     aa: 123
                 kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                 ... output omitted ...
                 volumes.kubernetes.io/controller-managed-attach-detach: true
... output omitted ...
[root@vms10 pod]#

Step 4: To remove the annotation, use the following command.

[root@vms10 pod]# kubectl annotate nodes vms12.rhce.cc aa-
node/vms12.rhce.cc annotated 
[root@vms10 pod]#
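
Annotations behave the same way on other object types, and, as with labels, changing an existing value requires --overwrite. A brief sketch with hypothetical values:

kubectl annotate node vms12.rhce.cc aa=456 --overwrite    # change an existing annotation
kubectl annotate pod web1 owner=teamx                     # annotate a pod the same way (assumes a pod named web1 exists)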

Reprinted from: blog.csdn.net/guolianggsta/article/details/130668049