Kubernetes: assigning Pods to nodes with the scheduler (nodeSelector, node affinity, pod affinity, nodeName)

1. Requirements
You can constrain a Pod to run only on particular node(s), or to prefer particular nodes. There are several ways to do this; the recommended approach is to use label selectors. Usually such constraints are unnecessary, because the scheduler already places pods reasonably (for example, it spreads pods across nodes and avoids nodes with insufficient free resources). In some cases, however, you want more control over where a pod lands, for example to ensure that the pod ends up on a machine with an SSD attached, or to co-locate pods from two services that communicate heavily in the same availability zone.

2. nodeSelector
nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of PodSpec that specifies a map of key-value pairs.
For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it may have additional labels as well). The most common usage is a single key-value pair.

Add a label to a node
kubectl label nodes <node-name> <label-key>=<label-value>    # add a label to the node of your choice
kubectl get nodes --show-labels                               # list the labels currently on the nodes

Create a pod that runs on a node with a given label
[root@k8smaster node]# more nodeselector.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-pod
spec:
  containers:
  - name: nodeselector-pod-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
  nodeSelector:
    disktype: ssd

[root@k8smaster node]# kubectl create -f nodeselector.yaml 
pod/nodeselector-pod created
[root@k8smaster node]# kubectl get pods -o wide  # no node has the required label yet, so the pod stays Pending
NAME               READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
nodeselector-pod   0/1     Pending   0          2m26s   <none>   <none>   <none>           <none>
[root@k8smaster node]# kubectl describe pod nodeselector-pod
Name:         nodeselector-pod
Namespace:    default
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
Containers:
  nodeselector-pod-ctn:
    Image:        192.168.23.100:5000/tomcat:v2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vt7pl (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-vt7pl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vt7pl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  disktype=ssd
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  22s (x2 over 104s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
[root@k8smaster node]# 
[root@k8smaster node]# kubectl label nodes k8snode01 disktype=ssd  # add the label to the node
node/k8snode01 labeled    
[root@k8smaster node]# kubectl get nodes --show-labels  # show node labels
NAME        STATUS   ROLES    AGE   VERSION   LABELS
k8smaster   Ready    master   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8smaster,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8snode01   Ready    <none>   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode01,kubernetes.io/os=linux
k8snode02   Ready    <none>   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode02,kubernetes.io/os=linux
[root@k8smaster node]# 
[root@k8smaster node]# kubectl get pod -o wide  # the pod is now running on the labeled node
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nodeselector-pod   1/1     Running   0          11m   10.244.1.28   k8snode01   <none>           <none>
[root@k8smaster node]# 

[root@k8smaster node]# kubectl label nodes k8snode01 disktype=ssd2 --overwrite=true  # changing the node label does not evict the pod
node/k8snode01 labeled
[root@k8smaster node]# kubectl get nodes --show-labels
NAME        STATUS   ROLES    AGE   VERSION   LABELS
k8smaster   Ready    master   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8smaster,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8snode01   Ready    <none>   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd2,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode01,kubernetes.io/os=linux
k8snode02   Ready    <none>   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode02,kubernetes.io/os=linux
[root@k8smaster node]# 
[root@k8smaster node]# kubectl get pod -o wide  # the node label no longer matches, but this does not affect the running pod
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nodeselector-pod   1/1     Running   0          35m   10.244.1.28   k8snode01   <none>           <none>
[root@k8smaster node]# 

3. Node affinity
Node affinity is conceptually similar to nodeSelector: it constrains which nodes your pod can be scheduled onto, based on labels on the node. There are currently two types of node affinity, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. You can think of them as "hard" and "soft" respectively: the former specifies rules that *must* be met for a pod to be scheduled onto a node (just like nodeSelector, but with a more expressive syntax), while the latter specifies preferences that the scheduler will try to enforce but cannot guarantee. The "IgnoredDuringExecution" part of the names means that, as with nodeSelector, if the labels on a node change at runtime so that the affinity rules on a pod are no longer met, the pod still continues to run on that node. The planned requiredDuringSchedulingRequiredDuringExecution type would be just like requiredDuringSchedulingIgnoredDuringExecution, except that it would evict pods from nodes that no longer satisfy the pods' node affinity requirements.

Node affinity is specified through the nodeAffinity field under the affinity field of the PodSpec.
Node affinity syntax supports the following operators: In (the label's value is in the given list), NotIn (the label's value is not in the list), Exists (the label exists), DoesNotExist (the label does not exist), Gt (the label's value is greater than the given value), Lt (the label's value is less than the given value). You can use NotIn and DoesNotExist to achieve node anti-affinity behavior, or use node taints to repel pods from specific nodes.
If you specify both nodeSelector and nodeAffinity, *both* must be satisfied for the pod to be scheduled onto a candidate node.
If you specify multiple nodeSelectorTerms associated with nodeAffinity, the pod can be scheduled onto a node if *any one* of the nodeSelectorTerms is satisfied.
If you specify multiple matchExpressions associated with a single nodeSelectorTerm, the pod can be scheduled onto a node only if *all* matchExpressions are satisfied.
If you change or remove the labels of the node where the pod is scheduled, the pod will not be removed. In other words, the affinity selection only takes effect at the time the pod is scheduled.
The weight field of preferredDuringSchedulingIgnoredDuringExecution ranges from 1 to 100. For each node that meets all the other scheduling requirements (resource requests, RequiredDuringScheduling affinity expressions, and so on), the scheduler iterates over the elements of this field and adds "weight" to the node's score if the node matches the corresponding matchExpressions. This score is then combined with the scores of the other priority functions for the node; the node(s) with the highest total score are the most preferred.
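A minimal sketch of how these pieces fit together in a PodSpec; the label keys disktype and zone and their values are assumptions for illustration only, not the labels used in the walkthrough below. The two nodeSelectorTerms are ORed, the matchExpressions inside a single term are ANDed, and the preferred term only adds its weight to nodes that already satisfy the required block:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        # Terms are ORed: a node matching either term is a candidate.
        - matchExpressions:
          # Expressions within one term are ANDed: both must hold.
          - key: disktype
            operator: In
            values:
            - ssd
          - key: zone
            operator: NotIn
            values:
            - zone-c
        - matchExpressions:
          - key: disktype
            operator: Exists
      preferredDuringSchedulingIgnoredDuringExecution:
      # Among candidate nodes, add 50 to the score of any node in zone-a.
      - weight: 50
        preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - zone-a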

Create a pod that must not run on nodes labeled disktype=ssd [hard requirement]
[root@k8smaster node]# kubectl label nodes k8snode01 disktype=ssd --overwrite  # reset the label to disktype=ssd
node/k8snode01 labeled
[root@k8smaster node]# kubectl label nodes k8snode02 disktype=ssd --overwrite  # reset the label to disktype=ssd
node/k8snode02 labeled
[root@k8smaster node]# more noderequire.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: noderequire-pod
spec:
  containers:
  - name: noderequire-pod-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: NotIn
            values:
            - ssd

[root@k8smaster node]# kubectl create -f noderequire.yaml 
pod/noderequire-pod created
[root@k8smaster node]# kubectl get pod -o wide  # no node satisfies the affinity rule, so the pod stays Pending
NAME              READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
noderequire-pod   0/1     Pending   0          68s   <none>   <none>   <none>           <none>
[root@k8smaster node]# kubectl describe pod noderequire-pod
Name:         noderequire-pod
Namespace:    default
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
Containers:
  noderequire-pod-ctn:
    Image:        192.168.23.100:5000/tomcat:v2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vt7pl (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-vt7pl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vt7pl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  31s   default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't match node selector.
[root@k8smaster node]# kubectl label nodes k8snode02 disktype=ssd2 --overwrite  # reset the label to disktype=ssd2
node/k8snode02 labeled
[root@k8smaster node]# kubectl get pod -o wide  # the condition is now satisfied and the pod is running
NAME              READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
noderequire-pod   1/1     Running   0          2m18s   10.244.2.30   k8snode02   <none>           <none>
[root@k8smaster node]# kubectl label nodes k8snode02 disktype=ssd --overwrite  # reset the label to disktype=ssd
node/k8snode02 labeled
[root@k8smaster node]# kubectl get pod -o wide  # the changed label no longer satisfies the rule, but the running pod is not affected
NAME              READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
noderequire-pod   1/1     Running   0          2m37s   10.244.2.30   k8snode02   <none>           <none>
[root@k8smaster node]#

Create a pod that prefers to run on nodes labeled disktype=ssd [soft preference]
[root@k8smaster node]# kubectl label nodes k8snode02 disktype=ssd2 --overwrite  # reset the label to disktype=ssd2
node/k8snode02 labeled
[root@k8smaster node]# kubectl label nodes k8snode01 disktype=ssd2 --overwrite  # reset the label to disktype=ssd2
node/k8snode01 labeled
[root@k8smaster node]# more nodeprefer.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nodeprefer-pod
spec:
  containers:
  - name: nodeprefer-pod-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd    
      
[root@k8smaster node]# kubectl create -f nodeprefer.yaml 
pod/nodeprefer-pod created
[root@k8smaster node]# kubectl get pod -o wide  # no node has the preferred label, but the pod still runs normally
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nodeprefer-pod   1/1     Running   0          18s   10.244.2.31   k8snode02   <none>           <none>
[root@k8smaster node]# 

4. Inter-pod affinity and anti-affinity
Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled onto *based on the labels of pods already running on the node*, rather than based on labels on the node itself. A rule has the form "this pod should (or, for anti-affinity, should not) run on node X if X is already running one or more pods that satisfy rule Y". Y is expressed as a LabelSelector with an optional list of associated namespaces; unlike nodes, pods are namespaced (and therefore the labels on pods are implicitly namespaced), so a label selector over pod labels must specify which namespaces it applies to. Conceptually, X is a topology domain such as a node, a rack, a cloud provider zone or region, and so on. You express it with a topologyKey, which is the key of the node label the system uses to denote such a topology domain.
Note:
Inter-pod affinity and anti-affinity require substantial processing, which can significantly slow down scheduling in large clusters. They are not recommended in clusters larger than several hundred nodes.
Pod anti-affinity requires nodes to be labeled consistently, i.e. every node in the cluster must have an appropriate label matching the topologyKey. If some or all nodes are missing the specified topologyKey label, it can lead to unintended behavior.
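Before relying on a topologyKey, it can help to verify that every node actually carries the corresponding label. A quick check might look like this (using disktype, the key from the examples below, as the assumed topologyKey):
[root@k8smaster node]# kubectl get nodes -L disktype   # -L prints each node's value for that label; an empty column means the node lacks it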

As with node affinity, there are currently two types of pod affinity and anti-affinity, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, denoting "hard" and "soft" requirements respectively; see the description in the node affinity section above. An example of requiredDuringSchedulingIgnoredDuringExecution affinity would be "co-locate the pods of service A and service B in the same zone, since they communicate a lot with each other", and an example of preferredDuringSchedulingIgnoredDuringExecution anti-affinity would be "spread the pods of this service across zones" (a hard requirement would not make sense here, since you probably have more pods than zones).
Inter-pod affinity is specified through the podAffinity field under the affinity field of the PodSpec, and inter-pod anti-affinity through the podAntiAffinity field in the same place.
The legal operators for pod affinity and anti-affinity are In, NotIn, Exists, DoesNotExist.
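A minimal sketch of the two prose examples above; the app label values service-a and service-b and the zone/hostname topology keys are illustrative assumptions, not taken from the walkthrough below:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # Hard rule: run in the same zone as at least one pod labeled app=service-a.
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - service-a
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      # Soft rule: prefer nodes that are not already running another pod of this
      # service, spreading replicas across hostnames.
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - service-b
          topologyKey: kubernetes.io/hostname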

Create mypod2, which depends on mypod1 [hard requirement]
[root@k8smaster pod]# kubectl get nodes  --show-labels 
NAME        STATUS   ROLES    AGE   VERSION   LABELS
k8smaster   Ready    master   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8smaster,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8snode01   Ready    <none>   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd2,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode01,kubernetes.io/os=linux
k8snode02   Ready    <none>   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode02,kubernetes.io/os=linux
[root@k8smaster pod]# more pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
  labels:
    app: mypod1
spec:
  containers:
  - name: mypod1-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
  nodeSelector:
    disktype: ssd

[root@k8smaster pod]# more pod2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
  labels:
    app: mypod2
spec:
  containers:
  - name: mypod2-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - mypod1
        topologyKey: disktype

  restartPolicy: Never
[root@k8smaster pod]# kubectl create -f pod2.yaml 
pod/mypod2 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide  # no pod with the label app=mypod1 is running yet, so mypod2 stays Pending
NAME     READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES   LABELS
mypod2   0/1     Pending   0          21s   <none>   <none>   <none>           <none>            app=mypod2
[root@k8smaster pod]# kubectl describe pod mypod2  # see why it is Pending
Name:         mypod2
Namespace:    default
Priority:     0
Node:         <none>
Labels:       app=mypod2
Annotations:  <none>
Status:       Pending
IP:           
Containers:
  mypod2-ctn:
    Image:        192.168.23.100:5000/tomcat:v2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vt7pl (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-vt7pl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vt7pl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  38s   default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity.
[root@k8smaster pod]# kubectl create -f pod1.yaml 
pod/mypod1 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide  # once the pod labeled app=mypod1 is running, mypod2 can be scheduled and runs normally
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
mypod1   1/1     Running   0          13s   10.244.2.35   k8snode02   <none>           <none>            app=mypod1
mypod2   1/1     Running   0          67s   10.244.2.34   k8snode02   <none>           <none>            app=mypod2
[root@k8smaster pod]# 

Create mypod3, which prefers to co-locate with mypod2 [soft preference]
[root@k8smaster pod]# more pod3.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: mypod3
  labels:
    app: mypod3
spec:
  containers:
  - name: mypod3-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - mypod2
          topologyKey: disktype

  restartPolicy: Never
[root@k8smaster pod]# kubectl create -f pod3.yaml 
pod/mypod3 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
mypod1   1/1     Running   0          10m   10.244.2.35   k8snode02   <none>           <none>            app=mypod1
mypod2   1/1     Running   0          11m   10.244.2.34   k8snode02   <none>           <none>            app=mypod2
mypod3   1/1     Running   0          22s   10.244.2.36   k8snode02   <none>           <none>            app=mypod3
[root@k8smaster pod]# kubectl delete pod mypod2
pod "mypod2" deleted
[root@k8smaster pod]# kubectl get nodes --show-labels 
NAME        STATUS   ROLES    AGE   VERSION   LABELS
k8smaster   Ready    master   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8smaster,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8snode01   Ready    <none>   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd2,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode01,kubernetes.io/os=linux
k8snode02   Ready    <none>   8d    v1.15.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8snode02,kubernetes.io/os=linux
[root@k8smaster pod]# 
[root@k8smaster pod]# more pod2.yaml  # mypod2 is now also constrained to run on the disktype=ssd2 node
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
  labels:
    app: mypod2
spec:
  containers:
  - name: mypod2-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - mypod1
        topologyKey: disktype
  restartPolicy: Never
  nodeSelector:
    disktype: ssd2

[root@k8smaster pod]# 
[root@k8smaster pod]# kubectl create -f pod2.yaml 
pod/mypod2 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide  # mypod2 is pinned to the disktype=ssd2 node, but mypod1 (app=mypod1) is not in that topology domain, so mypod2 stays Pending
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
mypod1   1/1     Running   0          12m     10.244.2.35   k8snode02   <none>           <none>            app=mypod1
mypod2   0/1     Pending   0          6s      <none>        <none>      <none>           <none>            app=mypod2
mypod3   1/1     Running   0          2m56s   10.244.2.36   k8snode02   <none>           <none>            app=mypod3
[root@k8smaster pod]# kubectl describe pod mypod2  # see why
Name:         mypod2
Namespace:    default
Priority:     0
Node:         <none>
Labels:       app=mypod2
Annotations:  <none>
Status:       Pending
IP:           
Containers:
  mypod2-ctn:
    Image:        192.168.23.100:5000/tomcat:v2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vt7pl (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-vt7pl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vt7pl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  disktype=ssd2
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  24s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match node selector.
[root@k8smaster pod]#

5. nodeName
nodeName is the simplest form of node selection constraint, but because of its limitations it is typically not used. nodeName is a field of PodSpec. If it is non-empty, the scheduler ignores the pod and the kubelet running on the named node tries to run the pod. So if nodeName is specified in the PodSpec, it takes precedence over the node selection methods above.

Some limitations of using nodeName to select nodes:
If the named node does not exist, the pod will not run, and in some cases it may be automatically deleted.
If the named node does not have enough resources to accommodate the pod, the pod will fail, with a reason such as OutOfmemory or OutOfcpu.
Node names in cloud environments are not always predictable or stable.

Create a pod that runs on k8snode01
[root@k8smaster pod]# more pod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: mypod4
  labels:
    app: mypod1
spec:
  containers:
  - name: mypod4-ctn
    image: 192.168.23.100:5000/tomcat:v2
    imagePullPolicy: IfNotPresent
  nodeName: k8snode01
  restartPolicy: Never
[root@k8smaster pod]# kubectl create -f pod4.yaml 
pod/mypod4 created
[root@k8smaster pod]# kubectl get pod --show-labels -o wide  # after mypod4 starts, mypod2 can be scheduled and runs, because mypod4 on k8snode01 carries the label app=mypod1
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
mypod1   1/1     Running   0          16m     10.244.2.35   k8snode02   <none>           <none>            app=mypod1
mypod2   1/1     Running   0          3m55s   10.244.1.31   k8snode01   <none>           <none>            app=mypod2
mypod3   1/1     Running   0          6m45s   10.244.2.36   k8snode02   <none>           <none>            app=mypod3
mypod4   1/1     Running   0          13s     10.244.1.30   k8snode01   <none>           <none>            app=mypod1
[root@k8smaster pod]# 

Reference: Kubernetes scheduler documentation: https://kubernetes.io/zh/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

Origin blog.csdn.net/zhaikaiyun/article/details/104514658