Kubernetes scheduler

The scheduler's workflow:

predicate (pre-selection) --> priority (scoring) --> select (binding)

Nodes are first filtered by the predicates; the surviving nodes are then scored by the priority functions, and the highest-scoring node is selected.

 

Scheduling mechanisms:

1. Node affinity scheduling (nodeAffinity)

2. Pod affinity / anti-affinity scheduling (podAffinity, podAntiAffinity)

3. Taint and toleration scheduling (taints on nodes, tolerations on pods)

 

Source: https://github.com/kubernetes/kubernetes/tree/master/pkg/scheduler/algorithm

Commonly used default predicates (pre-selection):

They work as a one-vote veto: a node that fails any enabled predicate is excluded from scheduling.

CheckNodeCondition: check whether the node is in a condition that allows scheduling.

GeneralPredicates:

HostName: if the pod object defines pod.spec.hostname, check whether the node's hostname matches it.

PodFitsHostPorts: check whether the host ports requested in pods.spec.containers.ports.hostPort are still available on the node.

MatchNodeSelector: check whether the node's labels match pods.spec.nodeSelector.

PodFitsResources: check whether the node can satisfy the pod's resource requests; the node's allocatable resources can be viewed with kubectl describe nodes node1.

 

NoDiskConflict: check whether the storage volumes the pod depends on conflict with volumes already in use on the node; not enabled by default.

PodToleratesNodeTaints: check whether the node's taints are fully covered by the pod's spec.tolerations.

PodToleratesNodeNoExecuteTaints: check whether the pod tolerates the node's NoExecute taints; not enabled by default.

CheckNodeLabelPresence: check whether the specified labels exist on the node, i.e. scheduling by node label; not enabled by default.

CheckServiceAffinity: try to place pods that belong to the same Service on the same node; not enabled by default.

 

MaxEBSVolumeCount: check the number of Amazon Elastic Block Store volumes already mounted on the node against the allowed maximum.

MaxGCEPDVolumeCount: the same check for Google Cloud persistent disks.

MaxAzureDiskVolumeCount: the same check for Microsoft Azure disks.

 

CheckVolumeBinding: check the PVC bindings on the node.

NoVolumeZoneConflict: check whether the volumes the pod requests are available in the node's zone.

 

CheckNodeMemoryPressure: check whether the node is under memory pressure.

CheckNodePIDPressure: check whether the node is under PID pressure (too many processes).

CheckNodeDiskPressure: check whether the node is under disk pressure.

 

MatchInterPodAffinity: check whether the node satisfies the pod's inter-pod affinity/anti-affinity rules.
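
Most of these predicates simply compare fields of the pod spec against the node. Below is a minimal pod sketch (name and values are hypothetical) marking the fields inspected by MatchNodeSelector, PodFitsHostPorts, PodFitsResources and PodToleratesNodeTaints:

apiVersion: v1
kind: Pod
metadata:
  name: predicate-demo                 # hypothetical name
spec:
  nodeSelector:                        # inspected by MatchNodeSelector
    disktype: ssd
  tolerations:                         # inspected by PodToleratesNodeTaints
  - key: "node-type"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - containerPort: 80
      hostPort: 8080                   # inspected by PodFitsHostPorts
    resources:
      requests:                        # inspected by PodFitsResources
        cpu: "500m"
        memory: "256Mi"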

 

Default priority (preferred) functions, used to score the pre-selected nodes:

All enabled priority functions are evaluated for every candidate node and their scores are summed; the node with the highest total score wins.

LeastRequested: the larger the proportion of free resources, the higher the score. score = (cpu((capacity - sum(requested)) * 10 / capacity) + memory((capacity - sum(requested)) * 10 / capacity)) / 2 (see the worked example after this list).

balanced_resource_allocation: the closer the CPU and memory utilization rates are to each other, the higher the score.

node_prefer_avoid_pods: scores nodes based on the node annotation "scheduler.alpha.kubernetes.io/preferAvoidPods"; nodes annotated to avoid this pod's controller receive a lower score.

taint_toleration: checks the items of the pod's spec.tolerations against the node's taints list; the more matching entries, the lower the node's score.

selector_spreading: the more pods with the same labels (belonging to the same Service or controller) already run on a node, the lower that node's score, which spreads related pods across nodes.

interpod_affinity: iterates over the pod's affinity terms; the more terms the node satisfies, the higher its score.

node_affinity: checks the node against the pod's nodeSelector / node affinity; the more matches, the higher the score.

 

most_requested: the smaller the amount of idle resources, the higher the score (packs pods tightly); not enabled by default.

node_label: scores nodes according to their labels; not enabled by default.

image_locality: the larger the total size of the pod's images already present on the node, the higher the score; not enabled by default.
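
A worked example of the LeastRequested formula above, assuming a hypothetical node with 4000m CPU and 8Gi memory capacity on which running pods already request 2000m CPU and 2Gi memory:

cpu score    = (4000 - 2000) * 10 / 4000 = 5
memory score = (8 - 2) * 10 / 8 = 7.5
LeastRequested score = (5 + 7.5) / 2 = 6.25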

 

Advanced scheduling mechanisms

Node selectors: nodeSelector, nodeName

Node affinity scheduling: nodeAffinity
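
Of these, nodeName is the most direct: it binds the pod to a named node and bypasses the scheduler altogether. A minimal sketch, assuming a node called node01 exists:

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename-demo              # hypothetical name
spec:
  nodeName: node01                     # place the pod directly on node01, skipping the scheduler
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1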

 

nodeSelector is a hard (strong) constraint

kubectl explain pods.spec.nodeSelector

 

mkdir schedule

cp ../pod-sa-demo.yaml ./

vim pod-demo.yaml

apiVersion: v1

kind: Pod

metadata:

  name: pod-demo

  namespace: default

  labels:

    app: myapp

    tier: frontend

spec:

  containers:

  - name: myapp

    image: ikubernetes/myapp:v1

    ports:

    - name: myapp

      containerPort: 80

  nodeSelector:

    disktype: ssd

 

kubectl apply -f pod-demo.yaml

kubectl label nodes node01 disktype=ssd

kubectl get nodes --show-labels

kubectl get pods        the newly created pod is scheduled onto the node that carries the disktype=ssd label
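
To see which node the pod landed on and to remove the label again afterwards (a label is removed by appending a minus sign to its key):

kubectl get pods -o wide                 the NODE column shows node01
kubectl label nodes node01 disktype-     removes the label again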

 

affinity

kubectl explain pods.spec.affinity

kubectl explain pods.spec.affinity.nodeAffinity

preferredDuringSchedulingIgnoredDuringExecution <[]Object>: soft affinity; the scheduler tries to satisfy it, but the pod is scheduled even if it cannot be satisfied.

requiredDuringSchedulingIgnoredDuringExecution <Object>: hard affinity; the pod only runs on a node that satisfies it.

 

Examples

cp pod-demo.yaml pod-nodeaffinity-demo.yaml

vim pod-nodeaffinity-demo.yaml

 

apiVersion: v1

kind: Pod

metadata:

  name: pod-node-affinity-demo

  namespace: default

  labels:

    app: myapp

    tier: frontend

spec:

  containers:

  - name: myapp

    image: ikubernetes/myapp:v1

    ports:

    - name: myapp

      containerPort: 80

  affinity:                                            # affinity settings
    nodeAffinity:                                      # node affinity
      requiredDuringSchedulingIgnoredDuringExecution:  # hard affinity, must be satisfied
        nodeSelectorTerms:                             # node label terms
        - matchExpressions:                            # match expressions
          - key: zone                                  # label key
            operator: In                               # operator
            values:                                    # label values
            - foo                                      # value 1
            - bar                                      # value 2

 

kubectl apply -f pod-nodeaffinity-demo.yaml

 

vim pod-nodeaffinity-demo-2.yaml

apiVersion: v1

kind: Pod

metadata:

  name: pod-node-affinity-demo-2

  namespace: default

  labels:

    app: myapp

    tier: frontend

spec:

  containers:

  - name: myapp

    image: ikubernetes/myapp:v1

    ports:

    - name: myapp

      containerPort: 80

  affinity:

    nodeAffinity:

      preferredDuringSchedulingIgnoredDuringExecution:   # soft affinity: prefer nodes that match, but schedule anyway if none do
      - preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
        weight: 60

 

kubectl apply -f pod-nodeaffinity-demo-2.yaml
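
Because this is soft affinity, the pod is scheduled even if no node carries a zone=foo or zone=bar label; this can be checked with:

kubectl get pods pod-node-affinity-demo-2 -o wide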

 

podAffinity: pod affinity scheduling

podAntiAffinity: pod anti-affinity. The node where the first pod lands serves as the reference for evaluating subsequent pods: affinity requires them to run in the same location, anti-affinity requires them not to.

 

kubectl explain pods.spec.affinity.podAffinity        pod affinity also comes in hard and soft variants

preferredDuringSchedulingIgnoredDuringExecution <[]Object>: soft affinity

 

requiredDuringSchedulingIgnoredDuringExecution <[]Object>: hard affinity

topologyKey <string>: the node label key that defines a location (topology domain)

labelSelector <Object>: selects the pod(s) to be affine with

namespaces <[]string>: the namespaces of the pods selected by labelSelector; if not specified, it defaults to the namespace of the pod being created. Referencing pods in other namespaces is generally avoided.

 

kubectl explain pods.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector

matchExpressions <[]Object>: set-based selector

matchLabels <map[string]string>: equality-based selector
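
For pure equality matches, the matchExpressions form used in the examples below could equally be written with matchLabels; a small sketch:

        labelSelector:
          matchLabels:
            app: myapp        # equivalent to {key: app, operator: In, values: ["myapp"]}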

 

Examples

cp pod-demo.yaml pod-required-affinity-demo.yaml

vim pod-required-affinity-demo.yaml

apiVersion: v1

kind: Pod

metadata:

  name: pod-first

  namespace: default

  labels:

    app: myapp

    tier: frontend

spec:

  containers:

  - name: myapp

    image: ikubernetes/myapp:v1

---

apiVersion: v1

kind: Pod

metadata:

  name: pod-second

  namespace: default

  labels:

    app: backend

    tier: db

spec:

  containers:

  - name: busybox

    image: busybox:latest

    imagePullPolicy: IfNotPresent

    command: ["sh","-c","sleep 3600"]

  affinity:
    podAffinity:                                        # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:   # hard affinity
      - labelSelector:                                  # selects the pods to be affine with
          matchExpressions:                             # match expressions on pod labels
          - {key: app, operator: In, values: ["myapp"]} # label app=myapp
        topologyKey: kubernetes.io/hostname             # node label that defines the location; nodes sharing the same value for this key count as one location, so this pod runs on the same node as the matched pod

 

apiVersion: v1

kind: Pod

metadata:

  name: pod-first

  labels:

    app: myapp

    tier: frontend

spec:

  containers:

  - name: myapp

    image: ikubernetes/myapp:v1

---

apiVersion: v1

kind: Pod

metadata:

  name: pod-second

  labels:

    app: backend

    tier: db

spec:

  containers:

  - name: busybox

    image: busybox:latest

    imagePullPolicy: IfNotPresent

    command: ["sh","-c","sleep 3600"]

  affinity:

    podAffinity:

      requiredDuringSchedulingIgnoredDuringExecution:

      - labelSelector:

          matchExpressions:

          - {key: app, operator: In, values: ["myapp"]}

        topologyKey: kubernetes.io/hostname

 

kubectl delete -f pod-required-affinity-demo.yaml

kubectl apply -f pod-required-affinity-demo.yaml
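
The granularity of co-location is set by topologyKey: with kubernetes.io/hostname every node is its own location, so the pods share a node. Using a broader node label (here a hypothetical zone label on the nodes) would only require them to share a zone; a sketch:

  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: zone          # hypothetical node label; any node with the same zone value counts as the same location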

 

podAntiAffinity: pod anti-affinity

Examples

cp pod-required-affinity-demo.yaml pod-required-Antiaffinity-demo.yaml

vim pod-required-Antiaffinity-demo.yaml

 

apiVersion: v1

kind: Pod

metadata:

  name: pod-three

  labels:

    app: myapp

    tier: frontend

spec:

  containers:

  - name: myapp

    image: ikubernetes/myapp:v1

---

apiVersion: v1

kind: Pod

metadata:

  name: pod-four

  labels:

    app: backend

    tier: db

spec:

  containers:

  - name: busybox

    image: busybox:latest

    imagePullPolicy: IfNotPresent

    command: ["sh","-c","sleep 3600"]

  affinity:

    podAntiAffinity:                  # pod anti-affinity; apart from the name, the fields are the same as for podAffinity

      requiredDuringSchedulingIgnoredDuringExecution:

      - labelSelector:

          matchExpressions:

          - {key: app, operator: In, values: ["myapp"]}

        topologyKey: kubernetes.io/hostname

 

kubectl delete -f pod-required-affinity-demo.yaml

kubectl apply -f pod-required-Antiaffinity-demo.yaml

Because there is only one schedulable node and the two pods are anti-affine to each other, they cannot both run there; the second pod stays in Pending.

 

Taints let the node actively control which pods may be scheduled onto it; they are node properties.

kubectl get nodes node01 -o yaml

kubectl explain nodes.spec

 

taints

kubectl explain nodes.spec.taints

kubectl explain nodes.spec.taints.effect

effect <string> -required-: defines how the node treats a pod that does not tolerate the taint:

NoExecute: affects not only scheduling but also existing pod objects; pods on the node that do not tolerate the taint are evicted.

NoSchedule: affects only scheduling, not existing pod objects; pods that do not tolerate the taint are not scheduled onto the node.

PreferNoSchedule: affects only scheduling, not existing pod objects; the scheduler tries not to place intolerant pods on the node, but will still do so if there is no better choice.

 

The master's taint

kubectl describe nodes master

Taints:       node-role.kubernetes.io/master:NoSchedule

              (the part after the colon is the taint effect)

A pod that does not tolerate this taint cannot be scheduled onto the master.
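
Conversely, a pod that is meant to run on the master can declare a toleration for that taint; a minimal sketch of the relevant part of the pod spec:

  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"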

 

Pod tolerations

kubectl get pods -n kube-system

kubectl describe pods -n kube-system kube-apiserver-master

Tolerations:       :NoExecute

 

kubectl describe pods -n kube-system kube-flannel-ds-amd64-99ccn

Tolerations:     :NoSchedule

                 node.kubernetes.io/disk-pressure:NoSchedule

                 node.kubernetes.io/memory-pressure:NoSchedule

                 node.kubernetes.io/network-unavailable:NoSchedule

                 node.kubernetes.io/not-ready:NoExecute

                 node.kubernetes.io/pid-pressure:NoSchedule

                 node.kubernetes.io/unreachable:NoExecute

                 node.kubernetes.io/unschedulable:NoSchedule       

 

Managing node taints

kubectl taint --help

kubectl taint node node01 node-type=production:NoSchedule
             taints node01 with key=node-type, value=production and effect NoSchedule: pods that do not tolerate this taint will not be scheduled onto node01

kubectl taint node node01 node-type-        removes the taint again (key followed by a minus sign)

 

Taints:             node-type=production:NoSchedule
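
The Taints line above can be pulled out quickly with:

kubectl describe node node01 | grep -i taints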

 

vim deploy-demo.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: myapp-deploy

  namespace: default

spec:

  replicas: 2

  selector:

    matchLabels:

      app: myapp

      release: canary

  template:

    metadata:

      labels:

        app: myapp

        release: canary

 

    spec:

      containers:

      - name: myapp

        image: ikubernetes/myapp:v1

        ports:

        - name: http

          containerPort: 80

 

kubectl apply -f deploy-demo.yaml        because the pods carry no toleration for the taint, they cannot run and stay in Pending status

 

kubectl taint node node02 node-type=qa:NoExecute
                      taints node02 with key=node-type, value=qa and effect NoExecute: pods already on the node that do not tolerate it are evicted

 

    spec:

      containers:

      - name: myapp

        image: ikubernetes/myapp:v1

        ports:

        - name: http

          containerPort: 80

      tolerations:                     # taints the pod tolerates; it may run on nodes carrying these taints
      - key: "node-type"               # the taint key on the node
        operator: "Equal"              # equality comparison: the node taint's key and value must both match exactly
        value: "production"            # the taint value on the node
        effect: "NoExecute"            # the taint effect the pod tolerates
        tolerationSeconds: 60          # how long the pod may stay on the node after such a taint appears before being evicted

 

kubectl apply -f deploy-demo.yaml

 

    spec:

      containers:

      - name: myapp

        image: ikubernetes/myapp:v1

        ports:

        - name: http

          containerPort: 80

      tolerations:                     # the pod tolerates any taint whose key matches, regardless of value
      - key: "node-type"
        operator: "Exists"             # existence comparison: the taint key only has to exist
        value: ""
        effect: "NoSchedule"           # the taint effect the pod tolerates

 

kubectl apply -f deploy-demo.yaml

 

    spec:

      containers:

      - name: myapp

        image: ikubernetes/myapp:v1

        ports:

        - name: http

          containerPort: 80

      tolerations:                     # the pod tolerates any taint whose key matches, regardless of value
      - key: "node-type"
        operator: "Exists"             # existence comparison: the taint key only has to exist
        value: ""
        effect: ""                     # an empty effect means every effect is tolerated

kubectl apply -f deploy-demo.yaml

 

Strictness of the taint effects: NoExecute > NoSchedule > PreferNoSchedule

NoExecute is the strictest effect.


Origin www.cnblogs.com/leiwenbin627/p/11348699.html