Kubernetes Scheduler: Predicates, Priority Functions, and Advanced Scheduling


The k8s scheduler is customizable.

1. The scheduling algorithm picks the most suitable node for a Pod from all nodes in the cluster.

2. The predicate (pre-selection) phase excludes the nodes on which the Pod cannot run.

3. The priority phase ranks the remaining nodes, and the Pod is then bound to the best one.

You can also steer scheduling yourself, e.g. toward SSD or GPU nodes.
The idea: just add labels to the nodes, as sketched below.
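A minimal sketch of that idea (the label key gpu and the pod below are hypothetical, not part of the experiments that follow):

kubectl label nodes node1 gpu=true        # mark the node that actually has a GPU

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: app
    image: ikubernetes/myapp:v1
  nodeSelector:
    gpu: "true"                           # only nodes labeled gpu=true pass the predicate phase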

Scheduler:
    Predicates:
        CheckNodeCondition: checks whether the node itself (disk, network, etc.) is healthy
        GeneralPredicates: a group of general-purpose predicates
            HostName: checks whether the Pod defines pods.spec.hostname and, if so, whether it matches the node's hostname
            PodFitsHostPorts: checks pods.spec.containers.ports.hostPort against the host ports already in use on the node
            MatchNodeSelector: checks pods.spec.nodeSelector against the node's labels
            PodFitsResources: checks whether the node can satisfy the Pod's resource requests
        NoDiskConflict: checks whether the Pod's volume requirements conflict with the volumes already in use on the node
        PodToleratesNodeTaints: checks whether the Pod's spec.tolerations cover all of the node's taints
        PodToleratesNodeNoExecuteTaints: checks whether the Pod tolerates the node's NoExecute taints
        CheckNodeLabelPresence: checks for the presence (or absence) of specified node labels
        CheckServiceAffinity: tries to place Pods that belong to the same Service together

Reference:
https://github.com/kubernetes/kubernetes/tree/master/pkg/scheduler/algorithm/priorities

Priority functions:

LeastRequested: the fewer of a node's resources are requested, the higher it scores:

(cpu((capacity-sum(requested))*10/capacity) + memory((capacity-sum(requested))*10/capacity))/2
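A worked example with hypothetical numbers: a node with 4 CPU cores of which 2 are already requested, and 8Gi of memory of which 4Gi is requested:

cpu    = (4 - 2) * 10 / 4 = 5
memory = (8 - 4) * 10 / 8 = 5
score  = (5 + 5) / 2      = 5

An idle node would score 10 and a fully requested one 0, so the emptier node wins.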

BalancedResourceAllocation:

nodes whose CPU and memory utilization rates are closest to each other win;

NodePreferAvoidPods:

scores nodes according to the node annotation "scheduler.alpha.kubernetes.io/preferAvoidPods"

TaintToleration: checks the Pod's spec.tolerations list against the node's taints; the more of the node's taints the Pod fails to tolerate, the lower the node scores

SelectorSpreading

InterPodAffinity

MostRequested

NodeLabel

ImageLocality: scores a node by the total size of the Pod's required container images that are already present on it

Advanced scheduling in k8s
Three ways to influence scheduling:
Node selectors: nodeSelector, nodeName
Node/Pod affinity scheduling: nodeAffinity, podAffinity, podAntiAffinity
Taints and tolerations (covered in the last part of this article)

Method 1: node selectors, nodeSelector / nodeName
Experiment:
[root@master schedule]# cat pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    desktype: ssd  # node label selector: only nodes carrying the label desktype=ssd qualify

Give node2 a label:
kubectl label nodes node2 desktype=ssd

[root@master schedule]# kubectl get nodes --show-labels
NAME      STATUS    ROLES     AGE       VERSION   LABELS
master    Ready     master    13d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node1     Ready     <none>    13d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssh,kubernetes.io/hostname=node1
node2     Ready     <none>    13d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,desktype=ssd,kubernetes.io/hostname=node2,release=ssd



If no node carries this label, the Pod can never be created successfully, because it already fails the predicate phase and stays Pending.
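To confirm why such a Pod is stuck, look at its scheduling events (the exact event wording varies across versions):

kubectl describe pod pod-demo    # the Events section shows a FailedScheduling warning: no node matched the node selector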

Method 2: node affinity scheduling, nodeAffinity
1. Hard affinity (requiredDuringSchedulingIgnoredDuringExecution): the rule must be satisfied, otherwise the Pod stays Pending
2. Soft affinity (preferredDuringSchedulingIgnoredDuringExecution): matching nodes are preferred, but the Pod may still run elsewhere

[root@master schedule]# cat pod-nodeaffinty.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar

[root@master schedule]# kubectl apply -f pod-nodeaffinty.yaml 
pod/pod-node-affinity-demo created

[root@master schedule]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
pod-node-affinity-demo   0/1       Pending   0          1m

The Pod stays Pending: this is hard affinity, and no node carries a zone label with value foo or bar.

[root@master schedule]# kubectl delete -f pod-nodeaffinty.yaml 
pod "pod-node-affinity-demo" deleted


[root@master schedule]# cat pod-nodeaffinty-demo-2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
        weight: 60 


[root@master schedule]# kubectl apply -f pod-nodeaffinty-demo-2.yaml 
pod/pod-node-affinity-demo created

[root@master schedule]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
pod-node-affinity-demo   1/1       Running   0          50s

This time the Pod runs even though no node matches zone in (foo, bar): soft affinity is only a preference.
[root@master schedule]# kubectl describe pod pod-node-affinity-demo
Name:               pod-node-affinity-demo
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node02/192.168.1.67
Start Time:         Wed, 22 Aug 2018 09:21:06 +0800
Labels:             app=myapp
                    tier=frontend
Annotations:        cni.projectcalico.org/podIP=10.244.2.4/32
                    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"myapp","tier":"frontend"},"name":"pod-node-affinity-demo","namespace":"de...
Status:             Running
IP:                 10.244.2.4
Containers:
  myapp:
    Container ID:   docker://8257c32da71b33f7ef8e6a7907b6ac11d5232a3d7b5f1be84541868bddb17adf
    Image:          ikubernetes/myapp:v1
    Image ID:       docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 22 Aug 2018 09:21:09 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4xzt8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-4xzt8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4xzt8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  1m    default-scheduler  Successfully assigned default/pod-node-affinity-demo to node02
  Normal  Pulled     1m    kubelet, node02    Container image "ikubernetes/myapp:v1" already present on machine
  Normal  Created    1m    kubelet, node02    Created container
  Normal  Started    1m    kubelet, node02    Started container


[root@master schedule]# kubectl delete -f pod-nodeaffinty-demo-2.yaml 
pod "pod-node-affinity-demo" deleted
# pod-second declares affinity to pod-first: it must land in the same topology domain (topologyKey kubernetes.io/hostname, i.e. the same node) as Pods labeled app=myapp
[root@master schedule]# cat pod-required-affinity-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname

[root@master schedule]# kubectl apply -f pod-required-affinity-demo.yaml 
pod/pod-first created
pod/pod-second created

[root@master schedule]# kubectl get pods -o wide
NAME         READY     STATUS    RESTARTS   AGE       IP           NODE
pod-first    1/1       Running   0          1m        10.244.2.5   node02
pod-second   1/1       Running   0          1m        10.244.2.6   node02

pod-second follows pod-first onto node02, exactly as the affinity rule requires.

# Anti-affinity: pod-second must NOT share a node with pod-first

[root@master schedule]# cp pod-required-affinity-demo.yaml  pod-required-anti-demo.yaml 
[root@master schedule]# vim pod-required-anti-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname

[root@master schedule]# kubectl apply -f pod-required-anti-demo.yaml 
pod/pod-first created
pod/pod-second created

[root@master schedule]# kubectl get pods -o wide
NAME         READY     STATUS    RESTARTS   AGE       IP           NODE
pod-first    1/1       Running   0          39s       10.244.3.5   node01
pod-second   1/1       Running   0          39s       10.244.2.7   node02

With podAntiAffinity on kubernetes.io/hostname, the two Pods are forced onto different nodes.

[root@master schedule]# kubectl delete -f pod-required-anti-demo.yaml 
pod "pod-first" deleted
pod "pod-second" deleted

# Anti-affinity when every node shares the same topology label (zone)

[root@master schedule]# kubectl get nodes --show-labels
NAME      STATUS    ROLES     AGE       VERSION   LABELS
master    Ready     master    7d        v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node01    Ready     <none>    7d        v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node01
node02    Ready     <none>    7d        v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=harddisk,kubernetes.io/hostname=node02
[root@master schedule]# kubectl label nodes node01 zone=foo
node/node01 labeled
[root@master schedule]# kubectl label nodes node02 zone=foo
node/node02 labeled

[root@master schedule]# vim pod-required-anti-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: zone

[root@master schedule]# kubectl apply  -f pod-required-anti-demo.yaml 
pod/pod-first created
pod/pod-second created

[root@master schedule]# kubectl get pods -o wide
NAME         READY     STATUS    RESTARTS   AGE       IP           NODE
pod-first    1/1       Running   0          23s       10.244.2.8   node02
pod-second   0/1       Pending   0          23s       <none>       <none>

pod-second stays Pending: both nodes carry zone=foo, so they form a single topology domain, and the anti-affinity rule leaves no domain without an app=myapp Pod.

[root@master schedule]# kubectl delete -f pod-required-anti-demo.yaml 
pod "pod-first" deleted
pod "pod-second" deleted

Taint-based scheduling
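The general shape of the command: a taint is key=value:effect, where effect is one of NoSchedule, PreferNoSchedule, or NoExecute (placeholders below, the experiment itself uses key node-type):

kubectl taint node <node> key=value:NoSchedule   # repel Pods that do not tolerate the taint
kubectl taint node <node> key=value:NoExecute    # additionally evict running Pods that do not tolerate it
kubectl taint node <node> key-                   # a trailing "-" removes the taints with that key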

[root@master schedule]# kubectl taint node node01 node-type=production:NoSchedule
node/node01 tainted

[root@master schedule]# vim deploy-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    targetPort: 80
    port: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80

[root@master schedule]# kubectl apply -f deploy-demo.yaml 
service/myapp created
deployment.apps/myapp-deploy created
[root@master schedule]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-deploy-67f6f6b4dc-drl57   1/1       Running   0          10s       10.244.2.9    node02
myapp-deploy-67f6f6b4dc-fbbgx   1/1       Running   0          10s       10.244.2.11   node02
myapp-deploy-67f6f6b4dc-gtl52   1/1       Running   0          10s       10.244.2.10   node02

[root@master schedule]# kubectl taint node node02 node-type=dev:NoExecute
node/node02 tainted
[root@master schedule]# kubectl get pods -o wide
NAME                            READY     STATUS        RESTARTS   AGE       IP           NODE
myapp-deploy-67f6f6b4dc-6hxj8   0/1       Pending       0          6s        <none>       <none>
myapp-deploy-67f6f6b4dc-drl57   1/1       Terminating   0          1m        10.244.2.9   node02
myapp-deploy-67f6f6b4dc-fbbgx   0/1       Terminating   0          1m        <none>       node02
myapp-deploy-67f6f6b4dc-gtl52   0/1       Terminating   0          1m        <none>       node02
myapp-deploy-67f6f6b4dc-lwz94   0/1       Pending       0          6s        <none>       <none>
myapp-deploy-67f6f6b4dc-mtkgg   0/1       Pending       0          6s        <none>       <none>

The NoExecute taint evicts the Pods already running on node02 (they carry no matching toleration), and since node01 still has its NoSchedule taint, the replacement Pods have nowhere to go and stay Pending.






[root@master schedule]# cat deploy-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    targetPort: 80
    port: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Equal"
        value: "producction"
        effect: "NoSchedule"

[root@master schedule]# kubectl apply -f deploy-demo.yaml 
service/myapp unchanged
deployment.apps/myapp-deploy configured

[root@master schedule]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP           NODE
myapp-deploy-698b44b9d7-2bjkc   1/1       Running   0          37s       10.244.3.7   node01
myapp-deploy-698b44b9d7-2ctvc   1/1       Running   0          43s       10.244.3.6   node01
myapp-deploy-698b44b9d7-htdxx   1/1       Running   0          34s       10.244.3.8   node01

With a toleration for node-type=production:NoSchedule the replicas can be scheduled onto node01 again; node02 is still avoided because its NoExecute taint is not tolerated.





[root@master schedule]# cat deploy-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    targetPort: 80
    port: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: "NoSchedule"
[root@master schedule]# kubectl apply -f deploy-demo.yaml 
service/myapp unchanged
deployment.apps/myapp-deploy configured

[root@master schedule]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-deploy-559f559bcc-8lp4d   1/1       Running   0          21s       10.244.3.10   node01
myapp-deploy-559f559bcc-qcnz9   1/1       Running   0          17s       10.244.3.11   node01
myapp-deploy-559f559bcc-vwdzb   1/1       Running   0          24s       10.244.3.9    node01

operator: Exists tolerates any node-type taint value, but the effect is still restricted to NoSchedule, so node02 (NoExecute) remains off limits and all replicas stay on node01.


[root@master schedule]# cat deploy-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    targetPort: 80
    port: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: ""

[root@master schedule]# kubectl apply -f deploy-demo.yaml 
service/myapp unchanged
deployment.apps/myapp-deploy configured

[root@master schedule]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-deploy-5d9c6985f5-7mgtd   1/1       Running   0          51s       10.244.2.13   node02
myapp-deploy-5d9c6985f5-b78h8   1/1       Running   0          46s       10.244.3.12   node01
myapp-deploy-5d9c6985f5-lljrz   1/1       Running   0          55s       10.244.2.12   node02

With an empty effect the toleration matches node-type taints of any effect, so the Pods can now land on both node01 and node02.
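One related knob, shown here only as a sketch: a toleration for a NoExecute taint may carry tolerationSeconds, meaning "tolerate it for this long, then get evicted". The default not-ready/unreachable tolerations visible in the kubectl describe output earlier ("... for 300s") use exactly this mechanism:

      tolerations:
      - key: "node-type"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 300    # stay on the tainted node for up to 300s, then be evicted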
