K8s scheduling mechanism and basic Pod troubleshooting

1. Analysis of scheduling constraint process

(figure: Kubernetes scheduling workflow diagram)

1. Users create and manage resources through kubectl, the dashboard, or direct API calls (e.g. from programs). Kubernetes components cooperate through the watch mechanism, which keeps the components decoupled from one another.
2. The user submits a create request to the API Server. The API Server writes the resource's metadata (attribute information) into etcd; etcd records the "creating" state and acknowledges the write, and the API Server returns the result to the user.
3. Through the watch mechanism the Scheduler (which, like the other components, runs independently) is notified of the new, unbound resource. It calculates which node the resource should run on and reports the decision to the API Server, which writes the node binding into etcd. Once etcd confirms the record, the API Server acknowledges the Scheduler. The recorded placement is also taken into account in subsequent scheduling calculations.
4. The controller-manager, also watching the API Server, records the resource type and ensures the desired state. The kubelet on the chosen node sees the binding and instructs the Docker engine to create the containers (docker run). Docker reports back to the kubelet that the containers are running, and the kubelet reports the container status to the API Server (the kubelet has no right to write to etcd directly). The API Server persists the status update in etcd; after etcd confirms the record, the API Server continues to drive the kubelet for subsequent management operations.
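The division of labor in the steps above can be illustrated with a toy Python sketch. This is not real Kubernetes code: a plain dict stands in for etcd behind the API Server, the "scheduler" and "kubelet" are ordinary functions rather than watching processes, and the node name is borrowed from the examples below.

```python
# Toy sketch of the watch-based coordination described above (NOT real K8s code).
# A shared store plays the role of etcd behind the API Server; the "scheduler"
# and "kubelet" each look for state they care about and act independently,
# which is the decoupling the watch mechanism provides.

store = {}  # pod name -> pod record (stands in for etcd, reached via the API Server)

def create_pod(name):
    # Step 2: the API Server writes the pod's metadata; state starts as Pending.
    store[name] = {"phase": "Pending", "nodeName": None}

def scheduler_pass(nodes):
    # Step 3: the scheduler sees unbound pods and records a node assignment.
    for pod in store.values():
        if pod["nodeName"] is None:
            pod["nodeName"] = nodes[0]  # trivial "calculation" for the sketch

def kubelet_pass(node):
    # Step 4: each node's kubelet starts containers for pods bound to it and
    # reports status back through the API Server (never writing etcd directly).
    for pod in store.values():
        if pod["nodeName"] == node and pod["phase"] == "Pending":
            pod["phase"] = "Running"

create_pod("pod-example")
scheduler_pass(["192.168.140.30"])
kubelet_pass("192.168.140.30")
print(store["pod-example"])  # {'phase': 'Running', 'nodeName': '192.168.140.30'}
```

Note that each function only reads and updates the shared store; none of the "components" call each other directly, mirroring how the real components coordinate only through the API Server.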

1.1 Scheduling methods

■ nodeName — schedules the Pod to the Node with the specified name (skips the scheduler and binds the Pod directly)

■ nodeSelector — lets the scheduler place the Pod on a Node whose labels match

1.2 Example 1: nodeName

[root@master01 demo]# vim pod1.yaml
apiVersion: v1
kind: Pod  
metadata:
  name: pod-example  
  labels:
    app: nginx  
spec:
  nodeName: 192.168.140.30     # skip the scheduler and assign directly
  containers:
  - name: nginx  
    image: nginx:1.15
[root@master01 demo]# kubectl create -f pod1.yaml 
[root@master01 demo]# kubectl describe pod pod-example
[root@master01 demo]# kubectl get pods


[root@master01 demo]# kubectl delete -f .   # delete every resource defined by the manifests in the current directory (use with caution!)

1.3 Example 2: nodeSelector

[root@master01 demo]# kubectl get nodes
[root@master01 demo]# kubectl label nodes 192.168.140.20 abc=a
[root@master01 demo]# kubectl label nodes 192.168.140.30 abc=b
[root@master01 demo]# kubectl get nodes --show-labels


[root@master01 demo]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: httpd
spec:
  nodeSelector:
    abc: b
  containers:
  - name: httpd
    image: httpd
[root@master01 demo]# kubectl apply -f pod.yaml 
[root@master01 demo]# kubectl describe pod pod-example
[root@master01 demo]# kubectl get pods -o wide


2. Troubleshooting

2.1 Failure phenomenon

(screenshot of the failing Pod's status omitted)

2.2 Troubleshooting ideas

■ View the Pod's events

kubectl describe TYPE NAME_PREFIX  

■ View the Pod's logs (when the Pod is in a Failed state)

kubectl logs POD_NAME

■ Enter the Pod (when the status is Running but the service is not responding)

kubectl exec -it POD_NAME -- bash


Origin: blog.csdn.net/weixin_50344814/article/details/115283251