k8s study notes (6): Limit the use of pod memory resources

Refer to the official documentation: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/

Assign a memory request and a memory limit to a container. Kubernetes guarantees that the container gets the amount of memory it requests, but does not allow it to use more memory than its limit.
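In a Pod spec, requests and limits are set on each container's resources field. A minimal fragment looks like this:

```yaml
resources:
  requests:
    memory: "100Mi"   # the scheduler guarantees the container at least this much memory
  limits:
    memory: "200Mi"   # the container is OOM-killed if it tries to use more than this
```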

1. Install metrics-server

The kubectl top command is used below to check resource usage on nodes and Pods, but it only works after metrics-server has been installed.

1. Download the yaml file of metrics-server

[root@k8s-master ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

2. Modify the yaml file and add two lines of configuration

[root@k8s-master ~]# vim components.yaml

Modify the args directive (around line 134 of the yaml file) as follows:

(screenshot: the modified args section of the metrics-server Deployment)
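The screenshot is not available here, so as an assumption about what was changed: the two lines usually added for a lab cluster whose kubelets lack proper serving certificates are these, appended to the metrics-server container's args:

```yaml
# added to the args list of the metrics-server container:
- --kubelet-insecure-tls                        # skip TLS verification of kubelet certificates
- --kubelet-preferred-address-types=InternalIP  # contact kubelets via the node's internal IP
```

Without these, metrics-server typically fails its readiness probe because it cannot scrape the kubelets.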

3. Create a pod of metrics-server

[root@k8s-master ~]# kubectl apply -f components.yaml 

Reminder: because the image has to be downloaded from Google's registry, the pull is very likely to fail, causing the pod to fail to start.

Suggestion: prepare the metrics-server image in advance, then import it with docker load -i.

Here I simply use an image prepared in advance.

4. Upload the metrics-server image tarball to the Linux host and import it with docker load

[root@k8s-master ~]# docker load -i metrics-server-v0.6.3.tar

5. View the pod of metrics-server

[root@k8s-master ~]# kubectl apply -f components.yaml 
[root@k8s-master ~]# kubectl get pod -n kube-system

(screenshot: the metrics-server pod in the kube-system namespace is Running)

6. Check node resource usage

[root@k8s-master ~]# kubectl top node
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master   421m         10%    1135Mi          66%       
k8s-node-1   199m         4%     702Mi           40%       
k8s-node-2   254m         6%     671Mi           39% 
[root@k8s-master ~]# kubectl top pod
NAME                   CPU(cores)   MEMORY(bytes)   
quota-mem-cpu-demo-2   2m           1Mi 

kubectl top now works normally.

2. Create a namespace

[root@k8s-master ~]# kubectl create namespace mem-example
namespace/mem-example created

3. Specify memory requests and limits

The examples below run in the mem-example namespace created above, which isolates the resources created in this exercise from the rest of the cluster.

1. Create the pod

Create a Pod with one container that requests 100 MiB of memory and is limited to 200 MiB, using the example configuration file:
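For reference, the memory-request-limit.yaml example from the official docs looks roughly like this (it runs the polinux/stress image, which allocates 150 MiB of memory):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "100Mi"   # guaranteed
      limits:
        memory: "200Mi"   # hard cap
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]  # allocate 150 MiB
```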

[root@k8s-master ~]# kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit.yaml --namespace=mem-example

2. Check whether the pod is running

[root@k8s-master ~]# kubectl get pod memory-demo -n mem-example
NAME          READY   STATUS    RESTARTS   AGE
memory-demo   1/1     Running   0          69s

3. View pod details

[root@k8s-master ~]# kubectl get pod memory-demo --output=yaml -n mem-example

(screenshot of the Pod yaml showing its resources section)

The Pod requests 100 MiB of memory and may use at most 200 MiB.
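In the --output=yaml result, the relevant section should look like:

```yaml
resources:
  limits:
    memory: 200Mi
  requests:
    memory: 100Mi
```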

4. View Pod metrics with kubectl top

[root@k8s-master ~]# kubectl top pod -n mem-example
NAME          CPU(cores)   MEMORY(bytes)   
memory-demo   61m          150Mi 

The output shows that the Pod is using approximately 162,900,000 bytes of memory, which is about 150 MiB. This is more than the Pod's 100 MiB request, but within its 200 MiB limit.

5. Delete pod

[root@k8s-master ~]# kubectl delete pod memory-demo -n mem-example

4. Memory exceeding container limit

1. Create the pod

Create a Pod that attempts to allocate more memory than its limit. This is the configuration for a Pod whose container has a memory request of 50 MiB and a memory limit of 100 MiB:
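The memory-request-limit-2.yaml example from the official docs is roughly the following; note that stress tries to allocate 250 MiB, well above the 100 MiB limit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]  # allocate 250 MiB > 100 MiB limit
```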

[root@k8s-master ~]# kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-2.yaml --namespace=mem-example

2. Check the pod running status multiple times

[root@k8s-master ~]# kubectl get pod memory-demo-2 -n mem-example

(screenshots: repeated kubectl get pod output; the STATUS column alternates between OOMKilled and CrashLoopBackOff)

The Pod's container is repeatedly killed and restarted because it exceeds its memory limit.

3. View detailed information about pod running

[root@k8s-master ~]# kubectl describe pod memory-demo-2 -n mem-example

(screenshot of the kubectl describe output)

The events confirm that the container is killed whenever it exceeds its memory limit and is then restarted, over and over.

4. Delete pod

[root@k8s-master ~]# kubectl delete pod memory-demo-2 -n mem-example

5. Memory exceeding the capacity of the entire node

1. Create the pod

Create a Pod whose memory request exceeds the memory available on any node in your cluster. The configuration file specifies a container requesting 1000 GiB of memory, which should be more than any node can provide:
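The memory-request-limit-3.yaml example is roughly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-3
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-3-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "1000Gi"   # far more than any node can offer
      limits:
        memory: "1000Gi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
```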

[root@k8s-master ~]# kubectl apply -f https://k8s.io/examples/pods/resource/memory-request-limit-3.yaml --namespace=mem-example
pod/memory-demo-3 created

2. Check pod status

[root@k8s-master ~]# kubectl get pod memory-demo-3 -n mem-example
NAME            READY   STATUS    RESTARTS   AGE
memory-demo-3   0/1     Pending   0          60s

The Pod is stuck in the Pending state: the memory it requests exceeds what any node can provide, so it cannot be scheduled.

3. View pod details

[root@k8s-master ~]# kubectl describe pod memory-demo-3 -n mem-example

Output: Warning FailedScheduling 55s (x3 over 119s) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.

The Pod cannot be scheduled because no node has enough memory.

4. Delete pod

[root@k8s-master ~]# kubectl delete pod memory-demo-3 -n mem-example
pod "memory-demo-3" deleted

6. Delete namespace

[root@k8s-master ~]# kubectl delete namespace mem-example
namespace "mem-example" deleted

Origin blog.csdn.net/qq_57629230/article/details/131424086