Resize CPU and memory resources assigned to a Pod's containers

This page explains how to change the CPU and memory resources assigned to a running Pod's containers without restarting the Pod or its containers. A Kubernetes node allocates resources to a Pod based on the Pod's requests, and limits the Pod's resource usage based on the limits specified in the Pod's containers.

For in-place resize of Pod resources:

  • A container's requests and limits for CPU and memory resources are mutable (can be resized).
  • The allocatedResources field in the Pod status's containerStatuses reflects the resources allocated to the Pod's containers.
  • The resources field in the Pod status's containerStatuses reflects the actual requests and limits configured on the running containers, as reported by the container runtime.
  • The resize field in the Pod status shows the status of the last requested, still-pending resize (a command sketch for checking this field follows this list). It can have the following values:
    • Proposed: the requested resize has been acknowledged, and the request has been validated and recorded.
    • InProgress: the node has accepted the resize request and is applying it to the Pod's containers.
    • Deferred: the requested resize cannot be granted at this time; the node will keep retrying. The resize may be granted later, when other Pods terminate and free up node resources.
    • Infeasible: a signal that the node cannot accommodate the requested resize. This can happen if the requested resize exceeds the maximum resources the node can ever allocate to a Pod.
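
If you want to watch this field during a resize, a minimal sketch using a JSONPath query could look like the following (the Pod name and namespace are the ones used later in this tutorial):

# Print the status of the most recent pending resize, if any
# (Proposed, InProgress, Deferred, or Infeasible).
kubectl get pod qos-demo-5 --namespace=qos-example \
  --output=jsonpath='{.status.resize}'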

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts.

Your Kubernetes server version must be at least 1.27. To check the version, run  kubectl version.
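
In v1.27 this in-place resize capability sits behind the InPlacePodVerticalScaling feature gate, which must be enabled on your cluster for this tutorial to work. As a minimal sketch (assuming you use minikube for a throwaway test cluster), you could start a cluster with the gate enabled:

# Assumption: minikube is used as the test cluster; --feature-gates passes
# the gate to the Kubernetes components that minikube starts.
minikube start --feature-gates=InPlacePodVerticalScaling=true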

Container resize policies

Resize policies allow finer-grained control over how a Pod's containers are resized for CPU and memory resources. For example, an application may be able to handle a CPU resize without being restarted, while a memory resize may require the application, and therefore its container, to be restarted.

To support this, the container spec allows users to specify a  resizePolicy. The following restart policies can be specified for resizing CPU and memory:

  • NotRequired: Adjust the container's resources at runtime.
  • RestartContainer: Restart the container and apply new resources after restart.

If  resizePolicy[*].restartPolicy is not specified, it defaults to  NotRequired.

Note:

If the Pod's  restartPolicy is  Never, then the resize restart policy of all containers in the Pod must be set to  NotRequired.

The example below shows a Pod whose CPU can be resized without restarting the container, but whose memory resize requires the container to be restarted.

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-5
  namespace: qos-example
spec:
  containers:
    - name: qos-demo-ctr-5
      image: nginx
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: RestartContainer
      resources:
        limits:
          memory: "200Mi"
          cpu: "700m"
        requests:
          memory: "200Mi"
          cpu: "700m"

Note:

In the above example, if both the desired CPU and memory requests or limits change, the container will be restarted in order to resize its memory, because memory's restart policy is RestartContainer. A CPU-only change is applied without a restart.
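
As an illustrative sketch (assuming you have already created the Pod from the manifest above in the qos-example namespace), a memory resize could be requested with a patch like the one below; because the memory restart policy is RestartContainer, the container's restartCount should increase by one afterwards:

# Hypothetical resize: raise the memory request and limit to 300Mi.
kubectl -n qos-example patch pod qos-demo-5 --patch \
  '{"spec":{"containers":[{"name":"qos-demo-ctr-5", "resources":{"requests":{"memory":"300Mi"}, "limits":{"memory":"300Mi"}}}]}}'

# Verify that the container was restarted to apply the memory change.
kubectl -n qos-example get pod qos-demo-5 \
  --output=jsonpath='{.status.containerStatuses[0].restartCount}'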

Create a Pod with resource requests and limits

You can create a Pod with a Guaranteed or Burstable QoS class by specifying requests and/or limits for the Pod's containers.

Consider the following manifest for a Pod containing one container.

pods/qos/qos-pod-5.yaml

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-5
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-ctr-5
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "200Mi"
        cpu: "700m"

Create the Pod in the  qos-example namespace:

kubectl create namespace qos-example
kubectl create -f https://k8s.io/examples/pods/qos/qos-pod-5.yaml --namespace=qos-example

This Pod is classified in the Guaranteed QoS class, requesting 700m CPU and 200Mi memory.
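
To confirm the assigned QoS class directly, a quick check (a minimal sketch using a JSONPath query) could look like this:

# Print the QoS class assigned to the Pod; expected output: Guaranteed
kubectl get pod qos-demo-5 --namespace=qos-example \
  --output=jsonpath='{.status.qosClass}'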

View details about a Pod:

kubectl get pod qos-demo-5 --output=yaml --namespace=qos-example

Also notice in the output below that  resizePolicy[*].restartPolicy defaulted to  NotRequired, meaning that CPU and memory can be resized while the container is running.

spec:
  containers:
    ...
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      limits:
        cpu: 700m
        memory: 200Mi
      requests:
        cpu: 700m
        memory: 200Mi
...
  containerStatuses:
...
    name: qos-demo-ctr-5
    ready: true
...
    allocatedResources:
      cpu: 700m
      memory: 200Mi
    resources:
      limits:
        cpu: 700m
        memory: 200Mi
      requests:
        cpu: 700m
        memory: 200Mi
    restartCount: 0
    started: true
...
  qosClass: Guaranteed

Update the Pod's resources

Assume that the CPU demand has increased and 0.8 CPU is now desired. This would typically be determined, and possibly applied programmatically, by an entity such as the VerticalPodAutoscaler (VPA).

Note:

Although you can change a Pod's requests and limits to represent new desired resources, you cannot change the QoS class a Pod was created with.

Now patch the Pod's container, setting its CPU request and limit to  800m:

kubectl -n qos-example patch pod qos-demo-5 --patch '{"spec":{"containers":[{"name":"qos-demo-ctr-5", "resources":{"requests":{"cpu":"800m"}, "limits":{"cpu":"800m"}}}]}}'

Query the Pod's details after it has been patched:

kubectl get pod qos-demo-5 --output=yaml --namespace=qos-example

The following Pod specs reflect the updated CPU requests and limits.

spec:
  containers:
    ...
    resources:
      limits:
        cpu: 800m
        memory: 200Mi
      requests:
        cpu: 800m
        memory: 200Mi
...
  containerStatuses:
...
    allocatedResources:
      cpu: 800m
      memory: 200Mi
    resources:
      limits:
        cpu: 800m
        memory: 200Mi
      requests:
        cpu: 800m
        memory: 200Mi
    restartCount: 0
    started: true

The observed  allocatedResources value has been updated to reflect the new desired CPU request. This indicates that the node was able to accommodate the increased CPU demand.

In the container's status, the updated CPU resource value shows that the new CPU request has been applied. The container's  restartCount remains unchanged, indicating that the container's CPU resources were resized without restarting the container.
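
If you only want to check those two fields, a minimal sketch using JSONPath queries (assuming the container is the first entry in containerStatuses) might look like this:

# Print the resources currently allocated to the container.
kubectl get pod qos-demo-5 --namespace=qos-example \
  --output=jsonpath='{.status.containerStatuses[0].allocatedResources}'

# Print the restart count; it should still be 0 after the CPU-only resize.
kubectl get pod qos-demo-5 --namespace=qos-example \
  --output=jsonpath='{.status.containerStatuses[0].restartCount}'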
