Kubernetes scheduling: resource quotas


When multiple users or teams share a cluster with a fixed number of nodes, there is a concern that one team or user may consume more than its fair share of resources. Resource quotas are the tool administrators use to address this problem.

A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit how many objects can be created in the namespace, and it can also limit the total amount of compute resources that may be consumed there (as mentioned earlier, compute resources include CPU, memory, disk space, and so on).

Resource quotas work as follows:

  • Different teams work in different namespaces. Currently this is voluntary, but the Kubernetes team plans to add support for enforcing it through ACLs.

  • The administrator creates a ResourceQuota for each namespace.

  • Users create resources (pods, services, and so on) in a namespace, and the quota system tracks usage to ensure it does not exceed the hard limits defined in the ResourceQuota.

  • If creating or updating a resource violates a quota constraint, the request fails with HTTP status code 403 FORBIDDEN and a message explaining which constraint was violated.

  • If quota is enabled in a namespace for compute resources such as CPU and memory, users must specify requests or limits for those resources; otherwise the quota system may reject pod creation.

An example resource quota policy for a cluster:

  • In a cluster with 32 GiB of memory and 16 CPU cores, let team A use 10 cores and 20 GiB, let team B use 4 cores and 10 GiB, and hold the remaining 2 cores and 2 GiB in reserve for future allocation.

  • Limit the testing namespace to 1 core and 1 GiB, and let the production namespace use all of the remaining resources (a minimal sketch of the testing-namespace quota follows this list).
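
A minimal sketch of what that testing-namespace quota could look like; the namespace name test, the quota name, and the split between requests and limits are assumptions for illustration:

# Hypothetical quota for the testing namespace: 1 CPU core and 1Gi of memory in total
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota        # hypothetical name
  namespace: test         # assumed namespace name
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "1"
    limits.memory: 1Gi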

When the capacity of the cluster is less than the sum of the quotas of all namespaces, there may be contention for resources. In this case Kubernetes handles requests on a first-come, first-served basis.

Neither resource contention nor a change to a quota affects resources that have already been created.

Enable resource quota

Resource quota support is enabled by default in many Kubernetes distributions: quotas are turned on when ResourceQuota is one of the values passed to the apiserver flag --enable-admission-plugins=.
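
For example, on a kubeadm-style control plane this flag typically lives in the kube-apiserver static pod manifest. The excerpt below is only a sketch; the file path and the other plugin name shown are assumptions:

# Excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout assumed)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,ResourceQuota   # ResourceQuota enables quota enforcement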

When a namespace contains a ResourceQuota object, resource quotas take effect for that namespace.

Compute resource quota

You can limit the total amount of compute resources that can be requested in a namespace.

Kubernetes supports the following resource types:

Resource Name Description
cpu Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value.
limits.cpu Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
limits.memory Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
memory Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value.
requests.cpu Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value.
requests.memory Across all pods in a non-terminal state, the sum of memory requests cannot exceed this value.

Quota for extended resources

In addition to the resources listed above, Kubernetes 1.10 added quota support for extended resources.
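
A minimal sketch of an extended-resource quota, assuming the cluster exposes the nvidia.com/gpu extended resource; since extended resources cannot be overcommitted, only the requests.* form is meaningful here:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota                  # hypothetical name
spec:
  hard:
    requests.nvidia.com/gpu: 4     # at most 4 GPUs requested across the namespace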

Storage resource quota

You can limit the total amount of storage that can be requested in a namespace.

In addition, you can restrict the consumption of storage resources per storage class.

Resource Name Description
requests.storage Across all persistent volume claims, the sum of storage requests cannot exceed this value.
persistentvolumeclaims The total number of persistent volume claims that can exist in the namespace.
<storage-class-name>.storageclass.storage.k8s.io/requests.storage Across all persistent volume claims associated with <storage-class-name>, the sum of storage requests cannot exceed this value.
<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims Across all persistent volume claims associated with <storage-class-name>, the total number of persistent volume claims that can exist in the namespace.

For example, if an operator wants to quota storage for the gold storage class separately from the bronze storage class, the operator can define a quota as follows:

gold.storageclass.storage.k8s.io/requests.storage: 500Gi
bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
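
These keys go under spec.hard of an ordinary ResourceQuota object; a minimal sketch (the object name is made up):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-per-class          # hypothetical name
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi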

In version 1.8, quota support for local ephemeral storage was added as an alpha feature.

Resource Name Description
requests.ephemeral-storage Across all pods in the namespace, the sum of local ephemeral storage requests cannot exceed this value.
limits.ephemeral-storage Across all pods in the namespace, the sum of local ephemeral storage limits cannot exceed this value.
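
A short sketch of an ephemeral-storage quota using these keys (the name and the values are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota    # hypothetical name
spec:
  hard:
    requests.ephemeral-storage: 10Gi
    limits.ephemeral-storage: 20Gi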

Object quota

Version 1.9 added support for quotas on all standard namespaced resource types, using the following syntax:

count/<resource>.<group>

The following are examples of object count quotas a user may want to set:

  • count/persistentvolumeclaims

  • count/services

  • count/secrets

  • count/configmaps

  • count/replicationcontrollers

  • count/deployments.apps

  • count/replicasets.apps

  • count/statefulsets.apps

  • count/jobs.batch

  • count/cronjobs.batch

  • count/deployments.extensions

When you use a count/* quota, an object counts against the quota simply by existing in the API server's storage. These quotas help protect against exhaustion of that storage. For example, you may want to limit the number of secrets in a namespace if they are large: too many of them could even prevent the server from starting. You may also limit the number of jobs, to protect against a poorly designed CronJob creating so many jobs that it causes a denial of service.
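
A sketch of such a quota, capping secrets and batch jobs in a namespace (the object name and the numbers are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-count-quota         # hypothetical name
spec:
  hard:
    count/secrets: "20"
    count/jobs.batch: "50"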

The following types of resource limits are supported

Resource Name Description
configmaps The total number of config maps that can exist in the namespace.
persistentvolumeclaims The total number of persistent volume claims that can exist in the namespace.
pods The total number of pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if .status.phase in (Failed, Succeeded) is true.
replicationcontrollers The total number of replication controllers that can exist in the namespace.
resourcequotas The total number of resource quotas that can exist in the namespace.
services The total number of services that can exist in the namespace.
services.loadbalancers The total number of services of type load balancer that can exist in the namespace.
services.nodeports The total number of services of type node port that can exist in the namespace.
secrets The total number of secrets that can exist in the namespace.

For example, the pods quota limits the total number of pods in a non-terminal state in a namespace. This prevents a user from creating many small pods and exhausting the cluster's supply of pod IPs.

Quota scopes

Each quota can have an associated set of scopes. A quota only measures usage of a resource if it matches the intersection of the enumerated scopes.

When a scope is added to a quota, it limits the quota to the resources supported by that scope. Specifying a resource outside of the supported set results in a validation error.

Scope Description
Terminating Match pods where .spec.activeDeadlineSeconds >= 0
NotTerminating Match pods where .spec.activeDeadlineSeconds is nil
BestEffort Match pods that have best effort quality of service.
NotBestEffort Match pods that do not have best effort quality of service.

The BestEffort scope restricts a quota to tracking only the pods resource.

The Terminating, NotTerminating, and NotBestEffort scopes restrict a quota to tracking the following resources (a sketch of a scoped quota follows this list):

  • cpu

  • limits.cpu

  • limits.memory

  • memory

  • pods

  • requests.cpu

  • requests.memory
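
For instance, a quota that only counts BestEffort pods might look like this sketch (the object name is made up); with the BestEffort scope, pods is the only resource that can appear under hard:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort-quota          # hypothetical name
spec:
  hard:
    pods: "10"
  scopes:
  - BestEffort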

Resource quota per PriorityClass

This feature is beta as of version 1.12.

Pods can be created with a specific priority. You can control a pod's consumption of system resources based on its priority by using the scopeSelector field in the quota spec.

A quota is matched and consumed only if the scopeSelector in the quota spec selects the pod.

Before using quotas per PriorityClass, you need to enable the ResourceQuotaScopeSelectors feature gate.

The following example creates quota objects that pods of specific priorities will match:

  • Pods in the cluster have one of three priority classes: low, medium, high

  • One resource quota is created for each priority class

apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-high
  spec:
    hard:
      cpu: "1000"
      memory: 200Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-medium
  spec:
    hard:
      cpu: "10"
      memory: 20Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["medium"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-low
  spec:
    hard:
      cpu: "5"
      memory: 10Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["low"]

Apply the above YAML file with kubectl create:

kubectl create -f ./quota.yml
resourcequota/pods-high created
resourcequota/pods-medium created
resourcequota/pods-low created

View the quotas with kubectl describe quota:

kubectl describe quota
Name:       pods-high
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     1k
memory      0     200Gi
pods        0     10


Name:       pods-low
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     5
memory      0     10Gi
pods        0     10


Name:       pods-medium
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     10
memory      0     20Gi
pods        0     10

Create a pod with high priority. Save the following as high-priority-pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: high-priority
spec:
  containers:
  - name: high-priority
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
    resources:
      requests:
        memory: "10Gi"
        cpu: "500m"
      limits:
        memory: "10Gi"
        cpu: "500m"
  priorityClassName: high

Apply it with kubectl create:

kubectl create -f ./high-priority-pod.yml

Now view the quotas again with kubectl describe quota:

Name:       pods-high
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         500m  1k
memory      10Gi  200Gi
pods        1     10


Name:       pods-low
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     5
memory      0     10Gi
pods        0     10


Name:       pods-medium
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     10
memory      0     20Gi
pods        0     10

scopeSelector supports the following values for the operator field:

  • In

  • NotIn

  • Exists

  • DoesNotExist

Requests and limits

When allocating compute resources, each container may specify a request and a limit for CPU or memory. A quota can be configured to quota either of these values.

In other words, a quota may track requests, track limits, or track both; the compute-resources example below sets hard limits on both.

If the quota has a value specified for requests.cpu or requests.memory, then every incoming container must make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory, then every incoming container must specify an explicit limit for those resources.
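
So under a quota that sets requests.cpu, requests.memory, limits.cpu, and limits.memory, each container must carry explicit values for all four, roughly as in this sketch (the pod name, image, and numbers are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: quota-compliant-pod        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                   # any image works; nginx is only an example
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi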

Viewing and setting quotas

kubectl supports creating, updating, and viewing quotas:

kubectl create namespace myspace
cat <<EOF > compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.nvidia.com/gpu: 4
EOF
kubectl create -f ./compute-resources.yaml --namespace=myspace
cat <<EOF > object-counts.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
EOF
kubectl create -f ./object-counts.yaml --namespace=myspace
kubectl get quota --namespace=myspace
NAME                    AGE
compute-resources       30s
object-counts           32s
kubectl describe quota compute-resources --namespace=myspace
Name:                    compute-resources
Namespace:               myspace
Resource                 Used  Hard
--------                 ----  ----
limits.cpu               0     2
limits.memory            0     2Gi
pods                     0     4
requests.cpu             0     1
requests.memory          0     1Gi
requests.nvidia.com/gpu  0     4
kubectl describe quota object-counts --namespace=myspace
Name:                   object-counts
Namespace:              myspace
Resource                Used    Hard
--------                ----    ----
configmaps              0       10
persistentvolumeclaims  0       4
replicationcontrollers  0       20
secrets                 1       10
services                0       10
services.loadbalancers  0       2

kubectl also supports object count quotas for all standard namespaced resources, using the count/<resource>.<group> syntax:

kubectl create namespace myspace
kubectl create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 --namespace=myspace
kubectl run nginx --image=nginx --replicas=2 --namespace=myspace
kubectl describe quota --namespace=myspace
Name:                         test
Namespace:                    myspace
Resource                      Used  Hard
--------                      ----  ----
count/deployments.extensions  1     2
count/pods                    2     3
count/replicasets.extensions  1     4
count/secrets                 1     4

Quota and cluster capacity

ResourceQuotas are independent of cluster capacity and are expressed in absolute units. So if you add nodes to the cluster, this does not automatically give every namespace the ability to consume more resources.

Sometimes more complex policies are needed, for example:

  • Proportionally divide the cluster's total resources among different teams

  • Allow each tenant to grow its resource usage as needed, but enforce an overall cap to prevent the cluster's resources from being exhausted

  • Detect demand from a namespace, add nodes, and increase the quota

Such policies can be implemented with ResourceQuota as a building block, by writing a controller that watches quota usage and adjusts the quota hard limits of each namespace according to other signals.

Limiting consumption of priority classes by default

Sometimes we may want pods of a particular priority, for example cluster-services, to be allowed in a namespace if and only if a matching quota object exists.

With this mechanism, operators can restrict certain high-priority classes to a limited number of namespaces, so that not every namespace can consume them by default.

To enforce this, the kube-apiserver flag --admission-control-config-file should be passed the path of the following configuration file:

apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: "ResourceQuota"
  configuration:
    apiVersion: resourcequota.admission.k8s.io/v1beta1
    kind: Configuration
    limitedResources:
    - resource: pods
      matchScopes:
      - scopeName: PriorityClass 
        operator: In
        values: ["cluster-services"]

Now, pods with the cluster-services priority class are allowed only in namespaces that have a quota object with a matching scopeSelector, for example:

scopeSelector:
  matchExpressions:
  - scopeName: PriorityClass
    operator: In
    values: ["cluster-services"]
