Understanding How YARN Uses CGroups to Limit CPU

How does YARN's CGroup integration actually limit CPU usage?

How CGroups Limit CPU

cpu.shares isolation: provides elastic, weight-based allocation of CPU time. When the CPU is otherwise idle, a cgroup that wants CPU may consume all of the spare time, making full use of the resource; when other cgroups compete, each cgroup is still guaranteed at least its weighted share of CPU time, which achieves the isolation.
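As an illustrative sketch (not YARN code), the weight math under full contention works like this: a cgroup's guaranteed fraction of the CPU is its own cpu.shares value divided by the sum of the shares of all runnable sibling cgroups.

```java
public class CpuSharesMath {
    // Fraction of CPU a cgroup is guaranteed under full contention:
    // its own shares divided by the sum over all competing siblings.
    public static double guaranteedShare(int ownShares, int[] siblingShares) {
        int total = ownShares;
        for (int s : siblingShares) {
            total += s;
        }
        return (double) ownShares / total;
    }

    public static void main(String[] args) {
        // Two containers with shares 1024 and 2048 competing for the CPU
        // are guaranteed 1/3 and 2/3 of the CPU time respectively.
        System.out.println(guaranteedShare(1024, new int[] {2048}));
        System.out.println(guaranteedShare(2048, new int[] {1024}));
    }
}
```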

cpuset isolation: isolates resources by pinning whole cores. The smallest unit of allocation is a core, so finer-grained sharing is not possible, but the mutual interference between isolated workloads is minimal. Note that on a host with hyper-threading enabled, the assigned cores must be chosen carefully; otherwise the performance gap between different cgroups can be large.

cpu quota isolation (cpu.cfs_quota_us / cpu.cfs_period_us): allocates CPU at a finer granularity than cpuset and enforces an upper bound on the fraction of CPU a cgroup may use, i.e. a hard limit on CPU.
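A small sketch of the arithmetic behind the quota mechanism: within every window of cfs_period_us microseconds the cgroup may run for at most cfs_quota_us microseconds, so the effective cap is quota/period CPUs (a quota of -1 means "unlimited").

```java
public class CpuQuotaMath {
    // Effective CPU cap implied by cfs_quota_us / cfs_period_us;
    // quota = -1 means "no limit" in the cpu cgroup controller.
    public static double effectiveCpus(int quotaUs, int periodUs) {
        if (quotaUs < 0) {
            return Double.POSITIVE_INFINITY;
        }
        return (double) quotaUs / periodUs;
    }

    public static void main(String[] args) {
        // 50ms of runtime per 100ms window: capped at half a CPU.
        System.out.println(effectiveCpus(50000, 100000));
        // 200ms of runtime per 100ms window: capped at two CPUs.
        System.out.println(effectiveCpus(200000, 100000));
    }
}
```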

CGroups Isolation for Containers in YARN

It turns out Hadoop 2.7.3 already supports a hard limit on CPU.
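For reference, the strictResourceUsageMode flag seen in the code below is read from the NodeManager configuration; in Hadoop 2.7.x the corresponding property (default false) is:

```xml
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
  <value>true</value>
</property>
```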

Code path: LinuxContainerExecutor.launchContainer() -> resourcesHandler.preExecute(containerId, container.getResource())

  /*
   * LCE Resources Handler interface
   */

  public void preExecute(ContainerId containerId, Resource containerResource)
              throws IOException {
    setupLimits(containerId, containerResource);
  }

  private void setupLimits(ContainerId containerId,
                           Resource containerResource) throws IOException {
    String containerName = containerId.toString();

    if (isCpuWeightEnabled()) {
      // Number of vcores requested by the container
      int containerVCores = containerResource.getVirtualCores();

      // Create the cgroup path /cpu/hadoop/cgroup/{containerName} for this container
      createCgroup(CONTROLLER_CPU, containerName);

      // cpuShares = 1024 * number of vcores requested by the container
      int cpuShares = CPU_DEFAULT_WEIGHT * containerVCores;

      // Set cpu.shares isolation; the weight is containerVCores * the default weight (1024)
      updateCgroup(CONTROLLER_CPU, containerName, "shares",
          String.valueOf(cpuShares));

      // If strict (hard) limiting is enabled, also set a cpu quota hard limit
      if (strictResourceUsageMode) {

        // nodeVCores: the number of vcores configured for this NM;
        // yarnProcessors: the physical cores YARN may use on this host;
        // containerVCores: the vcores requested by this container.
        // When vcores are oversubscribed, each container's real hard
        // limit must be computed explicitly.
        if (nodeVCores != containerVCores) {

          // containerCPU is this container's share expressed in physical
          // cores, no longer in vcores!
          float containerCPU =
              (containerVCores * yarnProcessors) / (float) nodeVCores;

          // Compute the cpuQuota and cpuPeriod corresponding to containerCPU
          int[] limits = getOverallLimits(containerCPU);
          updateCgroup(CONTROLLER_CPU, containerName, CPU_PERIOD_US,
              String.valueOf(limits[0]));
          updateCgroup(CONTROLLER_CPU, containerName, CPU_QUOTA_US,
              String.valueOf(limits[1]));
        }
      }
    }
  }
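To make the oversubscription math concrete, here is a small self-contained sketch (not the Hadoop source) of the containerCPU formula used in setupLimits above: a container's physical-core share is its requested vcores multiplied by the physical cores YARN may use, divided by the NM's configured vcores.

```java
public class ContainerCpuMath {
    // Physical-core share of a container when vcores are oversubscribed:
    // containerVCores * yarnProcessors / nodeVCores, as in setupLimits.
    public static float containerCpu(int containerVCores, float yarnProcessors,
                                     int nodeVCores) {
        return (containerVCores * yarnProcessors) / (float) nodeVCores;
    }

    public static void main(String[] args) {
        // Example: an NM configured with 32 vcores on a 16-core machine
        // (2x oversubscription); a container asking for 4 vcores is
        // hard-limited to 2 physical cores.
        System.out.println(containerCpu(4, 16.0f, 32));
    }
}
```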



  int[] getOverallLimits(float yarnProcessorsArg) {
    return CGroupsCpuResourceHandlerImpl.getOverallLimits(yarnProcessorsArg);
  }

  @VisibleForTesting
  @InterfaceAudience.Private
  public static int[] getOverallLimits(float yarnProcessors) {

    int[] ret = new int[2];

    if (yarnProcessors < 0.01f) {
      throw new IllegalArgumentException("Number of processors can't be <= 0.");
    }

    // Hadoop fixes the maximum CPU quota at MAX_QUOTA_US = 1000 * 1000 us,
    // which is then divided across the vcores
    int quotaUS = MAX_QUOTA_US;
    // periodUS: the period such that quotaUS / periodUS == yarnProcessors CPUs
    int periodUS = (int) (MAX_QUOTA_US / yarnProcessors);
    if (yarnProcessors < 1.0f) {
      periodUS = MAX_QUOTA_US;
      quotaUS = (int) (periodUS * yarnProcessors);
      if (quotaUS < MIN_PERIOD_US) {
        LOG.warn("The quota calculated for the cgroup was too low."
            + " The minimum value is " + MIN_PERIOD_US
            + ", calculated value is " + quotaUS
            + ". Setting quota to minimum value.");
        quotaUS = MIN_PERIOD_US;
      }
    }

    // cfs_period_us can't be less than 1000 microseconds
    // if the value of periodUS is less than 1000, we can't really use cgroups
    // to limit cpu
    if (periodUS < MIN_PERIOD_US) {
      LOG.warn("The period calculated for the cgroup was too low."
          + " The minimum value is " + MIN_PERIOD_US
          + ", calculated value is " + periodUS
          + ". Using all available CPU.");
      periodUS = MAX_QUOTA_US;
      quotaUS = -1;
    }

    ret[0] = periodUS;
    ret[1] = quotaUS;
    return ret;
  }
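The method above is easy to exercise in isolation. The following self-contained sketch copies its logic, assuming MAX_QUOTA_US = 1000 * 1000 and MIN_PERIOD_US = 1000 (the constants used in Hadoop), and shows the resulting {period, quota} pairs for a couple of inputs.

```java
public class OverallLimits {
    static final int MAX_QUOTA_US = 1000 * 1000; // 1 second, in microseconds
    static final int MIN_PERIOD_US = 1000;       // kernel minimum for cfs_period_us

    // Mirrors getOverallLimits above: returns {periodUS, quotaUS}.
    public static int[] getOverallLimits(float yarnProcessors) {
        if (yarnProcessors < 0.01f) {
            throw new IllegalArgumentException("Number of processors can't be <= 0.");
        }
        int quotaUS = MAX_QUOTA_US;
        int periodUS = (int) (MAX_QUOTA_US / yarnProcessors);
        if (yarnProcessors < 1.0f) {
            // Less than one core: keep the period at the maximum and
            // shrink the quota instead.
            periodUS = MAX_QUOTA_US;
            quotaUS = Math.max((int) (periodUS * yarnProcessors), MIN_PERIOD_US);
        }
        if (periodUS < MIN_PERIOD_US) {
            // Period too small for the kernel: give up on limiting.
            periodUS = MAX_QUOTA_US;
            quotaUS = -1; // -1 means "no limit" for cfs_quota_us
        }
        return new int[] {periodUS, quotaUS};
    }

    public static void main(String[] args) {
        // 2 cores: period = 500000, quota = 1000000 (quota/period = 2 CPUs)
        System.out.println(java.util.Arrays.toString(getOverallLimits(2.0f)));
        // Half a core: period = 1000000, quota = 500000 (quota/period = 0.5)
        System.out.println(java.util.Arrays.toString(getOverallLimits(0.5f)));
    }
}
```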

Follow-up

How is memory limited?

Reference: https://blog.csdn.net/liukuan73/article/details/53358423


Reposted from blog.csdn.net/CRISPY_RICE/article/details/80084777