Hadoop 2.7.2 YARN Documentation: Writing YARN Applications

Purpose
This document describes, at a high level, the way to implement applications for YARN.
Concepts and Flow
The general concept is that an application submission client submits an application to the YARN ResourceManager (RM). This can be done through setting up a YarnClient object. After YarnClient is started, the client can then set up an application context, prepare the very first container of the application that contains the ApplicationMaster (AM), and then submit the application. You need to provide information such as the details about the local files and jars that need to be available for your application to run, the actual command that needs to be executed (with the necessary command line arguments), any OS environment settings, etc. In effect, you need to describe the Unix process(es) that need to be launched for your ApplicationMaster.
The YARN ResourceManager will then launch the ApplicationMaster (as specified) on an allocated container. The ApplicationMaster communicates with the YARN cluster and handles application execution. It performs operations in an asynchronous fashion. During application launch time, the main tasks of the ApplicationMaster are: a) communicating with the ResourceManager to negotiate and allocate resources for future containers, and b) after container allocation, communicating with the YARN NodeManagers (NMs) to launch application containers on them. Task a) can be performed asynchronously through an AMRMClientAsync object, with event handling methods specified in an AMRMClientAsync.CallbackHandler type of event handler. The event handler needs to be set on the client explicitly. Task b) can be performed by launching a runnable object that launches containers once they have been allocated. As part of launching a container, the AM has to specify a ContainerLaunchContext that has the launch information such as the command line specification, environment variables, etc.
During the execution of an application, the ApplicationMaster communicates with the NodeManagers through an NMClientAsync object. All container events are handled by the NMClientAsync.CallbackHandler associated with NMClientAsync. A typical callback handler handles client start, stop, status update and error events. The ApplicationMaster also reports execution progress to the ResourceManager by handling the getProgress() method of AMRMClientAsync.CallbackHandler.
Other than the asynchronous clients, there are synchronous versions for certain workflows (AMRMClient and NMClient). The asynchronous clients are recommended because of their simpler usage, and this article will mainly cover the asynchronous clients.
接口
Following are the important interfaces:
  • Client<-->ResourceManager
By using YarnClient objects.
  • ApplicationMaster<-->ResourceManager
By using AMRMClientAsync objects, handling events asynchronously by AMRMClientAsync.CallbackHandler.
  • ApplicationMaster<-->NodeManager
Launching containers. Communicate with NodeManagers by using NMClientAsync objects, handling container events by NMClientAsync.CallbackHandler.
Note:
  • The three main protocols for YARN applications (ApplicationClientProtocol, ApplicationMasterProtocol and ContainerManagementProtocol) are still preserved. The three clients wrap these three protocols to provide a simpler programming model for YARN applications.
  • Under very rare circumstances, programmers may want to directly use the three protocols to implement an application. However, note that such usage is no longer encouraged for general use cases.
Writing a Simple YARN Application
Writing a simple Client
  • The first step that a client needs to do is to initialize and start a YarnClient.
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(conf);
yarnClient.start();
  • Once a client is set up, the client needs to create an application and get its application id.
YarnClientApplication app = yarnClient.createApplication();
GetNewApplicationResponse appResponse = app.getNewApplicationResponse();
  • The response from the YarnClientApplication for a new application also contains information about the cluster, such as the minimum/maximum resource capabilities of the cluster. This is required so that you can correctly set the specifications of the container in which the ApplicationMaster will be launched.
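For example, the maximum capability reported in the response can be used to cap the AM container request before the submission context is built. The following is a minimal sketch, assuming amMemory and amVCores hold the values the client intends to request for the AM container:

// If the configured AM resources exceed what the cluster can offer,
// clamp them to the maximum capability reported by the RM
int maxMem = appResponse.getMaximumResourceCapability().getMemory();
int maxVCores = appResponse.getMaximumResourceCapability().getVirtualCores();
if (amMemory > maxMem) {
  LOG.info("AM memory specified above max threshold of cluster. Using max value."
      + ", specified=" + amMemory + ", max=" + maxMem);
  amMemory = maxMem;
}
if (amVCores > maxVCores) {
  LOG.info("AM virtual cores specified above max threshold of cluster. Using max value."
      + ", specified=" + amVCores + ", max=" + maxVCores);
  amVCores = maxVCores;
}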
  • The main crux of the client is to set up the ApplicationSubmissionContext, which defines all the information needed by the RM to launch the AM. A client needs to set the following into the context:
  • Application info: id, name
  • Queue, priority info: the queue to which the application will be submitted, the priority to be assigned for the application.
  • User: the user submitting the application.
  • ContainerLaunchContext: the information defining the container in which the AM will be launched and run. The ContainerLaunchContext, as mentioned previously, defines all the required information needed to run the application, such as the local Resources (binaries, jars, files etc.), environment settings (CLASSPATH etc.), the command to be executed and security tokens (RECT).

// set the application submission context
ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
ApplicationId appId = appContext.getApplicationId();

appContext.setKeepContainersAcrossApplicationAttempts(keepContainers);
appContext.setApplicationName(appName);

// set local resources for the application master
// local files or archives as needed
// In this scenario, the jar file for the application master is part of the local resources
Map<String, LocalResource> localResources = new HashMap<String, LocalResource>();

LOG.info("Copy App Master jar from local filesystem and add to local environment");
// Copy the application master jar to the filesystem
// Create a local resource to point to the destination jar path
FileSystem fs = FileSystem.get(conf);
addToLocalResources(fs, appMasterJar, appMasterJarPath, appId.toString(),
    localResources, null);

// Set the log4j properties if needed
if (!log4jPropFile.isEmpty()) {
  addToLocalResources(fs, log4jPropFile, log4jPath, appId.toString(),
      localResources, null);
}

// The shell script has to be made available on the final container(s)
// where it will be executed.
// To do this, we need to first copy into the filesystem that is visible
// to the yarn framework.
// We do not need to set this as a local resource for the application
// master as the application master does not need it.
String hdfsShellScriptLocation = "";
long hdfsShellScriptLen = 0;
long hdfsShellScriptTimestamp = 0;
if (!shellScriptPath.isEmpty()) {
  Path shellSrc = new Path(shellScriptPath);
  String shellPathSuffix =
      appName + "/" + appId.toString() + "/" + SCRIPT_PATH;
  Path shellDst =
      new Path(fs.getHomeDirectory(), shellPathSuffix);
  fs.copyFromLocalFile(false, true, shellSrc, shellDst);
  hdfsShellScriptLocation = shellDst.toUri().toString();
  FileStatus shellFileStatus = fs.getFileStatus(shellDst);
  hdfsShellScriptLen = shellFileStatus.getLen();
  hdfsShellScriptTimestamp = shellFileStatus.getModificationTime();
}

if (!shellCommand.isEmpty()) {
  addToLocalResources(fs, null, shellCommandPath, appId.toString(),
      localResources, shellCommand);
}

if (shellArgs.length > 0) {
  addToLocalResources(fs, null, shellArgsPath, appId.toString(),
      localResources, StringUtils.join(shellArgs, " "));
}

// Set the env variables to be setup in the env where the application master will be run
LOG.info("Set the environment for the application master");
Map<String, String> env = new HashMap<String, String>();

// put location of shell script into env
// using the env info, the application master will create the correct local resource for the
// eventual containers that will be launched to execute the shell scripts
env.put(DSConstants.DISTRIBUTEDSHELLSCRIPTLOCATION, hdfsShellScriptLocation);
env.put(DSConstants.DISTRIBUTEDSHELLSCRIPTTIMESTAMP, Long.toString(hdfsShellScriptTimestamp));
env.put(DSConstants.DISTRIBUTEDSHELLSCRIPTLEN, Long.toString(hdfsShellScriptLen));

// Add AppMaster.jar location to classpath
// At some point we should not be required to add
// the hadoop specific classpaths to the env.
// It should be provided out of the box.
// For now setting all required classpaths including
// the classpath to "." for the application jar
StringBuilder classPathEnv = new StringBuilder(Environment.CLASSPATH.$$())
  .append(ApplicationConstants.CLASS_PATH_SEPARATOR).append("./*");
for (String c : conf.getStrings(
    YarnConfiguration.YARN_APPLICATION_CLASSPATH,
    YarnConfiguration.DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH)) {
  classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR);
  classPathEnv.append(c.trim());
}
classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR).append(
  "./log4j.properties");

// Set the necessary command to execute the application master
Vector<CharSequence> vargs = new Vector<CharSequence>(30);

// Set java executable command
LOG.info("Setting up app master command");
vargs.add(Environment.JAVA_HOME.$$() + "/bin/java");
// Set Xmx based on am memory size
vargs.add("-Xmx" + amMemory + "m");
// Set class name
vargs.add(appMasterMainClass);
// Set params for Application Master
vargs.add("--container_memory " + String.valueOf(containerMemory));
vargs.add("--container_vcores " + String.valueOf(containerVirtualCores));
vargs.add("--num_containers " + String.valueOf(numContainers));
vargs.add("--priority " + String.valueOf(shellCmdPriority));

for (Map.Entry<String, String> entry : shellEnv.entrySet()) {
  vargs.add("--shell_env " + entry.getKey() + "=" + entry.getValue());
}
if (debugFlag) {
  vargs.add("--debug");
}

vargs.add("1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/AppMaster.stdout");
vargs.add("2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/AppMaster.stderr");

// Get final command
StringBuilder command = new StringBuilder();
for (CharSequence str : vargs) {
  command.append(str).append(" ");
}

LOG.info("Completed setting up app master command " + command.toString());
List<String> commands = new ArrayList<String>();
commands.add(command.toString());

// Set up the container launch context for the application master
ContainerLaunchContext amContainer = ContainerLaunchContext.newInstance(
  localResources, env, commands, null, null, null);

// Set up resource type requirements
// For now, both memory and vcores are supported, so we set memory and
// vcores requirements
Resource capability = Resource.newInstance(amMemory, amVCores);
appContext.setResource(capability);

// Service data is a binary blob that can be passed to the application
// Not needed in this scenario
// amContainer.setServiceData(serviceData);

// Setup security tokens
if (UserGroupInformation.isSecurityEnabled()) {
  // Note: Credentials class is marked as LimitedPrivate for HDFS and MapReduce
  Credentials credentials = new Credentials();
  String tokenRenewer = conf.get(YarnConfiguration.RM_PRINCIPAL);
  if (tokenRenewer == null || tokenRenewer.length() == 0) {
    throw new IOException(
      "Can't get Master Kerberos principal for the RM to use as renewer");
  }

  // For now, only getting tokens for the default file-system.
  final Token<?> tokens[] =
      fs.addDelegationTokens(tokenRenewer, credentials);
  if (tokens != null) {
    for (Token<?> token : tokens) {
      LOG.info("Got dt for " + fs.getUri() + "; " + token);
    }
  }
  DataOutputBuffer dob = new DataOutputBuffer();
  credentials.writeTokenStorageToStream(dob);
  ByteBuffer fsTokens = ByteBuffer.wrap(dob.getData(), 0, dob.getLength());
  amContainer.setTokens(fsTokens);
}

appContext.setAMContainerSpec(amContainer);
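The addToLocalResources() helper used in the snippet above is part of the distributed shell Client but is not shown in this walkthrough. A minimal sketch of what such a helper might look like follows; the parameter names are assumptions, and the idea is to either copy a local file into HDFS or write the given string content there, and then register the uploaded file as a LocalResource:

private void addToLocalResources(FileSystem fs, String fileSrcPath,
    String fileDstPath, String appId, Map<String, LocalResource> localResources,
    String resources) throws IOException {
  String suffix = appName + "/" + appId + "/" + fileDstPath;
  Path dst = new Path(fs.getHomeDirectory(), suffix);
  if (fileSrcPath == null) {
    // No local file given: write the provided string content directly to HDFS
    FSDataOutputStream ostream = FileSystem.create(fs, dst,
        new FsPermission((short) 0710));
    try {
      ostream.writeUTF(resources);
    } finally {
      ostream.close();
    }
  } else {
    // Copy the local file to the filesystem visible to the YARN framework
    fs.copyFromLocalFile(new Path(fileSrcPath), dst);
  }
  // Register the uploaded file as a LocalResource keyed by its destination name
  FileStatus scFileStatus = fs.getFileStatus(dst);
  LocalResource scRsrc = LocalResource.newInstance(
      ConverterUtils.getYarnUrlFromURI(dst.toUri()),
      LocalResourceType.FILE, LocalResourceVisibility.APPLICATION,
      scFileStatus.getLen(), scFileStatus.getModificationTime());
  localResources.put(fileDstPath, scRsrc);
}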
 
  • After the setup process is complete, the client is ready to submit the application with the specified priority and queue.
// Set the priority for the application master
Priority pri = Priority.newInstance(amPriority);
appContext.setPriority(pri);

// Set the queue to which this application is to be submitted in the RM
appContext.setQueue(amQueue);

// Submit the application to the applications manager
// SubmitApplicationResponse submitResp = applicationsManager.submitApplication(appRequest);

yarnClient.submitApplication(appContext);
 
  • At this point, the RM will have accepted the application and, in the background, will go through the process of allocating a container with the required specifications, and then eventually setting up and launching the AM on the allocated container.
  • There are multiple ways a client can track the progress of the actual task.
  • It can communicate with the RM and request a report of the application via the getApplicationReport() method of YarnClient.
// Get application report for the appId we are interested in
ApplicationReport report = yarnClient.getApplicationReport(appId);
 
The ApplicationReport received from the RM consists of the following:
  • General application information: application id, the queue to which the application was submitted, the user who submitted the application and the start time of the application.
  • ApplicationMaster details: the host on which the AM is running, the rpc port (if any) on which it is listening for requests from clients, and a token that the client needs in order to communicate with the AM.
  • Application tracking information: if the application supports some form of progress tracking, it can set a tracking url, which is available via the getTrackingUrl() method of ApplicationReport and which the client can use to monitor progress.
  • Application status: the state of the application as seen by the ResourceManager is available via ApplicationReport#getYarnApplicationState. If the YarnApplicationState is set to FINISHED, the client should refer to ApplicationReport#getFinalApplicationStatus to check for the actual success/failure of the application task itself. In case of failures, ApplicationReport#getDiagnostics may contain some reasons for, or hints about, the failure.
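Putting these together, a client will typically poll the report until the application reaches a terminal state. The following is a minimal sketch of such a loop, written as the body of a hypothetical boolean monitorApplication() helper; the one-second sleep interval and the helper itself are assumptions, not part of the API:

// Poll the RM until the application reaches a terminal state.
// The enclosing method would declare throws YarnException, IOException,
// InterruptedException (or handle them).
while (true) {
  Thread.sleep(1000);
  ApplicationReport report = yarnClient.getApplicationReport(appId);
  YarnApplicationState state = report.getYarnApplicationState();
  if (state == YarnApplicationState.FINISHED) {
    // The application itself may still have failed; check the final status
    return report.getFinalApplicationStatus() == FinalApplicationStatus.SUCCEEDED;
  } else if (state == YarnApplicationState.KILLED
      || state == YarnApplicationState.FAILED) {
    return false;
  }
}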
  • If the ApplicationMaster supports it, the client can directly query the AM itself for progress updates via the host:rpcport information obtained from the application report. It can also use the tracking url obtained from the report, if available, to view progress.
  • In certain situations, if the application is taking too long or due to other factors, the client may wish to kill the application. YarnClient supports the killApplication call, which allows the client to send a kill signal to the AM via the ResourceManager. An ApplicationMaster, if so designed, may also support an abort call via its rpc layer that the client could use directly.
yarnClient.killApplication(appId);
 
Writing an ApplicationMaster (AM)
  • The AM is the actual owner of the job. It is launched by the RM and is provided, via the client, with all the necessary information and resources about the job that it has been tasked with overseeing and completing.
  • As the AM is launched within a container that may (and likely will) be sharing a physical host with other containers, given the multi-tenancy nature, amongst other issues, it cannot make any assumptions about things like pre-configured ports that it can listen on.
  • When the AM starts up, several parameters are made available to it via the environment. These include the ContainerId of the AM container, the application submission time, and details about the NM (NodeManager) host running the ApplicationMaster.
  • Interactions with the RM require an ApplicationAttemptId (there can be multiple attempts per application in case of failures). The ApplicationAttemptId can be obtained from the AM's container id. There are helper APIs to convert the values obtained from the environment into objects.
Map<String, String> envs = System.getenv();
String containerIdString =
    envs.get(ApplicationConstants.AM_CONTAINER_ID_ENV);
if (containerIdString == null) {
  // container id should always be set in the env by the framework
  throw new IllegalArgumentException(
      "ContainerId not set in the environment");
}
ContainerId containerId = ConverterUtils.toContainerId(containerIdString);
ApplicationAttemptId appAttemptID = containerId.getApplicationAttemptId();
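The other values mentioned above can be read from the environment in the same way. A minimal sketch, with the error handling omitted:

// Details of the NodeManager hosting this AM, and the submission time,
// are also exposed to the AM through environment variables
String nmHost = envs.get(ApplicationConstants.Environment.NM_HOST.name());
String nmPort = envs.get(ApplicationConstants.Environment.NM_PORT.name());
String nmHttpPort = envs.get(ApplicationConstants.Environment.NM_HTTP_PORT.name());
long appSubmitTime = Long.parseLong(
    envs.get(ApplicationConstants.APP_SUBMIT_TIME_ENV));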
 
  • After the AM has initialized itself completely, we can start the two clients: one to the ResourceManager, and one to the NodeManagers. We set them up with our customized event handlers; we will talk about those event handlers in detail later in this article.
AMRMClientAsync.CallbackHandler allocListener = new RMCallbackHandler();
amRMClient = AMRMClientAsync.createAMRMClientAsync(1000, allocListener);
amRMClient.init(conf);
amRMClient.start();

containerListener = createNMCallbackHandler();
nmClientAsync = new NMClientAsyncImpl(containerListener);
nmClientAsync.init(conf);
nmClientAsync.start();
 
  • The AM has to emit heartbeats to the RM to keep it informed that the AM is alive and still running. The timeout expiry interval at the RM is defined by a configuration setting accessible via YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS, with the default defined by YarnConfiguration.DEFAULT_RM_AM_EXPIRY_INTERVAL_MS. The ApplicationMaster needs to register itself with the ResourceManager to start heartbeating.
// Register self with ResourceManager
// This will start heartbeating to the RM
appMasterHostname = NetUtils.getHostname();
RegisterApplicationMasterResponse response = amRMClient
    .registerApplicationMaster(appMasterHostname, appMasterRpcPort,
        appMasterTrackingUrl);
 
  • The response of the registration may include the maximum resource capability of the cluster. You may want to use this to validate the application's resource requests.
// Dump out information about cluster capability as seen by the
// resource manager
int maxMem = response.getMaximumResourceCapability().getMemory();
LOG.info("Max mem capabililty of resources in this cluster " + maxMem);

int maxVCores = response.getMaximumResourceCapability().getVirtualCores();
LOG.info("Max vcores capabililty of resources in this cluster " + maxVCores);

// A resource ask cannot exceed the max.
if (containerMemory > maxMem) {
  LOG.info("Container memory specified above max threshold of cluster."
      + " Using max value." + ", specified=" + containerMemory + ", max="
      + maxMem);
  containerMemory = maxMem;
}

if (containerVirtualCores > maxVCores) {
  LOG.info("Container virtual cores specified above max threshold of  cluster."
    + " Using max value." + ", specified=" + containerVirtualCores + ", max="
    + maxVCores);
  containerVirtualCores = maxVCores;
}
List<Container> previousAMRunningContainers =
    response.getContainersFromPreviousAttempts();
LOG.info("Received " + previousAMRunningContainers.size()
        + " previous AM's running containers on AM registration.");
 
  • Based on the task requirements, the AM can ask for a set of containers to run its tasks on. We can now calculate how many containers we need, and request that many containers.
List<Container> previousAMRunningContainers =
    response.getContainersFromPreviousAttempts();
LOG.info("Received " + previousAMRunningContainers.size()
    + " previous AM's running containers on AM registration.");

int numTotalContainersToRequest =
    numTotalContainers - previousAMRunningContainers.size();
// Setup ask for containers from RM
// Send request for containers to RM
// Until we get our fully allocated quota, we keep on polling RM for
// containers
// Keep looping until all the containers are launched and shell script
// executed on them ( regardless of success/failure).
for (int i = 0; i < numTotalContainersToRequest; ++i) {
  ContainerRequest containerAsk = setupContainerAskForRM();
  amRMClient.addContainerRequest(containerAsk);
}
 
  • In setupContainerAskForRM(), the following two things need to be set up:
  • Resource capability: currently, YARN supports memory based resource requirements, so the request should define how much memory is needed. The value is defined in MB, has to be less than the max capability of the cluster, and must be an exact multiple of the min capability. Memory resources correspond to physical memory limits imposed on the task containers. YARN also supports computation based resources (vCores), as shown in the code.
  • Priority: when asking for sets of containers, an AM may define different priorities for each set. For example, the Map-Reduce AM may assign a higher priority to containers needed for the Map tasks and a lower priority to the Reduce tasks' containers.
private ContainerRequest setupContainerAskForRM() {
  // setup requirements for hosts
  // using * as any host will do for the distributed shell app
  // set the priority for the request
  Priority pri = Priority.newInstance(requestPriority);

  // Set up resource type requirements
  // For now, memory and CPU are supported so we set memory and cpu requirements
  Resource capability = Resource.newInstance(containerMemory,
    containerVirtualCores);

  ContainerRequest request = new ContainerRequest(capability, null, null,
      pri);
  LOG.info("Requested container ask: " + request.toString());
  return request;
}
 
  • After the container allocation requests have been sent by the application master, containers will be launched asynchronously by the event handler of the AMRMClientAsync client. The handler should implement the AMRMClientAsync.CallbackHandler interface.
  • When there are containers allocated, the handler sets up a thread that runs the code to launch containers. Here we use the name LaunchContainerRunnable to demonstrate; we will talk about the LaunchContainerRunnable class in the following part of this article.
@Override
public void onContainersAllocated(List<Container> allocatedContainers) {
  LOG.info("Got response from RM for container ask, allocatedCnt="
      + allocatedContainers.size());
  numAllocatedContainers.addAndGet(allocatedContainers.size());
  for (Container allocatedContainer : allocatedContainers) {
    LaunchContainerRunnable runnableLaunchContainer =
        new LaunchContainerRunnable(allocatedContainer, containerListener);
    Thread launchThread = new Thread(runnableLaunchContainer);

    // launch and start the container on a separate thread to keep
    // the main thread unblocked
    // as all containers may not be allocated at one go.
    launchThreads.add(launchThread);
    launchThread.start();
  }
}
 
  • On heartbeat, the event handler reports the progress of the application.
@Override
public float getProgress() {
  // set progress to deliver to RM on next heartbeat
  float progress = (float) numCompletedContainers.get()
      / numTotalContainers;
  return progress;
}
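The same callback handler also receives container completion events from the RM. The following is a minimal sketch of onContainersCompleted, modeled on the distributed shell ApplicationMaster; the counters numCompletedContainers, numFailedContainers, numAllocatedContainers and numRequestedContainers are assumed to be AtomicInteger fields of the AM:

@Override
public void onContainersCompleted(List<ContainerStatus> completedContainers) {
  for (ContainerStatus containerStatus : completedContainers) {
    int exitStatus = containerStatus.getExitStatus();
    if (exitStatus == 0) {
      // Container finished successfully
      numCompletedContainers.incrementAndGet();
    } else if (exitStatus != ContainerExitStatus.ABORTED) {
      // Container failed on its own (non-zero exit code): count as completed
      numCompletedContainers.incrementAndGet();
      numFailedContainers.incrementAndGet();
    } else {
      // Container was killed by the framework: release our bookkeeping so a
      // replacement can be requested below
      numAllocatedContainers.decrementAndGet();
      numRequestedContainers.decrementAndGet();
    }
  }
  // Ask the RM for replacements of the aborted containers
  int askCount = numTotalContainers - numRequestedContainers.get();
  numRequestedContainers.addAndGet(askCount);
  for (int i = 0; i < askCount; ++i) {
    ContainerRequest containerAsk = setupContainerAskForRM();
    amRMClient.addContainerRequest(containerAsk);
  }
}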
 
  • The container launch thread actually launches the containers on the NMs. After a container has been allocated to the AM, it needs to follow a similar process to the one the client followed when setting up the ContainerLaunchContext for the eventual task that is going to run on the allocated container. Once the ContainerLaunchContext is defined, the AM can start the container through the NMClientAsync.
// Set the necessary command to execute on the allocated container
Vector<CharSequence> vargs = new Vector<CharSequence>(5);

// Set executable command
vargs.add(shellCommand);
// Set shell script path
if (!scriptPath.isEmpty()) {
  vargs.add(Shell.WINDOWS ? ExecBatScripStringtPath
    : ExecShellStringPath);
}

// Set args for the shell command if any
vargs.add(shellArgs);
// Add log redirect params
vargs.add("1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout");
vargs.add("2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr");

// Get final command
StringBuilder command = new StringBuilder();
for (CharSequence str : vargs) {
  command.append(str).append(" ");
}

List<String> commands = new ArrayList<String>();
commands.add(command.toString());

// Set up ContainerLaunchContext, setting local resource, environment,
// command and token for constructor.

// Note for tokens: Set up tokens for the container too. Today, for normal
// shell commands, the container in distribute-shell doesn't need any
// tokens. We are populating them mainly for NodeManagers to be able to
// download anyfiles in the distributed file-system. The tokens are
// otherwise also useful in cases, for e.g., when one is running a
// "hadoop dfs" command inside the distributed shell.
ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
  localResources, shellEnv, commands, null, allTokens.duplicate(), null);
containerListener.addContainer(container.getId(), container);
nmClientAsync.startContainerAsync(container, ctx);
 
  • The NMClientAsync object, together with its event handler, handles container events, including container start, stop, status update and the occurrence of an error.
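A minimal sketch of such a handler follows, an instance of which would be what the createNMCallbackHandler() factory method used earlier returns. The container bookkeeping shown here is an assumption; a real handler, such as the one in the distributed shell ApplicationMaster, would typically also update completion counters or query container status:

class NMCallbackHandler implements NMClientAsync.CallbackHandler {

  // Containers handed to this handler via addContainer(), keyed by id
  private final ConcurrentMap<ContainerId, Container> containers =
      new ConcurrentHashMap<ContainerId, Container>();

  public void addContainer(ContainerId containerId, Container container) {
    containers.putIfAbsent(containerId, container);
  }

  @Override
  public void onContainerStarted(ContainerId containerId,
      Map<String, ByteBuffer> allServiceResponse) {
    LOG.info("Container " + containerId + " started");
  }

  @Override
  public void onContainerStatusReceived(ContainerId containerId,
      ContainerStatus containerStatus) {
    LOG.info("Status of container " + containerId + ": "
        + containerStatus.getState());
  }

  @Override
  public void onContainerStopped(ContainerId containerId) {
    containers.remove(containerId);
  }

  @Override
  public void onStartContainerError(ContainerId containerId, Throwable t) {
    LOG.error("Failed to start container " + containerId, t);
    containers.remove(containerId);
  }

  @Override
  public void onGetContainerStatusError(ContainerId containerId, Throwable t) {
    LOG.error("Failed to query status of container " + containerId, t);
  }

  @Override
  public void onStopContainerError(ContainerId containerId, Throwable t) {
    LOG.error("Failed to stop container " + containerId, t);
    containers.remove(containerId);
  }
}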
  • After the ApplicationMaster determines that the work is done, it needs to unregister itself through the AM-RM client, and then stop the clients.
try {
  amRMClient.unregisterApplicationMaster(appStatus, appMessage, null);
} catch (YarnException ex) {
  LOG.error("Failed to unregister application", ex);
} catch (IOException e) {
  LOG.error("Failed to unregister application", e);
}

amRMClient.stop();
 
FAQ
How can I distribute my application's jars to all of the nodes in the YARN cluster that need them?
You can use the LocalResource to add resources to your application request. This will cause YARN to distribute the resource to the ApplicationMaster node. If the resource is a tgz, zip, or jar, you can have YARN unzip it. Then, all you need to do is add the unzipped folder to your classpath. For example, when creating your application request:
File packageFile = new File(packagePath);
URL packageUrl = ConverterUtils.getYarnUrlFromPath(
    FileContext.getFileContext().makeQualified(new Path(packagePath)));

packageResource.setResource(packageUrl);
packageResource.setSize(packageFile.length());
packageResource.setTimestamp(packageFile.lastModified());
packageResource.setType(LocalResourceType.ARCHIVE);
packageResource.setVisibility(LocalResourceVisibility.APPLICATION);

resource.setMemory(memory);
containerCtx.setResource(resource);
containerCtx.setCommands(ImmutableList.of(
    "java -cp './package/*' some.class.to.Run "
    + "1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout "
    + "2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr"));
containerCtx.setLocalResources(
    Collections.singletonMap("package", packageResource));
appCtx.setApplicationId(appId);
appCtx.setUser(user.getShortUserName());
appCtx.setAMContainerSpec(containerCtx);
yarnClient.submitApplication(appCtx);
 
As you can see, the setLocalResources command takes a map of names to resources. The name becomes a sym link in your application's working directory, so you can just refer to the artifacts inside by using ./package/*.
Note: Java's classpath argument is very sensitive. Make sure you get the syntax exactly correct.
How do I get the ApplicationMaster's ApplicationAttemptId?
The ApplicationAttemptId will be passed to the AM via the environment, and the value from the environment can be converted into an ApplicationAttemptId object via the ConverterUtils helper function.
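A minimal sketch, following the same pattern shown in the ApplicationMaster section above, assuming the CONTAINER_ID environment key set by the framework:

String containerIdString =
    System.getenv(ApplicationConstants.Environment.CONTAINER_ID.name());
ContainerId containerId = ConverterUtils.toContainerId(containerIdString);
ApplicationAttemptId appAttemptId = containerId.getApplicationAttemptId();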
Why is my container killed by the NodeManager?
This is likely due to high memory usage exceeding the requested container memory size. There are a number of reasons that can cause this. First, look at the process tree that the NodeManager dumps when it kills your container. If you have exceeded the physical memory limit, your application is using too much physical memory. If you are running a Java application, you can use -hprof to look at what is taking up space in the heap. If you have exceeded the virtual memory limit, you may need to increase the value of the cluster-wide configuration variable yarn.nodemanager.vmem-pmem-ratio.
How do I include native libraries?
Setting -Djava.library.path on the command line while launching a container can cause native libraries used by Hadoop to not be loaded correctly and can result in errors. It is cleaner to use LD_LIBRARY_PATH instead.
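For example, when building the environment map that goes into the ContainerLaunchContext, the directory containing the native libraries can be appended to LD_LIBRARY_PATH rather than passed via -Djava.library.path. This is a minimal sketch for a Unix cluster; the ./native sub-directory is an assumption about where the libraries have been localized:

Map<String, String> env = new HashMap<String, String>();
// Keep whatever LD_LIBRARY_PATH the NodeManager already provides and append
// the directory where the native libraries were localized in the container
env.put(ApplicationConstants.Environment.LD_LIBRARY_PATH.name(),
    ApplicationConstants.Environment.LD_LIBRARY_PATH.$$() + ":"
        + ApplicationConstants.Environment.PWD.$$() + "/native");
// env is later passed to ContainerLaunchContext.newInstance(localResources,
// env, commands, null, tokens, null)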
