Spark on YARN execution process and log analysis

Submit command

${SPARK_HOME}/bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode cluster \
    --driver-memory 4g \
    --executor-memory 1g \
    --executor-cores 4 \
    --queue default \
    ${SPARK_HOME}/examples/jars/spark-examples*.jar \
    10

Execution process

  1. The client runs spark-submit to submit the application; the application is registered with the ResourceManager and resources are requested for it.

  2. After the ResourceManager receives the request, it picks a NodeManager in the cluster, allocates the first container for the application, and starts the ApplicationMaster (AM) in it. The AM contains the driver, which then starts running (in effect, executing the program written by the user).

  3. Driver:

    (1) The driver runs the application's main method.

    (2) The SparkContext object is built inside the main method. This object is very important: it is the entry point of every Spark program. Inside the SparkContext, two further objects are built, the DAGScheduler and the TaskScheduler.

    (3) The program consists of a series of RDD transformations, and eventually some action triggers the actual execution. At that point a directed acyclic graph (DAG) is generated from the RDD lineage in the code; the direction of the graph follows the order of the RDD operators. The finished DAG is handed to the DAGScheduler.

    (4) Once the DAGScheduler receives the DAG, it splits it into stages at the wide (shuffle) dependencies. Each stage contains many tasks that can run in parallel, grouped into a TaskSet, and the DAGScheduler submits the TaskSet of each stage, one by one, to the TaskScheduler.

    (5) After the TaskScheduler receives the TaskSets, it runs the tasks they contain in stage-dependency order. For each TaskSet, the TaskScheduler iterates over its tasks and sends each one to an executor for execution.

    The driver only breaks the work down into tasks; the actual execution happens inside the YARN containers (see the sketch after this list).

  4. The ApplicationMaster registers with the ResourceManager, so the running status of the job can be viewed through the RM. At the same time, the AM requests resources for the tasks and monitors their execution until they finish.

  5. After the AM obtains resources (containers), it communicates with the NodeManagers and asks them to launch a CoarseGrainedExecutorBackend in each allocated container. When a CoarseGrainedExecutorBackend starts, it registers with the SparkContext in the AM and asks for tasks.

  6. The SparkContext in the AM assigns tasks to the CoarseGrainedExecutorBackends. While running a task, each CoarseGrainedExecutorBackend reports its progress and status back to the AM, so the AM can track execution at any time and can retry a task when it fails or when cluster resources are scarce.

  7. When the job finishes, the AM sends a request to the RM to unregister itself and shuts down.
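
To make steps (3)-(5) concrete, here is a minimal Scala sketch in the spirit of the SparkPi example submitted above; the object name and app name are only illustrative, not the original example code. The transformations (parallelize, map) only record the RDD lineage; nothing runs until the reduce action, which triggers the DAG -> stages -> TaskSets -> executors pipeline described above.

import org.apache.spark.sql.SparkSession

// Minimal sketch: transformations are lazy; the action triggers the job.
object PiSketch {
  def main(args: Array[String]): Unit = {
    val spark  = SparkSession.builder.appName("Pi Sketch").getOrCreate()
    val slices = if (args.length > 0) args(0).toInt else 2   // "10" in the submit command above
    val n      = 100000 * slices

    // Transformations: only build up the lineage, no job is submitted yet.
    val hitsRdd = spark.sparkContext
      .parallelize(1 to n, slices)
      .map { _ =>
        val x = math.random * 2 - 1
        val y = math.random * 2 - 1
        if (x * x + y * y <= 1) 1 else 0
      }

    // Action: the DAGScheduler builds the DAG, splits it into stages,
    // and the TaskScheduler ships the resulting tasks to the executors.
    val hits = hitsRdd.reduce(_ + _)
    println(s"Pi is roughly ${4.0 * hits / n}")
    spark.stop()
  }
}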


Execution log

22/11/19 17:42:18 WARN util.Utils: Your hostname, macdeMacBook-Pro-3.local resolves to a loopback address: 127.0.0.1; using 10.10.9.250 instead (on interface en0)
22/11/19 17:42:18 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/11/19 17:42:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/11/19 17:42:19 INFO client.RMProxy: Connecting to ResourceManager at sh01/172.16.99.214:8010
22/11/19 17:42:19 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
22/11/19 17:42:19 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
22/11/19 17:42:19 INFO yarn.Client: Will allocate AM container, with 4505 MB memory including 409 MB overhead
22/11/19 17:42:19 INFO yarn.Client: Setting up container launch context for our AM
22/11/19 17:42:19 INFO yarn.Client: Setting up the launch environment for our AM container
22/11/19 17:42:19 INFO yarn.Client: Preparing resources for our AM container
22/11/19 17:42:20 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
22/11/19 17:42:23 INFO yarn.Client: Uploading resource file:/usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-b423d166-c45e-429a-b25a-3efde9c1145c/__spark_libs__2899998199838240455.zip -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205/__spark_libs__2899998199838240455.zip
22/11/19 17:45:52 INFO yarn.Client: Uploading resource file:/usr/local/spark/examples/jars/spark-examples_2.11-2.4.8.jar -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205/spark-examples_2.11-2.4.8.jar
22/11/19 17:45:54 INFO yarn.Client: Uploading resource file:/usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-b423d166-c45e-429a-b25a-3efde9c1145c/__spark_conf__8349177025085739013.zip -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205/__spark_conf__.zip
22/11/19 17:45:56 INFO spark.SecurityManager: Changing view acls to: mac
22/11/19 17:45:56 INFO spark.SecurityManager: Changing modify acls to: mac
22/11/19 17:45:56 INFO spark.SecurityManager: Changing view acls groups to:
22/11/19 17:45:56 INFO spark.SecurityManager: Changing modify acls groups to:
22/11/19 17:45:56 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(mac); groups with view permissions: Set(); users  with modify permissions: Set(mac); groups with modify permissions: Set()
22/11/19 17:45:57 INFO yarn.Client: Submitting application application_1666603193487_2205 to ResourceManager
22/11/19 17:45:57 INFO impl.YarnClientImpl: Submitted application application_1666603193487_2205
22/11/19 17:45:58 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:45:58 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1668851157430
	 final status: UNDEFINED
	 tracking URL: http://sh01:8012/proxy/application_1666603193487_2205/
	 user: mac
22/11/19 17:45:59 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:46:00 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:46:01 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:46:02 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:46:03 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:03 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: sh02
	 ApplicationMaster RPC port: 46195
	 queue: default
	 start time: 1668851157430
	 final status: UNDEFINED
	 tracking URL: http://sh01:8012/proxy/application_1666603193487_2205/
	 user: mac 
22/11/19 17:46:04 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:05 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:06 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:07 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:08 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:09 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:10 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:11 INFO yarn.Client: Application report for application_1666603193487_2205 (state: FINISHED)
22/11/19 17:46:11 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: sh02
	 ApplicationMaster RPC port: 46195
	 queue: default
	 start time: 1668851157430
	 final status: SUCCEEDED
	 tracking URL: http://sh01:8012/proxy/application_1666603193487_2205/
	 user: mac
22/11/19 17:46:12 INFO yarn.Client: Deleted staging directory hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205
22/11/19 17:46:12 INFO util.ShutdownHookManager: Shutdown hook called
22/11/19 17:46:12 INFO util.ShutdownHookManager: Deleting directory /private/var/folders/pc/mj2v_vln4x14q6jylbtnmvx40000gn/T/spark-b39d7673-82ac-471c-8f8a-f667b8b081f2
22/11/19 17:46:12 INFO util.ShutdownHookManager: Deleting directory /usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-b423d166-c45e-429a-b25a-3efde9c1145c

Lines 4-6: Connect to the ResourceManager, request a new application from a cluster made up of 2 NodeManagers, and verify that the memory requested by the application does not exceed the cluster's maximum memory capability, which is 8192 MB per container.

Lines 7-14: Allocate a 4505 MB container for the ApplicationMaster. What does "including 409 MB overhead" mean? As mentioned above, the AM hosts the driver, and when we submitted the job we requested 4 GB (4096 MB) of memory for the driver; 4505 - 4096 = 409, so the RM allocates some extra memory on top of the request (the sketch below shows where the number comes from). The following lines set up the launch context and environment for the AM container and prepare its resources: the local Spark dependency libraries (about 244 MB in my case), the application jar, and the Spark configuration are packaged and uploaded to the HDFS staging directory hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205, where they wait for the application to run. This directory is deleted after the job finishes.
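
The 409 MB matches Spark's default memory-overhead rule on YARN: the overhead is the larger of 384 MB and 10% of the requested memory. A minimal Scala sketch of that arithmetic, assuming spark.driver.memoryOverhead was left at its default:

// Default YARN memory-overhead rule (assuming no explicit spark.driver.memoryOverhead):
// overhead = max(384 MB, 10% of the requested driver memory).
val driverMemoryMb = 4096                                         // --driver-memory 4g
val overheadMb     = math.max(384, (driverMemoryMb * 0.10).toInt) // 409
val amContainerMb  = driverMemoryMb + overheadMb                  // 4505, as reported in the log
println(s"Will allocate AM container, with $amContainerMb MB memory including $overheadMb MB overhead")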

Lines 15-19: Security checks (SecurityManager view/modify ACLs).

Lines 20-21: Submit the application to the RM. Note that the application ID, application_1666603193487_2205, is the same as the directory name used when uploading the resources to HDFS, so this is the application the AM will be launched for.

Lines 22-36: The AM is requesting containers (resources) from the RM to run the job, which is why the state stays ACCEPTED. Why do I say that? Because sometimes ACCEPTED lasts a long time when other jobs are already running in the cluster and there are no spare resources, so we can infer that the AM is still waiting for resources at this point.

Lines 37-55: The job starts running. On line 41 you can see that the AM container was assigned to machine sh02. (sh01: RM, sh02: NM, sh03: NM)

Lines 56-69: The job has finished; the staging files on HDFS and the local temporary directories are deleted. The directory names match the ones created earlier.


What do the client and the driver refer to?

  • Client: the machine on which the spark-submit command is executed is called the client.

  • Driver: the program submitted by the user runs as the driver.

Where does the driver live? First imagine a few servers:

Server  Role
sh01  ResourceManager
sh02  NodeManager
sh03  NodeManager
sh04  holds the big data cluster's client configuration (edge node)

Submit the job from sh04:

  • yarn-cluster mode: this is the scenario described above. The driver is not on the client but in the AM on sh02. While the job runs, the communication between the task containers and the AM (i.e., the driver inside the AM), and between the AM and the RM, has nothing to do with the client; the client only receives what is sent to stdout. Even if the client disappears, the job keeps running.

  • yarn-client mode: the driver lives on the client, and the job cannot run without the client. (For the remaining details, the yarn-client log is at the end; you can analyse it yourself. A small runtime check of the deploy mode is sketched below.)
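
As a quick way to see where the driver ends up, this hypothetical snippet (not part of the original example) reads the deploy mode and prints the host the driver is running on; in cluster mode it would print a cluster node such as sh02, in client mode the submitting machine such as sh04.

import org.apache.spark.sql.SparkSession

// Hypothetical check: print the deploy mode and the host the driver runs on.
object WhereIsTheDriver {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("Where is the driver").getOrCreate()
    val mode  = spark.conf.get("spark.submit.deployMode", "client") // "client" or "cluster"
    val host  = java.net.InetAddress.getLocalHost.getHostName
    println(s"deployMode=$mode, driver host=$host")
    spark.stop()
  }
}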

In real-world development, sh01, sh02, and sh03 would typically be the big data cluster, and sh04 would probably just be an edge node running some scheduling program that submits the jobs.


What is the relationship between a YARN container and an executor?

In a YARN cluster, both the executors and the ApplicationMaster run inside containers. A container here is not a Docker container: it represents a slice of the storage and compute resources of a physical machine. These resources are supervised by the NM and scheduled by the RM, and YARN allocates cluster resources in units of containers. The executor and the ApplicationMaster are processes, and they can only run once their containers have been allocated; a sketch of how the submit flags map onto one executor's container follows below.
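
As a rough illustration (not Spark source code), this sketch shows how the spark-submit flags used above translate into a single executor's container request, assuming the default spark.executor.memoryOverhead of max(384 MB, 10%) and one executor per container:

// Illustrative only: one YARN container hosts exactly one executor process
// (a CoarseGrainedExecutorBackend JVM). Container size = executor memory + overhead.
case class ContainerRequest(memoryMb: Int, vcores: Int)

val executorMemoryMb = 1024 // --executor-memory 1g
val executorCores    = 4    // --executor-cores 4
val overheadMb       = math.max(384, (executorMemoryMb * 0.10).toInt) // assumed default rule

val executorContainer = ContainerRequest(executorMemoryMb + overheadMb, executorCores)
println(executorContainer) // ContainerRequest(1408,4)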


yarn-client log

${SPARK_HOME}/bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode client \
    --driver-memory 4g \
    --executor-memory 1g \
    --executor-cores 4 \
    --queue default \
    ${SPARK_HOME}/examples/jars/spark-examples*.jar \
    10
22/11/19 18:33:36 WARN util.Utils: Your hostname, macdeMacBook-Pro-3.local resolves to a loopback address: 127.0.0.1; using 10.10.9.250 instead (on interface en0)
22/11/19 18:33:36 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/11/19 18:33:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/11/19 18:33:36 INFO spark.SparkContext: Running Spark version 2.4.8
22/11/19 18:33:36 INFO spark.SparkContext: Submitted application: Spark Pi
22/11/19 18:33:36 INFO spark.SecurityManager: Changing view acls to: mac
22/11/19 18:33:36 INFO spark.SecurityManager: Changing modify acls to: mac
22/11/19 18:33:36 INFO spark.SecurityManager: Changing view acls groups to:
22/11/19 18:33:36 INFO spark.SecurityManager: Changing modify acls groups to:
22/11/19 18:33:36 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(mac); groups with view permissions: Set(); users  with modify permissions: Set(mac); groups with modify permissions: Set()
22/11/19 18:33:37 INFO util.Utils: Successfully started service 'sparkDriver' on port 53336.
22/11/19 18:33:37 INFO spark.SparkEnv: Registering MapOutputTracker
22/11/19 18:33:37 INFO spark.SparkEnv: Registering BlockManagerMaster
22/11/19 18:33:37 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/11/19 18:33:37 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/11/19 18:33:37 INFO storage.DiskBlockManager: Created local directory at /usr/local/spark-2.4.8-bin-hadoop2.7/tmp/blockmgr-ea23e012-50a5-4ad2-a2c0-cf40ea020a9e
22/11/19 18:33:37 INFO memory.MemoryStore: MemoryStore started with capacity 2004.6 MB
22/11/19 18:33:37 INFO spark.SparkEnv: Registering OutputCommitCoordinator
22/11/19 18:33:37 INFO util.log: Logging initialized @2435ms to org.spark_project.jetty.util.log.Slf4jLog
22/11/19 18:33:37 INFO server.Server: jetty-9.4.z-SNAPSHOT; built: unknown; git: unknown; jvm 1.8.0_333-b02
22/11/19 18:33:37 INFO server.Server: Started @2564ms
22/11/19 18:33:37 INFO server.AbstractConnector: Started ServerConnector@62b3df3a{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
22/11/19 18:33:37 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@169da7f2{/jobs,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@757f675c{/jobs/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2617f816{/jobs/job,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5d10455d{/jobs/job/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@535b8c24{/stages,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4a951911{/stages/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@55b62629{/stages/stage,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6759f091{/stages/stage/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@33a053d{/stages/pool,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@14a54ef6{/stages/pool/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@20921b9b{/storage,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@867ba60{/storage/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ba745bc{/storage/rdd,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@654b72c0{/storage/rdd/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@55b5e331{/environment,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6034e75d{/environment/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@15fc442{/executors,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3f3c7bdb{/executors/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@456abb66{/executors/threadDump,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2a3a299{/executors/threadDump/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7da10b5b{/static,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1da6ee17{/,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@78d39a69{/api,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@15f193b8{/jobs/job/kill,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2516fc68{/stages/stage/kill,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.10.9.250:4040
22/11/19 18:33:37 INFO spark.SparkContext: Added JAR file:/usr/local/spark/examples/jars/spark-examples_2.11-2.4.8.jar at spark://10.10.9.250:53336/jars/spark-examples_2.11-2.4.8.jar with timestamp 1668854017716
22/11/19 18:33:38 INFO client.RMProxy: Connecting to ResourceManager at sh01/172.16.99.214:8010
22/11/19 18:33:38 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
22/11/19 18:33:38 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
22/11/19 18:33:38 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
22/11/19 18:33:38 INFO yarn.Client: Setting up container launch context for our AM
22/11/19 18:33:38 INFO yarn.Client: Setting up the launch environment for our AM container
22/11/19 18:33:38 INFO yarn.Client: Preparing resources for our AM container
22/11/19 18:33:39 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
22/11/19 18:33:42 INFO yarn.Client: Uploading resource file:/usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-7ecf7a1c-87e6-4f76-8e50-cd1682762c25/__spark_libs__7614795133133378512.zip -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2206/__spark_libs__7614795133133378512.zip
22/11/19 18:37:46 INFO yarn.Client: Uploading resource file:/usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-7ecf7a1c-87e6-4f76-8e50-cd1682762c25/__spark_conf__885526568489264491.zip -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2206/__spark_conf__.zip
22/11/19 18:37:48 INFO spark.SecurityManager: Changing view acls to: mac
22/11/19 18:37:48 INFO spark.SecurityManager: Changing modify acls to: mac
22/11/19 18:37:48 INFO spark.SecurityManager: Changing view acls groups to:
22/11/19 18:37:48 INFO spark.SecurityManager: Changing modify acls groups to:
22/11/19 18:37:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(mac); groups with view permissions: Set(); users  with modify permissions: Set(mac); groups with modify permissions: Set()
22/11/19 18:37:49 INFO yarn.Client: Submitting application application_1666603193487_2206 to ResourceManager
22/11/19 18:37:50 INFO impl.YarnClientImpl: Submitted application application_1666603193487_2206
22/11/19 18:37:50 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1666603193487_2206 and attemptId None
22/11/19 18:37:51 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:51 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1668854270205
	 final status: UNDEFINED
	 tracking URL: http://sh01:8012/proxy/application_1666603193487_2206/
	 user: mac
22/11/19 18:37:52 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:53 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:54 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:55 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:55 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> sh01, PROXY_URI_BASES -> http://sh01:8012/proxy/application_1666603193487_2206), /proxy/application_1666603193487_2206
22/11/19 18:37:55 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
22/11/19 18:37:56 INFO yarn.Client: Application report for application_1666603193487_2206 (state: RUNNING)
22/11/19 18:37:56 INFO yarn.Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 172.16.99.116
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1668854270205
	 final status: UNDEFINED
	 tracking URL: http://sh01:8012/proxy/application_1666603193487_2206/
	 user: mac
22/11/19 18:37:56 INFO cluster.YarnClientSchedulerBackend: Application application_1666603193487_2206 has started running.
22/11/19 18:37:56 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 54084.
22/11/19 18:37:56 INFO netty.NettyBlockTransferService: Server created on 10.10.9.250:54084
22/11/19 18:37:56 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/11/19 18:37:56 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.10.9.250, 54084, None)
22/11/19 18:37:56 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.10.9.250:54084 with 2004.6 MB RAM, BlockManagerId(driver, 10.10.9.250, 54084, None)
22/11/19 18:37:56 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.10.9.250, 54084, None)
22/11/19 18:37:56 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.10.9.250, 54084, None)
22/11/19 18:37:56 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
22/11/19 18:37:56 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@238291d4{/metrics/json,null,AVAILABLE,@Spark}
22/11/19 18:37:56 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
22/11/19 18:37:57 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 10 output partitions
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Parents of final stage: List()
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Missing parents: List()
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
22/11/19 18:37:57 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.0 KB, free 2004.6 MB)
22/11/19 18:37:58 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1358.0 B, free 2004.6 MB)
22/11/19 18:37:58 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.10.9.250:54084 (size: 1358.0 B, free: 2004.6 MB)
22/11/19 18:37:58 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1184
22/11/19 18:37:58 INFO scheduler.DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
22/11/19 18:37:58 INFO cluster.YarnScheduler: Adding task set 0.0 with 10 tasks
22/11/19 18:37:59 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.16.99.116:48068) with ID 2
22/11/19 18:37:59 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, sh02, executor 2, partition 0, PROCESS_LOCAL, 7741 bytes)
22/11/19 18:37:59 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, sh02, executor 2, partition 1, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:37:59 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, sh02, executor 2, partition 2, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:37:59 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, sh02, executor 2, partition 3, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:00 INFO storage.BlockManagerMasterEndpoint: Registering block manager sh02:44398 with 366.3 MB RAM, BlockManagerId(2, sh02, 44398, None)
22/11/19 18:38:02 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on sh02:44398 (size: 1358.0 B, free: 366.3 MB)
22/11/19 18:38:02 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.16.97.106:57790) with ID 1
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, sh03, executor 1, partition 4, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, sh03, executor 1, partition 5, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, sh03, executor 1, partition 6, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, sh03, executor 1, partition 7, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, sh02, executor 2, partition 8, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, sh02, executor 2, partition 9, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 2609 ms on sh02 (executor 2) (1/10)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 2608 ms on sh02 (executor 2) (2/10)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 2622 ms on sh02 (executor 2) (3/10)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2645 ms on sh02 (executor 2) (4/10)
22/11/19 18:38:02 INFO storage.BlockManagerMasterEndpoint: Registering block manager sh03:45892 with 366.3 MB RAM, BlockManagerId(1, sh03, 45892, None)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 378 ms on sh02 (executor 2) (5/10)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 407 ms on sh02 (executor 2) (6/10)
22/11/19 18:38:04 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on sh03:45892 (size: 1358.0 B, free: 366.3 MB)
22/11/19 18:38:05 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 2762 ms on sh03 (executor 1) (7/10)
22/11/19 18:38:05 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 2787 ms on sh03 (executor 1) (8/10)
22/11/19 18:38:05 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 2794 ms on sh03 (executor 1) (9/10)
22/11/19 18:38:05 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 2800 ms on sh03 (executor 1) (10/10)
22/11/19 18:38:05 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
22/11/19 18:38:05 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 8.174 s
22/11/19 18:38:05 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 8.233929 s
Pi is roughly 3.1405671405671405
22/11/19 18:38:05 INFO server.AbstractConnector: Stopped Spark@62b3df3a{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
22/11/19 18:38:05 INFO ui.SparkUI: Stopped Spark web UI at http://10.10.9.250:4040
22/11/19 18:38:05 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
22/11/19 18:38:05 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
22/11/19 18:38:05 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
22/11/19 18:38:05 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
22/11/19 18:38:05 INFO cluster.YarnClientSchedulerBackend: Stopped
22/11/19 18:38:05 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/11/19 18:38:05 INFO memory.MemoryStore: MemoryStore cleared
22/11/19 18:38:05 INFO storage.BlockManager: BlockManager stopped
22/11/19 18:38:05 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
22/11/19 18:38:05 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/11/19 18:38:05 INFO spark.SparkContext: Successfully stopped SparkContext
22/11/19 18:38:05 INFO util.ShutdownHookManager: Shutdown hook called
22/11/19 18:38:05 INFO util.ShutdownHookManager: Deleting directory /private/var/folders/pc/mj2v_vln4x14q6jylbtnmvx40000gn/T/spark-5ece9ef1-aff6-451e-bf36-b637d4afb74d
22/11/19 18:38:05 INFO util.ShutdownHookManager: Deleting directory /usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-7ecf7a1c-87e6-4f76-8e50-cd1682762c25

Origin: blog.csdn.net/yy_diego/article/details/127953198