Spark code that uses Hive cannot find the database when the job is submitted to YARN

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/nm-local-dir/usercache/root/filecache/19/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/11/22 10:58:40 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
17/11/22 10:58:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/22 10:58:42 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1511318642565_0002_000001
17/11/22 10:58:43 INFO spark.SecurityManager: Changing view acls to: root
17/11/22 10:58:43 INFO spark.SecurityManager: Changing modify acls to: root
17/11/22 10:58:43 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
17/11/22 10:58:44 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
17/11/22 10:58:44 INFO yarn.ApplicationMaster: Waiting for spark context initialization
17/11/22 10:58:44 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
17/11/22 10:58:44 INFO spark.SparkContext: Running Spark version 1.6.0
17/11/22 10:58:44 INFO spark.SecurityManager: Changing view acls to: root
17/11/22 10:58:44 INFO spark.SecurityManager: Changing modify acls to: root
17/11/22 10:58:44 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
17/11/22 10:58:44 INFO util.Utils: Successfully started service 'sparkDriver' on port 36191.
17/11/22 10:58:44 INFO slf4j.Slf4jLogger: Slf4jLogger started
17/11/22 10:58:45 INFO Remoting: Starting remoting
17/11/22 10:58:45 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.174.24:43438]
17/11/22 10:58:45 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 43438.
17/11/22 10:58:45 INFO spark.SparkEnv: Registering MapOutputTracker
17/11/22 10:58:45 INFO spark.SparkEnv: Registering BlockManagerMaster
17/11/22 10:58:45 INFO storage.DiskBlockManager: Created local directory at /opt/hadoop/nm-local-dir/usercache/root/appcache/application_1511318642565_0002/blockmgr-5fc2424c-0be7-442a-b358-aec990606e96
17/11/22 10:58:45 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB
17/11/22 10:58:45 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/11/22 10:58:45 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/11/22 10:58:45 INFO server.Server: jetty-8.y.z-SNAPSHOT
17/11/22 10:58:45 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:50918
17/11/22 10:58:45 INFO util.Utils: Successfully started service 'SparkUI' on port 50918.
17/11/22 10:58:45 INFO ui.SparkUI: Started SparkUI at http://192.168.174.24:50918
17/11/22 10:58:45 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
17/11/22 10:58:45 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50119.
17/11/22 10:58:45 INFO netty.NettyBlockTransferService: Server created on 50119
17/11/22 10:58:45 INFO storage.BlockManagerMaster: Trying to register BlockManager
17/11/22 10:58:46 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.174.24:50119 with 517.4 MB RAM, BlockManagerId(driver, 192.168.174.24, 50119)
17/11/22 10:58:46 INFO storage.BlockManagerMaster: Registered BlockManager
17/11/22 10:58:46 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@192.168.174.24:36191)
17/11/22 10:58:46 INFO yarn.YarnRMClient: Registering the ApplicationMaster
17/11/22 10:58:46 INFO yarn.YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
17/11/22 10:58:46 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)

org.apache.spark.sql.execution.QueryExecutionException: FAILED: SemanticException [Error 10072]: Database does not exist: spark)

17/11/22 11:01:16 INFO spark.SparkContext: Invoking stop() from shutdown hook
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/sql,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
17/11/22 11:01:16 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
17/11/22 11:01:17 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.174.27:33830
17/11/22 11:01:17 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
17/11/22 11:01:17 INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
17/11/22 11:01:17 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/11/22 11:01:17 INFO storage.MemoryStore: MemoryStore cleared
17/11/22 11:01:17 INFO storage.BlockManager: BlockManager stopped
17/11/22 11:01:17 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/11/22 11:01:18 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/11/22 11:01:18 INFO spark.SparkContext: Successfully stopped SparkContext
17/11/22 11:01:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: org.apache.spark.sql.execution.QueryExecutionException: FAILED: SemanticException [Error 10072]: Database does not exist: spark)
17/11/22 11:01:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/11/22 11:01:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
17/11/22 11:01:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
17/11/22 11:01:19 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1511318642565_0002
17/11/22 11:01:19 INFO util.ShutdownHookManager: Shutdown hook called
17/11/22 11:01:19 INFO util.ShutdownHookManager: Deleting directory /opt/hadoop/nm-local-dir/usercache/root/appcache/application_1511318642565_0002/container_1511318642565_0002_02_000001/tmp/spark-4768e495-77d2-47dc-8d22-537593a6912f
17/11/22 11:01:19 INFO util.ShutdownHookManager: Deleting directory /opt/hadoop/nm-local-dir/usercache/root/appcache/application_1511318642565_0002/spark-b4021122-599e-4c35-b1f4-ee93f192ccba


Cause: when the job is submitted to the YARN cluster, the node that ends up running it may not be the node where Hive is installed. That node has no Hive configuration file (hive-site.xml), so Spark cannot locate the Hive metastore and reports that the database does not exist. The fix is to ship the configuration file along with the job at submit time:
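For the shipped hive-site.xml to help, it must point at the Hive metastore that actually holds the `spark` database. A minimal fragment is sketched below; the thrift host and port are placeholders for your own metastore node, not values taken from this post:

```xml
<!-- hive-site.xml (fragment): tells Spark's Hive client where the
     metastore service lives. Host and port are placeholders. -->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://your-hive-node:9083</value>
  </property>
</configuration>
```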

./spark-submit --master yarn-cluster --jars ../lib/datanucleus-api-jdo-3.2.6.jar,../lib/datanucleus-core-3.2.10.jar,../lib/datanucleus-rdbms-3.2.9.jar --files ../conf/hive-site.xml ../lib/tt.jar
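Passing --files on every submit can be avoided by keeping a copy of hive-site.xml in Spark's conf directory on the machine you submit from; Spark then finds it on the driver classpath without extra flags. This reliably covers yarn-client submits, while for yarn-cluster mode the --files approach above remains the safer bet. The paths here are placeholders, and the copy is demonstrated in scratch directories so the snippet runs anywhere:

```shell
# Scratch stand-ins for $HIVE_HOME/conf and $SPARK_HOME/conf
# (placeholder paths, not taken from the original post).
HIVE_CONF_DIR=$(mktemp -d)
SPARK_CONF_DIR=$(mktemp -d)
echo '<configuration/>' > "$HIVE_CONF_DIR/hive-site.xml"

# The actual step: copy Hive's client config into Spark's conf dir
# so later spark-submit runs pick it up without --files.
cp "$HIVE_CONF_DIR/hive-site.xml" "$SPARK_CONF_DIR/"
ls "$SPARK_CONF_DIR"
```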


Reposted from blog.csdn.net/zhangfengbx/article/details/78702511