Spark: Accessing HDFS in HA Mode

When HDFS runs with NameNode High Availability, clients address the cluster through a logical nameservice instead of a single NameNode host, so a Spark job must carry the HA client configuration. The settings can be applied directly to the SparkContext's Hadoop configuration (here the nameservice is nameservice1, with NameNodes nn1 and nn2 on master and slave1):

sc.hadoopConfiguration.set("fs.defaultFS", "hdfs://nameservice1")

sc.hadoopConfiguration.set("dfs.nameservices", "nameservice1")

sc.hadoopConfiguration.set("dfs.ha.namenodes.nameservice1", "nn1,nn2")

sc.hadoopConfiguration.set("dfs.namenode.rpc-address.nameservice1.nn1", "master:8020")

sc.hadoopConfiguration.set("dfs.namenode.rpc-address.nameservice1.nn2", "slave1:8020")

sc.hadoopConfiguration.set("dfs.client.failover.proxy.provider.nameservice1",

  "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

Reposted from: https://www.jianshu.com/p/c597d31f6fe2
