Building a CDH Cluster on a Windows Server: Configuring an nginx Reverse Proxy for the VMs

1. Only one public IP (on the Windows host) is available; build CDH behind it

  • a. Install nginx on the Windows host and use it as a reverse proxy for the CDH cluster running in VMware VMs
  • b. Configure the nginx reverse proxy
  • c. On the client, add a hosts entry for name resolution: 218.245.1.xx windows s101
    [client --> windows:7180 ⇒ s101:7180 ]
    [client --> windows:50070 ⇒ s101:50070 ]
    [client --> windows:8088 ⇒ s101:8088 ]
server {
  listen 80;
  server_name 127.0.0.1;
  access_log off;

  location / {
    proxy_pass http://s101:7180;
    proxy_redirect     off;   # or: proxy_pass http://s101:7180/; (note the trailing slash)
    
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_set_header   X-Forwarded-Proto $scheme;
    proxy_set_header   Host              $http_host;
    proxy_set_header   X-Real-IP         $remote_addr;
  }
}
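The 50070 and 8088 mappings from step c can be handled the same way, with one `server` block per forwarded port. A minimal sketch extending the config above (it assumes `s101` resolves from the Windows host via its hosts file, and omits the timeout/header directives for brevity):

```nginx
# Hypothetical extra server blocks, one per forwarded port.
server {
  listen 50070;                      # NameNode web UI
  access_log off;
  location / {
    proxy_pass http://s101:50070;
    proxy_set_header Host      $http_host;
    proxy_set_header X-Real-IP $remote_addr;
  }
}

server {
  listen 8088;                       # YARN ResourceManager UI
  access_log off;
  location / {
    proxy_pass http://s101:8088;
    proxy_set_header Host      $http_host;
    proxy_set_header X-Real-IP $remote_addr;
  }
}
```

Listening on the upstream's own port number (rather than multiplexing on 80) keeps the client-side mapping in step c one-to-one.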

2. Missing blocks && NameNode HA broken => both NameNodes are active

blk_1073755565	/hbase/data/default/V_ESBHL_INSPECTION_ROD/.tabledesc/.tableinfo.0000000001
blk_1073755568	/hbase/data/default/V_ESBHL_INSPECTION_ROD/8f316884fd1597ea52bb3589a7877391/.regioninfo
blk_1073755569	/hbase/data/default/V_ESBHL_INSPECTION_ROD/6a9227ab7c8574b12ce5e37a51c98f7a/.regioninfo
blk_1073755579	/hbase/data/hbase/meta/1588230740

Fix:

  • a. Remove the HA configuration, then reconfigure HA
  • b. The missing blocks put HDFS into safe mode; leave safe mode with the command and the cluster is usable again
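A sketch of the commands for step b, using the stock HDFS CLI (run as the hdfs user; the `-delete` step is destructive and only makes sense once no replica of the blocks survives):

```shell
# List the corrupt/missing blocks that triggered safe mode
sudo -u hdfs hdfs fsck / -list-corruptfileblocks

# Leave safe mode manually
sudo -u hdfs hdfs dfsadmin -safemode leave

# If the blocks are truly unrecoverable, delete the affected files
sudo -u hdfs hdfs fsck / -delete
```
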

3. HBase unusable: HMaster fails to start, or HRegionServer fails to start

Fix: check the HMaster log /var/log/hbase/hmaster-xx.out ==> the error is that the clock offset between the RegionServer and the HMaster is too large; synchronizing the clocks fixes it
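A minimal sketch of the clock sync, to be run on every node (the NTP server address is an assumption; substitute your internal one):

```shell
# Step the clock once against an NTP server (hypothetical address)
ntpdate -u ntp.example.com

# Keep it in sync from now on
systemctl enable --now ntpd

# Spot-check the offset across nodes before restarting HBase
date +%s
```
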

4. spark-shell fails to start

[root@s101 ~]# spark-shell 
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
	at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:123)
	at org.apache.spark.deploy.SparkSubmitArguments$$anonfun$mergeDefaultSparkProperties$1.apply(SparkSubmitArguments.scala:123)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.deploy.SparkSubmitArguments.mergeDefaultSparkProperties(SparkSubmitArguments.scala:123)
	at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:109)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:114)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
Type in expressions to have them evaluated.
Type :help for more information.
18/12/13 05:06:31 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
	at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:281)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:140)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
=====Spark on YARN is installed here; the Hadoop HDFS configuration (host IP/domain mappings) had been changed earlier, so redeploying the stale client configuration fixes it
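The NoClassDefFoundError above means the Spark launcher cannot see the Hadoop jars at all. Besides redeploying the client configuration from Cloudera Manager, a quick sanity check is whether the Hadoop classpath is wired into Spark (SPARK_DIST_CLASSPATH is the stock mechanism; this is a sketch, not the CM-managed fix):

```shell
# Confirm the Hadoop jars resolve on this node
hadoop classpath

# If spark-shell still cannot load org.apache.hadoop.fs.FSDataInputStream,
# export the Hadoop classpath before launching
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
spark-shell
```
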

18/12/13 05:06:31 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
	at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:281)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:140)
===Fix: adjust the parameters: raise 'yarn.scheduler.maximum-allocation-mb' and 'yarn.nodemanager.resource.memory-mb' to 2g.
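An alternative to raising the YARN limits is to shrink the request so that executor memory plus the 384 MB overhead fits under the current 1024 MB cap (the values below are illustrative: 512 + 384 = 896 MB < 1024 MB):

```shell
spark-shell --master yarn \
  --executor-memory 512m \
  --driver-memory 512m
```
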

NestedThrowables:
java.sql.SQLException: Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLException: Failed to create database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
	at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
=======spark-shell was started as the hdfs user, which has no permission on this directory; fix: chown -R hdfs:hdfs /var/lib/hive/metastore/

scala> sc.makeRDD(1 to 10).foreach(println)
[Stage 0:>                                                          (0 + 0) / 2]18/12/13 06:03:58 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
18/12/13 06:04:13 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
====Spark on YARN: the cluster has no free resources
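To confirm it really is a resource shortage (rather than, say, unregistered NodeManagers), the stock YARN CLI shows what is free and what is holding it:

```shell
# Per-node state, containers, and memory usage
yarn node -list -all

# Running/pending applications occupying the resources
yarn application -list
```
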

5. jps shows: process information unavailable

[root@s101 ~]# jps
26051 -- process information unavailable
26000 -- process information unavailable
25998 -- process information unavailable
26333 Jps
16382 -- process information unavailable
====root has no permission to read those processes' info: su hdfs / su yarn and run jps again
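jps resolves JVMs through per-user files under /tmp/hsperfdata_&lt;user&gt;, so root cannot always read entries owned by hdfs or yarn. A sketch of the workaround (the PID is taken from the jps output above):

```shell
# Run jps as the user that owns the JVMs
sudo -u hdfs jps
sudo -u yarn jps

# Or match PIDs to owners directly
ls -l /tmp/hsperfdata_*
ps -fp 26051
```
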

Reposted from blog.csdn.net/eyeofeagle/article/details/84964473