9: A troubleshooting case: "Operation category READ/WRITE is not supported in state standby"

1: Problem description:

The program was written and packaged in IDEA, then uploaded to hadoop001 and executed. The statistics job failed with the exception shown below.
The relevant files are located as follows:
log.sh: /home/hadoop/shell
G5-Spark-1.0.jar: /home/hadoop/lib
(screenshot of the exception)

2: Troubleshooting process

2.1 Looking the error message up in the Hadoop HA FAQ:

The HDFS high-availability FAQ answers exactly this question; the relevant entry is quoted below.
3.17. What does the message “Operation category READ/WRITE is not supported in state standby” mean?
In an HA-enabled cluster, DFS clients cannot know in advance which namenode is active at a given time. So when a client contacts a namenode and it happens to be the standby, the READ or WRITE operation will be refused and this message is logged. The client will then automatically contact the other namenode and try the operation again. As long as there is one active and one standby namenode in the cluster, this message can be safely ignored.

If an application is configured to contact only one namenode always, this message indicates that the application is failing to perform any read/write operation. In such situations, the application would need to be modified to use the HA configuration for the cluster. The jira HDFS-3447 deals with lowering the severity of this message (and similar ones) to DEBUG so as to reduce noise in the logs, but is unresolved as of July 2015.
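In this case it is the shell script that pins the job to a single NameNode: it passes hdfs://hadoop001:8020/... paths, which is exactly the "application configured to contact only one namenode" situation the FAQ describes. The fix is to address HDFS through the HA nameservice rather than a host:port. Below is a minimal sketch of the idea, not the actual G5-Spark-1.0.jar code: the object name LogStat and the word-count logic are purely illustrative, the nameservice name weizhonggui is taken from the core-site.xml shown later in this post, and the client's hdfs-site.xml is assumed to already define that nameservice and its failover proxy provider.

import org.apache.spark.{SparkConf, SparkContext}

object LogStat {
  def main(args: Array[String]): Unit = {
    // Use the HA nameservice, not a specific NameNode host:port, so the
    // HDFS client automatically talks to whichever NameNode is active.
    val input  = "hdfs://weizhonggui/logs/input/"
    val output = "hdfs://weizhonggui/logs/output/"

    val sc = new SparkContext(new SparkConf().setAppName("LogStat"))
    sc.textFile(input)
      .flatMap(_.split("\\s+"))
      .map((_, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile(output)
    sc.stop()
  }
}

With paths written this way, the HDFS client tries both NameNodes and settles on the active one, so the job keeps working after a failover instead of being rejected by the standby.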

2.2: Checking the state of each NameNode:

[hadoop@hadoop001 shell]$ hdfs haadmin -getServiceState nn1
standby
[hadoop@hadoop001 shell]$ hdfs haadmin -getServiceState nn2
active
This confirmed that the NameNode on hadoop001 really was in standby state, so the shell script and the jar were copied to hadoop002 and run there.
The problem remained.

2.3: A closer look at the shell script showed that the HDFS paths were hard-coded to hadoop001:

hdfs://hadoop001:8020/logs/input/ hdfs://hadoop001:8020/logs/output/
After updating the script's paths to hadoop002 (the active NameNode) and running again:
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://hadoop002:8020/logs/output already exists
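This FileAlreadyExistsException is Hadoop's normal refusal to overwrite an existing output directory and is unrelated to the standby issue. It can be cleared by hand with hdfs dfs -rm -r /logs/output, or handled in the driver, which is the "improve the code" option mentioned in section 2.4 below. A small sketch of the programmatic route (the object and method names here are made up for illustration; the FileSystem calls are the standard Hadoop API):

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object CleanOutput {
  // Remove the job's output directory, if present, so the subsequent write
  // does not fail with FileAlreadyExistsException.
  def deleteIfExists(outputUri: String): Unit = {
    val conf = new Configuration()                          // picks up core-site.xml / hdfs-site.xml
    val fs   = FileSystem.get(URI.create(outputUri), conf)  // FileSystem for this URI, not just the default FS
    val path = new Path(outputUri)
    if (fs.exists(path)) fs.delete(path, true)              // true = recursive delete
  }

  def main(args: Array[String]): Unit =
    deleteIfExists("hdfs://weizhonggui/logs/output/")
}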

2.4: The output directory already existed, so it was deleted (alternatively the code can be improved to remove it before writing) and the job rerun.

This time the exception was:
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://hadoop002:8020/logs/output, expected: hdfs://weizhong***

This exception comes from the fs.defaultFS setting in core-site.xml:
[hadoop@hadoop002 hadoop]$ cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://weizhonggui</value>
    </property>
</configuration>
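For context, "Wrong FS ... expected: hdfs://weizhong***" is thrown when code obtains the default filesystem with FileSystem.get(conf), whose authority is the weizhonggui nameservice configured above, and then hands it a Path naming a different authority (hdfs://hadoop002:8020). A brief sketch of the two usual ways to avoid the mismatch (illustrative code, not the original job):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object WrongFsDemo {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()   // fs.defaultFS = hdfs://weizhonggui, per core-site.xml above

    // Option 1 (preferred): address files through the HA nameservice, which
    // matches the default filesystem, so FileSystem.get(conf) is safe.
    val out1 = new Path("hdfs://weizhonggui/logs/output")
    val fs1  = FileSystem.get(conf)
    println(fs1.exists(out1))

    // Option 2: derive the FileSystem from the Path itself; this avoids
    // "Wrong FS" for any authority, but hard-coding hadoop002:8020 brings
    // back the standby problem from section 2.1 after a failover.
    val out2 = new Path("hdfs://hadoop002:8020/logs/output")
    val fs2  = out2.getFileSystem(conf)
    println(fs2.exists(out2))
  }
}

Option 1 is the better fit for this HA cluster, since it also keeps the job independent of which NameNode happens to be active.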

After the fix, the job ran normally:

[hadoop@hadoop002 shell]$ hadoop fs -ls /logs/output
Found 3 items
-rw-r--r-- 3 hadoop hadoop 0 2018-12-18 17:40 /logs/output/_SUCCESS
-rw-r--r-- 3 hadoop hadoop 199 2018-12-18 17:40 /logs/output/part-00000
-rw-r--r-- 3 hadoop hadoop 279 2018-12-18 17:40 /logs/output/part-00001


Reprinted from blog.csdn.net/weizhonggui/article/details/85068960