"Number of Under-Replicated Blocks" Problem

Disclaimer: This is an original article by the blogger and may not be reproduced without permission. https://blog.csdn.net/mn_kw/article/details/90017647

After running a MapReduce job, I noticed seven Under-Replicated Blocks on the cluster. They show up on the NameNode web UI, and can also be checked by executing the following on the primary node:

$ bin/hadoop fsck / -blocks

The warning went away after deleting the files that caused the problem.

There are two likely causes of this problem:

1. The default block replication factor is 3, but the cluster only has 1 or 2 DataNodes. HDFS cannot place the required number of replicas, so the blocks are reported as under-replicated.

2. When a MapReduce job is submitted, its job files are written with a default replication factor of 10; if the cluster has fewer DataNodes than that, the same warning appears.

This warning does not have much impact on cluster operation.
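To confirm which cause applies, compare the configured replication factor with the number of live DataNodes. A quick check, assuming a standard Hadoop 2.x/3.x installation with the hdfs command on the PATH:

$ hdfs getconf -confKey dfs.replication    # the configured default replication factor
$ hdfs dfsadmin -report                    # the cluster summary shows how many DataNodes are live

If the replication factor (or the job-file replication of 10) is larger than the number of live DataNodes, the under-replicated warning is expected.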

To fix the first cause:

$ hadoop fs -setrep -R 1 /

or

update the dfs.replication property in your hdfs-site.xml:

dfs.replication=1
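For example, a minimal hdfs-site.xml entry for a cluster with a single DataNode (note that this only affects files written after the change; existing files keep their old replication factor until you run -setrep on them):

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>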

To fix the second cause:

modify the following property in mapred-site.xml; it defaults to 10, and we can set it to 3:

mapreduce.client.submit.file.replication
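For example, in mapred-site.xml, with 3 chosen here as an example value; pick a value no larger than the number of DataNodes in your cluster:

<property>
  <name>mapreduce.client.submit.file.replication</name>
  <value>3</value>
</property>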

 

bash$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}'

Use this command to list the specific files that have under-replicated blocks.

# Set replication
bash$ hadoop fs -setrep 3 /file_name

This applies the new replication factor to file_name.
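To fix all under-replicated files at once, the fsck listing above can be combined with -setrep in a small shell loop. A minimal sketch, assuming a target replication factor of 2 (use a value no larger than the number of DataNodes you actually have); the path /tmp/under_replicated_files is just an example:

$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' > /tmp/under_replicated_files
$ while read f; do hadoop fs -setrep 2 "$f"; done < /tmp/under_replicated_files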

 
