After running a MapReduce job, I found there were seven Under-Replicated Blocks on the cluster. The warning also shows up on the NameNode web page; to check from the command line, execute this on the primary node:
$ bin/hadoop fsck / -blocks
The warning clears once the files causing the problem are deleted.
There are two likely causes for this problem:
1. The default block replication factor is 3, but the cluster has only one or two DataNode nodes, so HDFS cannot place three replicas and reports the error.
2. MapReduce job submission files are replicated 10 times by default; if the cluster has fewer DataNode nodes than that, the same error appears.
Either way, this error does not have much impact on cluster operation.
To solve the first problem:
$ hadoop fs -setrep -R 1 /
or update the following property in your hdfs-site.xml file:
dfs.replication=1
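For reference, the corresponding hdfs-site.xml fragment might look like this (the value 1 matches a single-DataNode setup; note this only affects files written after the change, which is why the `setrep` command above is still needed for existing files):

```xml
<!-- hdfs-site.xml: lower the default replication factor to match
     the number of available DataNodes (here: 1). -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```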
To solve the second problem, modify this property in mapred-site.xml; it defaults to 10, and we can set it to 3:
mapreduce.client.submit.file.replication
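A sketch of the corresponding mapred-site.xml entry, with the value lowered to 3 to stay within the DataNode count:

```xml
<!-- mapred-site.xml: replication factor for job submission files
     (job.jar, job.xml, input splits); the default is 10. -->
<property>
  <name>mapreduce.client.submit.file.replication</name>
  <value>3</value>
</property>
```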
bash$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}'
Use this command to list the specific files that have under-replicated blocks.
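As a quick sanity check of that pipeline, here is what the awk step does on a sample fsck line (the sample path and block ID below are made up for illustration; real `hdfs fsck` output will differ):

```shell
# A made-up line in the style of `hdfs fsck /` output:
line="/user/hadoop/data/part-00000:  Under replicated blk_1073741825_1001. Target Replicas is 10 but found 3 replica(s)."
# awk -F':' '{print $1}' keeps only the file path before the first colon:
echo "$line" | awk -F':' '{print $1}'
# → /user/hadoop/data/part-00000
```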
#Set replication for a single file
bash$ hadoop fs -setrep 3 /file_name
This resets the replication factor of file_name to 3.