Hadoop cluster configuration: a record of pitfalls

Introduction to Experimental Background

We use three hosts to build a Hadoop cluster: master is the master node, and slave1 and slave2 are slave nodes.

Pitfall 1: start-dfs.sh fails, and the slave nodes report "No such file or directory"

error message

(screenshot of the error message)

reason

The Hadoop installation paths are not consistent across the hosts in the cluster.

More specifically: I first installed and configured Hadoop on the master host. When I later copied the entire Hadoop folder to the slave hosts, I did not copy it to the same path as on the master. In other words, if Hadoop is installed under /root/soft on the master host, the Hadoop folder must also be copied to /root/soft on each slave host.

lesson

The Hadoop installation path must be identical on the master node and the slave nodes.
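A small shell sketch of that lesson: derive the destination directory from the local install path, so the scp command always mirrors the master's layout on the slave. The paths and the slave1 hostname are taken from this article; passwordless SSH is assumed, and the sketch only echoes the command rather than running it:

```shell
# Copy the local Hadoop directory to a slave at the SAME path.
HADOOP_DIR=/root/soft/hadoop-2.7.3   # install path used in this article
DEST_DIR=$(dirname "$HADOOP_DIR")    # /root/soft -- reuse the same parent dir on the slave

# Echo the command instead of executing it, so the sketch is safe to try anywhere:
echo "scp -r $HADOOP_DIR root@slave1:$DEST_DIR"
```

Because DEST_DIR is computed from HADOOP_DIR, the slave ends up with hadoop-2.7.3 under the same parent directory as on the master.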

Pitfall 2: the Hadoop folder cannot be transferred to slave1 and slave2

reason

The command was typed incorrectly: it contained one extra space.

lesson

scp -r hadoop-2.7.3 root@slave1:/root/soft

There must be no space between the : and the destination path.
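To see why the extra space breaks the command, here is a runnable illustration (no scp is actually invoked): the shell splits "root@slave1: /root/soft" into two separate arguments, so scp would treat /root/soft as an extra local path instead of part of the remote destination.

```shell
# What scp receives after the mistyped command (space after the colon):
set -- root@slave1: /root/soft
echo "$# arguments: '$1' and '$2'"

# With the correct form there is a single destination argument:
set -- root@slave1:/root/soft
echo "$# argument: '$1'"
```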

Pitfall 3: the NameNode process fails to start

error message

(At first I did not think this output was obviously an error message, so I ignored it; in fact, it was already indicating that the input parameter format was incorrect.)

(screenshot of the error message)

reason

The formatting (initialization) command was entered incorrectly.

This was probably caused by wrong characters in the typed command: full-width (Chinese) characters were entered in place of plain English (ASCII) characters.

If the NameNode process fails to start for this reason, then:

① the name folder that stores the master node's data is empty;

② the log files (in the logs folder under hadoop-2.7.3) contain the error message shown in the figure below.

(screenshot of the log error message)

lesson

The correct writing is as follows:

hdfs namenode -format

It is better to type commands from the guide document (a Word file) by hand instead of copying and pasting, because hidden characters may cause errors. Typos in the guide document itself also need to be screened out carefully.

Of course, reading the logs carefully also helps with troubleshooting.
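As a quick guard against hidden or full-width characters after pasting, a command string can be checked for non-ASCII bytes before it is run. This is a generic sketch, not part of Hadoop; the en-dash in the first example stands in for a typical paste error:

```shell
check_ascii() {
  # Flag the string if it contains any byte outside printable ASCII (space..tilde).
  if printf '%s' "$1" | LC_ALL=C grep -q '[^ -~]'; then
    echo "non-ASCII character found - retype the command"
  else
    echo "clean"
  fi
}

check_ascii 'hdfs namenode –format'   # en-dash instead of '-': gets flagged
check_ascii 'hdfs namenode -format'   # correct ASCII form: reported clean
```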

other possible causes

1. Configuration file error

The path where I initially installed Hadoop differed from the installation path used in the guide document. If you blindly follow the guide, modifying configuration files and copy-pasting commands, the mismatch will eventually surface at some step.

lesson

Remember your own Hadoop installation path, and whenever a path such as /root/soft/hadoop-2.7.3 or /home appears in the guide document, be careful to replace it with your own installation path.

For example:

creation of the folders that store the cluster's master node data, slave node data, and temporary data;

the temporary-data storage path of the Hadoop cluster configured in the core-site.xml file;

the storage paths of the master node data (the name folder) and the slave node data (the data folder) configured in the hdfs-site.xml file;

the Hadoop path settings in the /etc/profile file;

the paths in the commands that transfer the Hadoop folder from master (the master node) to slave1 and slave2 (the slave nodes).
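For concreteness, here is roughly where those paths live, using the article's /root/soft/hadoop-2.7.3 install path. The property names are the standard Hadoop 2.x ones; the tmp/, hdfs/name, and hdfs/data subdirectories are assumptions to be replaced with your own folders:

```xml
<!-- core-site.xml: temporary data path of the cluster -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/root/soft/hadoop-2.7.3/tmp</value>
</property>

<!-- hdfs-site.xml: master node (name) and slave node (data) storage paths -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/root/soft/hadoop-2.7.3/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/root/soft/hadoop-2.7.3/hdfs/data</value>
</property>
```

If you installed Hadoop elsewhere, every one of these values must be changed consistently.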

2. The slave node names are misspelled in the slaves file

Check the slaves file under etc/hadoop in hadoop-2.7.3 and make sure the slave node names are spelled correctly.
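A small sketch of that check: read the slaves file and compare each entry against the expected hostnames (slave1 and slave2 from this article). The demo writes a sample file under /tmp so it can run anywhere; in practice you would read $HADOOP_HOME/etc/hadoop/slaves instead.

```shell
SLAVES_FILE=/tmp/slaves_demo                 # stand-in for $HADOOP_HOME/etc/hadoop/slaves
printf 'slave1\nslave2\n' > "$SLAVES_FILE"   # sample content for the demo

while read -r host; do
  case "$host" in
    slave1|slave2) echo "$host: ok" ;;
    *)             echo "$host: UNEXPECTED - check the spelling" ;;
  esac
done < "$SLAVES_FILE"
```

A typo such as "salve1" would be reported as UNEXPECTED instead of ok.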

Source: blog.csdn.net/Mocode/article/details/123505691