Problems encountered while building a Hadoop environment, and their solutions

1. Before starting Hadoop, passwordless SSH login from the master to the slave hosts works normally, but when Hadoop is started with the command start-all.sh, the master is prompted for the slave hosts' passwords. This indicates an SSH file permissions problem, and the following steps are needed:

1) Go into the .ssh directory and check whether the public and private key files authorized_keys, id_rsa, and id_rsa.pub exist
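
For example, listing the contents of the .ssh directory on the master host shows whether these files are present:

[root@master ~]# ls ~/.ssh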

 

2) If the public and private key files do not exist, run ssh-keygen -t rsa to generate a key pair (this must be done on both the master and slave hosts)
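
A typical invocation looks like the following; pressing Enter at each prompt accepts the default file location and an empty passphrase:

[root@master ~]# ssh-keygen -t rsa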

3) Once the public and private key files have been generated, run cat id_rsa.pub >> authorized_keys to append the generated public key to the authentication file
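
For example, using full paths so the command works from any directory:

[root@master ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys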

4) Copy the master server's public key into each slave server's authorized_keys file:

[root@master ~]# ssh-copy-id -i slave01
[root@master ~]# ssh-copy-id -i slave02

5) Log in to a slave server (ssh slave01) and check whether a password is still required. If it is, the authorized_keys file permissions are the problem; continue with the next step

6) Use the debug command ssh -vvv slave01 to log in to slave01 with verbose debugging output

7) Set the authorized_keys file permissions to 600: chmod 600 ~/.ssh/authorized_keys
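
For example, on the master host (if fixing authorized_keys alone is not enough, sshd also requires that the .ssh directory itself not be group- or world-writable, so setting it to 700 is a common companion fix, added here beyond the original steps):

[root@master ~]# chmod 600 ~/.ssh/authorized_keys
[root@master ~]# chmod 700 ~/.ssh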

 

8) Apply the same settings to slave01: repeat the commands from steps 6 and 7 for that host

9) Once the settings are in place on the master host, log in to slave01 and slave02; if no password is required, the configuration is successful
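
A quick way to verify, for example:

[root@master ~]# ssh slave01
[root@slave01 ~]# exit
[root@master ~]# ssh slave02
[root@slave02 ~]# exit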

10) Stop all Hadoop processes: stop-all.sh

11) Restart Hadoop: start-all.sh

 

2. After starting Hadoop, jps shows that the NameNode process is missing on the master server. The solution is as follows:

1) First stop all Hadoop processes: stop-all.sh

2) Format the NameNode: hdfs namenode -format
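
For example (note that formatting the NameNode re-initializes the HDFS metadata, so only do this on a cluster whose data you can afford to lose):

[root@master ~]# hdfs namenode -format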

3) Restart Hadoop: start-all.sh

4) Run jps; the NameNode process should now be visible
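
On a typical Hadoop 2.x master started with start-all.sh, the output looks roughly like this (process IDs will differ, and the exact set of daemons depends on your configuration):

[root@master ~]# jps
2481 NameNode
2690 SecondaryNameNode
2842 ResourceManager
3170 Jps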

 

 

3. After starting Hadoop, jps shows that the DataNode process is missing on the slave servers (this typically happens after the NameNode has been re-formatted, leaving the DataNodes' stored cluster ID out of sync). The solution is as follows:

1) First stop all Hadoop processes: stop-all.sh

2) Go into the Hadoop installation directory /software/hadoop and delete the hadoopdate folder
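
For example (the /software/hadoop path and the hadoopdate data directory name are taken from this particular setup; adjust them to match your own installation):

[root@master ~]# cd /software/hadoop
[root@master hadoop]# rm -rf hadoopdate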

 

3) Still in the Hadoop installation directory /software/hadoop, delete the logs folder inside hadoop-2.7.3
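
Continuing from the same directory, for example:

[root@master hadoop]# rm -rf hadoop-2.7.3/logs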

 

4) Perform steps 2 and 3 in the terminals of the slave01 and slave02 hosts as well, making sure the folders from steps 2 and 3 are deleted and clean on every node

5) Format the NameNode: hdfs namenode -format

6) Restart Hadoop: start-all.sh

7) Run jps in the terminals of the slave01 and slave02 hosts; the DataNode process should now be visible
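
On a slave node the output looks roughly like this (process IDs will differ):

[root@slave01 ~]# jps
2201 DataNode
2345 NodeManager
2588 Jps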

 

All of the above comes from personal hands-on experience building the environment, along with the solutions that proved effective. I hope it helps you.

Interested friends are also welcome to discuss and explore these issues together.


Origin www.cnblogs.com/zhoushengnan/p/11302161.html