Lesson 2: Configuring Hadoop Pseudo-Distributed Mode

I. Overview: 

II. Modify the configuration files:

1. core-site.xml modifications:

From the Hadoop installation directory, go into the etc/hadoop folder, where the configuration files live.

Edit core-site.xml and add the following between <configuration> and </configuration>:

<!-- Specify the address of the NameNode in HDFS -->
<property> 
	<name>fs.defaultFS</name>
	<value>hdfs://nodeb1:9000</value> 
</property>
<!-- Specify the storage path for files generated at Hadoop runtime -->
<property> 
	<name>hadoop.tmp.dir</name>
	<value>/opt/hadoop-2.7.2/data/tmp</value> 
</property>

The value on the fourth line should be hdfs://<hostname>:9000, where the hostname must not contain special characters such as underscores (_).

To view or modify the hostname, edit the file /etc/hostname.
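The no-special-characters rule is easy to check with a small script. A minimal sketch (the validate_hostname helper is my own, not part of Hadoop):

```shell
# Hypothetical helper: accept only names that are safe to use in the
# fs.defaultFS URI authority (letters, digits, dots and hyphens).
validate_hostname() {
  case "$1" in
    *[!A-Za-z0-9.-]*) echo "invalid: $1"; return 1 ;;
    *)                echo "ok: $1";      return 0 ;;
  esac
}
```

For example, validate_hostname nodeb1 passes, while validate_hostname node_b1 is rejected because of the underscore.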

2. hadoop-env.sh changes:

Obtain the JAVA_HOME path, then fill it into the export JAVA_HOME line of hadoop-env.sh.
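One way to obtain the path is to resolve the java binary itself; a sketch, assuming java is on the PATH:

```shell
# Resolve the real path of the java binary (readlink -f follows the
# symlink chain), then strip the trailing /bin/java to get a
# JAVA_HOME candidate.
JAVA_PATH=$(readlink -f "$(which java)")
JAVA_HOME_DIR=${JAVA_PATH%/bin/java}
echo "export JAVA_HOME=$JAVA_HOME_DIR"   # paste this line into hadoop-env.sh
```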

3. hdfs-site.xml changes: 

Open the file and add the following between <configuration> and </configuration>:

<!-- Specify the number of HDFS replicas -->
<property> 
	<name>dfs.replication</name>
	<value>1</value> 
</property>
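To confirm the edit, you can grep the value back out of the file. A small sketch with a helper of my own (get_prop is not a Hadoop command), assuming the <name>/<value> layout shown above:

```shell
# Hypothetical helper: extract a property's <value> from a *-site.xml
# file, assuming <name> and <value> sit on adjacent lines.
get_prop() {   # usage: get_prop <property-name> <file>
  grep -A1 "<name>$1</name>" "$2" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}
```

After the edit above, get_prop dfs.replication etc/hadoop/hdfs-site.xml should print 1.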

III. Start the cluster:

1. Format the NameNode:

From the Hadoop installation directory, enter the command:

bin/hdfs namenode -format

2. Start NameNode:

sbin/hadoop-daemon.sh start namenode

After this step, skip ahead to step 4 to check the cluster status; the NameNode should now be running. If it did not start, refer to step 5.

3. Start DataNode:

sbin/hadoop-daemon.sh start datanode

4. Check the cluster startup status:

Enter the jps command to check whether the daemons started successfully.
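jps (shipped with the JDK) lists running Java processes; in a healthy pseudo-distributed setup, both a NameNode and a DataNode entry should appear. A sketch that checks jps-style output for both (the check_daemons helper is my own):

```shell
# Hypothetical helper: read "PID Name" lines on stdin and verify that
# both HDFS daemons appear.
check_daemons() {
  input=$(cat)
  for d in NameNode DataNode; do
    echo "$input" | grep -q " $d\$" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons running"
}
# Usage: jps | check_daemons
```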

5. Troubleshooting errors:

Go into the logs folder under the Hadoop installation directory and inspect the log file for the failing node to identify the error. After fixing it, stop the NameNode and DataNode, delete the logs folder, and then re-initialize (re-format) the NameNode.


Origin blog.csdn.net/sinat_40471574/article/details/104863006