hadoop pseudo-cluster configuration

1. Set the network adapter of the hadoop virtual machine to host-only, and configure a static IP address

 http://oracle-api.iteye.com/admin/blogs/2304613

 

2. Modify the hostname

     vi  /etc/sysconfig/network

   Verification: reboot, then run hostname to confirm the new name

 

3. Bind the hostname to the IP address

     vi /etc/hosts  

     Add a line: 192.168.56.2 hadoop

    Verification: ping hadoop
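As a sketch, the binding step above can be made idempotent so re-running it never duplicates the entry. The snippet below works on a throwaway copy instead of the real /etc/hosts, so it is safe to try anywhere:

```shell
# Throwaway stand-in for /etc/hosts; on the real machine the target
# would be /etc/hosts itself.
hosts=/tmp/hosts.demo
printf '127.0.0.1 localhost\n' > "$hosts"

entry='192.168.56.2 hadoop'

# Append the binding only when it is not already present.
grep -qF "$entry" "$hosts" || echo "$entry" >> "$hosts"
# A second run is a no-op, so the entry is never duplicated.
grep -qF "$entry" "$hosts" || echo "$entry" >> "$hosts"

cat "$hosts"
```

On the real host, after editing /etc/hosts, ping hadoop remains the verification.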

 

4. Turn off the firewall

    service iptables stop

    Verify: service iptables status

 

5 Disable automatic startup of the firewall

    chkconfig  iptables off

    Verify: chkconfig --list | grep iptables

    Before modification: 0:off 1:off 2:on 3:on 4:on 5:on 6:off

    After modification: 0:off 1:off 2:off 3:off 4:off 5:off 6:off

 

6 SSH password-free login

6.1 Execute commands

      ssh-keygen -t rsa (press Enter at each prompt to accept the defaults)

      The generated key pair is located in ~/.ssh/ (for the root account, in /root/.ssh/)

      Verification: two files, id_rsa and id_rsa.pub, are generated under ~/.ssh/

6.2 Copy the public key file (to log in to a machine without a password, the requesting machine's public key must appear in ~/.ssh/authorized_keys on the target machine; in a pseudo-cluster both are the same host)

      Execute the command under ~/.ssh/: cp id_rsa.pub authorized_keys

      Verification: run ssh localhost (log in to localhost over ssh; no password should be required); log out with: exit
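The two sub-steps of section 6 can be combined into one non-interactive sketch. The key is written to a temporary directory here so a real ~/.ssh is left untouched; on the actual machine the files live in ~/.ssh/ as described above:

```shell
# Temporary stand-in for ~/.ssh so this sketch cannot clobber real keys.
dir=/tmp/ssh-demo
rm -rf "$dir" && mkdir -p "$dir"

# -N '' gives an empty passphrase, -q suppresses output: the
# non-interactive equivalent of pressing Enter at every prompt.
ssh-keygen -t rsa -N '' -f "$dir/id_rsa" -q

# Authorize the public key and restrict permissions; sshd refuses
# key files that are readable or writable by others.
cp "$dir/id_rsa.pub" "$dir/authorized_keys"
chmod 700 "$dir"
chmod 600 "$dir/authorized_keys"

ls "$dir"
```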

 

7.1 Download hadoop-1.2.1.tar.gz and jdk-6u45-linux-i586.bin

http://www.oracle.com/technetwork/java/javase/downloads/index.html

http://mirrors.cnnic.cn/apache/hadoop/core/

7.2 Delete the existing files under /usr/local/:  rm -rf  /usr/local/*

7.3 Copy the installation files:  cp /root/downloads/* /usr/local/

7.4 Grant execute permission

      chmod u+x jdk-6u45-linux-i586.bin

      chmod u+x hadoop-1.2.1.tar.gz

 

8 Install the JDK

8.1 Run ./jdk-6u45-linux-i586.bin to unpack the JDK

8.2 Set PATH: run vi /etc/profile and add the following two lines:

       export JAVA_HOME=/usr/local/jdk1.6.0_45

       export PATH=.:$JAVA_HOME/bin:$PATH

       Save, then run: source /etc/profile to make the changes take effect

Verification: java -version
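As a sketch of what the two profile lines do, the snippet below composes the same PATH in a subshell and checks that the JDK's bin directory is on it, without touching the real /etc/profile (the JDK path is the one assumed in this guide):

```shell
# Run in a subshell so the real environment is not modified.
(
  export JAVA_HOME=/usr/local/jdk1.6.0_45
  export PATH=.:$JAVA_HOME/bin:$PATH

  # After the two exports, $JAVA_HOME/bin precedes the system java on PATH.
  case "$PATH" in
    *"/usr/local/jdk1.6.0_45/bin"*) echo "JDK bin is on PATH" ;;
    *)                              echo "JDK bin missing from PATH" ;;
  esac
)
```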

 

9 Install hadoop

9.1 Unpack:  tar -zxvf hadoop-1.2.1.tar.gz

9.2 Set PATH: run vi /etc/profile and add the following two lines:

       export HADOOP_HOME=/usr/local/hadoop-1.2.1

       export PATH=.:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

       Save, then run: source /etc/profile to make the changes take effect

 9.3 Edit $HADOOP_HOME/conf/hadoop-env.sh

       export JAVA_HOME=/usr/local/jdk1.6.0_45

 9.4 Edit $HADOOP_HOME/conf/core-site.xml

<configuration>
	<property>
		<name>fs.default.name</name>
		<value>hdfs://hadoop:9000</value>
		<description>change to your own hostname</description>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/usr/local/hadoop-1.2.1/tmp</value>
	</property>
</configuration>

 9.5 Edit $HADOOP_HOME/conf/hdfs-site.xml

<configuration>
	<property>
		<name>dfs.replication</name>
		<value>1</value>
	</property>
	<property>
		<name>dfs.permissions</name>
		<value>false</value>
	</property>
</configuration>

 9.6 Edit $HADOOP_HOME/conf/mapred-site.xml

<configuration>
	<property>
		<name>mapred.job.tracker</name>
		<value>hadoop:9001</value>
		<description>change your own hostname</description>
	</property>
</configuration>
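A quick sanity check after editing the three files is to grep each one for the property it must define. The sketch below writes a sample core-site.xml into a temporary directory so it is runnable anywhere; on the real machine the directory would be $HADOOP_HOME/conf, and the same greps apply to hdfs-site.xml and mapred-site.xml:

```shell
conf=/tmp/conf-demo
mkdir -p "$conf"

# Sample standing in for $HADOOP_HOME/conf/core-site.xml (same content
# as the listing above).
cat > "$conf/core-site.xml" <<'EOF'
<configuration>
	<property>
		<name>fs.default.name</name>
		<value>hdfs://hadoop:9000</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/usr/local/hadoop-1.2.1/tmp</value>
	</property>
</configuration>
EOF

# grep -q exits non-zero when a required property is missing, so a
# misconfigured file is caught before formatting HDFS.
grep -q '<name>fs.default.name</name>' "$conf/core-site.xml" \
  && grep -q 'hdfs://hadoop:9000' "$conf/core-site.xml" \
  && echo "core-site.xml ok"
```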

 9.7 Format HDFS

    Run:  hadoop namenode -format

 

10 Start hadoop

     Run:  start-all.sh

     Verification: jps shows 5 Hadoop daemon processes

     Or open the web pages: http://192.168.56.2:50070/  and  http://192.168.56.2:50030/
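For reference, the five daemons jps should list in a healthy Hadoop 1.x pseudo-cluster (besides jps itself) are spelled out in this sketch:

```shell
# HDFS side: NameNode, SecondaryNameNode, DataNode
# MapReduce side: JobTracker, TaskTracker
daemons="NameNode SecondaryNameNode DataNode JobTracker TaskTracker"

for d in $daemons; do
  echo "$d"
done

# Five daemon names in total.
echo "$daemons" | wc -w
```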

 

11 How to reformat HDFS

    Delete /usr/local/hadoop-1.2.1/tmp, then format again
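A sketch of the clean-up: the directory removed is the hadoop.tmp.dir set in core-site.xml. A stand-in path is used here so the sketch is safe to run; on the real machine you would remove /usr/local/hadoop-1.2.1/tmp and then rerun hadoop namenode -format:

```shell
# Stand-in for /usr/local/hadoop-1.2.1/tmp (the hadoop.tmp.dir value).
tmpdir=/tmp/hadoop-tmp-demo
mkdir -p "$tmpdir/dfs/name"   # simulate leftover NameNode metadata

# Removing hadoop.tmp.dir discards all previous HDFS state.
rm -rf "$tmpdir"

# A fresh format is then required (not executed in this sketch):
#   hadoop namenode -format
[ ! -d "$tmpdir" ] && echo "old HDFS state removed"
```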
