The complete Hadoop cluster configuration process

I. NAT configuration
1. In VMware: Edit -> Virtual Network Editor -> Change Settings -> remove the original VMnet8 -> add a new VMnet8 -> set it to NAT mode
(switching from Bridged mode to NAT mode reinitializes the NAT network settings)
2. Edit /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE="eth0"
BOOTPROTO="static"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=192.168.126.10    # use the subnet shown in the Virtual Network Editor
NETMASK=255.255.255.0
GATEWAY=192.168.126.2    # gateway shown in the virtual machine's NAT settings
DNS1=202.106.0.20
3. /etc/init.d/network restart    # restart the network
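A quick check that the static address took effect (a sanity step, not in the original write-up):
ifconfig eth0 | grep "inet addr"    # should show 192.168.126.10
ping -c 3 192.168.126.2             # the NAT gateway should answer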

II. Hadoop installation
1. Suspend the virtual machine -> copy the virtual machine's files -> power on the copied virtual machine
2. On the slave machine, edit /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE="eth0"
BOOTPROTO="static"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=192.168.126.11    # use the subnet shown in the Virtual Network Editor
NETMASK=255.255.255.0
GATEWAY=192.168.126.2    # gateway shown in the virtual machine's NAT settings
DNS1=202.106.0.20
3. /etc/init.d/network restart    # restart the network
4. Remove the virtual machine's network adapter and add a new one (the clone otherwise keeps the original NIC's MAC address)
5. Share jdk-6u45-linux-x64.bin and hadoop-1.2.1-bin.tar.gz from Windows to the master virtual machine with VMware file sharing; the shared directory appears under /mnt/hgfs/. Copy the files into place as shown below.
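A minimal copy into place, assuming the VMware share is named "share" (use whatever name you configured):
cp /mnt/hgfs/share/jdk-6u45-linux-x64.bin /usr/local/src/
cp /mnt/hgfs/share/hadoop-1.2.1-bin.tar.gz /usr/local/src/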
6. Install the JDK: run ./jdk-6u45-linux-x64.bin under /usr/local/src/ (it unpacks to jdk1.6.0_45)
Set the global environment variables in ~/.bashrc:
export JAVA_HOME=/usr/local/src/jdk1.6.0_45
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
source ~/.bashrc    # reload the environment variables
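A quick check that the JDK is picked up (not in the original steps):
java -version    # should report java version "1.6.0_45"
which java       # should resolve to /usr/local/src/jdk1.6.0_45/bin/java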
7. Copy the file to the slave: scp -rp jdk-6u45-linux-x64.bin 192.168.126.11:/usr/local/src/
8. Repeat step 6 on each slave.

9. On the master, extract hadoop-1.2.1-bin.tar.gz and edit its configuration:
tar -xzf hadoop-1.2.1-bin.tar.gz
cd hadoop-1.2.1
mkdir tmp
cd conf
vim masters    # file content: master
vim slaves    # file content:
slave1
slave2
vim core-site.xml    # content:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/src/hadoop-1.2.1/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.126.10:9000</value>
    </property>
</configuration>
vim mapred-site.xml    # content:
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>192.168.126.10:9001</value>
    </property>
</configuration>
vim hdfs-site.xml    # content:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>
vim hadoop-env.sh    # append at the end:
export JAVA_HOME=/usr/local/src/jdk1.6.0_45
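A quick smoke test that the Hadoop scripts can find the JDK (an added check, not part of the original steps):
cd /usr/local/src/hadoop-1.2.1
./bin/hadoop version    # should print Hadoop 1.2.1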

10. Configure local name resolution on master and the slaves (every machine must be configured):
vim /etc/hosts
192.168.126.10 master
192.168.126.11 slave1
192.168.126.12 slave2
vim /etc/sysconfig/network
HOSTNAME=master    # fill in per machine: slave1 / slave2
hostname master    # applies it to the running session; per machine: slave1 / slave2
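To verify that name resolution and the hostname took effect (an extra check, not in the original):
hostname            # should print master (or slave1 / slave2)
ping -c 1 slave1    # every host should answer by name
ping -c 1 slave2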

11. Copy hadoop to the slaves:
scp -rp hadoop-1.2.1 192.168.126.11:/usr/local/src/
scp -rp hadoop-1.2.1 192.168.126.12:/usr/local/src/

12. Turn off the firewall (run on every machine):
/etc/init.d/iptables stop
setenforce 0    # put SELinux in permissive mode
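Both commands only last until reboot; to make the change permanent (standard CentOS 6 commands, added here as a suggestion):
chkconfig iptables off                                          # keep iptables from starting at boot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # disable SELinux across reboots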

III. Establish mutual trust between master and the slaves
ssh-keygen    # press Enter through every prompt
cd ~/.ssh
cat id_rsa.pub > authorized_keys    # seed authorized_keys with this machine's public key
Append slave1's and slave2's public keys to authorized_keys as well, then copy authorized_keys out to every slave, as sketched below.
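One concrete way to do the exchange (a sketch assuming the root account; ssh-copy-id ships with OpenSSH):
ssh-copy-id -i ~/.ssh/id_rsa.pub root@master    # run on slave1 and slave2: append each slave's key on master
scp ~/.ssh/authorized_keys root@slave1:~/.ssh/  # run on master: push the combined file out
scp ~/.ssh/authorized_keys root@slave2:~/.ssh/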

Verify the trust: ssh slave1, ssh slave2, and ssh master should each log in without a password.

IV. Start the Hadoop cluster
cd /usr/local/src/hadoop-1.2.1/bin/
./hadoop namenode -format    # initialize HDFS
./start-all.sh               # start the cluster
jps                          # view the daemons (check on every machine)
./hadoop fs -put /etc/passwd /    # write a file into the cluster
./hadoop fs -ls /                 # list files in the cluster
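If everything started cleanly, jps on each machine typically shows the following daemons (the normal Hadoop 1.x layout; process IDs omitted):
jps    # on master: NameNode, SecondaryNameNode, JobTracker, Jps
jps    # on each slave: DataNode, TaskTracker, Jps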

 
