Building Hadoop on CentOS 7

System: CentOS 7

#### 1. Install the JDK

###### 1) Download the JDK

(1) Download from:
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
(2) Install lrzsz:
yum install -y lrzsz
Upload the installation package:
rz

(3) Extract the installation package:
tar -zxvf jdk-8u25-linux-x64.tar.gz

###### 2) Configure environment variables

vim /etc/profile
Add the following configuration:

export JAVA_HOME=/home/hadoop/jdk1.8.0_25/
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

Apply the configuration:
source /etc/profile
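As a quick sanity check, the same exports can be exercised against a temporary file instead of /etc/profile. A minimal sketch (the JDK path is the one assumed in the extraction step above):

```shell
# Write the exports to a temporary file so their effect can be checked
# without modifying /etc/profile. The JDK path is an assumption taken
# from the extraction step above.
cat > /tmp/java_env.sh <<'EOF'
export JAVA_HOME=/home/hadoop/jdk1.8.0_25/
export PATH=$PATH:$JAVA_HOME/bin
EOF

# Sourcing the file has the same effect as `source /etc/profile`.
. /tmp/java_env.sh
echo "JAVA_HOME=$JAVA_HOME"
```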

###### 3) Check the JDK version

java -version

#### 2. Passwordless SSH login

Generate a key pair:
ssh-keygen -t rsa
Append the public key to authorized_keys:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
Test the login; it should no longer prompt for a password:
ssh localhost

#### 3. Install Hadoop

###### 1) Download Hadoop

Download link:
http://mirrors.hust.edu.cn/apache/hadoop/common

###### 2) Install lrzsz

yum install -y lrzsz
Upload the installation package:
rz
Extract the installation package:
tar -zxvf hadoop-2.7.1_64bit.tar.gz

###### 3) Configure environment variables

vim /etc/profile
Add the following configuration:

export HADOOP_HOME=/home/hadoop/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin

Apply the configuration:
source /etc/profile

###### 4) Check the Hadoop version

hadoop version
#### 4. Configure hostnames and addresses

###### 1) View the hostname

hostname

###### 2) Modify the hostname

hostnamectl set-hostname master
###### 3) Set the address of each server in /etc/hosts

vi /etc/hosts
Add one entry per server, using each server's network IP address.
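For example, for a hypothetical three-node cluster (the hostnames and IP addresses below are placeholders; substitute your servers' real addresses), the /etc/hosts entries might look like:

```
192.168.1.10  master
192.168.1.11  slave1
192.168.1.12  slave2
```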
Restart the network for the change to take effect:
/etc/init.d/network restart

#### 5. Configure Hadoop

cd /home/hadoop/hadoop-2.7.1/etc/hadoop
###### 1) Configure hadoop-env.sh

vim hadoop-env.sh
Set JAVA_HOME:
export JAVA_HOME=/home/hadoop/jdk1.8.0_25
###### 2) Configure yarn-env.sh

vim yarn-env.sh
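As with hadoop-env.sh, the usual change in yarn-env.sh is to set JAVA_HOME explicitly (the path assumes the JDK location used earlier):

```
export JAVA_HOME=/home/hadoop/jdk1.8.0_25
```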
###### 3) Configure core-site.xml

Create the tmp directory:
mkdir -p /home/hadoop/tmp

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <!-- Directory where files generated by Hadoop are stored -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
</configuration>


###### 4) Configure hdfs-site.xml

Specify that MapReduce runs on YARN, and configure the JobTracker address and port. (Note that mapred.job.tracker and mapreduce.framework.name are MapReduce properties; they conventionally belong in mapred-site.xml rather than hdfs-site.xml.)

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
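On a single-node (pseudo-distributed) setup, hdfs-site.xml itself also typically sets the block replication factor to 1. This is a common convention, not shown in the original:

```
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
```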


###### 5) Configure yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8089</value>
    </property>
</configuration>


###### 6) Modify slaves

Find the location of the slaves file:
find / -name slaves
Enter the directory:
cd /home/hadoop/hadoop-2.7.1/etc/hadoop
Edit slaves:
vi slaves
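The slaves file lists the hostnames of the DataNode/NodeManager machines, one per line. For a single-node setup it contains just localhost; for a multi-node layout it would list the worker hostnames (hypothetical names shown):

```
slave1
slave2
```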

#### 6. Start Hadoop

###### 1) Format the NameNode

cd /home/hadoop/hadoop-2.7.1/bin
./hdfs namenode -format

###### 2) If formatting fails with an error, core-site.xml may contain extra characters

###### 3) Edit core-site.xml and remove the extra characters

###### 4) Reformat the NameNode
###### 5) Start the cluster (on the master)

cd /home/hadoop/hadoop-2.7.1/sbin
./start-all.sh
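Note that start-all.sh is deprecated in Hadoop 2.x (it prints a deprecation warning); the equivalent is to start HDFS and YARN separately from the same sbin directory:

```
./start-dfs.sh
./start-yarn.sh
```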

###### 6) Check the processes

jps
On a single-node setup, jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager.
#### 7. Simple use of Hadoop

With the cluster running, try a basic HDFS command, for example:
hadoop fs -ls /

Origin: blog.csdn.net/k393393/article/details/91488362