A. Install Java
Download Java
Download the appropriate JDK from the official website; I am using jdk-7u79-linux-x64.tar.gz. The Java environment variable configuration below uses this JDK version as an example.
Create a java directory
Create a java directory under /usr/local to hold the decompressed JDK
cd /usr/local
mkdir java
Decompress the JDK
Enter the java directory
cd java
tar zxvf jdk-7u79-linux-x64.tar.gz
Configure environment variables
- Edit the profile file
cd /etc
vim profile
- Append the following to the file
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
export JRE_HOME=/usr/local/java/jdk1.7.0_79/jre
export PATH=$PATH:/usr/local/java/jdk1.7.0_79/bin
export CLASSPATH=./:/usr/local/java/jdk1.7.0_79/lib:/usr/local/java/jdk1.7.0_79/jre/lib
- Reload the profile file
source /etc/profile
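To confirm the new environment variables are picked up, a quick check (the exact output depends on the JDK build you installed):
java -version
echo $JAVA_HOME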
B. Install Hadoop
Download Hadoop
Download the version you need from the Hadoop download page; I downloaded hadoop-2.7.3.tar.gz. The installation and configuration below use this Hadoop version as an example.
Create a hadoop directory
Create a hadoop directory under the user's home directory to hold the decompressed Hadoop
cd /home/username
mkdir hadoop
[Note]
username is a placeholder for your actual user name; adjust it to your environment
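The archive still needs to be unpacked into this directory before the next step; a minimal sketch, assuming hadoop-2.7.3.tar.gz has been downloaded into /home/username/hadoop:
cd /home/username/hadoop
tar zxvf hadoop-2.7.3.tar.gz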
Configure hadoop-env.sh
- Edit the hadoop-env.sh file
cd /home/username/hadoop/hadoop-2.7.3/etc/hadoop/
chmod +x hadoop-env.sh
vim hadoop-env.sh
- Change
export JAVA_HOME=${JAVA_HOME}
to
export JAVA_HOME=/usr/local/java/jdk1.7.0_79
- Run hadoop-env.sh
./hadoop-env.sh
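To confirm the edit took effect, a quick sanity check; the output should show the hard-coded path /usr/local/java/jdk1.7.0_79:
grep "export JAVA_HOME" hadoop-env.sh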
Configure core-site.xml
vim core-site.xml
Specify the address of the HDFS NameNode and the directory used to store files generated while Hadoop is running
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop-0:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop/tmp</value>
</property>
</configuration>
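hadoop.tmp.dir above points to /usr/local/hadoop/tmp. Creating that directory up front and handing it to the user who runs Hadoop avoids permission problems; a sketch, assuming that user is username:
sudo mkdir -p /usr/local/hadoop/tmp
sudo chown -R username /usr/local/hadoop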
Configure hdfs-site.xml
vim hdfs-site.xml
Specify the HDFS replication factor and the NameNode/DataNode storage directories
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:/usr/local/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:/usr/local/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-0:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
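The dfs.name.dir and dfs.data.dir directories referenced above can likewise be created in advance; a sketch, again assuming Hadoop runs as username:
sudo mkdir -p /usr/local/hadoop/hdfs/name /usr/local/hadoop/hdfs/data
sudo chown -R username /usr/local/hadoop/hdfs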
Configure mapred-site.xml
1. Rename the template file
mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
2. Add the following configuration
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop-0:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop-0:19888</value>
</property>
</configuration>
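Note that start-all.sh does not launch the JobHistory server configured above. If you want the history web UI on hadoop-0:19888, it can be started separately; a sketch, run from the sbin directory on hadoop-0:
./mr-jobhistory-daemon.sh start historyserver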
Configure yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop-0:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop-0:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop-0:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop-0:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop-0:8088</value>
</property>
</configuration>
Configure masters
vim masters
# file contents:
hadoop-0
Configure slaves
vim slaves
# file contents:
hadoop-1
hadoop-2
hadoop-3
hadoop-4
hadoop-5
After completing the above configuration, copy the hadoop directory to every machine in the cluster (for example with scp, as sketched below), then perform the following steps.
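A minimal sketch of copying the configured directory to one of the slaves (repeat for each host; this assumes the same user and home directory layout on every machine):
scp -r /home/username/hadoop hadoop-1:/home/username/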
Format the NameNode (this only needs to be done once)
hadoop namenode -format
cd /home/username/hadoop/hadoop-2.7.3/sbin
./start-all.sh
# Enter the password when prompted. If it complains about JAVA_HOME, the JAVA_HOME path in hadoop-env.sh was not changed; fix it and run hadoop-env.sh again.
#console
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop-0]
intel@hadoop-0's password:
hadoop-0: starting namenode, logging to /usr/local/hadoop/hadoop-2.7.3/logs/hadoop-intel-namenode-hadoop-0.out
hadoop-4: starting datanode, logging to /usr/local/hadoop/hadoop-2.7.3/logs/hadoop-intel-datanode-hadoop-4.out
hadoop-1: starting datanode, logging to /usr/local/hadoop/hadoop-2.7.3/logs/hadoop-intel-datanode-hadoop-1.out
hadoop-2: starting datanode, logging to /usr/local/hadoop/hadoop-2.7.3/logs/hadoop-intel-datanode-hadoop-2.out
hadoop-5: starting datanode, logging to /usr/local/hadoop/hadoop-2.7.3/logs/hadoop-intel-datanode-hadoop-5.out
hadoop-3: starting datanode, logging to /usr/local/hadoop/hadoop-2.7.3/logs/hadoop-intel-datanode-hadoop-3.out
Starting secondary namenodes [hadoop-0]
intel@hadoop-0's password:
hadoop-0: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-2.7.3/logs/hadoop-intel-secondarynamenode-hadoop-0.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/hadoop-2.7.3/logs/yarn-intel-resourcemanager-hadoop-0.out
hadoop-4: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.7.3/logs/yarn-intel-nodemanager-hadoop-4.out
hadoop-5: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.7.3/logs/yarn-intel-nodemanager-hadoop-5.out
hadoop-1: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.7.3/logs/yarn-intel-nodemanager-hadoop-1.out
hadoop-2: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.7.3/logs/yarn-intel-nodemanager-hadoop-2.out
hadoop-3: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.7.3/logs/yarn-intel-nodemanager-hadoop-3.out
Test
1. Run the jps command
jps
2. If the console output looks like the following, the installation and configuration succeeded
# Output on the master
7398 NameNode
6898 ResourceManager
7732 SecondaryNameNode
8336 Jps
# Output on the slaves
5505 Jps
5121 NodeManager
4992 DataNode
3. Open the web page http://localhost:8088/cluster and check that it loads normally; choose the hostname according to your actual environment, in my case it was 172.16.1.23
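Another quick check is to ask HDFS which DataNodes have registered; a sketch, assuming the Hadoop bin directory is on your PATH:
hdfs dfsadmin -report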
Submit the first MapReduce job (wordcount) to the Hadoop cluster
hdfs dfs -mkdir -p /data/input  # create a test directory /data/input on the distributed file system
hdfs dfs -put README.txt /data/input  # copy README.txt from the current directory to the distributed file system
hdfs dfs -ls /data/input  # check that the copied file is present in the file system
#console
Found 1 items
-rw-r--r-- 2 intel supergroup 44 2017-03-27 16:24 /data/input/readme.txt
Submit the word-count job to Hadoop
hadoop jar hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /data/input /data/output/result
#console
17/03/27 16:30:06 INFO client.RMProxy: Connecting to ResourceManager at hadoop-0/172.16.1.23:8032
17/03/27 16:30:07 INFO input.FileInputFormat: Total input paths to process : 1
.
.
.
# view the result
hdfs dfs -cat /data/output/result/part-r-00000
#console
! 1
Hi! 1
I 1
a 1
am 1
file 1
for 1
hadoop 1
mapreduce 1
test 1
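If you rerun the example, note that the job fails when the output directory already exists, so remove it first:
hdfs dfs -rm -r /data/output/result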
Main issues!!
The hadoop directory must be owned by the user that runs Hadoop:
sudo chown -R username:group hadoop/
# general form: sudo chown -R user:group directory
Original: Big Box Hadoop Distributed Cluster Setup & Configuration