Hadoop single node setup

1. Download Hadoop from apache.org
http://archive.apache.org/dist/hadoop/core/
Pick a stable release, e.g. hadoop-2.7.1.tar.gz

2. Upload Hadoop to the Linux host
mkdir /host01
then FTP the tarball to /host01

3. Extract to /host01
tar -zxvf /host01/hadoop-2.7.1.tar.gz -C /host01


4. Edit the Hadoop 2.x configuration files under $HADOOP_HOME/etc/hadoop

1) hadoop-env.sh
vim hadoop-env.sh
# set JAVA_HOME (note: Hadoop 2.7 requires Java 7 or later, so point this at a JDK 7+ install)
export JAVA_HOME=/usr/java/jdk1.6.0_45

2) core-site.xml

<!-- configure the HDFS NameNode address -->
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost.localdomain:9000</value>
</property>
<!-- configure the Hadoop runtime (scratch) directory -->
<property>
        <name>hadoop.tmp.dir</name>
        <value>/host01/hadoop-2.7.1/tmp</value>
</property>
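These property snippets go inside the <configuration> root element that each file already contains. As a minimal sketch of what the finished core-site.xml body looks like (written to a throwaway /tmp copy, a hypothetical path used here only to check that the configured value reads back):

```shell
# Sketch: the properties above live inside <configuration>...</configuration>.
# /tmp/core-site.xml is a throwaway copy for illustration, not the real config path.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://localhost.localdomain:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/host01/hadoop-2.7.1/tmp</value>
        </property>
</configuration>
EOF
# read the NameNode address back out of the file
sed -n 's:.*<value>\(hdfs[^<]*\)</value>.*:\1:p' /tmp/core-site.xml
```

The real file to edit is $HADOOP_HOME/etc/hadoop/core-site.xml; the same "properties inside <configuration>" shape applies to the hdfs-site.xml, mapred-site.xml, and yarn-site.xml snippets below.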

3) hdfs-site.xml

<!-- set the HDFS replication factor to 1 (single node, so no extra copies) -->
<property>
        <name>dfs.replication</name>
        <value>1</value>
</property>

4) mapred-site.xml
mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml

<!-- run MapReduce on YARN -->
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>

5) yarn-site.xml

<!-- the address of the ResourceManager, YARN's master -->
<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost.localdomain</value>
</property>
<!-- the auxiliary service reducers use to fetch map output -->
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>


5. Add Java and Hadoop to the environment variables
vim /etc/profile

export JAVA_HOME=/usr/java/jdk1.6.0_45
export HADOOP_HOME=/host01/hadoop-2.7.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

source /etc/profile
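To sanity-check that the exports took effect, a small sketch (using this guide's example paths; on a real install, `hadoop version` and `java -version` are the definitive checks):

```shell
# Sketch: confirm the exported directories landed on PATH.
# Paths are this guide's example locations; adjust to your install.
export JAVA_HOME=/usr/java/jdk1.6.0_45
export HADOOP_HOME=/host01/hadoop-2.7.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
case ":$PATH:" in
        *":$HADOOP_HOME/bin:"*) echo "PATH OK" ;;
        *) echo "PATH is missing HADOOP_HOME/bin" ;;
esac
```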

6. Start Hadoop
1) Format the NameNode
hdfs namenode -format
On success the log ends with:
INFO common.Storage: Storage directory /host01/hadoop-2.7.1/tmp/dfs/name has been successfully formatted.
2) Start the daemons (the start scripts connect over SSH, so passwordless SSH to localhost should be set up first)
Start HDFS first:
sbin/start-dfs.sh
then start YARN:
sbin/start-yarn.sh
3) Verify with jps
11836 SecondaryNameNode
11598 DataNode
12229 NodeManager
12533 Jps
11470 NameNode
12118 ResourceManager

http://hostip:50070 (HDFS web UI; the YARN web UI is on port 8088)

Reprinted from arlenye.iteye.com/blog/2227233