Installing and Configuring Hadoop

Go to the Hadoop releases page (http://hadoop.apache.org/releases.html),
download the Hadoop 2.6.0 archive hadoop-2.6.0.tar.gz, and extract it to the target directory:

tar zxvf hadoop-2.6.0.tar.gz -C /usr/local
cd /usr/local
ln -s hadoop-2.6.0 hadoop

The Hadoop configuration is described below.
1) Open /etc/profile and append the following at the end:

export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL

Run **source /etc/profile** to make the settings take effect, then switch to the Hadoop configuration directory /usr/local/hadoop/etc/hadoop to edit the configuration files.
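
As a quick check (in the same shell where /etc/profile was sourced), confirm that the variables and the Hadoop binaries are visible:

echo $HADOOP_INSTALL    # should print /usr/local/hadoop
hadoop version          # should report Hadoop 2.6.0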

2) Configure hadoop-env.sh by setting JAVA_HOME:

export JAVA_HOME=/usr/lib/jvm/java-1.7
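
The exact JDK path varies by system; if you are unsure where it is installed, one common way to locate it (assuming java is already on the PATH) is:

readlink -f $(which java)
# prints the resolved path to the java binary, e.g. .../jre/bin/java;
# JAVA_HOME is the directory above bin (or above jre/bin)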

3) Configure core-site.xml:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://Master:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/root/bigdata/tmp</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
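
Note that in core-site.xml, as in all the XML files below, the <property> elements must sit inside the file's <configuration> root element. A minimal skeleton (using the fs.defaultFS value configured above) looks like this:

<?xml version="1.0"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://Master:9000</value>
    </property>
    <!-- the remaining properties from this step go here -->
</configuration>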

4) Configure yarn-site.xml:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>Master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>Master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>Master:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>Master:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>Master:8088</value>
</property>

5) Configure mapred-site.xml:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>Master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>Master:19888</value>
</property>
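
In the Hadoop 2.6.0 distribution, mapred-site.xml does not exist by default; it is normally created from the bundled template before adding the properties above:

cd /usr/local/hadoop/etc/hadoop
cp mapred-site.xml.template mapred-site.xml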

6) Create the namenode and datanode directories and configure their paths.
① Create the directories:

mkdir -p /hdfs/namenode
mkdir -p /hdfs/datanode

② Configure the paths in hdfs-site.xml:

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hdfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hdfs/datanode</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Master:9001</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>

7) Configure the slaves file by adding every slave node, one hostname (or IP address) per line, for example:
worker1
worker2
……
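
For the hostnames used throughout this configuration (Master, worker1, worker2) to resolve, each node also needs matching entries in /etc/hosts; a sketch, with x.x.x.x standing in for the real IP addresses:

x.x.x.x    Master
x.x.x.x    worker1
x.x.x.x    worker2
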
8) Format the namenode:

/usr/local/hadoop/bin/hadoop namenode -format

At this point, the Hadoop configuration is essentially complete.
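
To verify the setup, the HDFS and YARN daemons can be started with the standard scripts under sbin and checked with jps; a minimal sketch, assuming passwordless SSH from the master to the slaves is already in place:

/usr/local/hadoop/sbin/start-dfs.sh     # starts the NameNode, SecondaryNameNode and DataNodes
/usr/local/hadoop/sbin/start-yarn.sh    # starts the ResourceManager and NodeManagers
jps                                     # lists the Java daemons running on this node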

Reprinted from blog.csdn.net/qq_43688472/article/details/85926837