Hadoop Distributed Cluster Configuration

1. Configure environment variables (the JDK must already be installed)

export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
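
To make these variables survive new shells, a common approach (an assumption here, not part of the original steps) is to append them to ~/.bashrc and verify the result with hadoop's own version command:

echo 'export HADOOP_HOME=/usr/local/hadoop' >> ~/.bashrc
echo 'export PATH=$PATH:$HADOOP_HOME/bin' >> ~/.bashrc
source ~/.bashrc
hadoop version   # should print the installed Hadoop release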

2. Configure conf/hadoop-env.sh

export JAVA_HOME=/usr/local/java/jdk1.7.0_45   # required
export HADOOP_HEAPSIZE=512
export HADOOP_PID_DIR=/home/$USER/pids
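
HADOOP_PID_DIR moves the daemons' pid files out of /tmp, where periodic cleanup could delete them. The directory should exist and be writable by the user running Hadoop; a minimal sanity check, assuming the JDK path above:

mkdir -p /home/$USER/pids
/usr/local/java/jdk1.7.0_45/bin/java -version   # confirm JAVA_HOME points at a working JDK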

3. Change the hostname

sudo vi /etc/hostname
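
/etc/hostname should contain just the node's name: master on the master, slave1 on the first slave, and so on. As a sketch (assuming a Linux where the hostname command applies the change without a reboot), on the master:

echo master | sudo tee /etc/hostname
sudo hostname master   # apply immediately; otherwise it takes effect after reboot
hostname               # verify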

4. Configure /etc/hosts (identically on every node, so each host can resolve the others by name)

192.168.1.110 master
192.168.1.101 slave1
192.168.1.109 slave2
192.168.1.108 slave3
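
A quick name-resolution and connectivity check from the master, for example:

ping -c 3 slave1
ping -c 3 slave2
ping -c 3 slave3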

5. Edit conf/core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/${user.name}/tmp</value>
  </property>
</configuration>
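
fs.default.name tells clients where the NameNode RPC service listens; hadoop.tmp.dir is the base directory for Hadoop's working files. ${user.name} in the XML resolves to the user running Hadoop, i.e. $USER in the shell. Creating the directory up front on every node (an extra precaution, not in the original steps) avoids permission surprises:

mkdir -p /home/$USER/tmp   # run on master and every slave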

6. Edit conf/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/${user.name}/dfs_name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/${user.name}/dfs_data</value>
  </property>
</configuration>
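
dfs.replication is set to 3 to match the three DataNodes; dfs.name.dir holds the NameNode's metadata and dfs.data.dir holds the block storage. A sketch of pre-creating them, assuming the same home-directory layout as above:

mkdir -p /home/$USER/dfs_name   # on the master (NameNode)
mkdir -p /home/$USER/dfs_data   # on each slave (DataNode)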

7. Edit conf/mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/home/${user.name}/mapred_system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/${user.name}/mapred_local</value>
  </property>
</configuration>
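
Note that mapred.system.dir is a path inside HDFS used by the JobTracker for control files, while mapred.local.dir is scratch space on each node's local disk. The local directory can be created ahead of time on every node, for example:

mkdir -p /home/$USER/mapred_local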

8. Edit conf/masters (in Hadoop 1.x this file actually lists the SecondaryNameNode host, here the master itself)

master

9. Edit conf/slaves

slave1
slave2
slave3
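
With all nine steps done, the configuration has to reach every node before the cluster can start. A hedged sketch of the remaining sequence (assuming the Hadoop 1.x bin/ layout and passwordless SSH from the master to each slave, which the start scripts require):

# copy the finished conf/ directory to each slave (run on master)
for host in slave1 slave2 slave3; do
  scp -r /usr/local/hadoop/conf $host:/usr/local/hadoop/
done

# one-time format of the HDFS namespace (run on master only)
hadoop namenode -format

# start the HDFS and MapReduce daemons across the cluster
start-all.sh

# verify: master should show NameNode/SecondaryNameNode/JobTracker,
# each slave should show DataNode/TaskTracker
jps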


Reposted from sosop.iteye.com/blog/2055798