Hadoop 2.3.0 Installation and Deployment on CentOS 6.5 x86_64

Preparation (omitted here)
1. OS installation
2. JDK installation
 
I. Rebuild Hadoop (required on 64-bit systems; 32-bit systems can start from Part II)
1. yum install svn
2. yum install autoconf automake libtool cmake
3. yum install ncurses-devel
4. yum install openssl-devel
5. yum install gcc*
6. Install Maven
wget http://apache.fayea.com/apache-mirror/maven/maven-3/3.2.1/binaries/apache-maven-3.2.1-bin.tar.gz
tar xzvf ./apache-maven-3.2.1-bin.tar.gz
mv ./apache-maven-3.2.1 /usr/local
Set the environment variables
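A minimal sketch of the Maven environment setup, assuming it was moved to /usr/local/apache-maven-3.2.1 as above; append to /etc/profile:
export MAVEN_HOME=/usr/local/apache-maven-3.2.1
export PATH=$PATH:$MAVEN_HOME/bin
Then run source /etc/profile and check with mvn -version.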
7. Install protobuf
 
wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
tar xzvf ./protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure
make
make check
make install
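To confirm the protobuf install, check the protoc version; if protoc reports a missing shared library, adding /usr/local/lib to the loader path (assuming the default install prefix) usually resolves it:
protoc --version    # should print: libprotoc 2.5.0
echo "/usr/local/lib" > /etc/ld.so.conf.d/protobuf.conf
ldconfig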
8. Obtain the Hadoop source code
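The original post omits the exact command; the 2.3.0 source could be fetched from the Apache archive (URL assumed) or checked out with the svn client installed in step 1, for example:
wget http://archive.apache.org/dist/hadoop/common/hadoop-2.3.0/hadoop-2.3.0-src.tar.gz
tar xzvf hadoop-2.3.0-src.tar.gz
cd hadoop-2.3.0-src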
 
9. Rebuild the native libraries
 
mvn package -Pdist,native -DskipTests -Dtar
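Run from the top of the source tree. If the build succeeds, the 64-bit distribution should appear under hadoop-dist/target/ (e.g. hadoop-dist/target/hadoop-2.3.0.tar.gz), with the rebuilt native libraries in its lib/native directory; that tree is what gets installed in Part II.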
 
II. Hadoop 2.3.0 Installation
1. cp -ri hadoop-2.3.0 /home/
 
 
2. Edit the configuration files
~/hadoop-2.3.0/etc/hadoop/hadoop-env.sh
~/hadoop-2.3.0/etc/hadoop/yarn-env.sh
~/hadoop-2.3.0/etc/hadoop/slaves
~/hadoop-2.3.0/etc/hadoop/core-site.xml
~/hadoop-2.3.0/etc/hadoop/hdfs-site.xml
~/hadoop-2.3.0/etc/hadoop/mapred-site.xml
~/hadoop-2.3.0/etc/hadoop/yarn-site.xml
Some of the files above do not exist by default; they can be created by copying the corresponding .template file.
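For example, mapred-site.xml is the one usually missing in the stock 2.3.0 layout (command assumed):
cp ~/hadoop-2.3.0/etc/hadoop/mapred-site.xml.template ~/hadoop-2.3.0/etc/hadoop/mapred-site.xml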
hadoop-env.sh
 
export JAVA_HOME=/usr/java/jdk1.7.0_25-cloudera

 
 
 
yarn-env.sh
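The original post leaves this edit blank; commonly only JAVA_HOME is set here, mirroring hadoop-env.sh (an assumption, not from the source):
export JAVA_HOME=/usr/java/jdk1.7.0_25-cloudera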
 
slaves
 
hadoop29
hadoop31
hadoop129
 
core-site.xml
 
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop30:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/data/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.hduser.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hduser.groups</name>
<value>*</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop30:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/data/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/data/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
 
mapred-site.xml
 
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop30:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop30:19888</value>
</property>
</configuration>

yarn-site.xml
 
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop30:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop30:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop30:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop30:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop30:8088</value>
</property>
</configuration>
 
Passwordless SSH
ssh-keygen -t rsa
cd /root/.ssh
cat id_rsa.pub >> authorized_keys
 
Merge the id_rsa.pub from each machine into authorized_keys.
 
Copy the assembled authorized_keys to every machine:
 scp -r authorized_keys root@hadoop29:/root/.ssh/
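A minimal sketch of the merge-and-distribute step, assuming the hostnames from the slaves file above and that root SSH still accepts passwords at this point:
# on hadoop30: pull each node's public key into one file
for h in hadoop29 hadoop31 hadoop129; do
  ssh root@$h cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
done
# push the merged file back out to every node
for h in hadoop29 hadoop31 hadoop129; do
  scp /root/.ssh/authorized_keys root@$h:/root/.ssh/
done
chmod 600 /root/.ssh/authorized_keys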
 
 
Create the directories Hadoop needs
 
mkdir -p /home/hadoop/data/hdfs/name
mkdir -p /home/hadoop/data/hdfs/data
mkdir -p /home/hadoop/data/tmp
 
Distribute the Hadoop files to the other machines
 
 scp -r /home/hadoop root@hadoop29:/home/
 
Start the cluster and verify
 
./hadoop namenode -format
../sbin/start-all.sh
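To verify (commands assumed; the two commands above are run from ~/hadoop-2.3.0/bin):
jps                        # NameNode/SecondaryNameNode/ResourceManager on hadoop30, DataNode/NodeManager on the slaves
./hdfs dfsadmin -report    # all three DataNodes should be listed
HDFS web UI: http://hadoop30:50070
YARN web UI: http://hadoop30:8088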
 
Reposted from ssydxa219.iteye.com/blog/2034589