Hadoop, HBase and Hive Cluster Installation

Part 1: Remove the default JDK shipped with the Red Hat OS
1: Find the JDK packages installed by default:
   rpm -qa | grep java
2: Remove the JDK:
   rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.21.b17.el6.x86_64

Part 2: Install the Oracle JDK
1: Install as the root user.
2: Create the directory /usr/java.
3: Download the JDK installer jdk-6u43-linux-x64.bin into /usr/java.
4: Make the installer executable:
   chmod +x jdk-6u43-linux-x64.bin
5: Run the JDK installer:
   ./jdk-6u43-linux-x64.bin
6: Add the following environment variables to /etc/profile:
export JAVA_HOME=/usr/java/jdk1.6.0_43
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin

7: Make the configuration take effect by running:
source /etc/profile
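A quick sanity check that the new JDK is the one being picked up (with the paths above, the version reported should be 1.6.0_43):
   java -version
   echo $JAVA_HOME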

Part 3: Host assignment. Add the following entries to the /etc/hosts file on every machine:
192.168.205.23 inm1
192.168.205.24 inm2
192.168.205.25 inm3
192.168.205.26 inm4
192.168.205.27 inm5
192.168.205.28 inm6
192.168.205.29 inm7
192.168.205.30 inm8
192.168.205.31 inm9
192.168.205.32 inm10
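To confirm that name resolution works on each machine, you can ping one of the new hostnames, for example:
   ping -c 1 inm1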


Part 4: Disable the firewall on all machines
chkconfig iptables off
service iptables stop

Part 5: Create the hadoop group and the hadoop user on every machine
1: Create the group: groupadd hadoop
2: Create the user: useradd -g hadoop hadoop
3: Set the password: passwd hadoop

Part 6: Configure SSH on the master machine (inm1)
[hadoop@master ~]$ ssh-keygen -t rsa -P ""
   Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): /home/hadoop/.ssh/id_rsa
[hadoop@master ~]$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
[hadoop@master ~]$ chmod 700 ~/.ssh/
[hadoop@master ~]$ chmod 600 ~/.ssh/authorized_keys
Verify:
[hadoop@master ~]$ ssh localhost
[hadoop@master ~]$ ssh inm1
Copy the SSH public key to the other machines:
[hadoop@master ~]$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@inm2
[hadoop@master ~]$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@inm3
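The slaves and regionservers lists later in this guide also include inm4, so the key most likely needs to be copied there as well; afterwards each hostname should log in without a password prompt, for example:
[hadoop@master ~]$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@inm4
[hadoop@master ~]$ ssh inm2 hostname
[hadoop@master ~]$ ssh inm4 hostname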


Part 7: Install a three-node ZooKeeper cluster
1: Use three servers for ZooKeeper, installed under the hadoop user:
   192.168.205.24, 192.168.205.25, 192.168.205.26
2: Use the Cloudera build of ZooKeeper: zookeeper-3.4.5-cdh4.2.0.tar.gz
3: Unpack it and rename the directory:
   tar -zxf zookeeper-3.4.5-cdh4.2.0.tar.gz
   mv zookeeper-3.4.5-cdh4.2.0/ zookeeper
4: Configure ZooKeeper: create a zoo.cfg file in the conf directory with the following content:
   tickTime=2000
   initLimit=5
   syncLimit=2
   dataDir=/home/hadoop/storage/zookeeper/data
   dataLogDir=/home/hadoop/storage/zookeeper/logs
   clientPort=2181
   server.1=inm2:2888:3888
   server.2=inm3:2888:3888
   server.3=inm4:2888:3888
5: Create the ZooKeeper data and log directories
   /home/hadoop/storage/zookeeper/data
   /home/hadoop/storage/zookeeper/logs
   and create a file named myid in /home/hadoop/storage/zookeeper/data containing the value 1.
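   On inm2, for example, creating these directories and the myid file comes down to the following (same paths as configured in zoo.cfg above):
   mkdir -p /home/hadoop/storage/zookeeper/data /home/hadoop/storage/zookeeper/logs
   echo 1 > /home/hadoop/storage/zookeeper/data/myid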
6: Copy the installed zookeeper and storage directories to the inm3 and inm4 machines:
   scp -r zookeeper inm3:/home/hadoop
   scp -r storage inm3:/home/hadoop
   scp -r zookeeper inm4:/home/hadoop
   scp -r storage inm4:/home/hadoop
   Change the value in the myid file on inm3 to 2.
   Change the value in the myid file on inm4 to 3.
7: Start the ZooKeeper server (on each of the three nodes):
   ./bin/zkServer.sh start
8: Verify the installation:
   ./bin/zkCli.sh -server inm3:2181 
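   To check that the three nodes actually formed a quorum, zkServer.sh also has a status command; run on each node, it should report one leader and two followers:
   ./bin/zkServer.sh status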

Part 8: Install hadoop-2.0.0-cdh4.2.0
Log in as the hadoop user.
1: Unpack the archive with tar -xvzf hadoop-2.0.0-cdh4.2.0.tar.gz, then rename the directory: mv hadoop-2.0.0-cdh4.2.0 hadoop
2: Configure the Hadoop environment variables: edit ~/.bashrc (vi ~/.bashrc) and append the following at the end of the file:
export HADOOP_HOME=/home/hadoop/hadoop
export HIVE_HOME=/home/hadoop/hive
export HBASE_HOME=/home/hadoop/hbase

export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin

3: Make the configuration take effect:
   source .bashrc
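   A quick check that the new PATH is in effect (this only verifies the client-side setup; the daemons are started later):
   hadoop version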
4: Edit the masters and slaves files in the HADOOP_HOME/etc/hadoop directory.
   Contents of the masters file:
   inm1
   Contents of the slaves file:
   inm2
   inm3
   inm4
5: Edit the HADOOP_HOME/etc/hadoop/core-site.xml configuration:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://inm1:9000</value>
  </property>
  
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>Size of read/write buffer used in SequenceFiles.</description>
  </property>
  
  <property>
    <name>io.native.lib.available</name>
    <value>true</value>
  </property>
</configuration>

6: Edit the HADOOP_HOME/etc/hadoop/hdfs-site.xml configuration:
<configuration>
  <property>
      <name>dfs.replication</name>
      <value>3</value>
  </property>
  <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/hadoop/storage/hadoop/tmp</value>
  </property>
  <property>
      <name>dfs.name.dir</name>
      <value>/home/hadoop/storage/hadoop/name</value>
  </property>
  <property>
      <name>dfs.data.dir</name>
      <value>/home/hadoop/storage/hadoop/data</value>
  </property>
  <property>
      <name>dfs.block.size</name>
      <value>67108864</value>
      <description>HDFS blocksize of 64MB for large file-systems.</description>
  </property>
  <property>
      <name>dfs.namenode.http-address</name>
      <value>inm1:50070</value>
  </property>
  <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
  </property>
</configuration>

7: Edit the HADOOP_HOME/etc/hadoop/mapred-site.xml configuration:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>inm1:10020</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>inm1:19888</value>
  </property>
</configuration>
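The two jobhistory addresses above are only served once the JobHistory Server daemon is running; it is typically not started by start-yarn.sh, so it has to be started separately on inm1, for example:
   HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver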

8: Edit the HADOOP_HOME/etc/hadoop/yarn-site.xml configuration:
<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>inm1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>inm1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>inm1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>inm1:8033</value>
  </property>
  <property>
     <name>yarn.resourcemanager.webapp.address</name>
     <value>inm1:8088</value>
   </property>
   <property>
      <description>Classpath for typical applications.</description>
      <name>yarn.application.classpath</name>
      <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,
          $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
          $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
          $YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,
          $YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce.shuffle</value>
   </property>
   <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>

  <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/home/hadoop/storage/yarn/local</value>
   </property>
   <property>
      <name>yarn.nodemanager.log-dirs</name>
      <value>/home/hadoop/storage/yarn/logs</value>
   </property>
   <property>
      <description>Where to aggregate logs</description>
      <name>yarn.nodemanager.remote-app-log-dir</name>
      <value>/home/hadoop/storage/yarn/logs</value>
   </property>

  <property>
      <name>yarn.app.mapreduce.am.staging-dir</name>
      <value>/user</value>
  </property>
</configuration>

9: Sync the hadoop directory to the inm2, inm3 and inm4 machines:
scp -r hadoop inm2:/home/hadoop
scp -r hadoop inm3:/home/hadoop
scp -r hadoop inm4:/home/hadoop

10: Format the HDFS file system (run on the NameNode, inm1):
hadoop namenode -format

11: Start HDFS and YARN; the startup scripts are in the HADOOP_HOME/sbin directory:
./start-dfs.sh
./start-yarn.sh
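To confirm the cluster came up, a few quick checks (jps should show NameNode and ResourceManager on inm1, and DataNode and NodeManager on inm2-inm4; the web UIs listen on the ports configured above):
jps
hdfs dfsadmin -report
hadoop fs -ls /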

Part 9: Install hbase-0.94.2-cdh4.2.0
1: Unpack the archive with tar -xvzf hbase-0.94.2-cdh4.2.0.tar.gz, then rename the directory: mv hbase-0.94.2-cdh4.2.0 hbase
2: Edit the HBASE_HOME/conf/regionservers file and add the hostnames of the machines that will run the HRegionServer process:
   inm2
   inm3
   inm4

3: Edit the HBASE_HOME/conf/hbase-site.xml file:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://inm1/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hadoop/storage/hbase</value>
  </property>
  
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>inm2,inm3,inm4</value>
  </property>
</configuration>

4: Sync the hbase directory to the inm2, inm3 and inm4 machines:
scp -r hbase inm2:/home/hadoop
scp -r hbase inm3:/home/hadoop
scp -r hbase inm4:/home/hadoop

5: Start the HBase cluster from inm1:
HBASE_HOME/bin/start-hbase.sh

6: Run hbase shell to enter the HBase console, then run the list command to verify the installation.
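Beyond list, a small end-to-end check can be run from the same shell (the table and column family names here are arbitrary examples):
   create 'smoke_test', 'cf'
   put 'smoke_test', 'row1', 'cf:col1', 'value1'
   scan 'smoke_test'
   disable 'smoke_test'
   drop 'smoke_test'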

Part 10: Install hive-0.10.0-cdh4.2.0
1: Unpack the archive with tar -xvzf hive-0.10.0-cdh4.2.0.tar.gz, then rename the directory: mv hive-0.10.0-cdh4.2.0 hive
2: Edit the HIVE_HOME/conf/hive-site.xml file:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.205.31:3306/hive?useUnicode=true&amp;characterEncoding=UTF-8</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive2013</value>
    <description>password to use against metastore database</description>
  </property>
  
  <property>
   <name>mapred.job.tracker</name>
   <value>inm1:8031</value>
  </property>
  
  <property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
  </property>
  
  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///home/hadoop/hive/lib/zookeeper-3.4.5-cdh4.2.0.jar,
      file:///home/hadoop/hive/lib/hive-hbase-handler-0.10.0-cdh4.2.0.jar,
      file:///home/hadoop/hive/lib/hbase-0.94.2-cdh4.2.0.jar,
      file:///home/hadoop/hive/lib/guava-11.0.2.jar</value>
  </property>
  
  <property>
    <name>hive.querylog.location</name>
    <value>/home/hadoop/storage/hive/querylog</value>
    <description>
      Location of Hive run time structured log file
    </description>
  </property>
  
  <property>
    <name>hive.support.concurrency</name>
    <description>Enable Hive's Table Lock Manager Service</description>
    <value>true</value>
  </property>
  
  <property>
    <name>hive.zookeeper.quorum</name>
    <description>Zookeeper quorum used by Hive's Table Lock Manager</description>
    <value>inm2,inm3,inm4</value>
  </property>
  
  <property>
    <name>hive.hwi.listen.host</name>
    <value>inm1</value>
    <description>This is the host address the Hive Web Interface will listen on</description>
  </property>
  
  <property>
    <name>hive.hwi.listen.port</name>
    <value>9999</value>
    <description>This is the port the Hive Web Interface will listen on</description>
  </property>
  
  <property>
    <name>hive.hwi.war.file</name>
    <value>lib/hive-hwi-0.10.0-cdh4.2.0.war</value>
    <description>This is the WAR file with the jsp content for Hive Web Interface</description>
  </property>


</configuration>
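The connection settings above assume a MySQL instance on 192.168.205.31 that already has a hive database and a hive account with password hive2013. If that database does not exist yet, a minimal setup sketch on the MySQL server would look something like:
   mysql -u root -p
   CREATE DATABASE hive DEFAULT CHARACTER SET utf8;
   GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive2013';
   FLUSH PRIVILEGES;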
3: Add the MySQL JDBC driver JAR to the HIVE_HOME/lib directory.
4: Enter the Hive console and run show databases to verify that the installation succeeded.
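A slightly more thorough smoke test than show databases is to create and drop a throwaway table (the table name is an arbitrary example); this also exercises the MySQL metastore configured above:
   CREATE TABLE smoke_test (id INT, name STRING);
   SHOW TABLES;
   DROP TABLE smoke_test;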

Reposted from melin.iteye.com/blog/1848637