hadoop-0.20.2 + hbase-0.90.3 + zookeeper-3.3.1 Integrated Installation

After going through the official documentation and some installation notes from others, I got hold of 5 machines today to try this out.

1. Configure /etc/hosts on each of the 5 machines as needed
   /etc/hosts
#ip            hostname/alias
192.168.79.102 hadoopcm4  had102 
192.168.79.101 hadoopcm3  had101
192.168.79.100 hadoopcm2  had100
192.168.79.99  hadoopcm1  had99
192.168.79.98  hadoopcm0  had98

had102, had101, and had100 run hadoop, with had102 as the namenode.
had101, had100, and had99 run hbase, with had101 as the hmaster.
had100, had99, and had98 run zk (ZooKeeper).

2. Create the user had on each of the 5 machines and set its password
  useradd had
  passwd had

3. Passwordless SSH configuration
   a. Log in to each of the 5 machines as user had and run the following command to generate an RSA key pair
    [had@hadoopcm4 ~]$ ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/had/.ssh/id_rsa): press Enter for the default path
    Enter passphrase (empty for no passphrase): press Enter for an empty passphrase
    Enter same passphrase again:
    Your identification has been saved in /home/had/.ssh/id_rsa.
    Your public key has been saved in /home/had/.ssh/id_rsa.pub.

    This generates a private key id_rsa and a public key id_rsa.pub under /home/had/.ssh/.
  
   b. Copy id_rsa.pub from each non-namenode node (had98, had99, had100, had101) to the namenode
       cp  id_rsa.pub had98.id_rsa.pub
       scp had98.id_rsa.pub had102:/home/had/.ssh
       ............
       cp  id_rsa.pub had101.id_rsa.pub
       scp had101.id_rsa.pub had102:/home/had/.ssh 
            
   c. On the namenode, combine all public keys (including its own) and distribute the result to every node
      cp id_rsa.pub authorized_keys   ## the namenode's own public key
      cat had98.id_rsa.pub >> authorized_keys
      ....
      cat had101.id_rsa.pub >> authorized_keys
     
     Then copy the combined authorized_keys file into the .ssh directory of every node over SSH
      scp authorized_keys had98:/home/had/.ssh
      ......
      scp authorized_keys had101:/home/had/.ssh

     After this is done, all nodes can SSH to one another without a password; verify it with
     "ssh <node ip address>".
    
4  Install JDK 1.6 or later at the same path on each of the 5 machines and configure it in /etc/profile

5  Upload hadoop-0.20.2.tar.gz, hbase-0.90.3.tar.gz, and zookeeper-3.3.1.tar.gz (already downloaded on your
   Windows machine) to had102, logging in as user had.
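
   For example, with an scp client on the Windows side (such as pscp from PuTTY; the local paths here are
   placeholders):

     pscp hadoop-0.20.2.tar.gz hbase-0.90.3.tar.gz zookeeper-3.3.1.tar.gz had@had102:/home/had/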
  
6  On had102, extract the archives and rename the directories to hadoop, hbase, and zookeeper
   i.e.: /home/had/hadoop    [extracted from hadoop-0.20.2.tar.gz]
         /home/had/hbase     [extracted from hbase-0.90.3.tar.gz]
         /home/had/zookeeper [extracted from zookeeper-3.3.1.tar.gz]
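
   A sketch of the extract-and-rename step, assuming the tarballs were uploaded to /home/had and unpack into
   version-named directories:

     cd /home/had
     tar -xzf hadoop-0.20.2.tar.gz    && mv hadoop-0.20.2    hadoop
     tar -xzf hbase-0.90.3.tar.gz     && mv hbase-0.90.3     hbase
     tar -xzf zookeeper-3.3.1.tar.gz  && mv zookeeper-3.3.1  zookeeper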

7. On had102
     a. Configure /etc/profile
        #set java environment
       JAVA_HOME=/usr/jdk/jdk1.6.0_13
       HADOOP_HOME=/home/had/hadoop
       CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
       PATH=$JAVA_HOME/bin:$PATH
       PATH=$HADOOP_HOME/bin:$PATH
       export JAVA_HOME CLASSPATH HADOOP_HOME  PATH
       export PATH=/sbin:$PATH:/usr/sbin/
     b. Configure hadoop [/home/had/hadoop/conf]
        I. Edit the variables in hadoop-env.sh
           export JAVA_HOME=/usr/jdk/jdk1.6.0_13
           export HADOOP_PID_DIR=/home/had/hadoop/tmp
        II. Edit the masters file
               had102
            Edit the slaves file
               had101
               had100
        III. Edit core-site.xml
            <configuration>
              <property>
                <name>fs.default.name</name>
                <value>hdfs://had102:9000</value>
              </property>
              <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/had/hadoop/tmp</value>
              </property>
            </configuration>
            
             Edit hdfs-site.xml
            <configuration>
              <property>
                <name>dfs.name.dir</name>
                <value>/home/had/hadoop/name</value>
              </property>
              <property>
                <name>dfs.data.dir</name>
                <value>/home/had/hadoop/data1/hdfs,/home/had/hadoop/data2/hdfs,/home/had/hadoop/data3/hdfs</value>
              </property>
              <property>
                <name>dfs.replication</name>
                <value>3</value>
              </property>
            </configuration>

      Edit mapred-site.xml
        <configuration>
         <property>
          <name>mapred.job.tracker</name>
          <value>had102:9001</value>
         </property>
         <property>
          <name>mapred.child.java.opts</name>
          <value>-Xmx768m</value>
         </property>
       </configuration>
      
8.scp -r /home/had/hadoop     had101:/home/had
  scp -r /home/had/hadoop     had100:/home/had
  Change to the /home/had/hadoop directory
  Run bin/hadoop namenode -format (formats the master host, creating the name, data, tmp, etc. directories)
 
9. On had102, start Hadoop
Run bin/start-all.sh
Use the jps command to check that the NameNode and SecondaryNameNode started correctly
Open http://had102:50070 in a browser to view the namenode's configuration, status, and log files
Open http://had102:50030 in a browser to view the JobTracker's configuration, status, and log files
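
A quick sanity check of HDFS from the command line (a sketch; the test path is arbitrary):

  cd /home/had/hadoop
  jps                          # on had102 expect NameNode, SecondaryNameNode, JobTracker
  bin/hadoop dfsadmin -report  # live datanodes and capacity
  bin/hadoop fs -mkdir /test   # create a test directory
  bin/hadoop fs -ls /          # list the HDFS root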

10. On had102, configure hbase
     a. Configure hbase [/home/had/hbase/conf]
       I. Edit hbase-env.sh
          export JAVA_HOME=/usr/jdk/jdk1.6.0_13/
          export HADOOP_HOME=/home/had/hadoop
          export HBASE_HOME=/home/had/hbase
          export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
          export PATH=$JAVA_HOME/bin:$PATH:$HBASE_HOME/bin
          export HBASE_MANAGES_ZK=false
         
    II. Edit hbase-site.xml
    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://had102:9000/hbase</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.master</name>
        <value>had101</value>
      </property>
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>had98,had99,had100</value>
      </property>
      <property>
        <name>zookeeper.session.timeout</name>
        <value>60000000</value>
      </property>
      <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
      </property>
    </configuration>

III. Edit the regionservers file
    had100
    had99
           
11.  scp -r /home/had/hbase     had101:/home/had
     scp -r /home/had/hbase     had100:/home/had
     scp -r /home/had/hbase     had99:/home/had
    
12. Configure ZK
      Go to /home/had/zookeeper/conf/
     (1) cp zoo_sample.cfg zoo.cfg
     (2) vim zoo.cfg, as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# the myid file (see step 14) must be created under this dataDir
dataDir=/home/had/zookeeper/data
dataLogDir=/home/had/zookeeper/log
# the port at which the clients will connect
clientPort=2181
server.1=had98:2888:3888
server.2=had99:2888:3888
server.3=had100:2888:3888

      (3) Edit log4j.properties
            log4j.appender.ROLLINGFILE.File=/home/had/zookeeper/zookeeper.log
           
      (4) mkdir /home/had/zookeeper/data
          mkdir /home/had/zookeeper/log
         
13. scp -r /home/had/zookeeper   had100:/home/had
    scp -r /home/had/zookeeper   had99:/home/had
    scp -r /home/had/zookeeper   had98:/home/had           
     
14. Log in to had100, had99, and had98 in turn
    Go to /home/had/zookeeper/data
    touch myid (the number in this file must match the server.N entries in zoo.cfg)
    For example: on had100, vi myid and set its content to 3
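
    The same step done with echo instead of vi; the id-to-host mapping follows the server.N lines in zoo.cfg above:

      echo 1 > /home/had/zookeeper/data/myid   # on had98  (server.1)
      echo 2 > /home/had/zookeeper/data/myid   # on had99  (server.2)
      echo 3 > /home/had/zookeeper/data/myid   # on had100 (server.3)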
   
15. On had100, had99, and had98, go to /home/had/zookeeper and start ZK
     bin/zkServer.sh start  
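
     Once all three are up, each node's role (one leader, the rest followers) can be checked with:

      bin/zkServer.sh status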

16. Start the hbase cluster: log in to had101
   (1) Run /home/had/hbase/bin/start-hbase.sh
   (2) Run jps to check whether HMaster started
   (3) Run bin/hbase shell

  (4) > create 't1', 't2', 't3'   (test: create table t1 with column families t2 and t3 via the HMaster)
      > list                      (list the existing tables; t1 should appear)
   Open http://had101:60010 in a browser (the HMaster web UI)
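
   A short hbase shell session to confirm writes and reads work (table and values here are arbitrary examples):

     > create 'test', 'cf'
     > put 'test', 'row1', 'cf:a', 'value1'
     > scan 'test'
     > get 'test', 'row1'
     > disable 'test'
     > drop 'test'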

Start the hadoop cluster first, then ZK on each node, and finally the hbase cluster.

Note that the hadoop-*.jar that hbase depends on must match the hadoop version actually in use.
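
A common way to keep them consistent (a sketch; the exact jar name bundled under hbase/lib may differ from what
is shown) is to replace hbase's bundled hadoop jar with the one from your hadoop installation on every hbase
node, then restart hbase:

  rm /home/had/hbase/lib/hadoop-core-*.jar
  cp /home/had/hadoop/hadoop-0.20.2-core.jar /home/had/hbase/lib/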

Reposted from liuyijie2007.iteye.com/blog/1294566