Big Data: Deploying Hadoop + Hive + HBase

1. Preparing the base environment
1.1 Prepare 4 servers; avoid installing a Chinese-language environment if possible.
1.2 Configure a static IP on each server.
1.3 Operating system: CentOS 6.5.
1.4 Software versions used:
    jdk-7u79-linux-x64.tar.gz
    zookeeper-3.4.6.tar.gz
    hadoop-2.5.1-x64.tar.gz
    apache-hive-1.2.1-bin.tar.gz
    hbase-0.98.15-hadoop2-bin.tar.gz 

2. Deploying the base environment

2.1 Hostname mapping (identical on all four nodes)

vi /etc/hosts
192.168.100.11 node1
192.168.100.12 node2
192.168.100.13 node3
192.168.100.14 node4

2.2 Disable the firewall (and set SELinux to permissive)

iptables -F        # flush all rules
iptables -X        # delete user-defined chains
iptables -Z        # zero the packet/byte counters
service iptables save   # persist the (now empty) rule set
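
The commands above only clear the currently loaded rules. To make sure nothing comes back after a reboot, you can also stop the service entirely (an extra step beyond the original write-up):

service iptables stop
chkconfig iptables off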
 vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
reboot
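After the reboot you can confirm SELinux picked up the new mode (an optional check):

getenforce          # should print Permissive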
2.3 Configure yum and install FTP

2.3.1 Configuration on node1

cd /etc/yum.repos.d/
rm -rf *
vi local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
mkdir /opt/centos
mount /dev/sr0 /opt/centos/
yum clean all
yum list
yum install vsftpd -y
vi /etc/vsftpd/vsftpd.conf
Add one line: anon_root=/opt   (points anonymous FTP at the /opt directory)
service vsftpd restart
chkconfig vsftpd on
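
Note that the mount of /dev/sr0 is lost on reboot. If the install media stays in the drive, an fstab entry keeps the repository available (an optional step, not in the original write-up):

echo "/dev/sr0 /opt/centos iso9660 defaults 0 0" >> /etc/fstab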

2.3.2 Configuration on the other nodes

cd /etc/yum.repos.d/
rm -rf *
vi local.repo
[centos]
name=centos
baseurl=ftp://192.168.100.11/centos
gpgcheck=0
enabled=1
yum clean all
yum list

2.4 Install openssh-clients (on all nodes)

yum install openssh-clients -y
2.5 Install NTP (on all nodes)

yum install ntp -y
node1: vi /etc/ntp.conf (keep the default pool servers; add the last two lines shown below so node1 can fall back to its local clock when the pool is unreachable):
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
server 127.127.1.0
fudge 127.127.1.0 stratum 10
service ntpd restart
chkconfig ntpd on
Other nodes: ntpdate node1
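
A single ntpdate run drifts over time, and HBase is sensitive to clock skew. One way to keep the other nodes in step is a periodic job (an illustrative cron entry; the interval is arbitrary):

crontab -e
*/30 * * * * /usr/sbin/ntpdate node1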
Role layout across the Hadoop cluster (RM = YARN ResourceManager):

        NN   DN   JN   ZK   ZKFC   RM
node1   1              1    1
node2   1    1    1    1    1
node3        1    1    1           1
node4        1    1                1
3. Preparing the environment for Hadoop

3.1 Set up passwordless SSH

node1 and node2 need passwordless SSH to all nodes (both run a NameNode/ZKFC, and the sshfence method configured below depends on it):
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
cd /root/.ssh/
scp id_dsa.pub node2:/tmp/
On node2: cat /tmp/id_dsa.pub >> /root/.ssh/authorized_keys   (repeat for node3 and node4)
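
Repeating the scp/cat pair for every target host gets tedious. Since openssh-clients is installed everywhere, ssh-copy-id does the same job in a loop (a sketch; you will be prompted for each host's password once):

for h in node1 node2 node3 node4; do ssh-copy-id -i ~/.ssh/id_dsa.pub root@$h; done

Run it on node1 and again on node2, then verify with ssh node3 hostname (no password prompt expected).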

3.2 Deploying the JDK

mkdir /home/tools
cd /home/tools
scp jdk-7u79-linux-x64.tar.gz node4:/home/tools/
mkdir /usr/java
tar zxvf jdk-7u79-linux-x64.tar.gz -C /usr/java/.
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
java -version
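
All four nodes need the same JDK and PATH. Rather than repeating the tar/vi steps per host, you can push the finished install from node1 (a convenience sketch, not in the original steps):

for h in node2 node3 node4; do scp -r /usr/java $h:/usr/; scp /etc/profile $h:/etc/; done

Then run source /etc/profile once on each node.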

4. Installing ZooKeeper (on node1, node2, node3)

scp zookeeper-3.4.6.tar.gz node3:/home/tools/.
tar -zxvf zookeeper-3.4.6.tar.gz -C /home/.
vi /etc/profile 
export ZOOKEEPER_HOME=/home/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source /etc/profile
cd /home/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg 
dataDir=/opt/zookeeper
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888 
scp -r zookeeper-3.4.6/ node2:/home/
scp -r zookeeper-3.4.6/ node3:/home/
On each of node1, node2, node3:
mkdir /opt/zookeeper
cd /opt/zookeeper
vi myid
Write a single digit: 1 on node1, 2 on node2, 3 on node3.
zkServer.sh start    (on all three nodes)
zkServer.sh status   (one node should report leader, the other two follower)
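
If status looks wrong, ZooKeeper's four-letter commands give a quick liveness probe (assuming nc is available; yum install nc otherwise):

echo ruok | nc node1 2181          # a healthy server answers: imok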
5. Deploying Hadoop

5.1 Extract, install, and configure

tar zxvf hadoop-2.5.1-x64.tar.gz -C /home/.
vi /etc/profile 
export HADOOP_HOME=/home/hadoop-2.5.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile

5.2 Configure the Hadoop files

cd /home/hadoop-2.5.1/etc/hadoop
vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_79
vi hdfs-site.xml   (the <property> blocks below, here and in the other *-site.xml files, all go inside <configuration> ... </configuration>)

   <property>
         <name>dfs.nameservices</name>
         <value>lyl</value>
    </property>
    
   <property>
         <name>dfs.ha.namenodes.lyl</name>
         <value>nn1,nn2</value>
    </property>
    
    <property>
         <name>dfs.namenode.rpc-address.lyl.nn1</name>
         <value>node1:8020</value>
    </property>


    <property>
         <name>dfs.namenode.rpc-address.lyl.nn2</name>
         <value>node2:8020</value>
    </property>
     
    <property>
         <name>dfs.namenode.http-address.lyl.nn1</name>
         <value>node1:50070</value>
    </property>


    <property>
         <name>dfs.namenode.http-address.lyl.nn2</name>
         <value>node2:50070</value>
    </property>
    
    <property>
         <name>dfs.namenode.shared.edits.dir</name>
         <value>qjournal://node2:8485;node3:8485;node4:8485/lyl</value>
    </property>
    
    <property>
         <name>dfs.client.failover.proxy.provider.lyl</name>
         <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    
    <property>
         <name>dfs.ha.fencing.methods</name>
         <value>sshfence</value>
    </property>


    <property>
         <name>dfs.ha.fencing.ssh.private-key-files</name>
         <value>/root/.ssh/id_dsa</value>
    </property>
     
    <property>
         <name>dfs.journalnode.edits.dir</name>
         <value>/opt/journal/data</value>
    </property>
   
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
vi core-site.xml
 
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://lyl</value>
</property>
 
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop</value>
</property>
 
<property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
</property>

cp mapred-site.xml.template  mapred-site.xml
vi mapred-site.xml
 
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
vi yarn-site.xml

 <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
  </property>

  <property>
     <name>yarn.resourcemanager.ha.enabled</name>
     <value>true</value>
  </property>

  <property>
     <name>yarn.resourcemanager.cluster-id</name>
     <value>lylyear</value>
 </property>

 <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
 </property>

 <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node3</value>
 </property>

 <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node4</value>
 </property>

<property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node1:2181,node2:2181,node3:2181</value>
</property>
vi slaves   (lists the DataNode/NodeManager hosts)
node2
node3
node4
scp -r hadoop-2.5.1/ node2:/home/   (repeat for node3 and node4)

5.3 Start Hadoop

node2, node3, node4: hadoop-daemon.sh start journalnode
node1: hdfs namenode -format
node1: hadoop-daemon.sh start namenode
node2: hdfs namenode -bootstrapStandby
node1: hdfs zkfc -formatZK
node1: start-all.sh
node3, node4: yarn-daemon.sh start resourcemanager
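
Before moving on, check with jps that every daemon landed where the role table says it should (expected process names per the layout above; pids omitted):

jps
# node1: NameNode, DFSZKFailoverController, QuorumPeerMain
# node2: NameNode, DataNode, JournalNode, QuorumPeerMain, DFSZKFailoverController, NodeManager
# node3: DataNode, JournalNode, QuorumPeerMain, ResourceManager, NodeManager
# node4: DataNode, JournalNode, ResourceManager, NodeManager

The NameNode web UIs should answer at http://node1:50070 and http://node2:50070, one active and one standby.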
Node layout for the Hive deployment:

        MySQL   Hive
node1   1
node2           1
node3
node4

6. Deploying Hive (single-user mode)

6.1 Install MySQL on node1

yum install mysql-server
service mysqld start
chkconfig mysqld on
chkconfig --list mysqld

mysql
use mysql
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
(the password here must match javax.jdo.option.ConnectionPassword in hive-site.xml below)
delete from user where host != '%';
flush privileges;
mysql -u root -p
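
Hive on node2 will connect to this MySQL over the network, so verify remote access first (assuming the mysql client is installed on node2, e.g. yum install mysql):

mysql -h node1 -u root -p123456 -e "show databases;"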

6.2 Install Hive on node2

tar zxvf apache-hive-1.2.1-bin.tar.gz -C /home/
cd /home
mv apache-hive-1.2.1-bin/ hive-1.2.1
vi /etc/profile
export HIVE_HOME=/home/hive-1.2.1
export PATH=$PATH:$HIVE_HOME/bin
source /etc/profile
cd /home/hive-1.2.1/conf/
cp hive-default.xml.template hive-site.xml
vi hive-site.xml   (replace the entire generated file with the configuration below)

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at


       http://www.apache.org/licenses/LICENSE-2.0


   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>
   <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive_remote/warehouse</value>
   </property>


   <property>
       <name>hive.metastore.local</name>
       <value>true</value>
  </property>


  <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://node1/hive_remote?createDatabaseIfNotExist=true</value>
 </property>


  <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
  </property>


  <property>
     <name>javax.jdo.option.ConnectionUserName</name>
     <value>root</value>
  </property>


  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
  
</configuration>

cd /home/hive-1.2.1/lib/
cp /home/tools/mysql-connector-java-5.1.32-bin.jar .
cd /home/hadoop-2.5.1/share/hadoop/yarn/lib
cp /home/hive-1.2.1/lib/jline-2.12.jar .
rm -rf jline-0.9.94.jar   (Hive 1.2 needs jline 2.x; the older jar shipped with Hadoop conflicts with it and breaks the Hive CLI)
hive
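
A quick end-to-end check: create a table from the Hive CLI and confirm the metadata landed in MySQL (the table name smoke_test is illustrative):

hive -e "create table smoke_test (id int); show tables;"
On node1: mysql -u root -p123456 -e "use hive_remote; select TBL_NAME from TBLS;"

The second command should list smoke_test.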
Role layout for the HBase deployment (node1 serves as the backup Master; see backup-masters below):

        ZK   Master   RegionServer
node1   1    1
node2   1             1
node3   1    1        1
node4                 1

7. Deploying HBase (fully distributed)

7.1 Passwordless SSH from node3

node3 runs the active HMaster, so it needs passwordless SSH to the other nodes. On node3 (first generate a key with ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa if it has none):
scp id_dsa.pub node1:/tmp/
On node1: cat /tmp/id_dsa.pub >> /root/.ssh/authorized_keys   (repeat for node2 and node4)

7.2 Extract and install HBase

tar zxvf hbase-0.98.15-hadoop2-bin.tar.gz -C /home/.  
cd /home
mv hbase-0.98.15-hadoop2/ hbase-0.98
vi /etc/profile
export HBASE_HOME=/home/hbase-0.98
export PATH=$PATH:$HBASE_HOME/bin
source /etc/profile

7.3 Configure the HBase files

cd /home/hbase-0.98/conf
vi hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_79
export HBASE_MANAGES_ZK=false   (use the existing ZooKeeper ensemble rather than HBase's bundled one)
vi hbase-site.xml
 
 <property>
      <name>hbase.rootdir</name>
      <value>hdfs://lyl/hbase</value>
   </property>
 
   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
   </property>
 
   <property>
       <name>hbase.zookeeper.quorum</name>
       <value>node1,node2,node3</value>
 </property>
vi regionservers   
node2
node3
node4
vi backup-masters
node1
cp /home/hadoop-2.5.1/etc/hadoop/hdfs-site.xml .   (HBase needs it to resolve the "lyl" nameservice in hbase.rootdir)
cd /home
scp -r hbase-0.98/ node1:/home/   (repeat for node2 and node4)

7.4 Start HBase (check that the clocks are in sync first)

zkServer.sh start    (node1, node2, node3, if ZooKeeper is not already running)
start-all.sh         (on node1, to bring up HDFS and YARN)
start-hbase.sh       (on node3; it also starts the backup Master on node1 and all RegionServers)
hbase shell
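
A short smoke test from the shell (table and column-family names are illustrative):

status                               # should report the three region servers
create 't1', 'cf'
put 't1', 'row1', 'cf:a', 'value1'
scan 't1'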


Reprinted from blog.csdn.net/afafawfaf/article/details/80854083