Building a Hadoop Cluster with Hive on VMware Virtual Machines

VM download and license key
VMware Workstation v11.1.0 https://download3.vmware.com/software/wkst/file/VMware-workstation-full-11.1.0-2496824.exe
Key: 1F04Z-6D111-7Z029-AV0Q4-3AEH8
 
Linux OS: CentOS 6.6
CentOS-6.6-i386-minimal.iso
 
VM network settings: use NAT networking.
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=166b95ca-b98b-446c-a68e-6022012e4a9a
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.32.168       --- the host machine is 192.168.32.1
NETMASK=255.255.255.0
PREFIX=24
GATEWAY=192.168.32.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=00:0C:29:11:DE:CA
PEERDNS=yes
PEERROUTES=yes
USERCTL=NO
/etc/init.d/network restart
 
Disable DNS
/etc/resolv.conf     --- comment out every entry; otherwise an unreachable DNS server causes many puzzling timeouts
 
Add hosts entries
/etc/hosts
192.168.32.168  master
192.168.32.101  slave1
192.168.32.102  slave2
192.168.32.103  slave3
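
A quick connectivity check (optional; the slave names only resolve to live machines once the clones exist later):
ping -c 2 master               --- resolves via /etc/hosts to this machine
ping -c 2 192.168.32.1         --- the NAT gateway / host machine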
 
 
Change the local hostname:
/etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
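
The change in /etc/sysconfig/network only takes effect after a reboot; to apply it to the current session as well:
hostname master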
 
Create the hadoop user and the hdgrp group
groupadd hdgrp
useradd -g hdgrp hadoop
passwd hadoop        --- set the password interactively, e.g. Hd1234.
Install all later software as the hadoop user.
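
To confirm the user and group were created as intended (the numeric IDs will differ per machine):
id hadoop            --- should show something like uid=500(hadoop) gid=500(hdgrp) groups=500(hdgrp)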
 
Use FileZilla to connect to the VM and transfer files.

As root, disable the firewall
/etc/init.d/iptables stop      --- stop the firewall
chkconfig iptables off         --- keep it from starting at boot
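
To verify the firewall is really off:
service iptables status        --- should report that the firewall is not running
chkconfig --list iptables      --- every runlevel should show "off"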
 
Create the installation directory for Hadoop and Hive
/bdp/install is used as the installation directory; it is deliberately not placed under /usr, which would invite permission problems.
Download Hadoop 2.2.0 and Hive 0.12.0 and extract them into this directory.
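
A sketch of the directory preparation and extraction, assuming the standard tarball names were uploaded to /home/hadoop (adjust paths to wherever you put them):
--- as root: create the tree and hand it to the hadoop user
mkdir -p /bdp/install /bdp/tmp /bdp/data
chown -R hadoop:hdgrp /bdp
--- as hadoop: unpack both archives into the install directory
tar -xzf /home/hadoop/hadoop-2.2.0.tar.gz -C /bdp/install
tar -xzf /home/hadoop/apache-hive-0.12.0-bin.tar.gz -C /bdp/install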
 
Edit the environment variables
/etc/profile
export JAVA_HOME=/bdp/install/jdk1.7.0_79
export HADOOP_HOME=/bdp/install/hadoop-2.2.0
export HIVE_HOME=/bdp/install/apache-hive-0.12.0-bin
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HIVE_HOME/sbin:$PATH
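
Reload the profile and check that the tools are on the PATH (this assumes the JDK was unpacked to /bdp/install/jdk1.7.0_79 as JAVA_HOME above expects):
source /etc/profile
java -version          --- should report 1.7.0_79
hadoop version         --- should report Hadoop 2.2.0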
 
 
 
Set JAVA_HOME for Hadoop
In both hadoop-env.sh and yarn-env.sh, set:
export JAVA_HOME=/bdp/install/jdk1.7.0_79
vi /bdp/install/hadoop-2.2.0/etc/hadoop/hadoop-env.sh
vi /bdp/install/hadoop-2.2.0/etc/hadoop/yarn-env.sh
scp /bdp/install/hadoop-2.2.0/etc/hadoop/hadoop-env.sh hadoop@slave1:/bdp/install/hadoop-2.2.0/etc/hadoop/hadoop-env.sh
scp /bdp/install/hadoop-2.2.0/etc/hadoop/hadoop-env.sh hadoop@slave2:/bdp/install/hadoop-2.2.0/etc/hadoop/hadoop-env.sh
scp /bdp/install/hadoop-2.2.0/etc/hadoop/hadoop-env.sh hadoop@slave3:/bdp/install/hadoop-2.2.0/etc/hadoop/hadoop-env.sh
scp /bdp/install/hadoop-2.2.0/etc/hadoop/yarn-env.sh hadoop@slave1:/bdp/install/hadoop-2.2.0/etc/hadoop/yarn-env.sh
scp /bdp/install/hadoop-2.2.0/etc/hadoop/yarn-env.sh hadoop@slave2:/bdp/install/hadoop-2.2.0/etc/hadoop/yarn-env.sh
scp /bdp/install/hadoop-2.2.0/etc/hadoop/yarn-env.sh hadoop@slave3:/bdp/install/hadoop-2.2.0/etc/hadoop/yarn-env.sh


Hadoop configuration files to edit

Add a file named master containing:
master


Edit the slaves file:
slave1
slave2
slave3


core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/bdp/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>



hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/bdp/data/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/bdp/data/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>




mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>




yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
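
The same configuration must exist on every node. Once the slave VMs exist and SSH works (both steps below), one way to push the whole Hadoop config directory from master to the slaves (assuming the same /bdp/install layout on each):
for h in slave1 slave2 slave3; do
  scp -r /bdp/install/hadoop-2.2.0/etc/hadoop hadoop@$h:/bdp/install/hadoop-2.2.0/etc/
done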


Fixing MAC addresses on the cloned hosts
After the master is fully installed, clone it to create the slaves. The clones come up with a virtual NIC problem: delete the first line of the udev rules file and update the MAC address in the eth0 config.
/etc/udev/rules.d/70-persistent-net.rules
slave1 (.101): delete the first line, change eth1 to eth0 in the remaining line, and note the MAC: 00:0c:29:68:e7:8c
slave2 (.102): delete the first line, change eth1 to eth0 in the remaining line, and note the MAC: 00:0c:29:ee:b0:07
slave3 (.103): delete the first line, change eth1 to eth0 in the remaining line, and note the MAC: 00:0c:29:e5:be:7d
Change the hostname in /etc/sysconfig/network to slave1, slave2, and slave3 respectively.
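
On each clone the NIC config must also match the new MAC and IP; for slave1, for example (values taken from the table above and from /etc/hosts):
vi /etc/sysconfig/network-scripts/ifcfg-eth0
---   HWADDR=00:0c:29:68:e7:8c      (the MAC recorded from 70-persistent-net.rules)
---   IPADDR=192.168.32.101         (slave1's address from /etc/hosts; leave the rest as on master)
/etc/init.d/network restart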
 
 
Mutual SSH authentication between master and slaves
Log in as hadoop on every node and run ssh-keygen -t rsa, pressing Enter three times to accept the defaults and an empty passphrase.
chmod 755 ~/.ssh
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys
ssh slave1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh slave2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh slave3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys slave1:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys slave2:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys slave3:~/.ssh/authorized_keys
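
To verify that passwordless login works from master (each command should print the slave's hostname without a password prompt):
for h in slave1 slave2 slave3; do ssh $h hostname; done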
 

Format the namenode: hdfs namenode -format

bin/hdfs namenode -format -clusterid clustername (when reformatting an existing cluster, remember to pass -clusterid, e.g. -clusterid n54)
A DataNode that refuses to start is the most common problem during setup. It is usually caused by formatting the namenode more than once, which leaves the namenode and the datanodes with mismatched clusterIDs. Check the log on the datanode first. Fix: edit the clusterID on each datanode (in the dfs/data/current/VERSION file) so that it matches the namenode's.
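
With the directories configured above, the two clusterIDs can be compared like this (paths follow hdfs-site.xml; adjust if yours differ):
grep clusterID /bdp/data/dfs/name/current/VERSION     --- on master (namenode)
grep clusterID /bdp/data/dfs/data/current/VERSION     --- on each slave (datanode)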
 
Start Hadoop:
start-dfs.sh    start-yarn.sh
Run jps on master; expected output:
2763 ResourceManager
3007 Jps
1851 NameNode
2008 SecondaryNameNode
 
Run jps on a slave; expected output:
1835 Jps
1739 NodeManager
1423 DataNode
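
mapred-site.xml above also points at a JobHistory server (master:10020 / 19888), which start-dfs.sh and start-yarn.sh do not launch. If you want the history UI, start it separately on master (optional; it then also shows up in jps as JobHistoryServer):
mr-jobhistory-daemon.sh start historyserver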
 
 
Check the cluster status:
hdfs dfsadmin -report
 
Check the file block composition:
hdfs fsck / -files -blocks
 
View the HDFS web UI:
http://192.168.32.168:50070
 
View the ResourceManager scheduler web UI:
http://192.168.32.168:8088
 
Run a test program:
Generate test data
mkdir -p ~/wordcount/wc-in && cd ~/wordcount/wc-in
echo 'bla bla' > test_in.dat
echo 'a b c ' >> test_in.dat
 
Upload the file to HDFS
hdfs dfs -mkdir -p /user/hadoop
hadoop fs -put ~/wordcount/wc-in/test_in.dat /user/hadoop/
 
Run the bundled example program
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /user/hadoop/test_in.dat /user/hadoop/test_out.dat
 
Check the results
hadoop fs -ls /user/hadoop/test_out.dat
hadoop fs -cat /user/hadoop/test_out.dat/part-r-00000
a       1
b       1
bla     2
c       1
 
Run a Hive statement:
hive -e "show databases"
default
Appendix:
Why tables you created cannot be found when Hive uses Derby as the metastore
If you launch hive from different directories, Derby creates its metastore under each of those directories; in other words you end up using different metastores, so naturally the corresponding metadata cannot be found.
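
One common workaround (a sketch, not from the original post) is to pin Derby to an absolute path in hive-site.xml so that every working directory sees the same metastore; /bdp/metastore_db here is an assumed location:
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=/bdp/metastore_db;create=true</value>
</property>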
 
 
To show the current database name at the hive prompt, set in hive-site.xml:
hive.cli.print.current.db=true
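
In hive-site.xml property form (the prompt then reads e.g. hive (default)>):
<property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
</property>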
 

Reposted from zzhonghe.iteye.com/blog/2210936