Apache Hive 3.1 Single-Node Installation and Deployment

Copyright notice: this is an original post by the author; please credit the source when reposting: https://blog.csdn.net/vkingnew/article/details/89389400
Software versions:
CentOS 7.5
Hadoop 3.2
Hive 3.1.1
MySQL 5.7.25


Part 1: Prepare the host
0. Prerequisites:
0.1 Configure passwordless SSH:
# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:T7I8WHvVqtEvNhhCuGgtwJcu+tfx2QcFs94HCwZYzlM root@node4
The key's randomart image is:
+---[RSA 2048]----+
|       o. E      |
|      .o..o      |
| .   . .+. +     |
|  o o . ..+ o.   |
|   + o oSo.+.o.  |
|  . = ++.*+oo..  |
| . o o.o=+++o.   |
|.   . . oooo=.   |
| ...      .o o.  |
+----[SHA256]-----+

# cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys

# ssh localhost
Last login: Thu Apr 18 21:19:19 2019

0.2 Configure the firewall and SELinux
0.3 Configure the hostname:
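The two steps above are left empty in the original post. For a single-node test install they typically amount to something like the following sketch; on a production cluster you would open the specific ports (8088, 9870, 8042, etc.) rather than disable the firewall outright. The hostname node4 and IP 192.168.0.154 are taken from the transcripts later in this post.

```shell
# Disable the firewall for a single-node test setup
systemctl stop firewalld
systemctl disable firewalld

# Put SELinux in permissive mode now, and persist the change across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Set the hostname used in this post's transcripts and map it in /etc/hosts
hostnamectl set-hostname node4
echo "192.168.0.154 node4" >> /etc/hosts
```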

Part 2: Install and configure Hadoop:
1. Install Java

# cat /etc/profile.d/java.sh 
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

2. Hadoop single-node installation
# tar -xzvf hadoop-3.2.0.tar.gz -C /usr/local/
# mv /usr/local/hadoop-3.2.0/ /usr/local/hadoop

# cat /etc/profile.d/hadoop.sh 
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
# source  /etc/profile.d/hadoop.sh 
# hadoop version
Hadoop 3.2.0
Source code repository https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
Compiled by sunilg on 2019-01-08T06:08Z
Compiled with protoc 2.5.0
From source with checksum d3f0795ed0d9dc378e2c785d3668f39
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.2.0.jar


3. Hive 3.1 installation (covered in Part 3 below)



4. Create the data directories:

# mkdir -p /data/hadoop/tmp
# mkdir -p /data/hadoop/hdfs/{data,name}
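The NameNode format output below stores its image under /data/hadoop/tmp/dfs/name, which is the default location derived from hadoop.tmp.dir; that implies core-site.xml and hdfs-site.xml under /usr/local/hadoop/etc/hadoop contain entries along these lines. This is a minimal single-node sketch (the post does not show the author's actual config files); the port 9000 for fs.defaultFS is an assumption.

```xml
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/tmp</value>
  </property>
</configuration>

<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```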

5. Add users:

5.5 Format the NameNode:
# hdfs namenode -format
Sample output:
2019-04-18 21:14:24,904 INFO common.Storage: Storage directory /data/hadoop/tmp/dfs/name has been successfully formatted.
2019-04-18 21:14:24,936 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2019-04-18 21:14:25,243 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 399 bytes saved in 0 seconds .
2019-04-18 21:14:25,266 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2019-04-18 21:14:25,310 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node4/192.168.0.154
************************************************************/

If the format succeeds, the output contains "successfully formatted" and "Exitting with status 0"; "Exitting with status 1" indicates an error.
6. Start Hadoop:

# cd /usr/local/hadoop/sbin
# ./start-all.sh
Starting namenodes on [localhost]
Last login: Thu Apr 18 21:20:18 CST 2019 from localhost on pts/3
Starting datanodes
Last login: Thu Apr 18 21:20:35 CST 2019 on pts/2
Starting secondary namenodes [node4]
Last login: Thu Apr 18 21:20:38 CST 2019 on pts/2
Starting resourcemanager
Last login: Thu Apr 18 21:20:50 CST 2019 on pts/2
resourcemanager is running as process 3172.  Stop it first.
Starting nodemanagers
Last login: Thu Apr 18 21:21:18 CST 2019 on pts/2
-- Check the running processes:
# jps
3716 NameNode
3172 ResourceManager
3832 DataNode
4569 Jps
4010 SecondaryNameNode
4427 NodeManager

7. Web UI ports:
http://192.168.0.154:8088  All Applications
http://192.168.0.154:9870   Namenode information
http://192.168.0.154:8042  Node information

8. Run the wordcount example:
# hadoop fs -ls /
# hadoop fs -mkdir /input
# The /output directory must not exist before the job runs; wordcount creates it
# and writes the results there. Delete it first if it already exists:
# hadoop fs -rm -r /output

# Upload the prepared text files to the /input directory on HDFS
hadoop fs -put /home/hadoop/data/*.txt /input

cd /usr/local/hadoop/share/hadoop/mapreduce/
hadoop jar hadoop-mapreduce-examples-3.2.0.jar wordcount /input /output

# List the result files generated under /output on HDFS
hadoop fs -ls /output
# Print the word-frequency results
hadoop fs -cat /output/part-r-00000
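As a quick sanity check, the counts wordcount writes to part-r-00000 can be reproduced locally with standard shell tools. The sample file below is hypothetical (any text files uploaded to /input will do):

```shell
# Create a tiny sample input (hypothetical content, local filesystem)
mkdir -p /tmp/wc-demo
printf 'hello hadoop\nhello hive\n' > /tmp/wc-demo/sample.txt

# Split on whitespace, sort, and count occurrences --
# the same word/count pairs that the MapReduce job emits
tr -s ' ' '\n' < /tmp/wc-demo/sample.txt | sort | uniq -c
```

For this sample the counts are hadoop 1, hello 2, hive 1 (uniq -c prints the count before each word).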


Part 3: Configure Hive:
# tar -xzvf apache-hive-3.1.1-bin.tar.gz  -C /usr/local/
# mv /usr/local/apache-hive-3.1.1-bin/ /usr/local/hive

# cat /etc/profile.d/hive.sh 
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin

# source /etc/profile.d/hive.sh 

Initialize the Hive metastore schema:
# schematool  -dbType mysql -initSchema --verbose
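schematool reads the metastore connection settings from $HIVE_HOME/conf/hive-site.xml, and the MySQL JDBC driver jar (Connector/J 5.x for MySQL 5.7) must be copied into $HIVE_HOME/lib before running it. A minimal sketch of the relevant properties, assuming a MySQL database named hive and a hive user; the database name, user, and password are placeholders, not values from the original post:

```xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive_password</value>
  </property>
</configuration>
```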
