Installing Hadoop, MySQL, and Hive on Ubuntu 18.04

1. Configure environment variables

export HADOOP_HOME=/opt/hadoop/hadoop-2.8.5
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export JAVA_HOME=/opt/java/jdk1.8.0_144
export HIVE_HOME=/opt/hive/apache-hive-2.3.7-bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin
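
These exports are typically appended to ~/.bashrc (the exact file is an assumption; the original does not say where they go). After saving, reload the shell configuration and check that the tools are found:

source ~/.bashrc
java -version        # should report 1.8.0_144
hadoop version       # should report Hadoop 2.8.5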

2. Install Hadoop
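
The original starts directly with the configuration files; a minimal unpacking step, assuming the hadoop-2.8.5.tar.gz binary release has already been downloaded into /opt/hadoop, looks like this:

cd /opt/hadoop
tar -zxvf hadoop-2.8.5.tar.gz
cd hadoop-2.8.5/etc/hadoop    # the files edited below live in this directory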

vim hadoop-env.sh:

export JAVA_HOME=/opt/java/jdk1.8.0_144

vim core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/hadoop-2.8.5/hadooptmp</value>
    </property>
</configuration>

vim hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop/hadoop-2.8.5/namedata</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop/hadoop-2.8.5/nodedata</value>
    </property>
</configuration>

Format the NameNode:

./bin/hdfs namenode -format
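
After formatting, start HDFS and confirm the daemons are running (a check not shown in the original; start-dfs.sh assumes passwordless ssh to localhost is already set up):

./sbin/start-dfs.sh
jps    # should list NameNode, DataNode and SecondaryNameNode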

3. Install MySQL
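
The installation of MySQL itself is not shown in the original; on Ubuntu 18.04 the usual route is the apt package, after which a root shell can be opened with sudo:

sudo apt-get update
sudo apt-get install -y mysql-server
sudo mysql    # MySQL 5.7 on Ubuntu 18.04 authenticates root via auth_socket, so sudo is enough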

Log in to MySQL and run:

create database hive;
use hive;
create table user(Host char(20),User char(10),Password char(20));
 
insert into user(Host,User,Password) values("localhost","hive","hive");
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
flush privileges;
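
A quick way to confirm the grant took effect is to log in as the hive user from the shell (a verification step added here, not in the original):

mysql -u hive -phive -e "show databases;"    # the hive database should be listed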

4. Install Hive
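
The extraction of the Hive tarball is also omitted; a sketch, assuming apache-hive-2.3.7-bin.tar.gz has been downloaded into /opt/hive, which also creates the two configuration files edited below:

cd /opt/hive
tar -zxvf apache-hive-2.3.7-bin.tar.gz
cd apache-hive-2.3.7-bin/conf
cp hive-env.sh.template hive-env.sh    # template shipped with Hive
touch hive-site.xml                    # filled in with the properties below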

vim hive-env.sh:

HADOOP_HOME=/opt/hadoop/hadoop-2.8.5

vim hive-site.xml:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/hive?useSSL=false</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
    </property>
</configuration>

Next, put the MySQL JDBC driver on the master server and copy it into Hive's lib directory:

tar -zxvf mysql-connector-java-5.1.47.tar.gz
cp mysql-connector-java-5.1.47/mysql-connector-java-5.1.47*.jar /opt/hive/apache-hive-2.3.7-bin/lib/

Start Hadoop, then create the directories Hive needs in HDFS and set their permissions:

hadoop fs -mkdir /tmp 
hadoop fs -mkdir -p /user/hive/warehouse 
hadoop fs -chmod g+w /tmp 
hadoop fs -chmod g+w /user/hive/warehouse
These directories are also created automatically when Hive starts.
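
A quick check that the directories exist with group write permission (not in the original):

hadoop fs -ls -d /tmp /user/hive/warehouse    # permissions should read drwxrwxr-x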

Initialize the metastore database:

$ cd $HIVE_HOME/bin
$ ./schematool -dbType mysql -initSchema
$ ./schematool -dbType mysql -info
$ hive
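
Once hive starts without errors, a short smoke test from the shell confirms the metastore connection works (test_db is a hypothetical name used only for this check):

hive -e "create database if not exists test_db; show databases;"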

Hive's metadata is stored in MySQL and kept in sync with Hive; hadoop fs -ls /user/hive/warehouse/ shows one entry per table. Re-initializing the Hive metastore does not delete the corresponding files in HDFS, so if a table with the same name is created again later, the old data is still there.
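
For example, the table names Hive tracks can be read directly from the MySQL metastore; TBLS is one of the tables created by schematool during initialization:

mysql -u hive -phive hive -e "select TBL_NAME, TBL_TYPE from TBLS;"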
