Hive Big Data technologies (Hive build)



1.1 What is Hive


Hive was open-sourced by Facebook to solve the problem of analyzing massive volumes of structured log data.
Hive is a data warehousing tool built on top of Hadoop. It can map structured data files to tables and provides SQL-like query capabilities.
In essence: Hive converts HQL into MapReduce programs.
Hive:
1) Hive stores the data it processes in HDFS
2) Hive's underlying implementation for analyzing data is MapReduce
3) The generated programs run on YARN
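As a sketch of how HQL maps onto MapReduce, consider a simple aggregation query (the table and column names below are illustrative, not from the original article). Hive compiles it into a MapReduce job whose map phase emits a key per row and whose reduce phase aggregates the counts:

```sql
-- Hypothetical table and columns, for illustration only.
-- Hive compiles this HQL into a MapReduce job:
--   map phase:    emit (category, 1) for each row of access_log
--   reduce phase: sum the counts for each distinct category
SELECT category, COUNT(*) AS cnt
FROM access_log
GROUP BY category;
```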


1.2 Hive advantages and disadvantages


1.2.1 Advantages

  1. The operating interface uses SQL-like syntax, which enables rapid development (simple and easy to use).
  2. It avoids having to write MapReduce code, reducing the learning cost for developers.
  3. Hive's execution latency is relatively high, so Hive is commonly used for data analysis and for applications with low real-time requirements.
  4. Hive's advantage lies in processing big data; it has no advantage for small data sets, because its execution latency is relatively high.
  5. Hive supports user-defined functions, so users can implement their own functions according to their needs.
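As a minimal sketch of point 5, a user-defined function packaged in a jar can be registered and called from HQL (the jar path, class name, and table below are hypothetical, not from the original article):

```sql
-- Hypothetical jar path, class name, and table, for illustration only.
ADD JAR /usr/local/hive/lib/my_udfs.jar;
CREATE TEMPORARY FUNCTION my_lower AS 'com.example.hive.udf.MyLower';
-- Use the UDF like any built-in function:
SELECT my_lower(name) FROM employees;
```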

1.2.2 Shortcomings
1. HQL's expressive power is limited
(1) It cannot express iterative algorithms
(2) It is not well suited to data mining
2. Hive's efficiency is relatively low
(1) The MapReduce jobs Hive generates automatically are usually not intelligent enough
(2) Hive is hard to tune, and tuning is coarse-grained


1.3 Hive architecture principles


1. User interface: Client
CLI (Hive shell), JDBC/ODBC (Java access to Hive), WebUI (browser access to Hive)
2. Metadata: Metastore
Metadata includes: the table name, the database the table belongs to (default is "default"), the table owner, column/partition fields, the table type (whether it is an external table), the directory where the table data resides, and so on.
By default, metadata is stored in the bundled Derby database; using MySQL to store the Metastore is recommended.
3. Hadoop
HDFS is used for storage and MapReduce for computation.
4. Driver
(1) Parser (SQL Parser): converts the SQL string into an abstract syntax tree (AST); this step is usually done with a third-party tool library such as ANTLR. The AST is then analyzed, e.g. whether the tables and fields exist and whether the SQL contains semantic errors.
(2) Compiler (Physical Plan): compiles the AST into a logical execution plan.
(3) Optimizer (Query Optimizer): optimizes the logical execution plan.
(4) Executor (Execution): converts the logical execution plan into a physical plan that can run; for Hive, this means MR/Spark jobs.
Hive operating mechanism:
Hive provides a series of interactive interfaces to users. After receiving a user's instruction (SQL), it uses its own Driver, combined with the metadata (Metastore), to translate the instruction into MapReduce jobs, submits them to Hadoop for execution, and finally returns the execution results to the user's interactive interface.
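You can observe what the Driver produces by prefixing a query with EXPLAIN, which prints the compiled execution plan (the stages of the MapReduce job) without running it. The table name below is illustrative:

```sql
-- Shows the plan produced by Hive's parser, compiler, and optimizer
-- without executing the query (table name is illustrative).
EXPLAIN
SELECT category, COUNT(*)
FROM access_log
GROUP BY category;
```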


1.4 Comparing Hive with Databases


Because Hive uses HQL (Hive Query Language), a query language similar to SQL, it is easy to mistake Hive for a database. In fact, structurally, apart from having similar query languages, Hive and databases have nothing else in common. This section describes the differences between Hive and databases from several angles. Databases can be used in online applications, while Hive is designed for data warehousing; keeping this in mind helps in understanding Hive's characteristics from an application perspective.
1.4.1 Query language
Because SQL is widely used in data warehouses, the SQL-like query language HQL was designed specifically around Hive's characteristics. Developers familiar with SQL can easily use Hive for development.
1.4.2 Data storage location
Hive is built on top of Hadoop, and all Hive data is stored in HDFS. A database, on the other hand, can store data on block devices or in the local file system.
1.4.3 Data updates
Because Hive is designed for data warehouse applications, whose content is read frequently and written rarely, rewriting data is not recommended in Hive; all data is fixed at load time. Data in a database, however, usually needs frequent modification, so you can add data with INSERT INTO ... VALUES and modify it with UPDATE ... SET.
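A sketch of the contrast: in Hive, data typically arrives via bulk loads rather than row-level changes (the file path and table name below are illustrative):

```sql
-- Typical Hive usage: bulk-load files instead of row-level updates.
-- (File path and table name are illustrative.)
LOAD DATA LOCAL INPATH '/tmp/logs/2020-02-01.log' INTO TABLE access_log;

-- In a traditional database you would instead write, e.g.:
--   INSERT INTO access_log VALUES (...);
--   UPDATE access_log SET status = 200 WHERE id = 1;
```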
1.4.4 Indexes
Hive does not process the data in any way while loading it, and does not even scan it, so it builds no indexes on any keys in the data. When Hive needs to access specific values that satisfy a condition, it must brute-force scan the entire data set, so access latency is high. Thanks to MapReduce, Hive can access data in parallel, so even without indexes Hive still shows an advantage when accessing large volumes of data. A database usually builds indexes on one or a few columns, so it can access a small amount of data matching specific conditions very efficiently and with low latency. This high access latency is why Hive is not suitable for online data queries.
1.4.5 Execution
Most queries in Hive are executed through the MapReduce framework provided by Hadoop, while a database usually has its own execution engine.
1.4.6 Execution latency
When Hive queries data, it has no indexes and must scan the entire table, so latency is high. Another factor that makes Hive's execution latency high is the MapReduce framework: since MapReduce itself has high latency, Hive queries executed via MapReduce also have high latency. In contrast, a database's execution latency is low. Of course, this low latency is conditional on the data scale being small; once the data grows beyond what the database can handle, Hive's parallel computation clearly shows its advantage.
1.4.7 Scalability
Because Hive is built on Hadoop, Hive's scalability is the same as Hadoop's (the world's largest Hadoop cluster was at Yahoo!, at around 4,000 nodes in 2009). Databases, constrained by strict ACID semantics, have very limited scalability; even Oracle, currently the most advanced parallel database, can theoretically scale to only about 100 nodes.
1.4.8 Data scale
Since Hive is built on a cluster and can take advantage of MapReduce's parallel computation, it can support very large-scale data; correspondingly, a database can only support data of a smaller scale.


2. Hive installation


2.1 Download the YUM repository

       cd /usr/local/
       mkdir mysql
       cd mysql/
       yum -y install wget
       wget http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm

2.2 Install the YUM repository

        yum -y install mysql57-community-release-el7-10.noarch.rpm

2.3 Install Database

        yum -y install mysql-community-server

2.4 After the installation completes, restart MySQL

        systemctl restart mysqld

2.5 Find the initial password

        grep "password" /var/log/mysqld.log

2.6 Log in to the database with the password found above

        mysql -uroot -p

2.7 Change the password (log in with the initial password; at this point you cannot do anything else, because MySQL requires the default password to be changed before any other database commands can be run)

        set global validate_password_policy=0;
        set global validate_password_policy=LOW;
        Note: there are four password policy levels:
        ① OFF ② LOW ③ MEDIUM ④ STRONG
        SET GLOBAL validate_password_length=6;
        ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';

2.8 Type exit to quit, then log in again with the new password 123456 to confirm the change succeeded
2.9 Enable remote connections

       use mysql;
       update user set Host='%' where User='root';
       flush privileges;

2.10 Configuring Hive
2.10.1 Create a hive directory, then copy the installation package into it

       mkdir /usr/local/hive
       cd /usr/local/hive

2.10.2 Extract the Hive archive into /usr/local/hive

       tar -zxvf apache-hive-1.2.2-bin.tar.gz 

2.10.3 modify environment variables

       vim /etc/profile
       export JAVA_HOME=/usr/local/java/jdk1.8.0_211
       export HADOOP_HOME=/usr/local/hadoop/hadoop-2.9.2
       export HIVE_HOME=/usr/local/hive/apache-hive-1.2.2-bin/
       export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin
       source /etc/profile

2.10.4 Hive core configuration file

       cd /usr/local/hive/apache-hive-1.2.2-bin/conf/
       vim hive-site.xml
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
<description>password to use against metastore database</description>
</property>
</configuration>

2.10.5 Set the Hadoop path for Hive

       cp hive-env.sh.template hive-env.sh
       vim hive-env.sh
       HADOOP_HOME=/usr/local/hadoop/hadoop-2.9.2/

2.11 Copy the MySQL JDBC connector jar into Hive's lib directory

       cd /usr/local/hive/apache-hive-1.2.2-bin/lib/

2.12 Start Hive

       cd /usr/local/hive/apache-hive-1.2.2-bin/bin/
       ./hiveserver2

2.13 Open another window and enter Hive's bin directory

       cd /usr/local/hive/apache-hive-1.2.2-bin/bin/
       ./beeline -u jdbc:hive2://192.168.222.101:10000 -n root


Origin blog.csdn.net/weixin_45553177/article/details/104194465