The transition from Hive to Spark SQL

Using SQLContext

Create a Scala project and add a main object, SQLContextApp:

package com.yy.spark

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

/**
 * Using SQLContext (Spark 1.x API)
 */
object SQLContextApp extends App {

  val path = args(0)

  //1) Create the corresponding contexts
  val sparkConf = new SparkConf()
  //In test/production, AppName and Master are specified via the submit script; for local development you can set them in code:
  //sparkConf.setAppName("SQLContextApp").setMaster("local[2]")
  val sparkContext = new SparkContext(sparkConf)
  val sqlContext = new SQLContext(sparkContext)

  //2) Processing: read the JSON file and inspect it
  val people = sqlContext.read.format("json").load(path)
  people.printSchema()
  people.show()

  //3) Release resources
  sparkContext.stop()
}

Submitting the Spark application to the server environment

Run the following command on the server:

$ spark-submit \
--class com.yy.spark.SQLContextApp \
--master local[2] \
/home/hadoop/lib/sparksql-project-1.0.jar \
/home/hadoop/app/spark-2.2.0-bin-hadoop2.6/examples/src/main/resources/people.json
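
If the submit succeeds, the schema and data of people.json (the three-record sample file shipped with Spark) are printed; the output should look roughly like this (exact formatting may vary by version):

root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)

+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+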

Running it from a shell script

1) Create a shell file sqlcontext.sh and paste the submit command above into it:

$ vim sqlcontext.sh

spark-submit \
--name SQLContextApp \
--class com.yy.spark.SQLContextApp \
--master local[2] \
/home/hadoop/lib/sparksql-project-1.0.jar \
/home/hadoop/app/spark-2.2.0-bin-hadoop2.6/examples/src/main/resources/people.json

2) Make it executable:

$ chmod u+x sqlcontext.sh

3) Run it:

$ ./sqlcontext.sh

Using HiveContext

Using HiveContext does not require a full Hive environment to be installed; you only need to copy hive-site.xml from Hive's conf directory into Spark's conf directory:

$ cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf
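
For reference, hive-site.xml mainly tells Spark where the Hive metastore lives. A minimal sketch, assuming a MySQL-backed metastore; the database name, user, and password here are hypothetical and your actual file will differ:

<configuration>
  <!-- JDBC connection to the (assumed MySQL) metastore database -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive_metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
</configuration>

This is also why the MySQL driver jar is passed to spark-submit later via --jars.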

Create HiveContextApp with the following code:

package com.yy.spark

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Using HiveContext (Spark 1.x API)
 */
object HiveContextApp extends App {

  //1) Create the corresponding contexts
  val sparkConf = new SparkConf()
  val sparkContext = new SparkContext(sparkConf)
  val hiveContext = new HiveContext(sparkContext)

  //2) Processing: load the Hive table emp and display it
  hiveContext.table("emp").show()

  //3) Release resources
  sparkContext.stop()

}
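
The code above assumes a Hive table named emp already exists. If you need a stand-in, a hypothetical minimal version (only empno, ename, and salary are referenced later in this post) could be created in the Hive or spark-sql CLI along these lines:

CREATE TABLE IF NOT EXISTS emp (
  empno  INT,
  ename  STRING,
  salary DOUBLE
) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';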

Build the project with Maven from the project root directory:

mvn package -Dmaven.test.skip=true
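
For the HiveContext code to compile, the project needs the spark-hive module in addition to spark-sql. A sketch of the relevant pom.xml dependencies, assuming Spark 2.2.0 with Scala 2.11 (the provided scope keeps them out of the packaged jar since the cluster supplies them at runtime):

<!-- Spark SQL core -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <version>2.2.0</version>
  <scope>provided</scope>
</dependency>
<!-- Hive support for HiveContext / enableHiveSupport() -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-hive_2.11</artifactId>
  <version>2.2.0</version>
  <scope>provided</scope>
</dependency>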

Upload the jar from the project's target directory to the lib directory on the server (the jar I built is sparksql-project-1.0.jar), and upload the MySQL driver mysql-connector-java-5.1.45.jar to the software directory.

Edit the submit script hivecontext.sh:

$ vim hivecontext.sh

spark-submit \
--class com.yy.spark.HiveContextApp \
--master local[2] \
--jars /home/hadoop/software/mysql-connector-java-5.1.45.jar \
/home/hadoop/lib/sparksql-project-1.0.jar

Make it executable and run it:

$ chmod u+x hivecontext.sh
$ ./hivecontext.sh

Using SparkSession

Here we take reading from Hive as the example.

Copy hive-site.xml from Hive's conf directory into Spark's conf directory:

$ cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf

Create SparkSessionApp with the following code:

package com.yy.spark

import org.apache.spark.sql.SparkSession

/**
 * Using SparkSession (Spark 2.x API)
 */
object SparkSessionApp extends App {

  //Reading a local file
//  var path = args(0)
//  val spark = SparkSession.builder().appName("SparkSessionApp").master("local[2]").getOrCreate()
//  val people = spark.read.json(path)
//  people.show()
//  spark.stop()

  //Reading from Hive
  val sparkHive = SparkSession.builder().appName("HiveSparkSessionApp").master("local[2]").enableHiveSupport().getOrCreate()
  //Load the Hive table
  val emp = sparkHive.table("emp")
  emp.show()
  //Release resources
  sparkHive.stop()

}

Build the project with Maven from the project root directory:

mvn package -Dmaven.test.skip=true

Upload the jar from the project's target directory to the lib directory on the server (the jar I built is sparksql-project-1.0.jar), and upload the MySQL driver mysql-connector-java-5.1.45.jar to the software directory.

Edit the submit script hivecontext.sh, changing the class to SparkSessionApp:

$ vim hivecontext.sh

spark-submit \
--class com.yy.spark.SparkSessionApp \
--master local[2] \
--jars /home/hadoop/software/mysql-connector-java-5.1.45.jar \
/home/hadoop/lib/sparksql-project-1.0.jar

Make it executable and run it:

$ chmod u+x hivecontext.sh
$ ./hivecontext.sh

Using spark-shell & spark-sql

If you want to work with Hive, the prerequisite is again to copy hive-site.xml into Spark's conf directory:

cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf

spark-shell

$ ./spark-shell --master local[2] --jars ~/software/mysql-connector-java-5.1.45.jar

# List all tables in Hive
scala> spark.sql("show tables").show
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
| default|      emp|      false|
+--------+---------+-----------+

# Show the data in the emp table
scala> spark.sql("select * from emp").show

spark-sql

With spark-sql, SQL statements can be typed directly in the console.

$ ./spark-sql --master local[2] --jars ~/software/mysql-connector-java-5.1.45.jar

# List all tables in Hive
spark-sql> show tables;

# Show the data in the emp table
spark-sql> select * from emp;

Using thriftserver & beeline

Start the thriftserver:

$ cd $SPARK_HOME/sbin
$ ./start-thriftserver.sh --master local[2] --jars ~/software/mysql-connector-java-5.1.45.jar

The default port is 10000; it can be changed by passing a parameter:

./sbin/start-thriftserver.sh \
  --master local[2] \
  --jars ~/software/mysql-connector-java-5.1.45.jar \
  --hiveconf hive.server2.thrift.port=14000
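
A quick way to confirm the server came up and is listening on the expected port (assuming a Linux host):

$ jps                          # the thrift server appears as a SparkSubmit process
$ netstat -nlt | grep 10000    # or 14000 if you overrode the port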

Start beeline; -u is the thriftserver JDBC address and -n is the user name on the server:

$ cd $SPARK_HOME/bin
$ ./beeline -u jdbc:hive2://localhost:10000 -n hadoop
0: jdbc:hive2://localhost:10000> show tables;
0: jdbc:hive2://localhost:10000> select * from emp;

Differences between thriftserver and spark-shell / spark-sql

1) Each spark-shell or spark-sql session starts its own Spark application.
2) With thriftserver, no matter how many clients (beeline or code) connect, there is only one Spark application; server resources are requested just once when the application starts. This also solves the data-sharing problem: multiple clients can share the same data (see the sketch below).
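
A small illustration of the sharing, using the emp table from above (a sketch): cache the table from one beeline session, and any other client connected to the same thriftserver reads the same in-memory copy.

-- beeline session 1
0: jdbc:hive2://localhost:10000> cache table emp;

-- beeline session 2 (a separate connection to the same server)
0: jdbc:hive2://localhost:10000> select * from emp;    -- served from the shared cache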

Programmatic access via JDBC

When developing against JDBC, start the thriftserver first.

Add the dependency to pom.xml:

<dependency>
  <groupId>org.spark-project.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>1.2.1.spark2</version>
</dependency>

The JDBC access code is as follows:

package com.yy.spark

import java.sql.DriverManager

/**
 * Access via JDBC (through the thriftserver)
 */
object SparkSQLThriftServerApp extends App {

  Class.forName("org.apache.hive.jdbc.HiveDriver")

  val conn = DriverManager.getConnection("jdbc:hive2://hadoop000:10000", "hadoop", "")
  val pstmt = conn.prepareStatement("select empno,ename,salary from emp")
  val rs = pstmt.executeQuery()
  while (rs.next()) {
    println("empno:" + rs.getInt("empno") + ", ename:"+rs.getString("ename")
      + ", salary:"+rs.getDouble("salary"))
  }
  rs.close()
  pstmt.close()
  conn.close()
}
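
Because this is a plain JDBC client, it runs as an ordinary JVM application (for example, straight from the IDE) rather than through spark-submit, as long as the thriftserver from the previous section is up.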
