SparkSQL (3): Accessing a Hive Table over JDBC

I. Goal:

Access a Hive table through JDBC.

II. Steps:

1. Prerequisite

Start the Thrift Server:

sbin/start-thriftserver.sh \
  --master local[2] \
  --jars /opt/datas/mysql-connector-java-5.1.27-bin.jar \
  --hiveconf hive.server2.thrift.port=14000
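Before writing any code, you can verify the server is reachable with the beeline client that ships with Spark. This is a sketch, not part of the original post; the hostname, port, and user below simply mirror the start command above, so adjust them for your own cluster:

```shell
# Connect with beeline; the port in the JDBC URL must match the
# hive.server2.thrift.port passed when starting the Thrift Server (14000 here).
bin/beeline -u jdbc:hive2://bigdata.ibeifeng.com:14000 -n bigdata.ibeifeng.com

# At the beeline prompt, a quick smoke test:
#   show databases;
```

If beeline connects and lists your databases, the JDBC code below should work against the same URL.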

2. Add the hive-jdbc dependency

    <dependency>
      <groupId>org.spark-project.hive</groupId>
      <artifactId>hive-jdbc</artifactId>
      <version>0.13.1</version>
    </dependency>

3. Code

package SparkSQL

import java.sql.DriverManager

/**
  * Access a Hive table over JDBC through the Spark Thrift Server.
  */
object SparkSQLThriftServerApp {
  def main(args: Array[String]): Unit = {

    // Register the Hive JDBC driver
    Class.forName("org.apache.hive.jdbc.HiveDriver")

    // Connect to the Thrift Server started above on port 14000
    val conn = DriverManager.getConnection("jdbc:hive2://bigdata.ibeifeng.com:14000", "bigdata.ibeifeng.com", "")
    val pstmt = conn.prepareStatement("select empno, ename, sal from imooc.emp")
    val rs = pstmt.executeQuery()

    // Iterate the result set and print each row
    while (rs.next()) {
      println("empno:" + rs.getInt("empno") + ",ename:" + rs.getString("ename") +
        ",sal:" + rs.getDouble("sal"))
    }

    // Release resources in reverse order of acquisition
    rs.close()
    pstmt.close()
    conn.close()
  }
}
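One weakness of the straight-line close calls above: if the query throws, nothing gets closed at all. A hedged sketch of a safer pattern using `scala.util.Using` (Scala 2.13+) follows; the `StubResource` class is hypothetical, standing in for the real `Connection`/`PreparedStatement` so the cleanup behavior can be shown without a live Hive server:

```scala
import java.io.Closeable
import scala.collection.mutable.ArrayBuffer
import scala.util.Using

// Sketch (not from the original post): scala.util.Using closes its
// resources even when the body throws. StubResource is a stand-in for
// Connection / PreparedStatement, which are AutoCloseable in real JDBC code.
object CleanupSketch {
  val closed = ArrayBuffer[String]()

  class StubResource(name: String) extends Closeable {
    override def close(): Unit = closed += name
  }

  def run(): Unit =
    Using.resources(new StubResource("conn"), new StubResource("stmt")) { (conn, stmt) =>
      // execute the query and iterate the ResultSet here
    }

  def main(args: Array[String]): Unit = {
    run()
    // Resources are closed in reverse acquisition order: stmt, then conn.
    println(closed.mkString(","))
  }
}
```

In real code the same shape applies: pass the `Connection` and `PreparedStatement` to `Using.resources`, and both are guaranteed to close whether the query succeeds or fails.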

Reposted from blog.csdn.net/u010886217/article/details/82916492