Integrating Spark, Hive, and HBase



A business requirement calls for reading data from Hive and importing it into HBase. Below are the environment setup steps and the problems encountered along the way.

1. Reading Hive from Spark

Copy configuration files such as hive-site.xml and hdfs-site.xml into the project's resources directory so that Spark can find the metastore and HDFS settings.
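A minimal layout sketch, assuming a standard Maven project, so that the files land on the runtime classpath:

    src/main/resources/
        hive-site.xml
        hdfs-site.xml

With the configuration in place, a session with Hive support can query Hive tables directly: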

import org.apache.spark.sql.SparkSession

object hivesql {

  case class Record(key: Int, value: String)

  def main(args: Array[String]): Unit = {

    // warehouseLocation points to the default location for managed databases and tables
    val warehouseLocation = "spark-warehouse"

    val spark = SparkSession
      .builder()
      .appName("Spark Hive Example")
      .master("local[2]")
      .config("spark.sql.warehouse.dir", warehouseLocation)
      .enableHiveSupport()
      .getOrCreate()

    import spark.implicits._
    import spark.sql

//    sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
//    sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

    // Queries are expressed in HiveQL
    sql("SELECT * FROM test limit 10").show()

    spark.stop()
  }
}

Problem 1

Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
    ... 98 more
Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
    at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
    at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
    ... 100 more

The cause is that the Hive metastore service was not running, so Spark fell back to creating its own metastore connection and needed the MySQL driver on its classpath. Point Spark at a running metastore instead by adding the following to hive-site.xml:

<property>
    <name>hive.metastore.uris</name>
    <value>thrift://ip:9083</value>
    <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>

Then start the metastore service: hive --service metastore
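Alternatively, the metastore address can be passed when building the SparkSession, which avoids shipping a hive-site.xml with the project. A minimal sketch, assuming the same thrift://ip:9083 endpoint as above:

    // point Spark directly at the running metastore instead of relying on hive-site.xml
    val spark = SparkSession
      .builder()
      .appName("Spark Hive Example")
      .master("local[2]")
      .config("hive.metastore.uris", "thrift://ip:9083")
      .enableHiveSupport()
      .getOrCreate()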

Running the program again produced the following error:

    at hivesql.main(hivesql.scala)
Caused by: MetaException(message:java.lang.ClassNotFoundException Class org.openx.data.jsonserde.JsonSerDe not found)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:399)
    at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
    ... 66 more
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: org.openx.data.jsonserde.JsonSerDe
    at org.apache.hadoop.hive.ql.plan.TableDesc.getDeserializerClass(TableDesc.java:74)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.addColumnMetadataToConf(HiveTableScanExec.scala:99)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.<init>(HiveTableScanExec.scala:82)
    at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$4.apply(HiveStrategies.scala:99)
    at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$4.apply(HiveStrategies.scala:99)
    at org.apache.spark.sql.execution.SparkPlanner.pruneFilterProject(SparkPlanner.scala:93)
    at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$.apply(HiveStrategies.scala:95)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
    at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:79)
    at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:75)
    at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:84)
    at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:84)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2791)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:636)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:595)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:604)
    at hivesql$.main(hivesql.scala:33)
    at hivesql.main(hivesql.scala)
Caused by: java.lang.ClassNotFoundException: org.openx.data.jsonserde.JsonSerDe
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.hive.ql.plan.TableDesc.getDeserializerClass(TableDesc.java:71)
    ... 38 more

This is caused by the table using a third-party SerDe jar to parse JSON files.
The jar must be added to the project: either copy it in directly and add it as a library,
or install it into the local Maven repository and reference it as a dependency:

mvn install:install-file -Dfile=/Users/zenmen/Documents/json-serde-1.3.8-jar-with-dependencies.jar -DgroupId=com.hive.jsonserde -DartifactId=json-serde -Dversion=1.3.8 -Dpackaging=jar

[INFO] Scanning for projects...
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-install-plugin:2.4:install-file (default-cli) @ standalone-pom ---
[INFO] Installing /Users/zenmen/Documents/json-serde-1.3.8-jar-with-dependencies.jar to /Users/zenmen/.m2/repository/com/hive/jsonserde/json-serde/1.3.8/json-serde-1.3.8.jar
[INFO] Installing /var/folders/r7/mr4qcrzn6r73wkcwv01c_5f80000gn/T/mvninstall4253829782020752562.pom to /Users/zenmen/.m2/repository/com/hive/jsonserde/json-serde/1.3.8/json-serde-1.3.8.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.412 s
[INFO] Finished at: 2018-01-17T10:34:23+08:00
[INFO] Final Memory: 6M/155M
[INFO] ------------------------------------------------------------------------

Then configure the dependency in pom.xml as follows:

<dependency>
    <groupId>com.hive.jsonserde</groupId>
    <artifactId>json-serde</artifactId>
    <version>1.3.8</version>
</dependency>
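If bundling the SerDe into the application jar is not convenient, another option is to register it at runtime via Spark SQL's ADD JAR (the path below is a placeholder; it must be readable from the driver), or to ship it with --jars on spark-submit:

    // register the third-party SerDe jar for this session, then query as usual
    spark.sql("ADD JAR /path/to/json-serde-1.3.8-jar-with-dependencies.jar")
    spark.sql("SELECT * FROM test limit 10").show()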

After that, the hadoop-lzo codec turned out to be missing as well.
Fix it the same way: find the hadoop-lzo jar and add it manually, or add the corresponding Maven dependency, as in the sketch below.
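A sketch of the corresponding dependency, assuming the commonly used Twitter build of hadoop-lzo; note that it is published to Twitter's Maven repository rather than Maven Central, so a matching <repository> entry may be needed:

    <dependency>
        <groupId>com.hadoop.gplcompression</groupId>
        <artifactId>hadoop-lzo</artifactId>
        <version>0.4.20</version>
    </dependency>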

Problem 2:

java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path

Copying the lib files from Hadoop's native directory still did not fix it.
The official documentation explains why:
http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/NativeLibraries.html

The native hadoop library is supported on *nix platforms only. The library does not work with Cygwin or the Mac OS X platform.

So... the native library does not work on Mac OS X. Does that mean compiling it myself? Local Hive queries already run without problems, but as a programmer this feels incomplete; for lack of time, I'll leave it for later research.
The Windows equivalents of the Hadoop native libs are here:
https://github.com/steveloughran/winutils

The following article also touches on this issue:
http://blog.csdn.net/tterminator/article/details/51779689
