Common Problems with HBase Installation and Spark Integration

1. ZooKeeper integration
To run HBase with its bundled ZooKeeper, see the quickstart section of the HBase reference guide: https://hbase.apache.org/book.html#quickstart
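For context, a sketch of the relevant switch per the HBase reference guide: whether HBase manages its own ZooKeeper is controlled by HBASE_MANAGES_ZK in conf/hbase-env.sh; true (the default) means HBase starts and stops a ZooKeeper instance itself.

# conf/hbase-env.sh
export HBASE_MANAGES_ZK=true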
2. Spark/Hadoop integration
The HBase jars need to be placed under Hadoop's lib directory,
or added to the spark-submit command via a parameter such as --driver-class-path /opt/hadoop-2.7.7/lib/*:/opt/hbase-2.1.2/conf/
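A minimal sketch of such a submission (the application jar myapp.jar and main class com.example.HBaseWriter are hypothetical placeholders):

spark-submit \
  --class com.example.HBaseWriter \
  --driver-class-path "/opt/hadoop-2.7.7/lib/*:/opt/hbase-2.1.2/conf/" \
  myapp.jar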

3. At this point all sorts of strange problems appear, e.g. classes not being found
These are generally jar-conflict problems; dealing with the conflicting jars fixes them. The usual culprits (one build-time fix is sketched after this list):
netty
jackson
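As one illustration, a sketch assuming an sbt build (the hbase-client coordinates match the HBase 2.1.2 used in this post): exclude the conflicting transitive dependencies so the versions Spark ships win.

libraryDependencies += "org.apache.hbase" % "hbase-client" % "2.1.2" excludeAll(
  ExclusionRule(organization = "io.netty"),                   // use Spark's netty
  ExclusionRule(organization = "com.fasterxml.jackson.core")  // use Spark's jackson
)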

4. Spark execution fails with java.lang.ClassNotFoundException: org.apache.htrace.core.HTraceConfiguration
Put /hadoop-2.7.7/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar into Spark's jars directory.
Presumably, if the same error shows up on the Hadoop side instead, /hadoop-2.7.7/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar needs to go into hadoop/lib.
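For example (a sketch, assuming SPARK_HOME points at the Spark installation):

cp /hadoop-2.7.7/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar $SPARK_HOME/jars/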

5. org.apache.hadoop.hbase.TableNotFoundException: hbase:test

Drop the hbase: prefix from hbase:test. hbase: is the system namespace (home of tables such as hbase:meta), so hbase:test asks for a table named test inside the hbase namespace; a user table created in the default namespace is addressed by its bare name, test.
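In a Spark write configuration like the ones later in this post, that means setting the bare name (sketch):

sc.hadoopConfiguration.set(TableOutputFormat.OUTPUT_TABLE, "test")  // not "hbase:test"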

6. Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set in JobConf (reposted)
When a DataFrame obtained from SparkSQL is mapped to an RDD and the data is then saved directly to HBase, the save fails with:

Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set in JobConf.

The call being used was saveAsNewAPIHadoopDataset.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.mapreduce.Job


Switching to the old-API saveAsHadoopDataset, however, works:


// Old MapReduce API: describe the write with a JobConf, then saveAsHadoopDataset.
val conf = HBaseConfiguration.create()
val tableName = "test_t1"
val jobConf = new JobConf(conf, this.getClass)
jobConf.set("hbase.zookeeper.quorum", "10.172.10.169,10.172.10.168,10.172.10.170")
jobConf.setOutputKeyClass(classOf[ImmutableBytesWritable])
jobConf.setOutputValueClass(classOf[Put])
jobConf.setOutputFormat(classOf[org.apache.hadoop.hbase.mapred.TableOutputFormat])
jobConf.set(TableOutputFormat.OUTPUT_TABLE, tableName)

rdd1.map { x =>
  val put = new Put(Bytes.toBytes(x._1))
  put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(x._2))
  (new ImmutableBytesWritable, put)  // TableOutputFormat ignores the key, so an empty one is fine
}.saveAsHadoopDataset(jobConf)

It was not obvious at first where the error came from. Comparing the two versions reveals the cause: the failing code passed the old-API JobConf to saveAsNewAPIHadoopDataset instead of a Configuration built from sc.hadoopConfiguration through a mapreduce Job. The new API cannot be driven by the old-style configuration: setOutputFormat on a JobConf only sets the old-API key, so the new-API job falls back to its default TextOutputFormat, whose output check throws the "Output directory not set" InvalidJobConfException.
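Reconstructed from that description, the failing call presumably had this shape (a sketch, not the original code):

rdd1.map { x =>
  val put = new Put(Bytes.toBytes(x._1))
  put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(x._2))
  (new ImmutableBytesWritable, put)
}.saveAsNewAPIHadoopDataset(jobConf)  // old-API JobConf handed to the new-API saver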


// sc.hadoopConfiguration.set("hbase.zookeeper.quorum","10.172.10.169,10.172.10.168,10.172.10.170")
//// sc.hadoopConfiguration.set("zookeeper.znode.parent","/hbase")
// sc.hadoopConfiguration.set(TableOutputFormat.OUTPUT_TABLE,"test_t1")
// var job = new Job(sc.hadoopConfiguration)
// job.setOutputKeyClass(classOf[ImmutableBytesWritable])
// job.setOutputValueClass(classOf[Result])
// job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])
//
// rdd1.map(
// x => {
// var put = new Put(Bytes.toBytes(x._1))
// put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(x._2))
// (new ImmutableBytesWritable,put)
// }
// ).saveAsNewAPIHadoopDataset(job.getConfiguration)

7. Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder

Put /hadoop-2.7.7/lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar into Spark's jars directory, the same way as in problem 4.
Presumably, if the error shows up on the Hadoop side instead, the jar needs to go into hadoop/lib.


Reposted from www.cnblogs.com/4ttty/p/10382807.html