Spark reads HBase data (the newAPIHadoopRDD method)

There are many ways to read HBase data with Spark. Today I put together a simple demo using Spark's built-in newAPIHadoopRDD method. The code is very simple, with brief comments inline.

For writing to HBase from Spark, see the previous two articles: https://blog.csdn.net/xianpanjia4616/article/details/85301998 and https://blog.csdn.net/xianpanjia4616/article/details/80738961
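For reference, below is a minimal, self-contained sketch of one common write path: building Put objects and saving them through TableOutputFormat with saveAsNewAPIHadoopDataset. The table name, column family, qualifier, and ZooKeeper quorum are placeholders, not necessarily what those articles use.

package hbase

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.SparkSession

object WriteHbaseSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("WriteHbase").master("local[*]").getOrCreate()

    val conf = HBaseConfiguration.create()
    conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3")  // placeholder quorum
    conf.set(TableOutputFormat.OUTPUT_TABLE, "test")   // placeholder table name
    val job = Job.getInstance(conf)
    job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

    // Each element becomes one Put; TableOutputFormat ignores the key half of the pair
    spark.sparkContext.parallelize(Seq(("row1", "jason"), ("row2", "spark"))).map { case (k, v) =>
      val put = new Put(Bytes.toBytes(k))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes(v))
      (new ImmutableBytesWritable, put)
    }.saveAsNewAPIHadoopDataset(job.getConfiguration)

    spark.stop()
  }
}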

package hbase

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.log4j.{Level, Logger}
import util.PropertiesScalaUtils
import org.apache.spark.sql.SparkSession

/**
  * Spark reads data from HBase via newAPIHadoopRDD.
  */
object ReadHbase {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty").setLevel(Level.WARN)

    val spark = SparkSession.builder().appName("ReadHbase").master("local[*]").getOrCreate()

    // ZooKeeper quorum from the project's own PropertiesScalaUtils helper;
    // the method and key names here are assumptions, swap in your own config loading
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", PropertiesScalaUtils.loadProperties("zk_hbase"))
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "test") // example table name

    // newAPIHadoopRDD scans the table as an RDD of (row key, Result) pairs
    val hbaseRDD = spark.sparkContext.newAPIHadoopRDD(hbaseConf,
      classOf[TableInputFormat], classOf[ImmutableBytesWritable], classOf[Result])

    hbaseRDD.foreach { case (_, result) =>
      val rowKey = Bytes.toString(result.getRow)
      val name = Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("name"))) // example cf/qualifier
      println(s"rowKey: $rowKey, name: $name") // prints on the executors
    }
    spark.stop()
  }
}

Origin: blog.csdn.net/xianpanjia4616/article/details/89157616