Spark + HBase Environment Setup


I. Environment

Spark: 2.1.0

Hadoop: 2.6.0

HBase: 1.2.6

Development environment: Android Studio

II. HBase Overview

HBase is a distributed, column-oriented open-source database. The technology originates from the Google paper "Bigtable: A Distributed Storage System for Structured Data" by Fay Chang. Just as Bigtable builds on the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop. HBase is a subproject of Apache's Hadoop project. Unlike a typical relational database, HBase is suited to storing unstructured data; another difference is that it uses a column-based rather than row-based model.

The previous post covered how Spark reads files from Hadoop. In practice, though, processed data usually ends up in a database: for real-time queries you cannot re-read and parse files on every request, which is far too slow. HBase is exactly such a non-relational database built for real-time queries. Unlike Hive, it stores data in key-value form, so where Hive has to scan an entire table, HBase can locate data quickly by row key and serve real-time lookups.

 

III. HBase Configuration

1. Download HBase

http://mirrors.hust.edu.cn/apache/hbase/

Using the stable release from the official site is recommended.

2. Extract the downloaded HBase archive to a directory of your choice (here /home/hbase), then add the paths to your environment variables in /etc/profile. The full configuration:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export SPARK_HOME=/home/spark/spark-2.1.0-bin-hadoop2.6
export HADOOP_HOME=/home/hadoop/hadoop-2.6.1
export HBASE_HOME=/home/hbase/hbase-1.2.6
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PYTHONPATH=$SPARK_HOME/python
export SPARK_SCALA_VERSION=2.11
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$HBASE_HOME/bin

3. Configure $HBASE_HOME/conf/hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,worker</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hbase/zookeeper</value>
  </property>
</configuration>

Notes:

1) hbase.rootdir is where HBase is wired to Hadoop. HBase is a project built on top of Hadoop, so configure Hadoop before configuring HBase.

2) hbase.cluster.distributed is the flag for distributed (cluster) mode. It defaults to false; set it to true to enable distributed mode. When false, starting HBase runs both HBase and ZooKeeper in a single JVM.

3) hbase.zookeeper.quorum: important and mandatory. It is the comma-separated list of servers running ZooKeeper; it must be set in distributed mode and defaults to localhost. HBase clients also need this value to reach ZooKeeper.

4) Note that the addresses used in the Spark, Hadoop, and HBase configurations must be the machines' real hostnames, not IPs, and not arbitrary names that you simply map in /etc/hosts. HBase in particular resolves the configured address to an IP, looks up the hostname for that IP, and then looks that hostname up again in /etc/hosts; if the lookup fails, it errors out. To save yourself trouble, always use the correct machine name, and make sure that hostname maps to exactly one IP. I once accidentally also mapped it to 127.0.0.1, which made HBase listen only on localhost, so no other node could reach it, causing all sorts of problems (a quick resolution check is sketched below).
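As a quick sanity check of the hostname round trip described above, the short Java sketch below (using the hostname master from this post's configuration as an assumption) prints every address a name resolves to, making a stray 127.0.0.1 mapping easy to spot:

import java.net.InetAddress;

public class HostnameCheck {
    public static void main(String[] args) throws Exception {
        // "master" is the node name used in this post; replace with your own hostname
        for (InetAddress addr : InetAddress.getAllByName("master")) {
            // a loopback address here means other nodes will not be able to reach this host
            System.out.println(addr.getHostAddress()
                    + (addr.isLoopbackAddress() ? "   <-- loopback, remove this mapping" : ""));
        }
    }
}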

4. Configure regionservers. All user-data and metadata requests, after Region lookup, ultimately land on a RegionServer, which performs the actual reads and writes. Again, use machine names rather than IPs:

master
worker 

5. Configure hbase-env.sh:

export HBASE_MANAGES_ZK=true

Note: this property makes HBase use its bundled ZooKeeper. I find the bundled one more convenient because it avoids version-compatibility concerns. If you prefer to run your own ZooKeeper, set this property to false, then download and configure ZooKeeper yourself.

6. Test HBase

1) Start Hadoop.

2) Run hdfs dfsadmin -safemode leave to take the Hadoop NameNode out of safe mode. If safe mode stays on, the NameNode is read-only and HBase cannot write to HDFS; the HMaster then reports ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet, and the HMaster log shows: util.FSUtils: Waiting for dfs to exit safe mode...

3) Start HBase: go to $HBASE_HOME/bin and run ./start-hbase.sh. Once it is up, check the processes with jps, then run hbase shell to test, for example creating a table with a column family via create 'test', 'cf'. If that succeeds, the HBase setup is working.

  

IV. Calling HBase from Spark

1. Create a Java project in Android Studio (named hbase here). Into the project's lib directory, import all the jars from spark-2.1.0-bin-hadoop2.6\jars, plus the following from hbase-1.2.6\lib: hbase-protocol-1.2.6.jar, hbase-common-1.2.6.jar, htrace-core-3.1.0-incubating.jar, hbase-server-1.2.6.jar, hbase-client-1.2.6.jar, and metrics-core-2.2.0.jar. These are the jars needed for Spark + HBase development.

2. Configure spark-env.sh so that the HBase jars are on the classpath:

export SPARK_CLASSPATH=$HBASE_HOME/lib/hbase-protocol-1.2.6.jar:$HBASE_HOME/lib/hbase-common-1.2.6.jar:$HBASE_HOME/lib/htrace-core-3.1.0-incubating.jar:$HBASE_HOME/lib/hbase-server-1.2.6.jar:$HBASE_HOME/lib/hbase-client-1.2.6.jar:$HBASE_HOME/lib/metrics-core-2.2.0.jar:$SPARK_CLASSPATH

Note: Spark runs in distributed mode, and every worker node needs the same jars the driver program needs. So if Spark is going to call HBase, the HBase jars must be made available to the workers as well (see below for a property-based alternative to SPARK_CLASSPATH).
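Note that SPARK_CLASSPATH has long been deprecated; in Spark 2.x the preferred mechanism is the extraClassPath properties. A rough programmatic equivalent (a sketch only; the jar paths are assumptions mirroring the list above and must exist on every node) would be:

import org.apache.spark.SparkConf;

public class HBaseClasspath {
    public static void main(String[] args) {
        // same HBase jars as in spark-env.sh, ':'-separated
        String hbaseJars = "/home/hbase/hbase-1.2.6/lib/hbase-client-1.2.6.jar:"
                + "/home/hbase/hbase-1.2.6/lib/hbase-common-1.2.6.jar:"
                + "/home/hbase/hbase-1.2.6/lib/hbase-server-1.2.6.jar:"
                + "/home/hbase/hbase-1.2.6/lib/hbase-protocol-1.2.6.jar:"
                + "/home/hbase/hbase-1.2.6/lib/htrace-core-3.1.0-incubating.jar:"
                + "/home/hbase/hbase-1.2.6/lib/metrics-core-2.2.0.jar";
        SparkConf conf = new SparkConf()
                .set("spark.driver.extraClassPath", hbaseJars)    // classpath for the driver JVM
                .set("spark.executor.extraClassPath", hbaseJars); // classpath entries expected on each worker
        System.out.println(conf.toDebugString());
    }
}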

3. On the Windows machine, add the following to the hosts file:

10.14.66.215    master
10.14.66.127    worker

The reason was explained above: HBase takes the peer's hostname and looks up the corresponding IP in the hosts file, so this mapping is required.

4. Write the Spark program as follows:

package com.example2;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
import org.apache.hadoop.hbase.protobuf.generated.FilterProtos;
import org.apache.hadoop.hbase.util.Base64;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import scala.Tuple2;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;


public class HBaseTest {
    private static final String TABLE_NAME = "scores";

    public static Configuration conf = null;
    public HTable table = null;
    public HBaseAdmin admin = null;


    static {
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hbase.zookeeper.quorum", "master,worker");
        conf.set("hbase.master", "10.14.66.215:60000");
        System.out.println(conf.get("hbase.zookeeper.quorum"));
    }

    /**
     * create table
     */
    public static void createTable(String tableName, String[] familys)
            throws Exception {
        HBaseAdmin admin = new HBaseAdmin(conf);
        if (admin.tableExists(tableName)) {
            System.out.println("table already exists!");
        } else {
            HTableDescriptor tableDesc = new HTableDescriptor(tableName);
            for (int i = 0; i < familys.length; i++) {
                tableDesc.addFamily(new HColumnDescriptor(familys[i]));
            }
            admin.createTable(tableDesc);
            System.out.println("create table " + tableName + " ok.");
        }
    }

    /**
     * delete table
     */
    public static void deleteTable(String tableName) throws Exception {
        try {
            HBaseAdmin admin = new HBaseAdmin(conf);
            admin.disableTable(tableName);
            admin.deleteTable(tableName);
            System.out.println("delete table " + tableName + " ok.");
        } catch (MasterNotRunningException e) {
            e.printStackTrace();
        } catch (ZooKeeperConnectionException e) {
            e.printStackTrace();
        }
    }

    /**
     * insert data
     */
    public static void addRecord(String tableName, String rowKey,
                                 String family, String qualifier, String value) throws Exception {
        try {
            HTable table = new HTable(conf, tableName);
            Put put = new Put(Bytes.toBytes(rowKey));
            put.add(Bytes.toBytes(family), Bytes.toBytes(qualifier),
                    Bytes.toBytes(value));
            table.put(put);
            System.out.println("insert recored " + rowKey + " to table "
                    + tableName + " ok.");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * delete record
     */
    public static void delRecord(String tableName, String rowKey)
            throws IOException {
        HTable table = new HTable(conf, tableName);
        List<Delete> list = new ArrayList<Delete>();
        Delete del = new Delete(rowKey.getBytes());
        list.add(del);
        table.delete(list);
        System.out.println("del recored " + rowKey + " ok.");
    }

    /**
     * query record
     */
    public static void getOneRecord(String tableName, String rowKey)
            throws IOException {
        HTable table = new HTable(conf, tableName);
        Get get = new Get(rowKey.getBytes());
        Result rs = table.get(get);
        for (KeyValue kv : rs.raw()) {
            System.out.print(new String(kv.getRow()) + " ");
            System.out.print(new String(kv.getFamily()) + ":");
            System.out.print(new String(kv.getQualifier()) + " ");
            System.out.print(kv.getTimestamp() + " ");
            System.out.println(new String(kv.getValue()));
        }
    }

    /**
     * show data
     */
    public static void getAllRecord(String tableName) {
        try {
            HTable table = new HTable(conf, tableName);
            Scan s = new Scan();
            ResultScanner ss = table.getScanner(s);
            for (Result r : ss) {
                for (KeyValue kv : r.raw()) {
                    System.out.print(new String(kv.getRow()) + " ");
                    System.out.print(new String(kv.getFamily()) + ":");
                    System.out.print(new String(kv.getQualifier()) + " ");
                    System.out.print(kv.getTimestamp() + " ");
                    System.out.println(new String(kv.getValue()));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    static String convertScanToString(Scan scan) throws IOException {
        ClientProtos.Scan proto = ProtobufUtil.toScan(scan);
        return Base64.encodeBytes(proto.toByteArray());
    }


    public static void main(String[] args) {
        // TODO Auto-generated method stub

        SparkConf conf1 = new SparkConf().setAppName(
                "DrCleaner_Retention_Rate_Geo_2_to_14").set("spark.executor.memory", "2000m").setMaster(
                "spark://10.14.66.215:7077").set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
                 .setJars(new String[]{"/D:/hbase/build/libs/hbase.jar"});
        conf1.set("spark.cores.max", "4");
        SparkSession spark = SparkSession
                .builder()
                .appName("Java Spark SQL basic example")
                .config(conf1)
                .getOrCreate();
        JavaSparkContext sc = JavaSparkContext.fromSparkContext(spark.sparkContext());


       try {
            String tablename = "scores";
            String[] familys = {"family1", "family2"};
            HBaseTest.createTable(tablename, familys);

            // add records row1 and row2
            HBaseTest.addRecord(tablename, "row1", "family1", "q1", "5");
            HBaseTest.addRecord(tablename, "row1", "family1", "q2", "90");
            HBaseTest.addRecord(tablename, "row2", "family2", "q1", "97");
            HBaseTest.addRecord(tablename, "row2", "family2", "q2", "87");          

            System.out.println("===========get one record========");
            HBaseTest.getOneRecord(tablename, "scores");

            System.out.println("===========show all record========");
            HBaseTest.getAllRecord(tablename);

            System.out.println("===========del one record========");
            HBaseTest.delRecord(tablename, "row");
            HBaseTest.getAllRecord(tablename);

            System.out.println("===========show all record========");
            HBaseTest.getAllRecord(tablename);
        } catch (Exception e) {
            e.printStackTrace();
        }
        try {

            Scan scan = new Scan();
           //  scan.setStartRow(Bytes.toBytes("195861-1035177490"));
          //  scan.setStopRow(Bytes.toBytes("195861-1072173147"));

            scan.addColumn(Bytes.toBytes("family1"), Bytes.toBytes("q1"));
            scan.addColumn(Bytes.toBytes("family2"), Bytes.toBytes("q1"));
            List<Filter> filters = new ArrayList<Filter>();
            // RegexStringComparator comp = new RegexStringComparator("87"); // example regex comparator
            //  SingleColumnValueFilter filter2 = new SingleColumnValueFilter(Bytes.toBytes("family1"), Bytes.toBytes("q1"), CompareFilter.CompareOp.EQUAL, comp);
            SingleColumnValueFilter filter  = new SingleColumnValueFilter(Bytes.toBytes("family1"),
                    Bytes.toBytes("q1"),
                    CompareFilter.CompareOp.GREATER, Bytes.toBytes("88"));
            filter.setFilterIfMissing(true); // if true, rows where this column is missing are skipped
            SingleColumnValueFilter filter2 = new SingleColumnValueFilter(Bytes.toBytes("family2"),
                    Bytes.toBytes("q2"),
                    CompareFilter.CompareOp.LESS, Bytes.toBytes("111"));
            filter2.setFilterIfMissing(true);

            PageFilter filter3 = new PageFilter(10);
            filters.add(filter);
            filters.add(filter2);
            filters.add(filter3);
            FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL, filters);
            scan.setFilter(filterList);

            conf.set(TableInputFormat.INPUT_TABLE, "scores");
            conf.set(TableInputFormat.SCAN, convertScanToString(scan));

            //read data from hbase
            JavaPairRDD<ImmutableBytesWritable, Result> hBaseRDD = sc.newAPIHadoopRDD(conf,
                    TableInputFormat.class, ImmutableBytesWritable.class,
                    Result.class);
            long count = hBaseRDD.count();
            System.out.println("count: " + count);

            JavaRDD<Person> datas2 = hBaseRDD.map(new Function<Tuple2<ImmutableBytesWritable, Result>, Person>() {
                @Override
                public Person call(Tuple2<ImmutableBytesWritable, Result> immutableBytesWritableResultTuple2) throws Exception {
                    Result result = immutableBytesWritableResultTuple2._2();
                    byte[] o = result.getValue(Bytes.toBytes("course"), Bytes.toBytes("art"));
                    if (o != null) {
                        Person person = new Person();
                        person.setAge(Long.parseLong(Bytes.toString(o)));
                        person.setName(Bytes.toString(result.getRow()));

                        return person;
                    }
                    return null;
                }
            });   




            // drop null entries (rows without the selected column) before building the DataFrame
            Dataset<Row> data = spark.createDataFrame(datas2.filter(p -> p != null), Person.class);
            data.show();
            //write data to hbase
            Job newAPIJobConfiguration1 = Job.getInstance(conf);
            newAPIJobConfiguration1.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "scores");
            newAPIJobConfiguration1.setOutputFormatClass(org.apache.hadoop.hbase.mapreduce.TableOutputFormat.class);

            // create Key, Value pair to store in HBase
            JavaPairRDD<ImmutableBytesWritable, Put> hbasePuts = data.javaRDD().mapToPair(
                    new PairFunction<Row, ImmutableBytesWritable, Put>() {
                        @Override
                        public Tuple2<ImmutableBytesWritable, Put> call(Row row) throws Exception {

                            Put put = new Put(Bytes.toBytes(row.<String>getAs("name")+"test"));//row key
                            put.add(Bytes.toBytes("family1"), Bytes.toBytes("q1"), Bytes.toBytes(String.valueOf(row.<Long>getAs("age"))));

                            return new Tuple2<ImmutableBytesWritable, Put>(new ImmutableBytesWritable(), put);
                        }
                    });

            // save to HBase- Spark built-in API method
            hbasePuts.saveAsNewAPIHadoopDataset(newAPIJobConfiguration1.getConfiguration());
            spark.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
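The code above references a Person class that is not included in the post. spark.createDataFrame(JavaRDD, Class) expects a serializable JavaBean, so a minimal sketch consistent with the setName/setAge and getAs("name")/getAs("age") calls above (field types are inferred, not taken from the original project) could look like this:

import java.io.Serializable;

// minimal JavaBean used as the element type of the RDD passed to createDataFrame
public class Person implements Serializable {
    private String name; // here: the HBase row key
    private Long age;    // here: the numeric value read from family1:q1

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public Long getAge() { return age; }
    public void setAge(Long age) { this.age = age; }
}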


Notes:

1) The first time I used Java for Spark + HBase I hit a frustrating problem: any custom class I referenced failed at runtime as undefined. After a lot of searching I learned that, unlike with Python, the worker nodes know nothing about the classes defined in your driver project; you have to ship them explicitly. That is what setJars is for: it sends the jars at the given paths to every worker node.

2) Knowing that, I initially packaged the custom class into a jar at a fixed location. Then another puzzling problem appeared: after changing the code, re-running still produced the old behavior, as if the changes were ignored. At first I suspected caching, but the changes only took effect after replacing the jar. It turns out that not just the custom classes but all of the project's code must be shipped via setJars, because each worker uses Java reflection to load the class that owns its assigned task and then executes the RDD operations defined in it; that is why editing the code had no effect. The fix is to point setJars at the jar produced by the current project's build, so every run ships a freshly built jar.

3) When converting the HBase RDD into an RDD of your own type, be careful if you use org.apache.hadoop.hbase.util.Bytes for the conversion: its typed decoders enforce strict byte lengths. Converting to long, for example, requires a byte array of at least eight bytes, otherwise it throws. It is safer to decode the bytes to a String first and then parse the target type, as in the sketch below.
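For example (a small illustrative sketch; the literal value is arbitrary), Bytes.toLong rejects a short array, while decoding to a String first works for values that were written with Bytes.toBytes(String):

import org.apache.hadoop.hbase.util.Bytes;

public class BytesConversionDemo {
    public static void main(String[] args) {
        byte[] cell = Bytes.toBytes("97"); // 2 bytes, as written by addRecord above

        // safe: decode to String first, then parse the target type
        long viaString = Long.parseLong(Bytes.toString(cell));
        System.out.println("parsed value: " + viaString);

        try {
            Bytes.toLong(cell); // throws: toLong expects an 8-byte array
        } catch (IllegalArgumentException e) {
            System.out.println("Bytes.toLong failed as expected: " + e.getMessage());
        }
    }
}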



 

Reposted from blog.csdn.net/u012292247/article/details/73661152