Hadoop Study 33: Hadoop-HBase Bulk Load Usage (Translation)

1. Source

     http://hbase.apache.org/book.html#arch.bulk.load

     

Quoted:
9.8. Bulk Loading
9.8.1. Overview
HBase includes several methods of loading data into tables. The most straightforward method is to either use the TableOutputFormat class from a MapReduce job, or use the normal client APIs; however, these are not always the most efficient methods.

The bulk load feature uses a MapReduce job to output table data in HBase's internal data format, and then directly loads the generated StoreFiles into a running cluster. Using bulk load will use less CPU and network resources than simply using the HBase API.

9.8.2. Bulk Load Limitations
As bulk loading bypasses the write path, the WAL doesn't get written to as part of the process. Replication works by reading the WAL files so it won't see the bulk loaded data – and the same goes for the edits that use Put.setWriteToWAL(false). One way to handle that is to ship the raw files or the HFiles to the other cluster and do the other processing there.

9.8.3. Bulk Load Architecture
The HBase bulk load process consists of two main steps.

9.8.3.1. Preparing data via a MapReduce job
The first step of a bulk load is to generate HBase data files (StoreFiles) from a MapReduce job using HFileOutputFormat. This output format writes out data in HBase's internal storage format so that they can be later loaded very efficiently into the cluster.

In order to function efficiently, HFileOutputFormat must be configured such that each output HFile fits within a single region. In order to do this, jobs whose output will be bulk loaded into HBase use Hadoop's TotalOrderPartitioner class to partition the map output into disjoint ranges of the key space, corresponding to the key ranges of the regions in the table.

HFileOutputFormat includes a convenience function, configureIncrementalLoad(), which automatically sets up a TotalOrderPartitioner based on the current region boundaries of a table.

9.8.3.2. Completing the data load
After the data has been prepared using HFileOutputFormat, it is loaded into the cluster using completebulkload. This command line tool iterates through the prepared data files, and for each one determines the region the file belongs to. It then contacts the appropriate Region Server which adopts the HFile, moving it into its storage directory and making the data available to clients.

If the region boundaries have changed during the course of bulk load preparation, or between the preparation and completion steps, the completebulkload utility will automatically split the data files into pieces corresponding to the new boundaries. This process is not optimally efficient, so users should take care to minimize the delay between preparing a bulk load and importing it into the cluster, especially if other clients are simultaneously loading data through other means.

9.8.4. Importing the prepared data using the completebulkload tool
After a data import has been prepared, either by using the importtsv tool with the "importtsv.bulk.output" option or by some other MapReduce job using the HFileOutputFormat, the completebulkload tool is used to import the data into the running cluster.

The completebulkload tool simply takes the output path where importtsv or your MapReduce job put its results, and the table name to import into. For example:

$ hadoop jar hbase-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] /user/todd/myoutput mytable
The -c config-file option can be used to specify a file containing the appropriate hbase parameters (e.g., hbase-site.xml) if not supplied already on the CLASSPATH (In addition, the CLASSPATH must contain the directory that has the zookeeper configuration file if zookeeper is NOT managed by HBase).

Note: If the target table does not already exist in HBase, this tool will create the table automatically.

This tool will run quickly, after which point the new data will be visible in the cluster.

9.8.5. See Also
For more information about the referenced utilities, see Section 15.1.11, “ImportTsv” and Section 15.1.12, “CompleteBulkLoad”.

See How-to: Use HBase Bulk Loading, and Why for a recent blog on current state of bulk loading.

9.8.6. Advanced Usage
Although the importtsv tool is useful in many cases, advanced users may want to generate data programmatically, or import data from other formats. To get started doing so, dig into ImportTsv.java and check the JavaDoc for HFileOutputFormat.

The import step of the bulk load can also be done programmatically. See the LoadIncrementalHFiles class for more information.

2. Translation

9.8. Bulk Loading

9.8.1. Overview

HBase offers several ways of loading data into tables. The most straightforward is to use TableOutputFormat from a MapReduce job, or the normal HBase client APIs.

However, these are not always the most efficient methods.

Bulk load writes the data out as files in HBase's internal storage format and then loads those files directly into a running cluster.

9.8.2. Bulk Load Limitations

Because bulk loading bypasses the normal write path (writing to the MemStore and WAL, then flushing the MemStore to HFiles), the bulk-loaded data never appears in the WAL. Anything that recovers data from the WAL will therefore not see data produced by a bulk load.

9.8.3. Bulk Load Architecture

The process consists of the following two steps.

9.8.3.1. Preparing data via a MapReduce job

A MapReduce job uses HFileOutputFormat to generate files in HBase's data file format (StoreFiles). Because these files are already in HBase's internal storage format, loading them into the cluster later is very efficient.

To work efficiently, HFileOutputFormat must be configured so that each output HFile fits within a single region.

To achieve this, the MapReduce job uses Hadoop's TotalOrderPartitioner to partition the map output into disjoint key ranges corresponding to the regions of the table.

Conveniently, HFileOutputFormat provides a method, configureIncrementalLoad(), which automatically sets up a TotalOrderPartitioner based on the table's current region boundaries.
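A minimal job-setup sketch of this step, assuming the pre-1.0 HBase API generation this article was written against (HFileOutputFormat and HTable; newer releases use HFileOutputFormat2 and Table). The mapper class MyBulkLoadMapper and the table name "mytable" are hypothetical placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadPrepareJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "bulk-load-prepare");
    job.setJarByClass(BulkLoadPrepareJob.class);

    // The mapper must emit (row key, KeyValue) pairs sorted per row.
    job.setMapperClass(MyBulkLoadMapper.class);  // hypothetical mapper class
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(KeyValue.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));   // raw input on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // StoreFile output dir

    // Sets the reducer, the TotalOrderPartitioner, and the output format
    // based on the table's current region boundaries.
    HTable table = new HTable(conf, "mytable");
    HFileOutputFormat.configureIncrementalLoad(job, table);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The key design point is the configureIncrementalLoad() call: it queries the table's region start keys and writes a partition file for TotalOrderPartitioner, so that each reducer produces HFiles falling entirely within one region.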

9.8.3.2. Completing the data load

Once the data has been prepared with HFileOutputFormat, the completebulkload command-line tool loads it into the cluster. The tool iterates over the prepared data files and determines which region each file belongs to. It then contacts the appropriate Region Server, which adopts the HFile, moving it into its storage directory and making the data available to clients.

If the region boundaries change while the data is being prepared, or between the preparation and loading steps, HBase automatically splits the data files to match the new boundaries. This splitting is inefficient, especially when other clients are loading data through other means at the same time, so keep the delay between generating the data files and loading them into the cluster as short as possible.

9.8.4. Importing the prepared data using the completebulkload tool

Once the data has been prepared, whether by importtsv or by a MapReduce job using HFileOutputFormat, the completebulkload tool imports it into the running cluster.

completebulkload takes the output path where importtsv or your MapReduce job wrote its results, plus the name of the table to import into.

For example: $ hadoop jar hbase-VERSION.jar completebulkload /user/todd/myoutput mytable

The command runs quickly; once it finishes, the new data is visible in the cluster.

3. Summary

1. Use HBase's importtsv tool, or a MapReduce job with HFileOutputFormat, to convert HDFS files into StoreFiles.

2. Use completebulkload to load the StoreFiles (HFiles) into the HBase cluster.
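Step 2 can also be done programmatically via the LoadIncrementalHFiles class mentioned in section 9.8.6. A hedged sketch against the same pre-1.0 API generation (the path and table name mirror the command-line example above and are placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class CompleteBulkLoadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Programmatic equivalent of:
    //   hadoop jar hbase-VERSION.jar completebulkload /user/todd/myoutput mytable
    LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
    HTable table = new HTable(conf, "mytable");
    // Moves each HFile under the output directory into the storage
    // directory of the region it belongs to, splitting files whose key
    // range crosses a region boundary.
    loader.doBulkLoad(new Path("/user/todd/myoutput"), table);
  }
}
```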



Reposted from zy19982004.iteye.com/blog/2032815