New Apache project will Drill big data in near real time

Dremel-based project accepted as an Apache Incubator


Working with big data is a lot like dealing with the Heisenberg Uncertainty Principle: you can have a massive amount of data on hand, or you can query that data in real time, but never both.

But a new open source project has just been accepted into the Apache Software Foundation Incubator that promises to let you do both: have your data and search it fast, too.

Apache Drill is an ad hoc query system based on Dremel, a big data system invented by Google engineers to not only manage large datasets but also perform interactive analysis on them in near real time. (Hadoop, too, traces back to Google, growing out of the company's published MapReduce and file system designs.)

To understand Drill, it helps to first examine the architecture of Hadoop, which uses the Hadoop Distributed File System (HDFS) for storage and the MapReduce framework to perform batch analysis on whatever data is stored within it. Hadoop data, notably, does not have to be structured, which makes Hadoop ideal for analyzing and working with data from sources like social media, documents, and graphs: anything that can't easily fit within rows and columns.

Because Hadoop uses MapReduce to perform data queries, searches have to be done in batches. So while you can perform highly detailed analysis of historical data, for instance, one area where you would not want to use Hadoop is transactional data. Transactional data is, by its very nature, highly complex and fluid: a single transaction on an ecommerce site can generate many steps that all have to be completed quickly.
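To see why MapReduce queries are inherently batch-oriented, here is a minimal, in-memory sketch of the pattern (a word count, the canonical example); in real Hadoop the map and reduce phases run as distributed jobs across a cluster, which is where the latency comes from:

```python
from collections import defaultdict

# Minimal in-memory sketch of the MapReduce pattern. Hadoop runs these
# two phases as distributed batch jobs over data stored in HDFS.

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Reduce: group the pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["Drill queries big data", "Hadoop processes big data in batches"]
print(reduce_phase(map_phase(docs)))  # e.g. {'big': 2, 'data': 2, ...}
```

Even this trivial job must scan every record before any answer appears, which is why an interactive, ad hoc engine like Drill fills a different niche.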

Nor would it be efficient to use Hadoop to process structured data sets that require very low latency, such as a Web site served by a MySQL database in a typical LAMP stack. That's a speed requirement Hadoop would serve poorly.

Drill, however, can perform data queries at a much faster rate, sometimes scanning trillions of rows in seconds. It can do this by searching data stored either in columnar form (as in Google's BigTable) or within a distributed file system like GoogleFS, the precursor to HDFS.
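The columnar layout is a large part of that speed. A rough sketch of the idea (toy data, not Drill's actual storage format): a row store must touch every field of every record to aggregate one field, while a column store reads only the column the query needs.

```python
# Toy illustration of row-oriented vs. column-oriented storage.
# Columnar engines like Dremel scan only the columns a query touches.

rows = [
    {"user": "alice", "country": "US", "amount": 30},
    {"user": "bob",   "country": "DE", "amount": 25},
    {"user": "carol", "country": "US", "amount": 45},
]

# Row-oriented: every whole record is read just to sum one field.
row_total = sum(r["amount"] for r in rows)

# Column-oriented: the same data, stored one list per column.
columns = {
    "user":    ["alice", "bob", "carol"],
    "country": ["US", "DE", "US"],
    "amount":  [30, 25, 45],
}
col_total = sum(columns["amount"])  # touches only the "amount" column

print(row_total, col_total)  # 100 100
```

At a few rows the difference is invisible; over trillions of rows, skipping the unneeded columns (and compressing each column independently) is what makes second-scale scans possible.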


The Drill project was submitted to the ASF by Hadoop vendor MapR, which sees Dremel-based technology as filling a gap in interactive analysis within the big data sector.


According to MapR engineer Tomer Shiran, who is leading the Apache Drill project, the first thing the project will work on is reaching consensus on Drill's APIs so that other vendors can work with Drill. While Dremel was used only inside Google, there was no need to standardize APIs; as an open source project, that clearly needs to change, Shiran said.


"Chris Wensel, who wrote Cascading, is interested in using the Drill execution engine for queries written in Cascading," Shiran added.


Expanding the supported query languages will be one area of focus for the Drill project. Another will be adding support for additional data formats, such as JSON, since right now Dremel supports only Google's Protocol Buffers format.

Dremel has been in use inside Google since 2006, performing such tasks as analysis of crawled web documents, OCR results from Google Books, and debugging of map tiles on Google Maps. Dremel is also the engine behind BigQuery, Google's analytics-as-a-service offering.

After uploading data to the BigQuery service, users can run queries written in a custom structured query language (which the Drill team calls DrQL), analyzing billions of rows in seconds. Queries can be submitted through several methods, including a Web-based user interface, a REST API, or a command-line tool. Data is imported into the Google BigQuery servers in CSV format.
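DrQL's exact syntax is still being settled by the project, but the queries in question are SQL-style aggregations over large tables. As a rough illustration only, here is that style of query run in standard SQL against a tiny in-memory SQLite table (the table and columns are invented for the example, and this is not Drill's or BigQuery's actual API):

```python
import sqlite3

# Illustrative only: a standard-SQL aggregation of the kind Dremel and
# DrQL run over billions of rows, shown here on a toy SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (url TEXT, latency_ms INTEGER)")
conn.executemany(
    "INSERT INTO requests VALUES (?, ?)",
    [("/home", 120), ("/home", 80), ("/search", 200)],
)

query = """
    SELECT url, COUNT(*) AS hits, AVG(latency_ms) AS avg_ms
    FROM requests
    GROUP BY url
    ORDER BY url
"""
for url, hits, avg_ms in conn.execute(query):
    print(url, hits, avg_ms)
```

The appeal is that a one-line `GROUP BY` replaces a hand-written MapReduce job; the engine, not the analyst, worries about distributing the scan.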

Dremel, and now Drill, should be attractive for more than just speed: SQL-style queries are a lot easier to work with than hand-written MapReduce jobs. The SQL support is not yet complete, though: users report a need for better join support, as well as for more analytic functions and set operators.

As Drill moves forward, Shiran said, many of these limitations will be solved, and the tool itself will be extended to become a more robust player in the big data arena.


Read more of Brian Proffitt's Open for Discussion blog and follow the latest IT news at ITworld. Drop Brian a line or follow Brian on Twitter at @TheTechScribe. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.


http://www.itworld.com/big-datahadoop/290026/new-apache-project-will-drill-big-data-near-real-time
