
SparklineData Druid OLAP framework pitfalls

In this post, I would like to share my impressions and experience prototyping the SparklineData/spark-druid-olap open source framework.

The main idea of the framework is to enable SQL access to a Druid index from Tableau Desktop, and along the way to provide a single access point API for querying both indexed and raw data.

Since Druid 0.9 does not have SQL support out of the box, the big advantage of the SparklineData framework is the ability to run SQL queries over data in Druid, which is very useful for the end-to-end Tableau integration.
The other aspect, using the same API to query raw data, turned out not to be useful in practice, at least from the Tableau perspective.


Running environment information
  1. Hadoop cluster: Cloudera distribution (CDH) 5.10.0
  2. Spark 1.6.1
  3. SparklineData release 0.4.1
  4. Druid 0.10.0-rc
SparklineData runs as part of the Spark Thrift server, which unfortunately is not supported by the default Cloudera distribution and requires recompiling Spark with Thrift support.

After upgrading Spark, downloading the SparklineData artifacts, doing some data formatting, and uploading the data to HDFS, I was ready to start the SparklineData Thrift server with a launch command along the lines of the sketch below.
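A minimal sketch of the launch, assuming the standard Spark start-thriftserver.sh script with the Sparkline accelerator jar added to the classpath (the jar name and path, and the master setting, are assumptions, not my exact command):

    # start the Spark 1.6 Thrift server with the Sparkline jar on the classpath
    ./sbin/start-thriftserver.sh \
      --master yarn-client \
      --conf spark.sql.hive.thriftServer.singleSession=true \
      --jars /path/to/sparklinedata-accelerator-assembly-0.4.1.jar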

Then I connected to the Thrift server with Beeline, defined the source Spark table, defined and built the Druid index, and defined the Sparkline table according to the description on the Wiki page.
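In condensed form, the Beeline steps looked roughly like the sketch below; the table names, paths, and option values are illustrative assumptions based on the SparklineData wiki, not my exact DDL:

    -- source Spark table over the raw files on HDFS
    CREATE TABLE raw_events_base
    USING com.databricks.spark.csv
    OPTIONS (path "hdfs:///data/raw_events", header "false", delimiter "|");

    -- Sparkline table linking the source table to the Druid datasource
    CREATE TABLE raw_events
    USING org.sparklinedata.druid
    OPTIONS (sourceDataframe "raw_events_base",
             timeDimensionColumn "event_time",
             druidDatasource "raw_events",
             druidHost "druid-broker-host");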

The last step was connecting Tableau Desktop to the Spark Thrift server using an ODBC connection, as described in this post.

After all the preparation steps, I had a system ready for building reports with Tableau.

Now for the most interesting part ...

Pitfall #1

Querying columns that are not mapped to the Druid index has a roughly 100 times slower response time (8 seconds vs. 12 minutes). The reason for such a difference is very clear; that said, from the Tableau user experience perspective it is a total disaster.
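To illustrate the gap, compare two hypothetical queries (the table and column names are assumptions): one on a dimension mapped to the Druid index, one on an unmapped column that forces a scan of the raw data:

    -- mapped dimension: rewritten to a Druid query, ~8 seconds
    SELECT status, COUNT(*) FROM raw_events GROUP BY status;

    -- unmapped column: falls back to scanning the raw data, ~12 minutes
    SELECT user_agent, COUNT(*) FROM raw_events GROUP BY user_agent;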

Pitfall #2

Trying to fix Pitfall #1, I started thinking about:

  1. Caching raw data in memory using SparkSQL
  2. Caching raw data using Spark RDD as part of the Thrift server

The first approach was pretty straightforward (a sketch of the command appears below). Due to the spark.sql.hive.thriftServer.singleSession=true configuration I could query the in-memory table from another Beeline session, but querying it from Tableau failed because this table is defined as a temporary table, not a regular table in the Hive metastore.
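A hedged guess at the form of that command (the table names are assumptions); note that CACHE TABLE ... AS SELECT in Spark 1.6 registers the cached result as a temporary table, which is consistent with Tableau not seeing it:

    -- cache the raw data in memory as a temporary table
    CACHE TABLE raw_events_cached AS SELECT * FROM raw_events_base;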

To implement the second approach, I added custom code to the SparklineData Thrift server that caches the raw data on startup with the sqlContext.cacheTable("existing_spark_table") command.
Unfortunately, this kind of caching is lazy: the first query is still slow, and only subsequent identical queries are fast.
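A sketch of a possible workaround, assuming access to the Thrift server's sqlContext at startup (the placement and table name are assumptions): trigger a Spark action right after cacheTable so the data is materialized eagerly rather than on the first user query:

    // cacheTable only marks the table for caching; the actual scan is lazy
    sqlContext.cacheTable("existing_spark_table")
    // force materialization now with an action, so the first user query is fast
    sqlContext.table("existing_spark_table").count()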

Pitfall #3

Fine-tuning SparklineData queries is a pretty complex and time-consuming activity. I used the explain druid rewrite command to see the query explain plan, and also tried disabling the query cost model. That said, I improved some query runtimes from 2 minutes to 15 seconds.
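For example, prefixing a query with explain druid rewrite shows whether and how SparklineData rewrites it into a Druid query (the query body here is illustrative):

    EXPLAIN DRUID REWRITE
    SELECT status, COUNT(*)
    FROM raw_events
    GROUP BY status;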

I noticed that Tableau heavily manipulates the timestamp field of the Druid index, which SparklineData translates into JavaScript, and which probably explains the tens-of-seconds query response times.

I posted one of these Tableau queries here.

Pitfall #4

Tableau queries that fetch data in a specified time range failed, for example querying data for the last 7 days, the last month, etc.
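A hedged example of the failing pattern (table and column names are assumptions): Tableau's relative date filters boil down to a range predicate on the timestamp column, roughly equivalent to:

    -- a "last 7 days" style filter as Tableau would issue it
    SELECT COUNT(*)
    FROM raw_events
    WHERE event_time >= CAST('2017-03-01 00:00:00' AS TIMESTAMP)
      AND event_time <  CAST('2017-03-08 00:00:00' AS TIMESTAMP);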


Conclusions

For me, it was a nice journey to deploy, configure and integrate all the parts of the system.

Using the same API to query raw and indexed data without caching the raw data in memory is not a practical approach, due to the huge difference in query response time.

There is a high ramp-up to learn how to improve query performance and to understand the SparklineData query optimizer.

Due to the results described above, we found that SparklineData/spark-druid-olap does not suit our system requirements.



