

Showing posts from 2014

Distributed Bloom filter

Requirements

Filter, quickly and efficiently, a stream of location data arriving from an external source at a rate of 1M events per second. The assumption is that 80% of events should be filtered out and only 20% pass on to the analytics system for further processing. The filtering process must match each incoming event against a predefined, half-static data structure (predefined location areas) containing 60,000M entries. In this article, I assume the reader is familiar with the Storm framework and with the definition of a Bloom filter.

Solution

Since the requirements speak about a stream of data, the Storm framework was chosen; to provide efficient filtering, the Guava Bloom filter was chosen. Using a Bloom filter calculator, I found that with a 0.02% false positive probability the Bloom filter bit array takes about 60G of memory, which cannot be loaded into the heap of a Java process. I created a simple Storm topology containing a Kafka spout and filter bolts. There are several possible solutions ...
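
As a rough illustration of how the pieces fit together, here is a minimal sketch of a filter bolt backed by a Guava Bloom filter, written against the Storm 0.9-era API (backtype.storm.*). The class name LocationFilterBolt, the "location" field, the shard size, and loadAreaShard() are illustrative assumptions, not taken from the post:

    import java.util.Collections;
    import java.util.Map;

    import backtype.storm.task.TopologyContext;
    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    import com.google.common.base.Charsets;
    import com.google.common.hash.BloomFilter;
    import com.google.common.hash.Funnels;

    public class LocationFilterBolt extends BaseBasicBolt {

        private transient BloomFilter<CharSequence> areas;

        @Override
        public void prepare(Map stormConf, TopologyContext context) {
            // Sizing: a Bloom filter needs m = -n*ln(p)/(ln 2)^2 bits, so the
            // full 60,000M-entry set cannot fit in one JVM heap; each bolt
            // instance holds only a shard (100M entries here, illustrative).
            areas = BloomFilter.create(
                    Funnels.stringFunnel(Charsets.UTF_8), 100000000, 0.02);
            for (String key : loadAreaShard()) {
                areas.put(key);
            }
        }

        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            String locationKey = input.getStringByField("location");
            // mightContain() yields false positives but never false negatives,
            // so no event that matches a predefined area is ever dropped.
            if (areas.mightContain(locationKey)) {
                collector.emit(new Values(locationKey));
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("location"));
        }

        private Iterable<String> loadAreaShard() {
            // Hypothetical loader for this instance's shard of the predefined
            // areas; the original post does not show how the data is loaded.
            return Collections.emptyList();
        }
    }

In the topology, such a bolt would sit directly after the Kafka spout with a shuffle grouping, so the 1M events per second are spread across the bolt's parallel instances.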

Oozie workflow basic example

I was asked to build a periodic job that runs a data-aggregation MR job and uploads the result to HBase, based on:
1. a period of time
2. start running when the input folder exists
3. start running when the input folder contains a _SUCCESS file

So I defined 4 steps in my workflow.

Step 1: per the job requirements, check whether the input folder exists and contains a _SUCCESS file (startFlag presumably points at the _SUCCESS file inside the input folder, so a single existence check covers both conditions):

    <decision name="upload-decision">
        <switch>
            <case to="create-csv">
                ${fs:exists(startFlag)}
            </case>
            <default to="end"/>
        </switch>
    </decision>

Step 2: run the MR job as an Oozie java action. The prepare section deletes the _SUCCESS file and the MR job's output folder:

    <action name="create-csv">
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete p...
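
The excerpt cuts off inside the prepare block; for orientation, here is a hedged sketch of how the rest of such a java action commonly looks under the Oozie workflow schema. The main class, the inputDir/outputDir properties, and the transition targets upload-to-hbase and fail are assumptions, not recovered from the original:

    <action name="create-csv">
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <!-- remove the _SUCCESS flag and any stale MR output (paths assumed) -->
                <delete path="${startFlag}"/>
                <delete path="${outputDir}"/>
            </prepare>
            <main-class>com.example.CreateCsvDriver</main-class>
            <arg>${inputDir}</arg>
            <arg>${outputDir}</arg>
        </java>
        <ok to="upload-to-hbase"/>
        <error to="fail"/>
    </action>

Deleting the flag and output folder in prepare keeps the action idempotent: if it is retried, it starts from a clean state instead of failing on an existing output directory.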