Apache Zeppelin is a web-based notebook that enables interactive data analytics. You can make beautiful data-driven, interactive and collaborative documents with SQL, Scala, Python and more. Although Apache Zeppelin is still an incubator project, I expect a serious boost for notebooks like Apache Zeppelin on top of data-processing solutions (like Apache Spark) and data-storage solutions (like HDFS, NoSQL stores and RDBMSs).
Cloudera does not cover Apache Zeppelin out of the box, but there is a blog post on how to install Apache Zeppelin on CDH. Hortonworks covers Apache Zeppelin out of the box; see the picture of the HDP projects (well done, Hortonworks). MapR does not cover Apache Zeppelin out of the box, but there is a blog post on how to build Apache Zeppelin on MapR.
Amazon Web Services (AWS) has a Platform as a Service (PaaS) solution called Elastic MapReduce (EMR). Since this summer, Apache Zeppelin has been supported on the EMR release page.
If we look at Microsoft Azure, there is a blog post on how to get started with Apache Zeppelin on an HDInsight Spark cluster (also a PaaS solution).
If we look at the Google Cloud Platform, there is a blog post on installing Apache Zeppelin on top of Google BigQuery.
And now a short demo: let's do some data discovery with Apache Zeppelin on an open data set. For this case I use the Fire Report of the City of Amsterdam from 2010 – 2015.
If you want a short intro, first watch this short video of Apache Zeppelin (overview).
I use, of course, Docker to start a Zeppelin container. I found an image on Docker Hub from Dylan Meissner (thx). To run the Docker container, enter the command below:
$ docker run -d -p 8080:8080 dylanmei/zeppelin
Point your browser at dockerhost:8080 and create a new notebook:
Step 1: Load and unzip the dataset (I use the “shell” interpreter)
%sh wget https://files.datapress.com/amsterdam/dataset/brandmeldingen-2010-2015/2016-02-25T14:51:13/brwaa_2010-2015.zip -O /tmp/brwaa_2010-2015.zip
%sh unzip /tmp/brwaa_2010-2015.zip -d /tmp
Step 2: Clean the data; in this case, remove the header
%sh sed -i '1d' /tmp/brwaa_2010-2015.csv
Step 3: Put data into HDFS
%sh hadoop fs -put /tmp/brwaa_2010-2015.csv /tmp
Step 4: Load the data (the most important fields) via a case class and use the map function (default Scala interpreter)
// Read the CSV file from HDFS
val dataset = sc.textFile("/tmp/brwaa_2010-2015.csv")

// Case class describing the most important fields of a fire report
case class Melding(id: Integer, melding_type: String, jaar: String,
                   maand_nr: String, prioriteit: String, uur: String,
                   dagdeel: String, buurt: String, wijk: String,
                   gemeente: String)

// Split each line on ';' and map the relevant columns onto the case class
val melding = dataset.map(k => k.split(";"))
                     .map(k => Melding(k(0).toInt, k(2), k(7), k(8), k(14),
                                       k(15), k(16), k(19), k(20), k(22)))

// Register the result as a temporary table for Spark SQL
melding.toDF().registerTempTable("melding_table")
Step 5: Use Spark SQL to run the first query
%sql select count(*) from melding_table
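From here you can slice the data along any of the registered fields. As a sketch, two queries I would try next (the column names come from the case class above; the exact values in the data may differ):

%sql select jaar, count(*) from melding_table group by jaar order by jaar

%sql select melding_type, count(*) as aantal from melding_table group by melding_type order by aantal desc

In Zeppelin each %sql paragraph can be rendered as a table or switched to a bar, pie or line chart with one click.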
Below you can see some more queries and charts:
The next step is to predict fire incidents with the help of Spark ML.
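A minimal sketch of what that could look like, using Spark ML's pipeline API in a Scala paragraph. Note that the label definition (assuming 'Brand' marks a fire report) and the chosen features (month and district) are my assumptions for illustration; real feature engineering would come first.

// Hypothetical label: 1.0 for records of type 'Brand' (fire), 0.0 otherwise;
// month and district serve as example features.
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}

val df = sqlContext.table("melding_table")
  .selectExpr("cast(maand_nr as double) as maand",
              "case when melding_type = 'Brand' then 1.0 else 0.0 end as label",
              "wijk")

// Encode the district name as a numeric index
val wijkIndexer = new StringIndexer().setInputCol("wijk").setOutputCol("wijkIndex")

// Combine the feature columns into a single vector
val assembler = new VectorAssembler()
  .setInputCols(Array("maand", "wijkIndex"))
  .setOutputCol("features")

val lr = new LogisticRegression().setLabelCol("label").setFeaturesCol("features")

// Chain the stages and fit the model
val pipeline = new Pipeline().setStages(Array(wijkIndexer, assembler, lr))
val model = pipeline.fit(df)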