```scala
import Scoobi._, Reduction._

val lines = fromTextFile("hdfs://in/...")

val counts = lines.mapFlatten(_.split(" "))
                  .map(word => (word, 1))
                  .groupByKey
                  .combine(Sum.int)

counts.toTextFile("hdfs://out/...", overwrite = true).persist(ScoobiConfiguration())
```
This is what Scoobi is all about. Scoobi is a Scala library that focuses on making you more productive at building Hadoop applications. It stands on the functional programming shoulders of Scala and allows you to just write what you want rather than how to do it.
Scoobi is a library that leverages the Scala programming language to provide a programmer friendly abstraction around Hadoop's MapReduce to facilitate rapid development of analytics and machine-learning algorithms.
* Familiar APIs - the `DList` API is very similar to the standard Scala `List` API
* Strong typing - the APIs are strongly typed so as to catch more errors at compile time, a major improvement over standard Hadoop MapReduce, where type-based run-time errors often occur
* Ability to parameterise with rich data types - unlike Hadoop MapReduce, which requires that you go off and implement a myriad of classes conforming to the `Writable` interface, Scoobi allows `DList` objects to be parameterised by normal Scala types, including value types (e.g. `Double`), tuple types (with arbitrary nesting) as well as case classes
* Optimization across library boundaries - the optimiser and execution engine will assemble Scoobi code spread across multiple software components, so you still keep the benefits of modularity
* It's Scala - being a Scala library, Scoobi applications still have access to those precious Java libraries, plus all the functional programming and concise syntax that makes developing Hadoop applications very productive
* Apache V2 licence - just like the rest of Hadoop
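The "Familiar APIs" point is easy to see by running the same word count on a plain, in-memory Scala `List` (a sketch using only the standard library, no Hadoop or Scoobi required; `flatMap` plays the role Scoobi's `mapFlatten` plays in the example above):

```scala
// The distributed word count from above, expressed on an ordinary Scala List.
// The shape of the pipeline is essentially the same as the DList version.
val lines = List("a b a", "b c")

val counts = lines
  .flatMap(_.split(" "))                       // split lines into words
  .map(word => (word, 1))                      // pair each word with a count of 1
  .groupBy(_._1)                               // group pairs by word (cf. groupByKey)
  .map { case (w, ps) => (w, ps.map(_._2).sum) } // sum the counts (cf. combine(Sum.int))

// counts: Map(a -> 2, b -> 2, c -> 1)
```

Because the two APIs mirror each other this closely, pipeline logic can often be prototyped against local collections before being run on a cluster.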
The user mailing list is at http://groups.google.com/group/scoobi-users. Please use it for questions and comments!