Scalding with CDH3U2 in a Maven Project



This wiki page describes a procedure that should allow the dedicated reader to create an executable jar implementing a Scalding job, built with Maven, that is ready for deployment on a CDH3U2 cluster.

Hadoop Flavors and Compatibility Issues

Different Hadoop versions are not necessarily compatible with each other, so to deploy a MapReduce job on any Hadoop cluster you have to ensure that the core Hadoop libraries your client code uses are identical to those found throughout the entire cluster. Roughly speaking, client code that is to be deployed as an executable jar should use the exact same jars as the server nodes on the cluster do. The history of Hadoop and Cloudera versions is a road of chaos, so verify your cluster's exact version before you start.


Prerequisites

  • Scalding source - here we used v0.5.3
  • SBT - to build Scalding
  • Cloudera’s Hadoop (CDH) - binaries are fine, e.g. hadoop-0.20.2-cdh3u2.tar.gz. Other versions are fine too; just use the same version your cluster uses.
  • IDE with Maven support - here I use Eclipse. There is no need for an IDE if you are a Maven wizard. I am not one of those.

Procedure

  1. CD to your Scalding source directory

  2. Edit build.sbt to exclude the hadoop-core jar from being packaged in Scalding:

    excludedJars in assembly <<= (fullClasspath in assembly) map { cp =>
      cp filter { jar => Set("janino-2.5.16.jar", "hadoop-core-0.20.2.jar") contains jar.data.getName }
    }


  3. sbt -29 update (the -29 flag tells SBT to build against the Scala 2.9.1 libraries; use it if you intend to write your code in that version of Scala)

  4. sbt -29 assembly (creates scalding-assembly-0.5.3.jar)

  5. My own preference is to install self-compiled jars in my local Maven repository, so I use the mvn install:install-file goal to install the created scalding-assembly-0.5.3.jar locally. From here on, this jar’s coordinates are groupId=com.twitter, artifactId=scalding-assembly, version=0.5.3.cdh3u2
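A sketch of what that install command might look like (the jar path is an assumption; adjust it to wherever SBT placed the assembly jar):

```shell
mvn install:install-file \
  -Dfile=target/scalding-assembly-0.5.3.jar \
  -DgroupId=com.twitter \
  -DartifactId=scalding-assembly \
  -Dversion=0.5.3.cdh3u2 \
  -Dpackaging=jar
```

The custom 0.5.3.cdh3u2 version string marks the jar as the CDH3U2-compatible build, so it cannot be confused with a stock 0.5.3 artifact later.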

  6. Download Cloudera’s hadoop-0.20.2-cdh3u2.tar.gz

  7. As in step 5, install your hadoop-core-cdh3u2.jar locally; alternatively, you can reference Cloudera’s parent pom from your project’s pom (in the following steps) - they have instructions on their website

  8. In your IDE, create a new Scala project using/based on this pom:
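The full pom is not reproduced here; a minimal sketch of the dependency section such a pom might contain, assuming the coordinates from steps 5 and 7, could look like this (the provided scope is an assumption - it keeps hadoop-core out of the fat jar since the cluster supplies it):

```xml
<!-- Sketch only: coordinates must match what you installed locally -->
<dependencies>
  <dependency>
    <groupId>com.twitter</groupId>
    <artifactId>scalding-assembly</artifactId>
    <version>0.5.3.cdh3u2</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>0.20.2-cdh3u2</version>
    <!-- Assumed: provided by the cluster, so excluded from the job jar -->
    <scope>provided</scope>
  </dependency>
</dependencies>
```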

  9. Create the file src/assembly/job.xml and edit:
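The contents of job.xml are not shown in the original; a commonly used assembly descriptor for Hadoop "job" jars (unpacked classes at the jar root, dependency jars under lib/ where Hadoop's runner picks them up) looks roughly like the following sketch:

```xml
<assembly>
  <id>job</id>
  <formats>
    <format>jar</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <dependencySets>
    <dependencySet>
      <unpack>false</unpack>
      <scope>runtime</scope>
      <!-- Dependency jars go under lib/ inside the job jar -->
      <outputDirectory>lib</outputDirectory>
      <excludes>
        <!-- Don't nest the project's own artifact inside itself -->
        <exclude>${groupId}:${artifactId}</exclude>
      </excludes>
    </dependencySet>
  </dependencySets>
  <fileSets>
    <fileSet>
      <!-- The project's own compiled classes sit at the jar root -->
      <directory>${basedir}/target/classes</directory>
      <outputDirectory>/</outputDirectory>
    </fileSet>
  </fileSets>
</assembly>
```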

  10. The fun part! - Create your Scala class implementing Scalding’s Job

    class SomethingCool(args: Args) extends Job(args)
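For illustration, a minimal word-count job in Scalding's fields-based API might look like this (the class name, field names, and choice of TextLine/Tsv sources are assumptions for the example, not part of the recipe):

```scala
import com.twitter.scalding._

// Hypothetical example: read lines, split into words, count occurrences, write TSV.
class SomethingCool(args: Args) extends Job(args) {
  TextLine(args("input"))
    .flatMap('line -> 'word) { line: String => line.toLowerCase.split("\\s+") }
    .groupBy('word) { _.size }
    .write(Tsv(args("output")))
}
```

The --input and --output arguments passed on the command line in step 15 surface here through args("input") and args("output").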
  11. mvn package (creates a fat jar)

  12. The generated jar will be placed under your project’s target folder, named like YOURPROJECT-0.0.1-SNAPSHOT-job.jar

  13. CD to your hadoop-0.20.2-cdh3u2 folder

  14. Set up your Hadoop configuration files (most importantly, your conf/core-site.xml file) so they point at your cluster
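For example, a minimal conf/core-site.xml pointing at a namenode could look like this (the host and port are placeholders, not values from the original recipe):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- Placeholder: replace with your cluster's namenode host and port -->
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```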

  15. Run

    bin/hadoop jar YOURPROJECT-0.0.1-SNAPSHOT-job.jar com.twitter.scalding.Tool \
      your.package.your.class --hdfs --input hdfs:// --output hdfs:// \
      -libjars YOURPROJECT-0.0.1-SNAPSHOT-job.jar