Philip (flip) Kromer edited this page May 9, 2012 · 1 revision

wukong -- a fun, scalable data processing framework

Design Overview

Wukong/Hanuman are chiefly concerned with these specific types of graphs:

  • dataflow -- chains of simple modules to handle continuous data processing -- coordinates Flume, Unix pipes, ZeroMQ, Esper, Storm.
  • workflows -- episodic job sequences, joined by dependency links -- comparable to Rake, Azkaban or Oozie.
  • map/reduce -- Hadoop's standard disordered/partitioned stream > partition, sort & group > process groups workflow. Comparable to MRJob and Dumbo.
  • queue workers -- pub/sub asynchronously triggered jobs -- comparable to Resque, RabbitMQ/AMQP, Amazon Simple Worker, Heroku workers.
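The map/reduce workflow above has a simple shape that is easy to see in plain Ruby. Here is a minimal, self-contained sketch (this is ordinary Ruby, not the Wukong API) of the "stream > partition, sort & group > process groups" pipeline, using a word count as the example:

```ruby
# Plain-Ruby sketch of the map/reduce shape: map a stream to key/value
# pairs, then partition/sort/group (which Hadoop does between the mapper
# and reducer), then process each group.
lines = ["the quick brown fox", "the lazy dog"]

# map: emit [key, value] pairs
pairs = lines.flat_map { |line| line.split.map { |word| [word, 1] } }

# partition, sort & group by key
groups = pairs.sort_by(&:first).group_by(&:first)

# process groups: sum the counts for each word
counts = groups.map { |word, ps| [word, ps.map(&:last).sum] }.to_h

counts["the"] # => 2
```

At scale, Hadoop performs the middle (sort & group) step across the cluster; your code only supplies the map step and the process-groups step.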

In addition, wukong stages may be deployed into HTTP middleware: lightweight distributed API handlers -- comparable to Rack, Goliath or Twisted.

When you're describing a Wukong/Hanuman flow, you're writing pure, expressive Ruby, not some hokey interpreted language or clumsy XML format. Thanks to JRuby, Wukong can speak directly to Java-based components like Hadoop, Flume, Storm or Spark.
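To make the "pure Ruby, not XML" point concrete, here is a minimal, hypothetical sketch of what chaining dataflow stages in plain Ruby can look like. The class and method names (`Flow`, `stage`, `run`) are illustrative only, not the actual Wukong/Hanuman DSL:

```ruby
# Hypothetical sketch, NOT the real Wukong/Hanuman DSL: stages are plain
# Ruby blocks, and a flow is just their composition into a chain.
class Flow
  def initialize
    @stages = []
  end

  # Add a named stage; returns self so stages chain fluently.
  def stage(name, &block)
    @stages << [name, block]
    self
  end

  # Push each record through every stage in order.
  def run(records)
    records.map { |rec| @stages.reduce(rec) { |acc, (_name, fn)| fn.call(acc) } }
  end
end

flow = Flow.new
           .stage(:strip)    { |s| s.strip }
           .stage(:downcase) { |s| s.downcase }

flow.run(["  Hello ", " WORLD"]) # => ["hello", "world"]
```

Because the graph description is ordinary Ruby, it can be inspected, tested, and composed like any other code -- and, under JRuby, handed off to Java-based runtimes.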

Design Rules

  • whiteboard rule: the user-facing conceptual model should match the picture you would draw on the whiteboard in an engineering discussion. The fundamental goal is to hide the necessary messiness of the industrial-strength components it orchestrates while still exposing their essential power.
  • common cases are simple, complex cases are always possible: The code should be as simple as the story it tells. For the things you do all the time, you only need to describe how this data flow is different from all other data flows. However, at no point in the project lifecycle should Wukong/Hanuman hit a brick wall or peat bog requiring its total replacement. A complex production system may, for example, require that you replace a critical path with custom Java code -- but that's a small set of substitutions in an otherwise stable, scalable graph. In the world of web programming, Ruby on Rails passes this test; Sinatra and Drupal do not.
  • petabyte rule: Wukong/Hanuman coordinate industrial-strength components that work at terabyte- and petabyte-scale. Conceptual simplicity makes it an excellent tool even for small jobs, but scalability is key. All components must assume an asynchronous, unreliable and distributed system.
  • laptop rule: the counterpart of the petabyte rule -- a full flow should install and run on a single laptop, so it can be developed and tested locally before being deployed at scale.
  • no dark magick: the core libraries provide elegant, predictable magic or no magic at all. We use metaprogramming heavily, but always predictably, and only in service of making common cases simple.
    • Soupy multi-option case statements are a smell.
    • Complex tasks will require code that is more explicit, but readable and organically connected to the typical usage. For example, many data flows will require a custom Wukong::Streamer class; but that class is no more complex than the built-in streamer models and receives all the same sugar methods they do.
  • get shit done: sometimes ugly tasks require ugly solutions. Shelling out to the hadoop process monitor and parsing its output is acceptable if it is robust and obviates the need for a native protocol handler.
  • be clever early, boring late: magic in service of having a terse language for assembling a graph is great. However, the assembled graph should be atomic and largely free of any conditional logic or dependencies.
    • for example, the data flow split statement allows you to set a condition on each branch. The assembled graph, however, is typically a fanout stage followed by filter stages.
    • the graph language has some helpers for referring to graph stages. The compiled graph uses explicit, mostly-readable but unambiguous static handles.
    • some stages offer light polymorphism -- for example, select accepts either a regexp or block. This is handled at the factory level, and the resulting stage is free of conditional logic.
  • no lock-in: needless to say, Wukong works seamlessly with the Infochimps platform, making robust, reliable massive-scale dataflows amazingly simple. However, wukong flows are not tied to the cloud: they project to Hadoop, Flume or any of the other open-source components that power our platform.
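The "light polymorphism" rule above -- select accepts either a regexp or a block, resolved at the factory level -- can be sketched in plain Ruby. The names here (`SelectStage`, the standalone `select` helper) are hypothetical, not Wukong's actual factory code; the point is that the choice between regexp and block is decided once at build time, so the resulting stage runs without conditional logic:

```ruby
# Hypothetical sketch of factory-level polymorphism: the factory
# normalizes a regexp or a block into a single predicate up front, so
# the stage itself contains no conditionals at runtime.
class SelectStage
  def initialize(predicate)
    @predicate = predicate
  end

  # Pass through only the records the predicate accepts -- no branching
  # on what kind of matcher was originally supplied.
  def process(records)
    records.select { |rec| @predicate.call(rec) }
  end
end

# Factory: accepts a regexp OR a block; the conditional lives here,
# at graph-assembly time, and nowhere in the assembled stage.
def select(pattern = nil, &block)
  predicate = block || ->(rec) { pattern.match?(rec) }
  SelectStage.new(predicate)
end

select(/err/).process(%w[error info errata])      # => ["error", "errata"]
select { |r| r.length > 4 }.process(%w[ab abcde]) # => ["abcde"]
```

This is the "be clever early, boring late" rule in miniature: the factory is clever so the assembled stage can be boring.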