Regular readers of this blog might notice that the Naiad team is rather fond of defining new kinds of dataflow. Just yesterday, we did it again with a paper that defines “timely dataflow”. In this post, I’m going to introduce timely dataflow at a high level, and follow-up posts will talk more about the distributed Naiad system that we have built on top of it.
Way back at the beginning of time, we had a post on performing PageRank on a 1.5B edge graph of who-follows-who on Twitter. We talked a bit about how several big data systems don’t do quite as well as a 40 line, single-threaded C# program. There was also a promise to show how to make things go much, much faster. So we’re going to do that today.
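For a sense of what such a program looks like, here is a minimal single-threaded PageRank sketch in C#. It is not the original 40-line program from that post; the compressed-sparse-row arrays (`offsets`, `targets`), the damping factor of 0.85, and the method name are all illustrative assumptions.

```csharp
using System;

// Minimal single-threaded PageRank sketch over a graph stored in
// compressed sparse row form: node i's out-edges are
// targets[offsets[i] .. offsets[i+1]-1]. (Layout is an assumption.)
float[] PageRank(int[] offsets, int[] targets, int nodes, int iterations)
{
    var ranks = new float[nodes];
    for (int i = 0; i < nodes; i++)
        ranks[i] = 1.0f / nodes;            // uniform starting ranks

    for (int iter = 0; iter < iterations; iter++)
    {
        var next = new float[nodes];
        for (int node = 0; node < nodes; node++)
        {
            int degree = offsets[node + 1] - offsets[node];
            if (degree == 0) continue;
            // each node shares 85% of its rank evenly among its targets
            float share = 0.85f * ranks[node] / degree;
            for (int edge = offsets[node]; edge < offsets[node + 1]; edge++)
                next[targets[edge]] += share;
        }
        for (int node = 0; node < nodes; node++)
            next[node] += 0.15f / nodes;     // uniform reset probability
        ranks = next;
    }
    return ranks;
}
```

The whole computation fits in one tight loop over the edges, which is why a simple program like this can be hard for big data systems to beat.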
I am going to be presenting our work on Naiad at the GraphLab workshop this afternoon at 3:10pm. I’ll talk about the trade-offs between data-parallel and graph-parallel systems, and show how Naiad delivers the best of both worlds.
In this post we will cover a programming pattern frequently thought to be incompatible with data-parallel systems: sequentially updating a collection of values each as a function of the most recent values. In C#, this would look something like:
```csharp
// sequentially updates the array of values
for (int i = 0; i < values.Length; i++)
    values[i] = updateBasedOn(values);
```
This type of iterative process is at the heart of several graph algorithms, from efficient PageRank computation, to graph coloring, to general loopy belief propagation. Although the logic looks very sequential, we will see that it can be made very parallel in the context of graph processing.
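To see why graph structure helps, consider one standard trick (a hedged sketch, not Naiad’s mechanism; the coloring array and delegate signature are assumptions): if `updateBasedOn` for a vertex only reads that vertex’s neighbors, then vertices with no edge between them can be updated simultaneously without changing the outcome. Grouping vertices by a graph coloring and sweeping over the color classes gives a parallel schedule that still respects “most recent values” between adjacent vertices.

```csharp
using System;
using System.Threading.Tasks;

// Hedged sketch: update vertices one color class at a time.
// Within a class no two vertices are adjacent, so each parallel
// sweep is equivalent to some sequential order of those updates.
// color[i] in [0, colors) is assumed to be a valid graph coloring.
void ParallelSweep(double[] values, int[] color, int colors,
                   Func<int, double[], double> updateBasedOn)
{
    for (int c = 0; c < colors; c++)
    {
        int cc = c; // copy loop variable before capturing in the lambda
        Parallel.For(0, values.Length, i =>
        {
            if (color[i] == cc)
                values[i] = updateBasedOn(i, values);
        });
    }
}
```

The sequential loop above is the special case where every vertex gets its own color; the fewer colors the graph needs, the more parallelism each sweep exposes.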
We’ve walked through a few example programs in previous posts, computing the connected components of a symmetric graph and the strongly connected components of a directed graph, but we didn’t say very much about how a computer (or computers) might execute these programs. In this post we’ll do that at a high level by introducing differential dataflow, the main feature distinguishing Naiad’s computational model from previous approaches to data-parallel computation. It is what lets Naiad perform iterative computation efficiently and update these computations on millisecond timescales.
So in the last few posts we’ve hopefully whetted your appetite for using Naiad, but we haven’t said much about the secret sauce that enables it to compute results efficiently. Just before Thanksgiving, we finished work on the camera-ready version of our paper on differential dataflow, which will be presented early next year at the Conference on Innovative Data Systems Research (CIDR 2013). This paper sets out the key ideas and algorithms that underpin Naiad, and enable it to perform complex iterative analyses on real-world data in real time.
The main idea in the paper is differential computation, which is a new form of incremental computation. The key insight is that you can benefit from storing multiple previous versions of a computation’s state, if there exists some partial order describing the “reasons” for which the state changes. When applied to data-parallel dataflow, this allows a system like Naiad to disentangle the changes to a collection that are due to iteration from those that are due to external inputs. In the next few weeks, we’ll post some more details about the math of differential computation, and discuss different ways of implementing it. In the meantime, have a read of the paper and get in touch if you have any questions!
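A minimal sketch of the partial-order idea (illustrative only, not Naiad’s implementation; the version type, component names, and method name are all assumptions): suppose versions are pairs of an input epoch and a loop iteration, ordered componentwise. If we store a signed difference for each version at which a record’s count changed, the count at version t is recovered by summing the differences at every version less than or equal to t, so a change due to a new input epoch is stored separately from a change due to another loop iteration.

```csharp
using System;
using System.Collections.Generic;

// Hedged sketch: reconstruct a record's count at version t from the
// stored per-version differences. Versions (epoch, iter) are partially
// ordered componentwise: s <= t iff s.epoch <= t.epoch and s.iter <= t.iter.
int CountAt((int epoch, int iter) t,
            Dictionary<(int epoch, int iter), int> differences)
{
    int count = 0;
    foreach (var (s, delta) in differences)
        if (s.epoch <= t.epoch && s.iter <= t.iter)
            count += delta;               // accumulate every difference at s <= t
    return count;
}
```

Because each difference is indexed by the version that caused it, an update arriving in a later epoch only adds new differences; the differences recorded for earlier iterations are reused rather than recomputed.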