Sunday, October 17, 2010

Say hi to GraphDB-Bench

Over the past month or two I've been working on a graph database benchmarking tool, called GraphDB-Bench. In this post I'll try to explain the whats, whys, and hows of the project. Read on...


Preemptive disclaimer:
The project has just started and is by no means finished. This post is all about introducing the project and hopefully building some momentum behind it. There are still a lot of unanswered questions and blanks to fill in...


What is it?
GraphDB-Bench is a graph database benchmarking tool that lets the developer define what types of operations/access patterns/algorithms a benchmark is composed of. Additionally, benchmarks are portable: the same benchmark definition (same algorithms, same code, same everything) can be run against graph databases from different vendors.


Neat, but how?
To achieve portability, GraphDB-Bench is built on the TinkerPop stack, making use of Blueprints, Pipes, and Gremlin. These libraries abstract away any knowledge of the underlying graph database implementation, which means benchmarks written in GraphDB-Bench can be executed on any Blueprints-supported graph database: Neo4j, OrientDB, TinkerGraph, InfiniteGraph (soon), Redis (soon), and more.
To see what I mean by "developers can define what types of operations a benchmark is composed of", please see the wiki documentation.
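To give a rough flavour of what such an operation might look like, here's a minimal sketch written purely against the Blueprints interfaces. This is illustrative only: the FriendCountOperation class, the "knows" label, and the vertex ids are made up for this post (they're not part of GraphDB-Bench), and the Blueprints package paths may differ between versions.

    import com.tinkerpop.blueprints.pgm.Edge;
    import com.tinkerpop.blueprints.pgm.Graph;
    import com.tinkerpop.blueprints.pgm.Vertex;
    import com.tinkerpop.blueprints.pgm.impls.tg.TinkerGraph;

    // Hypothetical operation: count the outgoing "knows" edges of a start vertex.
    // Because it only touches the Graph/Vertex/Edge interfaces, the same code can
    // run unchanged on any Blueprints-supported database.
    public class FriendCountOperation {

        public static long run(Graph graph, Object startVertexId) {
            Vertex start = graph.getVertex(startVertexId);
            long count = 0;
            for (Edge edge : start.getOutEdges()) {
                if ("knows".equals(edge.getLabel())) {
                    count++;
                }
            }
            return count;
        }

        public static void main(String[] args) {
            // In-memory TinkerGraph here; constructing, say, a Neo4jGraph instead
            // would exercise Neo4j with the exact same operation code.
            Graph graph = new TinkerGraph();
            Vertex alice = graph.addVertex("1");
            Vertex bob = graph.addVertex("2");
            graph.addEdge(null, alice, bob, "knows");

            System.out.println("friends of vertex 1: " + run(graph, "1"));
            graph.shutdown();
        }
    }

Swapping the one line that constructs the Graph is all it takes to point the same operation at a different vendor's database.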


Ok, why does the world need this?
Graph databases are not a hammer, but when applied to the right problem (see The Graph Traversal Pattern and Constructions from Dots and Lines) they outperform the vast majority of other data storage technologies. This has led to the growing popularity of graph databases. However, ask Joe-above-Average if he knows what a graph database is and he'll probably say no - they're taking over the world, but haven't done so yet.
At present, developers don't know how many graph database alternatives exist, let alone how to compare them. Moreover, they probably don't have the time, resources, or desire to thoroughly compare the available solutions. 
Wouldn't it be sweet if we had an unbiased (read: vendor-agnostic), open, extensible, community-driven graph database benchmarking tool... and a public repository of the benchmark results that it produces? Yeah, it would.


What are the main contributions?
Portability, again... to reiterate, in GraphDB-Bench benchmarks are portable: once written, they can be executed on any Blueprints-supported graph database. When a new graph database gets Blueprints support, all existing benchmarks can be run on it, quickly telling us where that database sits in the competitive landscape.


Fairness... benchmarking is a notoriously biased activity. Basically, you only see something if you're actively looking for it. One problem with this is that benchmarks may target a product's strengths and ignore its weaknesses. Another is that different applications perform different types of operations on the database. If a benchmark tests for every type of operation except the ones your application uses, the results will be of limited value.
GraphDB-Bench is very extensible: it allows the developer to define what a benchmark will "look for". This encourages people to define and develop benchmarks that interest them and match their use cases. More importantly, because GraphDB-Bench lets anyone define, extend, and combine benchmarks, it makes it easy for developers to collaborate... hopefully resulting in a community that together decides what aspects of graph databases the benchmarks should test.
All of this doesn't make bias go away, but it shifts the bias towards the needs of an open community, rather than the needs of an individual/company.


It's not all about "my database is bigger than yours"
Although GraphDB-Bench was first and foremost designed to be a benchmarking tool, it may be useful for a few other purposes. 
If you read the wiki you'll see that GraphDB-Bench makes it easy to execute an arbitrary number of operations (think of an operation as one run of an algorithm) against a database, each with different input parameters (e.g. the traversal starts from a different start vertex each time). Once the operations have executed, the running time and output of each one can be seen in the result logs. This functionality may make it suitable as a tool for comparing the performance/correctness of different algorithm implementations on the same dataset, or of one algorithm on different datasets.
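As a rough illustration of that idea (again, not actual GraphDB-Bench code - it has its own operation and result-log formats, documented on the wiki), the sketch below reuses the hypothetical FriendCountOperation from earlier, runs it once per vertex with a different start vertex as the input parameter, and prints a CSV-style line with each result and running time.

    import com.tinkerpop.blueprints.pgm.Graph;
    import com.tinkerpop.blueprints.pgm.Vertex;
    import com.tinkerpop.blueprints.pgm.impls.tg.TinkerGraph;

    public class OperationTimingSketch {

        public static void main(String[] args) {
            // Tiny in-memory dataset, purely for illustration.
            Graph graph = new TinkerGraph();
            Vertex a = graph.addVertex("1");
            Vertex b = graph.addVertex("2");
            Vertex c = graph.addVertex("3");
            graph.addEdge(null, a, b, "knows");
            graph.addEdge(null, a, c, "knows");
            graph.addEdge(null, b, c, "knows");

            // Same operation, different input parameter (start vertex) each run,
            // with the wall-clock running time recorded per run.
            for (Vertex start : graph.getVertices()) {
                long before = System.nanoTime();
                long result = FriendCountOperation.run(graph, start.getId());
                long micros = (System.nanoTime() - before) / 1000;
                System.out.println(start.getId() + "," + result + "," + micros);
            }
            graph.shutdown();
        }
    }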


What about benchmark results?
Sorry, not yet... we're working on it.
This project aims to add value to the graph database ecosystem by providing a vendor-agnostic evaluation tool. A users group discussion has started about how best to carry out the benchmarking process, ensuring minimum bias and maximum transparency. As soon as that gets sorted, the first round of benchmarks will be run and the results will be published on the project wiki.


I like, how can I get some and contribute some back?
Read the wiki, git clone, and join the users group!

Saturday, October 16, 2010

Thesis Report Available

For those interested in reading our thesis report, sorry for the delay in making it publicly available. It actually still hasn't made it through the marking process, so it may undergo a few more slight changes. Regardless, I'm grateful to those who have shown interest in our work, so I've made the final draft available here: http://arxiv.org/abs/1301.5121.

Regarding follow-up work, it looks like later this year another master's student will base their thesis on ours, extending it further in the direction of social network partitioning. We'll be collaborating with them, and we hope to write a joint paper that combines results from both theses.

I'm low on time right now so won't go into any discussion regarding results/findings. If you're interested, please read the report and ask/discuss/critique freely!