It can be confusing to navigate the various triple store options out there. Which one is best for your application?
Here are a couple of steps forward to help you get started with bigdata.
We’ve published an early draft of a bigdata architecture whitepaper. It’s a work in progress as you’ll be able to tell by reading it.
Also, we’ve started sketching out the getting started guide for scale-out on the wiki. We still recommend keeping us involved in the process if you’re interested in trying out bigdata on a cluster, as there are a lot of dos and don’ts when it comes to configuring a distributed database and writing performant code against it. What this guide is currently missing is sample code for distributed data load and query. Keep an eye out for this in the next few days.
Now that bigdata is handling billions of triples with ease, we are ready to venture into higher expressivity as well. There is always a tradeoff between the expressiveness of the ontology and the computational complexity of computing the entailments. So far, bigdata has focused on data scale; now we are ready to look at reasoner complexity. To do this we are exploring some integration options, including partnering with Clark & Parsia to develop an integration with the Pellet2 OWL reasoner.
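To make the tradeoff concrete, here is a minimal sketch (in plain Python, not bigdata's or Pellet's API) of why richer ontologies cost more at inference time: even a single RDFS-style rule, like the transitivity of subClassOf, forces the reasoner to iterate over the data to a fixed point, and more expressive profiles add many more such rules.

```python
def forward_chain_subclass(triples):
    """Naive fixed-point computation of the transitive closure of subClassOf.

    Applies the rule (A subClassOf B) ∧ (B subClassOf C) ⇒ (A subClassOf C)
    until no new entailments appear.
    """
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (b2, c) in list(closure):
                if b == b2 and (a, c) not in closure:
                    closure.add((a, c))
                    changed = True
    return closure

# Hypothetical toy ontology: Dog subClassOf Mammal, Mammal subClassOf Animal.
asserted = {("Dog", "Mammal"), ("Mammal", "Animal")}
entailed = forward_chain_subclass(asserted)
# The closure additionally contains ("Dog", "Animal").
```

This is just one Horn rule over one predicate; full OWL profiles multiply both the rule set and the cost of each pass, which is exactly the scale/expressivity tension described above.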
Speak out and let us know what combination of data scale and ontology complexity you need. Do you want datalog, OWL2 profiles (RL, QL, EL), or Horn-SHIQ? Do you need SWRL, and how do you want to use it? Example ontologies, data scale, and use caselets would all help.