Monthly Archives: July 2014

MapGraph processes nearly 30 billion edges per second on GPU cluster

We have been working on a multi-GPU version of MapGraph. The new code performs Breadth First Search (BFS) over a scale-free graph with 4.3 billion directed edges at 29 billion traversed edges per second (29 GTEPS) on a cluster of 64 K20 GPUs; at that rate, a single BFS over the full graph completes in roughly 150 milliseconds. On the platform described in the technical report, MapGraph is 3x faster than the YarcData® Urika® appliance at what we estimate to be 1/3 of the hardware cost, a 9x improvement in price/performance. Using K40 GPUs, a cluster of 81 GPUs offers nearly 1 TB of GPU RAM, which is the maximum memory configuration available for the YarcData appliance. However, unlike the YarcData appliance, the MapGraph solution can scale out to even larger GPU deployments.

Contact us directly if you are interested in custom applications of this emerging disruptive technology.
----
Cray, YarcData, and Urika are trademarks of Cray, Inc.


Significant increases in transaction throughput for RWStore

We have made some significant changes in the RWStore to improve throughput for incremental transactions. These changes address a problem where scattered IOs create a bottleneck in the disk system, and they also reduce the total number of bytes written to the disk. The benefit is largest for SATA disks, but it is substantial for SAS and SSD disks as well. How substantial? We observe throughput increasing by 3-4x over our baseline configurations for incremental data load of LUBM.

First, a bit of background. Bigdata uses clustered indices for everything. This includes the dictionary indices (TERM2ID, ID2TERM, and BLOBS) and the statement indices (SPO, POS, and OSP). In quads mode, we use a different set of clustered indices for the statements (SPOC, POCS, etc.). Some of these indices naturally have good locality on update, especially the ID2TERM index, the SPO/SPOC indices, and the CSPO index (in quads mode). These indices will always show good locality for transaction updates because we sort the index writes and then write on each index in ascending key order for maximum cache effect.
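To make the "sort, then write ascending" pattern concrete, here is a minimal sketch in Java. It is an illustration only, not bigdata's actual code: the Tuple type and the unsigned key comparison are assumptions standing in for the real B+Tree machinery. Sorting a batch of writes by key before applying them means consecutive writes land on the same or adjacent leaf pages, so each dirty page is visited once rather than being revisited at random.

import java.util.*;

// Sketch: apply a batch of index writes in ascending key order.
public class SortedBatchWrite {

    // A key/value pair destined for a clustered index (hypothetical type).
    static final class Tuple {
        final byte[] key;
        final byte[] val;
        Tuple(byte[] key, byte[] val) { this.key = key; this.val = val; }
    }

    // Unsigned lexicographic comparison, as used for B+Tree keys.
    static int compareKeys(byte[] a, byte[] b) {
        final int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            final int c = (a[i] & 0xff) - (b[i] & 0xff);
            if (c != 0) return c;
        }
        return a.length - b.length;
    }

    // Sort the batch once, then write in ascending key order so writes to
    // the same leaf page are adjacent in time (maximum cache effect).
    static void applyBatch(NavigableMap<byte[], byte[]> index, List<Tuple> batch) {
        batch.sort((x, y) -> compareKeys(x.key, y.key));
        for (Tuple t : batch) {
            index.put(t.key, t.val);
        }
    }

    public static void main(String[] args) {
        final NavigableMap<byte[], byte[]> index =
                new TreeMap<>(SortedBatchWrite::compareKeys);
        final List<Tuple> batch = new ArrayList<>();
        final Random rnd = new Random(42);
        for (int i = 0; i < 1000; i++) {
            final byte[] key = new byte[8];
            rnd.nextBytes(key);
            batch.add(new Tuple(key, new byte[0]));
        }
        applyBatch(index, batch);
        System.out.println("index size = " + index.size());
    }
}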

However, the statement indices that start with O (OSP, OCSP) always have very poor locality. This is because the Object position varies widely across the statements in any given transaction update, so most index pages for the OSP/OCSP indices that are touched by a transaction are dirtied by a single tuple. The same problem exists, to a somewhat lesser extent, with the P indices (POS, POCS, PCSO).

The TERM2ID index normally has decent locality, but if you are using UUIDs or similar globally unique identifiers in your URLs, that will cause a scattered update profile on the TERM2ID index. The best practice here is to create an inline IV type for your UUID-based URLs so that they are automatically converted into fixed-length IVs (18 bytes: 1 flags byte, 1 extension byte, and 16 bytes for the UUID). This removes the UUID-based URLs from the dictionary indices entirely; they are inlined into the statement indices instead at 18 bytes per URL.
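As a concrete illustration of that 18-byte layout, the following sketch packs a UUID extracted from a URL into a fixed-length value. The flags and extension byte values here are placeholders, not bigdata's actual bit assignments:

import java.nio.ByteBuffer;
import java.util.UUID;

// Sketch of the 18-byte inline IV layout: flags + extension + 16-byte UUID.
public class InlineUuidIVSketch {

    // Placeholder values; bigdata's real flags/extension encodings differ.
    private static final byte FLAGS = 0x01;     // hypothetical: inline, extended IV
    private static final byte EXTENSION = 0x02; // hypothetical: UUID-URL datatype

    // Encode a UUID into a fixed-length 18-byte IV.
    static byte[] encode(final UUID uuid) {
        final ByteBuffer buf = ByteBuffer.allocate(18);
        buf.put(FLAGS);                              // 1 byte: flags
        buf.put(EXTENSION);                          // 1 byte: extension
        buf.putLong(uuid.getMostSignificantBits());  // 8 bytes
        buf.putLong(uuid.getLeastSignificantBits()); // 8 bytes
        return buf.array();
    }

    public static void main(String[] args) {
        // e.g. http://example.com/resource/123e4567-e89b-12d3-a456-426614174000
        final UUID uuid = UUID.fromString("123e4567-e89b-12d3-a456-426614174000");
        System.out.println("IV length = " + encode(uuid).length + " bytes"); // 18
    }
}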

The solution for these scattered updates is to (a) reduce the branching factors to target a 1024-byte page size (or less) for the indices with scattered update patterns (this reduces the number of bytes written to the disk); (b) enable the small slot optimization in the RWStore (this ensures good locality on the disk for the indices with scattered update patterns); and (c) optionally reduce the write retention queue capacity for those indices (this reduces the GC overhead associated with those indices; there is little benefit to a large retention queue when the access pattern for the index is scattered).

The small slot optimization will ship in the 1.3.2 release. To enable it before then, you need to be using branches/BIGDATA_RELEASE_1_3_0 at r8568 or above.

The current advice to reduce IO in update transactions is:

  • Set the default B+Tree branching factor to 256.
  • Set the default B+Tree write retention queue capacity to 4000.
  • Enable the small slot optimization.
  • Override the branching factors for OSP/OCSP and POS/POCS to 64.

To do this, modify your properties file and/or specify the following properties when creating a new namespace within bigdata.

# Enable small slot optimization.
com.bigdata.rwstore.RWStore.smallSlotType=1024
# Set the default B+Tree branching factor.
com.bigdata.btree.BTree.branchingFactor=256
# Set the default B+Tree retention queue capacity.
com.bigdata.btree.writeRetentionQueue.capacity=4000

The branching factor overrides must be made for each index in each triple store or quad store instance. For example, the following properties override the branching factors for several indices in the default bigdata namespace, which is "kb". You need to do this for each namespace that you create.

com.bigdata.namespace.kb.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor=400
com.bigdata.namespace.kb.spo.SPO.com.bigdata.btree.BTree.branchingFactor=1024
com.bigdata.namespace.kb.spo.OSP.com.bigdata.btree.BTree.branchingFactor=64
com.bigdata.namespace.kb.spo.POS.com.bigdata.btree.BTree.branchingFactor=64

The small slot optimization will take effect when you restart bigdata.  The changes to the write retention queue capacity and the branching factors will only take effect when a new triple store or quad store instance is created.
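For example, here is a minimal sketch of creating a fresh triple store with these properties through the standard BigdataSail bootstrap. The journal path is a placeholder; adjust it for your deployment:

import java.util.Properties;

import org.openrdf.repository.Repository;

import com.bigdata.journal.BufferMode;
import com.bigdata.journal.Options;
import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

public class CreateTunedStore {
    public static void main(String[] args) throws Exception {
        final Properties props = new Properties();
        // RWStore-backed journal; the file path is a placeholder.
        props.setProperty(Options.BUFFER_MODE, BufferMode.DiskRW.toString());
        props.setProperty(Options.FILE, "/var/data/bigdata/store.jnl");
        // Settings from this post; they take effect when the store is created.
        props.setProperty("com.bigdata.rwstore.RWStore.smallSlotType", "1024");
        props.setProperty("com.bigdata.btree.BTree.branchingFactor", "256");
        props.setProperty("com.bigdata.btree.writeRetentionQueue.capacity", "4000");
        props.setProperty(
            "com.bigdata.namespace.kb.spo.OSP.com.bigdata.btree.BTree.branchingFactor", "64");
        props.setProperty(
            "com.bigdata.namespace.kb.spo.POS.com.bigdata.btree.BTree.branchingFactor", "64");

        final BigdataSail sail = new BigdataSail(props);
        final Repository repo = new BigdataSailRepository(sail);
        repo.initialize();
        try {
            System.out.println("Created store with scattered-update tuning.");
        } finally {
            repo.shutDown();
        }
    }
}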

We still need to examine the impact on query performance from these branching factor changes. In principle, the latency of an index probe is proportional to the height of the B+Tree, which is log_m(n) for a branching factor of m over n tuples, so reducing the branching factor increases latency only sub-linearly. For example, at one billion tuples the height grows from ceil(log_256(10^9)) = 4 levels to ceil(log_64(10^9)) = 5 levels when the branching factor drops from 256 to 64. Testing on BSBM 100M confirms that the reduced branching factors recommended above for the indices with scattered update patterns do not impact query performance.


Bigdata powers Linked Open Data portal for the Bavarian State Library (Bayerische Staatsbibliothek)

We are pleased to announce that the Bayerische Staatsbibliothek is in production with bigdata powering their public SPARQL endpoint.

  • SPARQL endpoint: http://lod.b3kat.de
  • Dataset description on the Datahub: http://datahub.io/dataset/b3kat
  • About the Bavarian State Library: http://www.bsb-muenchen.de/en/about-us/the-library-in-brief/

For more information, please contact lod@bsb-muenchen.de.
