DataSpaces Architecture

DataSpaces has a layered architecture consisting of (bottom-up) a communication layer, a distributed object store layer, a core service layer, and a programming abstraction layer.


Communication Layer

The DataSpaces service is built on top of a data communication and transport layer called DART. DART transparently exposes the capabilities of remote direct memory access (RDMA), e.g., asynchronous operations, at the application level to enable low-overhead, efficient communication by overlapping data transfers with computation. Asynchronous data transfers and explicit completion semantics allow data to be extracted from running applications with minimal overhead, and make DART a building block for higher-level services.

DART has been implemented on several RDMA-capable network interconnects, e.g., InfiniBand, Cray Portals/Gemini, and IBM DCMF, and is portable across a large number of high-end computing systems.
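The key idea above, overlapping a transfer with computation and blocking only when completion matters, can be mimicked in plain Python. This is a conceptual sketch, not the DART API: the class name, the queue-draining thread, and the byte-copy stand-in for an RDMA write are all illustrative assumptions.

```python
import threading
import queue

class AsyncTransfer:
    """Conceptual mimic of DART-style asynchronous transfers: the caller
    enqueues a buffer and keeps computing while a background worker
    drains it, then signals completion. Illustrative only."""

    def __init__(self):
        self._pending = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def _drain(self):
        while True:
            buf, done = self._pending.get()
            _ = bytes(buf)  # stand-in for the actual RDMA write
            done.set()      # completion event for this transfer

    def put_async(self, buf):
        """Start a transfer and immediately return a completion handle."""
        done = threading.Event()
        self._pending.put((buf, done))
        return done

transfer = AsyncTransfer()
handle = transfer.put_async(b"simulation output")
partial_sum = sum(range(1000))  # computation overlaps the in-flight transfer
handle.wait()                   # block only when completion is actually needed
```

The point of the sketch is the completion handle: the application decides when (if ever) to wait, so transfer latency hides behind useful work.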

Distributed Object Store Layer

DataSpaces builds an in-memory object storage repository by allocating memory buffers from distributed compute nodes. A distributed hash table (DHT) indexes the data locations and supports fast data lookup. A query engine built on top of the storage repository and the DHT resolves and services application data queries.
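The essential property of the DHT index is that a writer and a later reader, given the same object description, deterministically resolve to the same server partition. A toy sketch under assumed names (the key layout, server count, and hash choice are illustrative, not the DataSpaces implementation):

```python
import hashlib

NUM_SERVERS = 4  # hypothetical number of staging servers holding DHT partitions

def index_server(var_name, version, lb, ub):
    """Toy stand-in for the DHT lookup: hash the object description
    (variable, version, bounding box) to the server whose partition
    records the object's location."""
    key = f"{var_name}:{version}:{lb}:{ub}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SERVERS

# A put and a later get that describe the same region resolve to the
# same server, so a reader can locate a writer's data without any
# global coordination between the two applications.
writer_server = index_server("temperature", 0, (0, 0), (63, 63))
reader_server = index_server("temperature", 0, (0, 0), (63, 63))
```

Because lookup is a pure function of the key, no node needs a full map of where every object lives.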

Core Service Layer

On top of the object store and the DART communication layer, DataSpaces implements a number of core services to support execution and data exchange in coupled scientific workflows, summarized below.

Coordination and Data Sharing: this service defines and creates an in-memory object storage repository by allocating memory buffers from distributed compute nodes, and manages these buffers to present the abstraction of a virtual shared space. Interacting applications access the shared space associatively, which enables asynchronous coordination and memory-to-memory data sharing.
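"Associative access" means applications address data by description rather than by memory location or by naming each other. A minimal sketch of that abstraction (the class and key layout are assumptions for illustration, not the DataSpaces API):

```python
class SharedSpace:
    """Toy associative shared space: objects are keyed by
    (variable, version, region), so coupled applications exchange data
    without direct knowledge of one another. Illustrative only."""

    def __init__(self):
        self._store = {}

    def put(self, var, version, region, data):
        self._store[(var, version, region)] = data

    def get(self, var, version, region):
        return self._store[(var, version, region)]

space = SharedSpace()
# Simulation side: publish a field into the space for coupling.
space.put("pressure", 1, ((0, 0), (31, 31)), [0.0] * 1024)
# Analysis side: retrieve it by description, not by address.
field = space.get("pressure", 1, ((0, 0), (31, 31)))
```

In the real system the store is distributed across staging nodes and indexed by the DHT; the single dictionary here only captures the access semantics.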


Scalable Messaging: this service provides publish/subscribe/notification messaging patterns to scientists. The messaging system allows scientists to (1) dynamically subscribe to data events in regions of interest, (2) define actions that are triggered by those events, and (3) be notified when the events occur. For example, a registered data event may specify that a function or simple reduction of the data values in a certain region of the application domain is greater/less than a threshold value; the resulting actions include notifying users and triggering user-defined actions at the staging nodes, e.g., visualization or writing the target data to persistent storage.
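The threshold example above can be sketched as a tiny event bus: a subscriber registers a predicate over a region together with an action, and the action fires only when published data satisfies the predicate. Names and the single-process design are assumptions for illustration, not the DataSpaces messaging API:

```python
class DataEventBus:
    """Toy publish/subscribe sketch: subscribers register a predicate
    over a region of the domain and an action to trigger when it holds."""

    def __init__(self):
        self._subs = []

    def subscribe(self, region, predicate, action):
        self._subs.append((region, predicate, action))

    def publish(self, region, values):
        """Evaluate matching subscriptions; return results of fired actions."""
        fired = []
        for sub_region, predicate, action in self._subs:
            if sub_region == region and predicate(values):
                fired.append(action(values))
        return fired

bus = DataEventBus()
# Event: the mean of the values in the region exceeds a threshold.
bus.subscribe(
    region="zone-7",
    predicate=lambda v: sum(v) / len(v) > 100.0,
    action=lambda v: f"write {len(v)} values to persistent storage",
)
events = bus.publish("zone-7", [120.0, 95.0, 130.0])  # mean is 115.0, so it fires
```

In the real system the predicate is evaluated near the data on the staging nodes, so only notifications, not raw data, travel to subscribers.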

Mapping and Scheduling: this service manages the in-situ/in-transit placement of online data processing operations within the coupled simulation-analytics workflow. In-situ operations execute inline on the same processor cores that run the simulation, while in-transit operations execute on dedicated compute nodes in the staging area. The service also supports data-centric mapping and scheduling of workflow tasks, which increases the opportunities for intra-node in-memory data sharing and reuse, thus reducing network data movement.
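One way to think about the in-situ vs. in-transit decision is as a cost trade-off between moving the data and computing on it locally. The rule below is a hypothetical heuristic invented for illustration; the parameters and the decision logic are not taken from DataSpaces:

```python
def place_operation(op_cost, data_size, transfer_rate, core_budget):
    """Illustrative placement heuristic (all parameters hypothetical):
    run an analysis operation in situ when it is cheap enough to run
    inline, or when shipping its input off-node would cost more than
    computing it locally; otherwise run it in transit on staging nodes."""
    move_time = data_size / transfer_rate   # seconds to ship the input
    if op_cost <= core_budget or move_time > op_cost:
        return "in-situ"
    return "in-transit"

# Cheap operation on a large input: keep it on the simulation's cores.
cheap = place_operation(op_cost=0.5, data_size=8e9,
                        transfer_rate=1e9, core_budget=1.0)
# Expensive operation on a modest input: ship it to the staging area.
heavy = place_operation(op_cost=5.0, data_size=1e9,
                        transfer_rate=1e9, core_budget=1.0)
```

The data-centric scheduling mentioned above pushes the same idea further: co-locating tasks that share inputs so the "move" term disappears entirely.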

Programming Abstraction Layer

DataSpaces extends existing parallel programming models, such as MPI and the Partitioned Global Address Space (PGAS) model, with a small set of APIs and user-defined input files that expose the core services described above and enable the coupling of workflow component applications. (1) DataSpaces provides put()/get() operators for applications to access the virtual shared space, enabling asynchronous coordination and memory-to-memory data sharing. (2) DataSpaces provides pub()/sub() operators for the publish/subscribe/notification messaging pattern, allowing scientists to dynamically register data events of interest, define the actions triggered by those events, and be notified when the events occur. (3) Programming a data analysis workflow that operates on simulation-generated data consists of two steps: first, define the data dependencies between the analysis operations as a DAG, where each DAG task represents a specific analysis operation; second, express the data sharing between parent and child tasks of the DAG using the shared-space put()/get() operators.
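The two-step workflow recipe in (3) can be sketched end to end: a parent task put()s its output, and each child task get()s it by name. The task names, the dictionary-backed space, and the bare put/get helpers are illustrative assumptions, not the DataSpaces API:

```python
# Toy DAG of analysis tasks sharing data through shared-space put()/get().
space = {}  # stand-in for the distributed shared space

def put(var, version, data):
    space[(var, version)] = data

def get(var, version):
    return space[(var, version)]

def simulate():
    """Root of the DAG: produce a field."""
    put("field", 0, list(range(8)))

def reduce_task():
    """Child of simulate: consume the field, produce a scalar."""
    put("sum", 0, sum(get("field", 0)))

def report_task():
    """Child of reduce_task: consume the scalar, produce a report."""
    put("report", 0, f"total = {get('sum', 0)}")

# Step 1 defined the dependencies (simulate -> reduce_task -> report_task);
# step 2 expressed the sharing via put()/get(). Run in topological order.
for task in (simulate, reduce_task, report_task):
    task()
```

Because tasks communicate only through the space, the runtime is free to place each one in situ or in transit without changing the workflow code.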

Recent News

June 2016: DataSpaces 1.6.1 Release!

DataSpaces 1.6.1 has been released with new network support and various bug fixes. It is available in the Download section.

July 2015: DataSpaces tutorial at XSEDE'15!

We gave a DataSpaces tutorial at the XSEDE'15 conference in St. Louis, MO, in July 2015. The slides are available here (part 1, part 2).

July 2015: DataSpaces 1.6.0 Release!

DataSpaces 1.6.0 has been released with new features and various bug fixes. It is available in the Download section.

January 2015: DataSpaces 1.5.0 Release!

DataSpaces 1.5.0 has been released with various bug fixes. It is available in the Download section.

June 13th, 2014: DataSpaces 1.4.0 Release!

DataSpaces 1.4.0 has been released with new features and various bug fixes (along with a DataSpaces logo). It is available in the Download section.

May 22nd, 2014: Tong Jin won the best poster award at the IPDPS 2014 PhD Forum

His poster, titled "Runtime Support for Dynamic Online Data Management and Insight Discovery in Large Scale Coupled Simulation Workflow", highlights his research in the HPC arena. You can see the poster here.

December 2013: DataSpaces 1.3.0 Release!

We are excited to announce the release of DataSpaces 1.3.0. It is available in the Download section.

August 19th, 2013: Presentation at ADIOS Code Sprint

Fan Zhang and Hoang Bui gave a talk titled "Data Staging with DataSpaces" at the ADIOS Code Sprint at Oak Ridge National Laboratory.

July 18th, 2013: DataSpaces, as part of ADIOS, wins R&D 100 award

ADIOS, the Adaptable I/O System for Big Data, has received the prestigious R&D 100 Award. More information can be found here and here.

This work is supported by the National Science Foundation and the Department of Energy.