A Distributed Implementation of the Shared Data-object Model

Bibliographic Details
Main Authors: Bal, H.E., Kaashoek, M.F., Tanenbaum, A.S.
Format: Other Non-Article Part of Journal/Newspaper
Language: English
Published: USENIX 1989
Online Access: https://research.vu.nl/en/publications/cf9480d3-f57d-4788-a69d-88eb8a60dd3e
http://hdl.handle.net/1871.1/cf9480d3-f57d-4788-a69d-88eb8a60dd3e
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.37.8731&rep=rep1&type=pdf
Description
Summary: The shared data-object model is designed to ease the implementation of parallel applications on loosely coupled distributed systems. Unlike most other models for distributed programming (e.g., RPC), the shared data-object model allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The shared data-object model forms the basis of a new language for distributed programming, Orca, which gives linguistic support for parallelism and data-objects. A distributed implementation of the shared data-object model should take care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate objects in order to decrease access times to objects and increase parallelism. The intent of this paper is to show that, for several applications, the proposed model is both easy to use and efficient. We first give a brief description of the shared data-object model and Orca. Next, we describe one of several existing implementations of Orca. This implementation replicates all objects on all processors and updates replicas through a reliable broadcast protocol. We describe all three layers of this implementation: the Orca compiler, the Orca run time system, and the reliable broadcast protocol. Finally, we report on our experiences in using this implementation. We describe three parallel applications written in Orca and give performance measurements for them. We also compare these figures with those of a nondistributed (shared-memory) implementation of Orca. The measurements show that significant speedups can be obtained for all three applications.
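The replication scheme the abstract describes — every processor holds a full replica of each shared data-object, reads are served locally, and writes reach all replicas through a reliable, totally ordered broadcast — can be illustrated with a minimal sketch. This is not the Orca run time system itself; the class names (`IntObject`, `Broadcast`) and the single-loop delivery model are illustrative assumptions standing in for real per-processor replicas and a real broadcast protocol.

```python
class IntObject:
    """Hypothetical shared data-object: an integer abstract data type.

    In the Orca-style model, each operation on an object is classified
    as a read (served from the local replica) or a write (which must be
    applied to every replica).
    """
    def __init__(self):
        self.value = 0

    def read(self):
        # Read operation: consults the local replica only, so it is cheap.
        return self.value

    def add(self, n):
        # Write operation: must eventually be applied at every replica.
        self.value += n


class Broadcast:
    """Simulated reliable, totally ordered broadcast to all replicas.

    Delivering each operation to every replica inside one loop, in send
    order, stands in for the total ordering a real reliable broadcast
    protocol provides: all replicas apply the same writes in the same
    sequence, so their states never diverge.
    """
    def __init__(self, replicas):
        self.replicas = replicas

    def send(self, op_name, *args):
        for replica in self.replicas:
            getattr(replica, op_name)(*args)


# Three "processors", each holding its own replica of the shared object.
replicas = [IntObject() for _ in range(3)]
bus = Broadcast(replicas)

bus.send("add", 5)   # write: broadcast to all replicas
bus.send("add", 2)   # write: broadcast to all replicas
local = replicas[1]  # any processor can read its own copy locally
print(local.read())
```

Because writes are totally ordered, every replica ends in the same state (here, 7), so a local read on any processor returns the same value — which is what makes serving reads from the local copy safe in this scheme.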