Experience with distributed programming in Orca

Bibliographic Details
Main Authors: Henri E. Bal, M. Frans Kaashoek, Andrew S. Tanenbaum
Other Authors: The Pennsylvania State University CiteSeerX Archives
Format: Text
Language: English
Published: 1990
Subjects:
Online Access:http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.119.6176
http://www.cs.vu.nl/~ast/publications/iccl-1990.pdf
Description
Summary: Orca is a language for programming parallel applications on distributed computing systems. Although processors in such systems communicate only through message passing and not through shared memory, Orca lets processes define abstract data types and create instances (objects) of these types, which may be shared among processes. All operations on shared objects are executed atomically. Orca's shared objects are implemented by replicating them in the local memories of the processors. Read operations use the local copies of the object, without doing any interprocess communication. Write operations update all copies using an efficient reliable broadcast protocol. In this paper, we briefly describe the language and its implementation and then report on our experiences in using Orca for three parallel applications: the Traveling Salesman Problem, the All-pairs Shortest Paths problem, and Successive Overrelaxation. These applications have different needs for shared data: TSP greatly benefits from the support for shared data; ASP benefits from the use of broadcast communication, even though it is hidden in the implementation; SOR merely requires point-to-point communication, but can still be implemented in the language by simulating message passing.
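
To illustrate the shared data-object model described in the summary, the following is a minimal sketch in Go (not Orca, whose source does not appear in this record) of a replicated shared object such as a global bound in TSP: each process reads its own local copy without communication, while the object's single write operation atomically updates every copy, standing in for the reliable broadcast protocol. All names here (replica, sharedObject, UpdateIfLower) and the mutex-based stand-in for broadcast are illustrative assumptions, not taken from the paper.

    package main

    import (
        "fmt"
        "sync"
    )

    // replica is one process's local copy of a shared integer object,
    // e.g. the length of the best tour found so far in TSP.
    type replica struct {
        mu    sync.Mutex
        value int
    }

    // Read uses only the local copy; no interprocess communication is needed.
    func (r *replica) Read() int {
        r.mu.Lock()
        defer r.mu.Unlock()
        return r.value
    }

    // sharedObject ties all replicas together. Its write operation updates
    // every copy; the global lock stands in for the totally ordered
    // reliable broadcast used by the real implementation.
    type sharedObject struct {
        mu       sync.Mutex
        replicas []*replica
    }

    func newSharedObject(nProcs, initial int) *sharedObject {
        s := &sharedObject{}
        for i := 0; i < nProcs; i++ {
            s.replicas = append(s.replicas, &replica{value: initial})
        }
        return s
    }

    // UpdateIfLower is the object's single (atomic) write operation: it checks
    // the current value and, if v is an improvement, installs v in all replicas.
    func (s *sharedObject) UpdateIfLower(v int) {
        s.mu.Lock()
        defer s.mu.Unlock()
        if v >= s.replicas[0].Read() {
            return
        }
        for _, r := range s.replicas {
            r.mu.Lock()
            r.value = v
            r.mu.Unlock()
        }
    }

    func main() {
        const nProcs = 4
        bound := newSharedObject(nProcs, 1<<30) // effectively "infinite" initial bound

        var wg sync.WaitGroup
        for p := 0; p < nProcs; p++ {
            wg.Add(1)
            go func(p int) {
                defer wg.Done()
                local := bound.replicas[p]
                cost := 100 - 10*p // stand-in for the cost of a tour found by process p
                if cost < local.Read() {
                    bound.UpdateIfLower(cost) // publish the improvement to every copy
                }
            }(p)
        }
        wg.Wait()
        fmt.Println("best bound at process 0:", bound.replicas[0].Read())
    }

In Orca itself the replication, local reads, and broadcast writes are handled by the language runtime across machines; the per-object lock above is only a single-machine approximation of that behaviour.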