Object distribution in Orca using compile-time and run-time techniques

Bibliographic Details
Main Authors: Henri E. Bal, M. Frans Kaashoek
Other Authors: The Pennsylvania State University CiteSeerX Archives
Format: Text
Language: English
Published: 1993
Subjects:
Online Access:http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.89.7557
http://www.bsslab.de/download/documents/amoeba_docs/oopsla93.pdf
Description
Summary: Orca is a language for parallel programming on distributed systems. Communication in Orca is based on shared data-objects, a form of distributed shared memory. The performance of Orca programs depends strongly on how shared data-objects are distributed among the local physical memories of the processors. This paper studies a new and efficient solution to this problem, based on an integration of compile-time and run-time techniques. The Orca compiler has been extended to determine the access patterns of processes to shared objects. The compiler passes a summary of this information to the run-time system, which uses it to make good decisions about which objects to replicate and where to store nonreplicated objects. Measurements show that the new system gives better overall performance than any previous implementation of Orca.
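
To make the idea concrete, the sketch below illustrates how a run-time system could use compiler-provided access summaries to choose between replicating an object and storing a single copy near its heaviest user. It is a minimal Python illustration under assumed field names and thresholds; it is not the paper's actual algorithm or the Orca run-time system's data structures.

```python
# Illustrative sketch only: a placement heuristic driven by per-process
# access summaries, loosely in the spirit of the approach described in
# the abstract. Field names and the read/write threshold are assumptions.

from dataclasses import dataclass

@dataclass
class AccessSummary:
    """Per-process access counts for one shared object (hypothetical)."""
    process: int
    reads: int
    writes: int

def place_object(summaries: list[AccessSummary]) -> str:
    """Decide whether to replicate an object or store a single copy.

    Assumed heuristic: objects read far more often than written benefit
    from replication; write-dominated objects are stored on the
    processor that accesses them most.
    """
    total_reads = sum(s.reads for s in summaries)
    total_writes = sum(s.writes for s in summaries)

    if total_writes == 0 or total_reads / total_writes > 4:
        return "replicate on all processors"

    # Otherwise store one copy where most accesses originate.
    owner = max(summaries, key=lambda s: s.reads + s.writes)
    return f"store single copy on processor {owner.process}"

if __name__ == "__main__":
    # A mostly-read object (e.g. a lookup table shared by workers).
    print(place_object([AccessSummary(0, 900, 1),
                        AccessSummary(1, 850, 0)]))
    # A write-heavy object (e.g. a work queue updated by one process).
    print(place_object([AccessSummary(0, 10, 500),
                        AccessSummary(1, 40, 5)]))
```

The point of the sketch is only the division of labor the abstract describes: the compiler supplies a summary of access behavior, and the run-time system turns that summary into per-object placement decisions.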