Using Compile-Time Granularity Information to Support Dynamic Work Distribution in Parallel Logic Programming Systems


Bibliographic Details
Main Authors: Inês de Castro Dutra, Vítor Santos Costa, Jorge L. V. Barbosa, Claudio F. R. Geyer
Other Authors: The Pennsylvania State University CiteSeerX Archives
Format: Text
Language: English
Published: 1999
Subjects:
Online Access:http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.93.7676
http://www.inf.unisinos.br/~holo/publicacoes/artigos/andorra+orca.pdf
id ftciteseerx:oai:CiteSeerX.psu:10.1.1.93.7676
record_format openpolar
spelling ftciteseerx:oai:CiteSeerX.psu:10.1.1.93.7676 2023-05-15T17:53:57+02:00 Using Compile-Time Granularity Information to Support Dynamic Work Distribution in Parallel Logic Programming Systems Inês de Castro Dutra Vítor Santos Costa Jorge L. V. Barbosa Claudio F. R. Geyer The Pennsylvania State University CiteSeerX Archives 1999 application/pdf http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.93.7676 http://www.inf.unisinos.br/~holo/publicacoes/artigos/andorra+orca.pdf en eng http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.93.7676 http://www.inf.unisinos.br/~holo/publicacoes/artigos/andorra+orca.pdf Metadata may be used without restrictions as long as the oai identifier remains attached to it. http://www.inf.unisinos.br/~holo/publicacoes/artigos/andorra+orca.pdf text 1999 ftciteseerx 2016-01-08T19:54:00Z A very important component of a parallel system that generates irregular computational patterns is its work distribution strategy. Scheduling strategies for such systems must be smart enough to dynamically balance the workload without incurring high overhead. Logic programs running on parallel logic programming systems are examples of irregular parallel computations. The two main forms of parallelism exploited by parallel logic programming systems are and-parallelism, which arises when several literals in the body of a clause can execute in parallel, and or-parallelism, which arises when several alternative clauses in the program database can be selected in parallel. In this work we show that scheduling strategies for distributing and-work and or-work in parallel logic programming systems must combine information obtained at compile-time with runtime information whenever possible, in order to obtain better performance.
The information obtained at compile-time has two advantages over currently implemented systems that use only runtime information: (1) the user does not need to adjust parameters in order to estimate the sizes of and-work and or-work for the programs; (2) the schedulers can use more accurate estimates of the sizes of and-work and or-work to make better decisions at runtime. We performed our experiments with Andorra-I, a parallel logic programming system that exploits both determinate and-parallelism and or-parallelism. To obtain compile-time granularity information we used the ORCA tool. Our benchmark set ranges over programs containing and-parallelism only, or-parallelism only, and a combination of both and- and or-parallelism. Our results show that, when well designed, scheduling strategies can actually benefit from compile-time granularity information. Text
institution Open Polar
collection Unknown
op_collection_id ftciteseerx
language English
description A very important component of a parallel system that generates irregular computational patterns is its work distribution strategy. Scheduling strategies for such systems must be smart enough to dynamically balance the workload without incurring high overhead. Logic programs running on parallel logic programming systems are examples of irregular parallel computations. The two main forms of parallelism exploited by parallel logic programming systems are and-parallelism, which arises when several literals in the body of a clause can execute in parallel, and or-parallelism, which arises when several alternative clauses in the program database can be selected in parallel. In this work we show that scheduling strategies for distributing and-work and or-work in parallel logic programming systems must combine information obtained at compile-time with runtime information whenever possible, in order to obtain better performance. The information obtained at compile-time has two advantages over currently implemented systems that use only runtime information: (1) the user does not need to adjust parameters in order to estimate the sizes of and-work and or-work for the programs; (2) the schedulers can use more accurate estimates of the sizes of and-work and or-work to make better decisions at runtime. We performed our experiments with Andorra-I, a parallel logic programming system that exploits both determinate and-parallelism and or-parallelism. To obtain compile-time granularity information we used the ORCA tool. Our benchmark set ranges over programs containing and-parallelism only, or-parallelism only, and a combination of both and- and or-parallelism. Our results show that, when well designed, scheduling strategies can actually benefit from compile-time granularity information.
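As a hypothetical illustration of the granularity-aware scheduling idea in the description above (this is not the Andorra-I scheduler itself), the Python sketch below assigns tasks, ordered by compile-time cost estimates, to the least-loaded worker; all task names and cost values are invented for the example:

```python
import heapq

def distribute(tasks, n_workers):
    """Greedy granularity-aware distribution: hand each task
    (largest estimated cost first) to the least-loaded worker."""
    loads = [(0, w) for w in range(n_workers)]  # (current load, worker id)
    heapq.heapify(loads)
    assignment = {w: [] for w in range(n_workers)}
    for name, cost in sorted(tasks, key=lambda t: -t[1]):
        load, w = heapq.heappop(loads)       # least-loaded worker so far
        assignment[w].append(name)
        heapq.heappush(loads, (load + cost, w))
    makespan = max(load for load, _ in loads)  # heaviest worker's total
    return assignment, makespan

# Tasks labelled by predicate, with invented compile-time cost estimates.
tasks = [("p/1", 8), ("q/2", 7), ("r/0", 5), ("s/3", 3), ("t/1", 2)]
plan, makespan = distribute(tasks, 2)  # e.g. makespan == 13 for this data
```

Ordering by estimated cost is what the compile-time information buys: a purely runtime scheduler sees tasks in arrival order and cannot prioritise coarse-grained work this way.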
author2 The Pennsylvania State University CiteSeerX Archives
format Text
author Inês de Castro Dutra
Vítor Santos Costa
Jorge L. V. Barbosa
Claudio F. R. Geyer
spellingShingle Inês de Castro Dutra
Vítor Santos Costa
Jorge L. V. Barbosa
Claudio F. R. Geyer
Using Compile-Time Granularity Information to Support Dynamic Work Distribution in Parallel Logic Programming Systems
author_facet Inês de Castro Dutra
Vítor Santos Costa
Jorge L. V. Barbosa
Claudio F. R. Geyer
author_sort Inês de Castro Dutra
title Using Compile-Time Granularity Information to Support Dynamic Work Distribution in Parallel Logic Programming Systems
title_short Using Compile-Time Granularity Information to Support Dynamic Work Distribution in Parallel Logic Programming Systems
title_full Using Compile-Time Granularity Information to Support Dynamic Work Distribution in Parallel Logic Programming Systems
title_fullStr Using Compile-Time Granularity Information to Support Dynamic Work Distribution in Parallel Logic Programming Systems
title_full_unstemmed Using Compile-Time Granularity Information to Support Dynamic Work Distribution in Parallel Logic Programming Systems
title_sort using compile-time granularity information to support dynamic work distribution in parallel logic programming systems
publishDate 1999
url http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.93.7676
http://www.inf.unisinos.br/~holo/publicacoes/artigos/andorra+orca.pdf
op_source http://www.inf.unisinos.br/~holo/publicacoes/artigos/andorra+orca.pdf
op_relation http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.93.7676
http://www.inf.unisinos.br/~holo/publicacoes/artigos/andorra+orca.pdf
op_rights Metadata may be used without restrictions as long as the oai identifier remains attached to it.
_version_ 1766161665352531968