Reusing distributed computing software and patterns for midscale collaborative science

Many of the challenges faced by the LHC experiments (aggregation of distributed computing resources, management of data across multiple storage facilities, integration of experiment-specific workflow management tools across multiple grid services) are similarly experienced by "midscale" high energy physics and astrophysics experiments, particularly as their data set volumes are increasing at comparable rates. Often these international, multi-institution collaborations have outgrown the computing resources offered by their home laboratories, or the capacities of any single member institution. Unlike the LHC experiments, however, these collaborations often lack the manpower required to build, integrate, and operate the systems needed at their scale. In the Open Science Grid, we have organized a team to help collaborative science organizations reuse proven software and patterns in distributed processing and data management, often but not exclusively software developed for the LHC. Examples include Rucio and FTS3 for reliable data transfer and management, XRootD for data access and caching, Ceph for large-scale pre-processing storage, and Pegasus for workflow management across heterogeneous resources. We summarize experience with the VERITAS gamma-ray observatory, the South Pole Telescope (a CMB detector), and the XENON dark matter search experiment.
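As a concrete illustration of the data management pattern described above, the following is a minimal sketch of driving Rucio (which in turn relies on FTS3 for the actual transfers) from its Python client. It assumes an already configured Rucio client and uses hypothetical scope, dataset, file, and RSE names rather than the real VERITAS, SPT, or XENON deployments.

```python
# Minimal sketch: rule-based data management with the Rucio Python client.
# Assumes a working Rucio configuration; all scope/dataset/RSE names below
# ("veritas", "run_2019_11_05", "UCHICAGO_DISK", "SDSC_TAPE") are hypothetical.
from rucio.client import Client

client = Client()

# Group already-registered files for one observing run into a dataset.
client.add_dataset(scope="veritas", name="run_2019_11_05")
client.attach_dids(
    scope="veritas",
    name="run_2019_11_05",
    dids=[{"scope": "veritas", "name": "run_2019_11_05_f001.dat"}],
)

# Ask for two replicas across two storage elements; Rucio's rule engine then
# schedules the FTS3 transfers needed to satisfy the rule.
client.add_replication_rule(
    dids=[{"scope": "veritas", "name": "run_2019_11_05"}],
    copies=2,
    rse_expression="UCHICAGO_DISK|SDSC_TAPE",
    lifetime=30 * 24 * 3600,  # retention in seconds
)
```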

Bibliographic Details
Main Authors: Paschalis Paschos, Mats Rynge, Benedikt Riedel, Frank Wuerthwein, Robert William Gardner Jr
Format: Conference Object
Language: unknown
Published: 2019-11-05
Subjects:
Online Access:https://zenodo.org/record/3599635
https://doi.org/10.5281/zenodo.3599635
institution Open Polar
collection Zenodo
geographic South Pole
op_relation doi:10.5281/zenodo.3599634
https://zenodo.org/communities/chep-2019
op_rights info:eu-repo/semantics/openAccess
https://creativecommons.org/licenses/by/4.0/legalcode