Reusing distributed computing software and patterns for midscale collaborative science

Many of the challenges faced by the LHC experiments (aggregation of distributed computing resources, management of data across multiple storage facilities, integration of experiment-specific workflow management tools across multiple grid services) are similarly experienced by "midscale" high energy physics and astrophysics experiments, particularly as their data set volumes are increasing at comparable rates. Often these (international, multi-institution) collaborations have outgrown the computing resources offered by their home laboratories, or the capacities of any single member institution. Unlike the LHC experiments, however, these collaborations often lack the manpower required to build, integrate and operate the systems required to meet their scale. In the Open Science Grid, we have organized a team to support collaborative science organizations in re-using proven software and patterns for distributed processing and data management, often but not restricted to software developed for the LHC. Examples include the re-use of the Rucio and FTS3 software for reliable data transfer and management, XRootD for data access and caching, Ceph for large-scale pre-processing storage, and Pegasus for workflow management across heterogeneous resources. We summarize experience with the VERITAS gamma ray observatory, the South Pole Telescope (CMB detector), and the XENON dark matter search experiment.
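To make the data-management pattern concrete, the sketch below shows how a collaboration might declare a Rucio replication rule, the kind of policy-driven transfer and replica management the abstract describes re-using. This is an illustrative sketch only: the scope, dataset name, RSE expression, and lifetime are hypothetical placeholders, not values from the presentation.

    # Illustrative sketch: request two disk replicas of a dataset via the Rucio
    # client API; Rucio then schedules the transfers (e.g. through FTS3) and
    # maintains the replicas for the lifetime of the rule.
    from rucio.client import Client

    client = Client()  # reads the experiment's Rucio server settings from rucio.cfg

    client.add_replication_rule(
        dids=[{"scope": "veritas", "name": "run12345.raw"}],  # placeholder dataset identifier
        copies=2,                                             # number of replicas to maintain
        rse_expression="type=DISK",                           # placeholder storage-element expression
        lifetime=30 * 24 * 3600,                              # rule lifetime in seconds (30 days)
    )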

Bibliographic Details
Main Authors: Paschos, Paschalis; Rynge, Mats; Riedel, Benedikt; Wuerthwein, Frank; Gardner, Robert William
Format: Conference Object
Language: unknown
Published: Zenodo, 2019
Online Access:https://dx.doi.org/10.5281/zenodo.3599635
https://zenodo.org/record/3599635
Rights: Open Access; Creative Commons Attribution 4.0 International (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/legalcode
Related: https://doi.org/10.5281/zenodo.3599634
https://zenodo.org/communities/chep-2019 (Zenodo community: CHEP 2019)
Geographic Coverage: South Pole
Source: DataCite Metadata Store (German National Library of Science and Technology), harvested by Open Polar