On the Resource Consumption of Distributed ML

The convergence of Machine Learning (ML) with the edge computing paradigm has paved the way for distributing processing-heavy ML tasks to the network's extremes. As the edge deployment details still remain an open issue, distributed ML schemes tend to be network-agnostic; thus, their effect on the underlying network's resource consumption is largely ignored. In our work, assuming a network tree structure of varying size and edge computing characteristics, we introduce an analytical system model based on credible real-world measurements to capture the end-to-end consumption of ML schemes. In this context, we employ an edge-based (EL) and a federated (FL) ML scheme and compare their bandwidth needs and energy footprint in depth against a cloud-based (CL) baseline approach. Our numerical evaluation suggests that EL is at least 25% more bandwidth-efficient than CL and FL when deployed a few nodes higher in the edge network, while halving the network's energy costs.
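
As a purely illustrative aside (this is not the paper's analytical model), the bandwidth comparison can be sketched by counting how many tree links each transmitted byte must traverse under each scheme. Every parameter below (tree fanout and depth, per-client data and model sizes, number of FL rounds, and the edge node's placement) is an assumed toy value, not a figure taken from the paper.

def total_traffic(fanout=4, depth=4, data_mb=50.0, model_mb=5.0, rounds=100, edge_depth=1):
    """Toy estimate of MB x link-hops moved over a k-ary tree for one full training session."""
    clients = fanout ** depth                        # leaf nodes that hold the training data
    cl = clients * data_mb * depth                   # CL: each client uploads its raw data once, all the way to the cloud root
    fl = clients * model_mb * depth * 2 * rounds     # FL: model updates go up and the global model comes back down, every round
    el = clients * data_mb * (depth - edge_depth)    # EL: raw data only climbs to an edge node edge_depth levels below the root
    return {"CL": cl, "FL": fl, "EL": el}

for scheme, load in total_traffic().items():
    print(f"{scheme}: {load:,.0f} MB x hops")

With these arbitrary numbers the sketch only conveys the qualitative intuition behind the abstract: terminating raw-data transfers at an edge node close to the root trims link traversals relative to the cloud baseline, while FL's traffic grows with the number of training rounds.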


Bibliographic Details
Published in: 2021 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)
Main Authors: Georgios Drainakis, Panagiotis Pantazopoulos, Konstantinos Katsaros, Vasilis Sourlas, Angelos Amditis
Format: Report
Language: English
Published: Zenodo 2021
Subjects:
DML
Online Access: https://doi.org/10.1109/LANMAN52105.2021.9478809
id ftzenodo:oai:zenodo.org:6861384
record_format openpolar
institution Open Polar
collection Zenodo
op_collection_id ftzenodo
language English
topic #machinelearning #ML #DML #edgenetwork #edgecomputing
description The convergence of Machine Learning (ML) with the edge computing paradigm has paved the way for distributing processing-heavy ML tasks to the network's extremes. As the edge deployment details still remain an open issue, distributed ML schemes tend to be network-agnostic; thus, their effect on the underlying network's resource consumption is largely ignored. In our work, assuming a network tree structure of varying size and edge computing characteristics, we introduce an analytical system model based on credible real-world measurements to capture the end-to-end consumption of ML schemes. In this context, we employ an edge-based (EL) and a federated (FL) ML scheme and compare their bandwidth needs and energy footprint in depth against a cloud-based (CL) baseline approach. Our numerical evaluation suggests that EL is at least 25% more bandwidth-efficient than CL and FL when deployed a few nodes higher in the edge network, while halving the network's energy costs.
format Report
author Georgios Drainakis
Panagiotis Pantazopoulos
Konstantinos Katsaros
Vasilis Sourlas
Angelos Amditis
title On the Resource Consumption of Distributed ML
publisher Zenodo
publishDate 2021
url https://doi.org/10.1109/LANMAN52105.2021.9478809
genre DML
op_source IEEE LANMAN, IEEE International Symposium on Local and Metropolitan Area Networks, 12-14 July 2021
op_relation https://zenodo.org/communities/5g_iana
https://zenodo.org/communities/eu
https://doi.org/10.1109/LANMAN52105.2021.9478809
oai:zenodo.org:6861384
op_rights info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
op_doi https://doi.org/10.1109/LANMAN52105.2021.9478809
container_title 2021 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)
container_start_page 1
op_container_end_page 6