On the Resource Consumption of Distributed ML

Bibliographic Details
Published in: 2021 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)
Main Authors: Georgios Drainakis, Panagiotis Pantazopoulos, Konstantinos Katsaros, Vasilis Sourlas, Angelos Amditis
Format: Report
Language: English
Published: Zenodo 2021
Subjects: DML
Online Access: https://doi.org/10.1109/LANMAN52105.2021.9478809
Description
Summary: The convergence of Machine Learning (ML) with the edge computing paradigm has paved the way for distributing processing-heavy ML tasks to the network's extremes. As the details of edge deployments remain an open issue, distributed ML schemes tend to be network-agnostic; thus, their effect on the underlying network's resource consumption is largely ignored. In our work, assuming a tree-structured network of varying size and edge computing characteristics, we introduce an analytical system model, grounded in credible real-world measurements, that captures the end-to-end resource consumption of ML schemes. In this context, we employ an edge-based (EL) and a federated (FL) ML scheme and compare their bandwidth needs and energy footprint in depth against a cloud-based (CL) baseline. Our numerical evaluation suggests that EL is at least 25% more bandwidth-efficient than CL and FL when deployed at a few nodes placed higher in the edge network, while halving the network's energy costs.
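To make the comparison concrete, below is a minimal back-of-the-envelope sketch, not the paper's actual analytical model, of how per-training-round network traffic could be tallied for the three schemes over a k-ary tree: under CL raw data climbs the full tree to the cloud, under EL it stops at edge servers a few hops above the devices, and under FL only model updates traverse the tree, once per communication round. All parameter values (fan-out, depth, data and model sizes, round count) are hypothetical assumptions, not measurements from the paper.

```python
"""Illustrative traffic sketch (assumed parameters, not the paper's model)
comparing cloud (CL), edge (EL) and federated (FL) learning over a k-ary
tree network. Traffic is measured in byte-hops: bytes sent times the
number of tree links they cross."""

def tree_traffic(mb_per_source, n_sources, hops):
    """Total MB-hops when each of n_sources sends mb_per_source upward
    across `hops` tree levels."""
    return mb_per_source * n_sources * hops

# Hypothetical topology and workload parameters (assumptions).
K = 4              # tree fan-out
DEPTH = 5          # levels between devices (leaves) and the cloud (root)
EDGE_LEVEL = 2     # EL servers sit this many hops above the devices
N_DEVICES = K ** DEPTH
DATA_MB = 500.0    # raw training data held by each device (MB)
MODEL_MB = 10.0    # size of the exchanged model / model updates (MB)
ROUNDS = 100       # FL communication rounds

# CL: every device ships its raw data all the way up to the cloud.
cl = tree_traffic(DATA_MB, N_DEVICES, DEPTH)

# EL: raw data only climbs to the edge servers; each edge server then
# forwards its trained model to the cloud.
n_edge_servers = K ** (DEPTH - EDGE_LEVEL)
el = (tree_traffic(DATA_MB, N_DEVICES, EDGE_LEVEL)
      + tree_traffic(MODEL_MB, n_edge_servers, DEPTH - EDGE_LEVEL))

# FL: raw data never leaves the devices; model updates traverse the
# full tree in both directions (download + upload) every round.
fl = 2 * ROUNDS * tree_traffic(MODEL_MB, N_DEVICES, DEPTH)

for name, mb_hops in (("CL", cl), ("EL", el), ("FL", fl)):
    print(f"{name}: {mb_hops / 1e6:.2f} TB-hops")
```

With these toy numbers (1024 devices, 5 levels, edge servers two hops above the devices), the script reports roughly 2.56 TB-hops for CL, 1.03 for EL and 10.24 for FL, reproducing only the qualitative ordering suggested by the abstract (EL cheapest in bandwidth, FL dominated by repeated update exchanges); the paper's actual figures come from its measurement-based model, not this sketch.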