DAGC: Data-Volume-Aware Adaptive Sparsification Gradient Compression for Distributed Machine Learning in Mobile Computing

Distributed machine learning (DML) in mobile environments faces significant communication bottlenecks. Gradient compression has emerged as an effective solution, offering substantial benefits in environments with limited bandwidth and metered data. Yet, existing methods suffer a severe performance drop in non-IID environments because their one-size-fits-all compression approach does not account for the varying data volumes across workers. Assigning different compression ratios to workers with distinct data distributions and volumes is therefore a promising solution. This study presents an analysis of distributed SGD with non-uniform compression, which reveals that the convergence rate (indicative of the iterations needed to reach a given accuracy) is influenced by the compression ratios applied to workers with differing data volumes. Accordingly, we frame relative compression ratio assignment as an $n$-variable chi-square nonlinear optimization problem, constrained by a fixed and limited communication budget. We ...
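The mechanism the abstract describes can be sketched in a few lines: each worker receives its own sparsification ratio reflecting its local data volume, sparsifies its gradient, and the server aggregates with data-volume weighting. The sketch below is a minimal illustration, not the paper's DAGC algorithm: the proportional ratio rule, the Top-k sparsifier, and all function names are assumptions made for this example, whereas the paper derives its ratios by solving a chi-square nonlinear optimization.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, ratio: float) -> np.ndarray:
    """Keep the `ratio` fraction of entries with largest magnitude; zero the rest."""
    k = max(1, int(ratio * grad.size))
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of the k largest |g_i|
    out[idx] = grad[idx]
    return out

def assign_ratios(data_volumes, budget_ratio):
    """Hypothetical rule: milder compression for larger-volume workers, with the
    average kept-fraction equal to the budget. (DAGC instead obtains the ratios
    from a chi-square nonlinear program under the same kind of budget.)"""
    v = np.asarray(data_volumes, dtype=float)
    ratios = budget_ratio * len(v) * v / v.sum()
    return np.clip(ratios, 1e-4, 1.0)  # clipping may slightly perturb the budget

def aggregate(grads, data_volumes, budget_ratio=0.01):
    """Server step: sparsify each worker's gradient with its own ratio, then
    average weighted by data volume (the usual non-IID weighting)."""
    ratios = assign_ratios(data_volumes, budget_ratio)
    v = np.asarray(data_volumes, dtype=float)
    w = v / v.sum()
    sparse = [topk_sparsify(g, r) for g, r in zip(grads, ratios)]
    return sum(wi * gi for wi, gi in zip(w, sparse))

# Toy usage: three workers with skewed data volumes share a 1% budget.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(10_000) for _ in range(3)]
agg = aggregate(grads, data_volumes=[100, 1_000, 10_000], budget_ratio=0.01)
```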


Bibliographic Details
Main Authors: Lu, Rongwei, Jiang, Yutong, Mao, Yinan, Tang, Chen, Chen, Bin, Cui, Laizhong, Wang, Zhi
Format: Report
Language: unknown
Published: arXiv 2023
Subjects: DML
Machine Learning (cs.LG)
FOS: Computer and information sciences
Online Access: https://dx.doi.org/10.48550/arxiv.2311.07324
https://arxiv.org/abs/2311.07324
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode