Achieving Efficient Distributed Machine Learning Using a Novel Non-Linear Class of Aggregation Functions

Distributed machine learning (DML) over time-varying networks can be an enabler for emerging decentralized ML applications such as autonomous driving and drone fleets. However, the weighted arithmetic mean commonly used as the model aggregation function in existing DML systems can result in high model loss, low model accuracy, and slow convergence over time-varying networks. To address this issue, we propose a novel non-linear class of model aggregation functions to achieve efficient DML over time-varying networks. Instead of taking a linear aggregation of neighboring models, as most existing studies do, our mechanism uses a non-linear aggregation, the weighted power-p mean (WPM), to combine the local models of neighbors. The subsequent optimization steps are taken using mirror descent defined by a Bregman divergence, which maintains convergence to optimality. We analyze the properties of the WPM and rigorously prove the convergence of our aggregation mechanism. Additionally, through extensive experiments, we show that when p > 1, our design significantly improves the convergence speed of the model and the scalability of DML under time-varying networks compared with arithmetic mean aggregation functions, with little additional computation overhead. (13 pages, 26 figures.)
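
For concreteness, the weighted power-p mean (WPM) named in the abstract has the standard form of a weighted power mean; exactly how the paper applies it across model coordinates is an assumption in the sketches below. In LaTeX:

\mathrm{WPM}_p(x_1,\dots,x_n;\,w) \;=\; \Big(\sum_{j=1}^{n} w_j\, x_j^{\,p}\Big)^{1/p}, \qquad \sum_{j=1}^{n} w_j = 1, \quad w_j \ge 0,

so that p = 1 recovers the weighted arithmetic mean used as the baseline. The mirror descent step defined by a Bregman divergence D_\phi likewise takes the usual form

x_{t+1} \;=\; \arg\min_{x}\, \big\{\, \eta_t\, \langle \nabla f(x_t),\, x \rangle + D_\phi(x, x_t) \,\big\}, \qquad D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle.

A minimal Python sketch of element-wise WPM aggregation over neighbors' models (the function name wpm_aggregate, the clamping constant eps, and the element-wise application are illustrative assumptions, not the paper's code):

import numpy as np

def wpm_aggregate(models, weights, p=2.0, eps=1e-12):
    """Weighted power-p mean of neighbors' parameter vectors, element-wise.

    models  : (n_neighbors, dim) array; power means assume non-negative
              entries, so values are clamped at eps (an illustrative choice).
    weights : (n_neighbors,) mixing weights, non-negative, summing to 1.
    p       : power-mean exponent; p = 1 recovers the arithmetic mean.
    """
    models = np.asarray(models, dtype=float)
    weights = np.asarray(weights, dtype=float)
    powered = np.power(np.maximum(models, eps), p)  # x_j^p, element-wise
    return np.power(weights @ powered, 1.0 / p)     # (sum_j w_j x_j^p)^(1/p)

# Toy usage: three neighbors, five parameters each.
rng = np.random.default_rng(0)
x = np.abs(rng.normal(size=(3, 5)))
w = np.array([0.5, 0.3, 0.2])
print(wpm_aggregate(x, w, p=2.0))  # each output lies between the per-coordinate min and max

Setting p = 1 in either sketch reproduces the weighted arithmetic mean that the paper reports being outperformed when p > 1.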

Bibliographic Details
Main Authors: Du, Haizhou, Yang, Ryan, Chen, Yijian, Xiang, Qiao, Wibisono, Andre, Huang, Wei
Format: Article in Journal/Newspaper
Language: English
Published: arXiv, 2022
Subjects:
Machine Learning (cs.LG)
Distributed, Parallel, and Cluster Computing (cs.DC)
FOS: Computer and information sciences
I.2.11
Online Access: https://dx.doi.org/10.48550/arxiv.2201.12488
https://arxiv.org/abs/2201.12488
Rights: Creative Commons Attribution 4.0 International (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/legalcode