Communication Efficient Framework for Decentralized Machine Learning

© 2020 IEEE. In this paper, we propose a fast, privacy-aware, and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, termed GADMM, is based on the Alternating Direction Method of Multipliers (ADMM). Its key novelty is that it solves the problem over a decentralized topology in which at most half of the workers compete for the limited communication resources at any given time. Moreover, each worker exchanges its locally trained model only with its two neighboring workers, thereby training a global model with lower communication overhead per exchange. We prove that GADMM converges faster than centralized batch gradient descent for convex loss functions, and we show numerically that it converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as Lazily Aggregated Gradient (LAG) and dual averaging, on linear and logistic regression tasks with synthetic and real datasets.
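
The update pattern described in the abstract (a chain of workers split into two groups that transmit in alternation, each worker exchanging its model only with its two neighbors) can be illustrated with a small sketch. The following is a minimal toy for decentralized least squares, not the authors' implementation: the number of workers, the quadratic local losses, the closed-form local solve, and the penalty parameter rho are all assumptions made for this illustration.

```python
# Toy sketch of an alternating, neighbor-only ADMM scheme on a chain of workers
# (illustrative only; assumed quadratic local losses allow a closed-form solve).
import numpy as np

rng = np.random.default_rng(0)
N, d, m = 6, 5, 20               # workers, model dimension, samples per worker
theta_star = rng.normal(size=d)  # ground-truth model shared by all workers
A = [rng.normal(size=(m, d)) for _ in range(N)]
b = [A_n @ theta_star + 0.01 * rng.normal(size=m) for A_n in A]

rho = 1.0
theta = [np.zeros(d) for _ in range(N)]    # local models theta_0 .. theta_{N-1}
lam = [np.zeros(d) for _ in range(N - 1)]  # duals for constraints theta_n = theta_{n+1}

def local_update(n):
    """Closed-form minimizer of worker n's local augmented-Lagrangian term
    (valid here because the toy loss 0.5*||A_n theta - b_n||^2 is quadratic)."""
    H = A[n].T @ A[n]
    rhs = A[n].T @ b[n]
    if n > 0:                                 # left neighbor exists
        H += rho * np.eye(d)
        rhs += lam[n - 1] + rho * theta[n - 1]
    if n < N - 1:                             # right neighbor exists
        H += rho * np.eye(d)
        rhs += -lam[n] + rho * theta[n + 1]
    return np.linalg.solve(H, rhs)

for k in range(200):
    # 1) "head" workers (even 0-based indices) update and send to neighbors
    for n in range(0, N, 2):
        theta[n] = local_update(n)
    # 2) "tail" workers (odd indices) update using the fresh head models
    for n in range(1, N, 2):
        theta[n] = local_update(n)
    # 3) each neighboring pair updates its dual variable locally
    for n in range(N - 1):
        lam[n] = lam[n] + rho * (theta[n] - theta[n + 1])

print("max disagreement across workers:",
      max(np.linalg.norm(theta[n] - theta[0]) for n in range(1, N)))
print("distance to ground truth:", np.linalg.norm(theta[0] - theta_star))
```

Note that in each iteration only one of the two groups transmits at a time, and every message travels only one hop, which is the communication pattern the abstract highlights.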

Bibliographic Details
Main Authors: A Elgabli, Jihong Park, AS Bedi, M Bennis, V Aggarwal
Format: Conference Object
Language: unknown
Published: 2020
Subjects:
DML
Online Access: http://hdl.handle.net/10536/DRO/DU:30139694
https://figshare.com/articles/conference_contribution/Communication_Efficient_Framework_for_Decentralized_Machine_Learning/20699962
Institution: Open Polar
Collection: DRO - Deakin Research Online
Topics: data privacy; gradient methods; learning (artificial intelligence); regression analysis; topology; CORE C
Rights: All Rights Reserved