Communication efficient framework for decentralized machine learning

Abstract: In this paper, we propose a fast, privacy-aware, and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, GADMM, is based on the Alternating Direction Method of Multipliers (ADMM). Its key novelty is that it solves the problem over a decentralized topology in which at most half of the workers compete for the limited communication resources at any given time. Moreover, each worker exchanges its locally trained model only with its two neighboring workers, thereby training a global model with lower communication overhead per exchange. We prove that GADMM converges faster than centralized batch gradient descent for convex loss functions, and we show numerically that it converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as Lazily Aggregated Gradient (LAG) and dual averaging, on linear and logistic regression tasks with synthetic and real datasets.
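Read concretely, the abstract's scheme arranges the workers in a chain and solves min over {theta_1, ..., theta_N} of sum_n f_n(theta_n) subject to theta_n = theta_{n+1}, so consensus is enforced only between neighbors. Below is a minimal sketch of such a GADMM-style update for decentralized linear regression, in which two groups of workers (here, even- and odd-indexed) update and transmit in alternation, so at most half of them use the channel at any time. The chain split, the quadratic loss, the penalty rho, and the closed-form primal step are illustrative assumptions, not the authors' published code.

    import numpy as np

    rng = np.random.default_rng(0)
    N, d, m, rho = 6, 5, 20, 1.0   # workers, model dim, samples per worker, ADMM penalty

    # Synthetic local datasets (assumption): y_n = X_n @ theta_true + noise
    theta_true = rng.normal(size=d)
    X = [rng.normal(size=(m, d)) for _ in range(N)]
    y = [X[n] @ theta_true + 0.01 * rng.normal(size=m) for n in range(N)]

    theta = [np.zeros(d) for _ in range(N)]    # local models theta_n
    lam = [np.zeros(d) for _ in range(N - 1)]  # duals for the links theta_n = theta_{n+1}

    def primal_update(n):
        # Closed-form minimizer of worker n's local augmented Lagrangian;
        # it uses only the neighboring models (one neighbor at the chain ends).
        A = X[n].T @ X[n]
        b = X[n].T @ y[n]
        if n > 0:                              # left link (theta_{n-1} = theta_n)
            A = A + rho * np.eye(d)
            b = b + lam[n - 1] + rho * theta[n - 1]
        if n < N - 1:                          # right link (theta_n = theta_{n+1})
            A = A + rho * np.eye(d)
            b = b - lam[n] + rho * theta[n + 1]
        return np.linalg.solve(A, b)

    for k in range(200):
        for n in range(0, N, 2):               # one group updates and transmits ...
            theta[n] = primal_update(n)
        for n in range(1, N, 2):               # ... then the other, using fresh neighbor models
            theta[n] = primal_update(n)
        for n in range(N - 1):                 # dual ascent, computable locally per link
            lam[n] = lam[n] + rho * (theta[n] - theta[n + 1])

    print(max(np.linalg.norm(t - theta_true) for t in theta))  # worst-case worker error

Each primal step needs only a worker's own data, the models of its at most two neighbors, and the duals on its incident links, which is what keeps the per-exchange communication at one model vector per neighbor.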

Bibliographic Details
Main Authors: Elgabli, A. (Anis), Park, J. (Jihong), Bedi, A. S. (Amrit S.), Bennis, M. (Mehdi), Aggarwal, V. (Vaneet)
Format: Conference Object
Language: English
Published: 2020
Subjects: DML
Online Access: http://urn.fi/urn:nbn:fi-fe2020100277669
Institution: Open Polar
Collection: Jultika - University of Oulu repository
Rights: Open access (info:eu-repo/semantics/openAccess). © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.