Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol ...
Distributed Machine Learning (DML) systems are used to speed up model training in data centers (DCs) and on edge nodes. The Parameter Server (PS) communication architecture is commonly employed, but it suffers from severe long-tail latency caused by many-to-one "incast" traffic patterns, which degrades training throughput. To address this challenge, we design the Loss-tolerant Transmission Protocol (LTP), which permits partial loss of gradients during synchronization to avoid unneeded retransmissions and yields faster synchronization per iteration. LTP implements loss-tolerant transmission through out-of-order transmission and out-of-order acknowledgments (ACKs). It employs Early Close to adjust the loss-tolerant threshold based on network conditions and Bubble Filling for data correction to maintain training accuracy. LTP is implemented in C++ and integrated into PyTorch. Evaluations on a testbed of 8 worker nodes and one PS ...
Main Authors: Chen, Zixuan; Shi, Lei; Liu, Xuandong; Ai, Xin; Liu, Sen; Xu, Yang
Format: Text
Language: unknown
Published: arXiv, 2023
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI); FOS: Computer and information sciences
Online Access: https://dx.doi.org/10.48550/arxiv.2305.04279 https://arxiv.org/abs/2305.04279
id
ftdatacite:10.48550/arxiv.2305.04279
record_format
openpolar
spelling
ftdatacite:10.48550/arxiv.2305.04279 2023-10-01T03:55:40+02:00 Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol ... Chen, Zixuan; Shi, Lei; Liu, Xuandong; Ai, Xin; Liu, Sen; Xu, Yang 2023 https://dx.doi.org/10.48550/arxiv.2305.04279 https://arxiv.org/abs/2305.04279 unknown arXiv https://dx.doi.org/10.1109/iwqos57198.2023.10188699 arXiv.org perpetual, non-exclusive license http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI); FOS: Computer and information sciences ScholarlyArticle Article article-journal Text 2023 ftdatacite https://doi.org/10.48550/arxiv.2305.04279 https://doi.org/10.1109/iwqos57198.2023.10188699 2023-09-04T13:56:02Z Distributed Machine Learning (DML) systems are used to speed up model training in data centers (DCs) and on edge nodes. The Parameter Server (PS) communication architecture is commonly employed, but it suffers from severe long-tail latency caused by many-to-one "incast" traffic patterns, which degrades training throughput. To address this challenge, we design the Loss-tolerant Transmission Protocol (LTP), which permits partial loss of gradients during synchronization to avoid unneeded retransmissions and yields faster synchronization per iteration. LTP implements loss-tolerant transmission through out-of-order transmission and out-of-order acknowledgments (ACKs). It employs Early Close to adjust the loss-tolerant threshold based on network conditions and Bubble Filling for data correction to maintain training accuracy. LTP is implemented in C++ and integrated into PyTorch. Evaluations on a testbed of 8 worker nodes and one PS ... : This paper will be published at IWQoS 2023. Preview version only ... Text DML DataCite Metadata Store (German National Library of Science and Technology)
institution
Open Polar
collection
DataCite Metadata Store (German National Library of Science and Technology)
op_collection_id
ftdatacite
language
unknown
topic
Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI); FOS: Computer and information sciences
spellingShingle
Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI); FOS: Computer and information sciences; Chen, Zixuan; Shi, Lei; Liu, Xuandong; Ai, Xin; Liu, Sen; Xu, Yang; Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol ...
topic_facet
Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Networking and Internet Architecture (cs.NI); FOS: Computer and information sciences
description
Distributed Machine Learning (DML) systems are used to speed up model training in data centers (DCs) and on edge nodes. The Parameter Server (PS) communication architecture is commonly employed, but it suffers from severe long-tail latency caused by many-to-one "incast" traffic patterns, which degrades training throughput. To address this challenge, we design the Loss-tolerant Transmission Protocol (LTP), which permits partial loss of gradients during synchronization to avoid unneeded retransmissions and yields faster synchronization per iteration. LTP implements loss-tolerant transmission through out-of-order transmission and out-of-order acknowledgments (ACKs). It employs Early Close to adjust the loss-tolerant threshold based on network conditions and Bubble Filling for data correction to maintain training accuracy. LTP is implemented in C++ and integrated into PyTorch. Evaluations on a testbed of 8 worker nodes and one PS ... : This paper will be published at IWQoS 2023. Preview version only ...
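The description names two loss-tolerance mechanisms, out-of-order ACKs and Early Close. As a rough illustration only (not the paper's code; the names, the chunking granularity, and the threshold rule below are all our assumptions, and the paper's Early Close adapts the threshold to network conditions while this sketch keeps it fixed), a receiver-side sketch in C++ might track which gradient chunks of an iteration have arrived, acknowledge each one out of order, and close the flow early once the loss-tolerant threshold is met:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical receiver-side bookkeeping for an LTP-like protocol.
// A "chunk" stands for one network-sized piece of a gradient tensor.
struct LtpReceiver {
    std::vector<bool> received;  // per-chunk arrival bitmap for one iteration
    size_t arrived = 0;          // count of distinct chunks seen so far
    double loss_tolerance;       // fraction of chunks we may give up on

    LtpReceiver(size_t total_chunks, double tolerance)
        : received(total_chunks, false), loss_tolerance(tolerance) {}

    // Handle a chunk arriving in any order and return the ACK to send:
    // the ACK names the chunk that actually arrived (out-of-order ACK),
    // not the next expected in-order sequence number as TCP would.
    uint32_t on_chunk(uint32_t seq) {
        if (!received[seq]) { received[seq] = true; ++arrived; }
        return seq;
    }

    // Early Close: once enough of the tensor has arrived, stop waiting
    // for retransmissions of the stragglers and finish the iteration.
    bool can_close() const {
        return arrived >= static_cast<size_t>(
            (1.0 - loss_tolerance) * received.size() + 0.5);
    }
};

int main() {
    LtpReceiver rx(/*total_chunks=*/10, /*tolerance=*/0.2);
    // A toy arrival order; chunks 3 and 7 are lost in this run.
    const uint32_t arrivals[] = {9, 0, 5, 1, 8, 2, 6, 4};
    for (uint32_t seq : arrivals) {
        std::printf("ack %u\n", rx.on_chunk(seq));
        if (rx.can_close()) { std::printf("early close\n"); break; }
    }
    return 0;
}
```

With 10 chunks and a 20% tolerance, the flow closes after the 8th distinct chunk instead of waiting for chunks 3 and 7 to be retransmitted, which is the mechanism the abstract credits for avoiding incast-induced tail latency.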
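Bubble Filling is only named in the record, not specified, so the correction strategy below is purely an assumption: the gradient entries whose chunks were lost ("bubbles") left behind by an early-closed flow are filled, here with the mean of the values that did arrive, so that aggregation at the PS never reads uninitialized or stale memory:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical Bubble Filling: replace gradient entries whose chunks
// were lost with the mean of the entries that arrived. The real
// correction rule in LTP may differ; this only shows where such a
// step would sit between receiving and aggregating.
void fill_bubbles(std::vector<float>& grad, const std::vector<bool>& got) {
    double sum = 0.0;
    size_t n = 0;
    for (size_t i = 0; i < grad.size(); ++i) {
        if (got[i]) { sum += grad[i]; ++n; }
    }
    const float fill = n ? static_cast<float>(sum / n) : 0.0f;
    for (size_t i = 0; i < grad.size(); ++i) {
        if (!got[i]) grad[i] = fill;  // patch the bubble
    }
}

int main() {
    std::vector<float> grad = {0.2f, 0.0f, -0.1f, 0.0f, 0.3f};
    std::vector<bool>  got  = {true, false, true, false, true};
    fill_bubbles(grad, got);  // bubbles at indices 1 and 3 get filled
    for (float g : grad) std::printf("%.3f ", g);
    std::printf("\n");
    return 0;
}
```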
format
Text
author
Chen, Zixuan; Shi, Lei; Liu, Xuandong; Ai, Xin; Liu, Sen; Xu, Yang
author_facet
Chen, Zixuan; Shi, Lei; Liu, Xuandong; Ai, Xin; Liu, Sen; Xu, Yang
author_sort
Chen, Zixuan
title
Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol ...
title_short
Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol ...
title_full
Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol ...
title_fullStr
Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol ...
title_full_unstemmed
Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol ...
title_sort
boosting distributed machine learning training through loss-tolerant transmission protocol ...
publisher
arXiv
publishDate
2023
url
https://dx.doi.org/10.48550/arxiv.2305.04279 https://arxiv.org/abs/2305.04279
genre
DML
genre_facet
DML
op_relation
https://dx.doi.org/10.1109/iwqos57198.2023.10188699
op_rights
arXiv.org perpetual, non-exclusive license http://arxiv.org/licenses/nonexclusive-distrib/1.0/
op_doi
https://doi.org/10.48550/arxiv.2305.04279 https://doi.org/10.1109/iwqos57198.2023.10188699
_version_
1778524287637913600