Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol


Bibliographic Details
Main Authors: Chen, Zixuan, Shi, Lei, Liu, Xuandong, Ai, Xin, Liu, Sen, Xu, Yang
Format: Text
Language: English
Published: arXiv 2023
Subjects:
DML
Online Access: https://dx.doi.org/10.48550/arxiv.2305.04279
https://arxiv.org/abs/2305.04279
Description
Summary: Distributed Machine Learning (DML) systems are utilized to enhance the speed of model training in data centers (DCs) and edge nodes. The Parameter Server (PS) communication architecture is commonly employed, but it faces severe long-tail latency caused by many-to-one "incast" traffic patterns, negatively impacting training throughput. To address this challenge, we design the Loss-tolerant Transmission Protocol (LTP), which permits partial loss of gradients during synchronization to avoid unneeded retransmission and contributes to faster synchronization per iteration. LTP implements loss-tolerant transmission through out-of-order transmission and out-of-order Acknowledgments (ACKs). LTP employs Early Close to adjust the loss-tolerant threshold based on network conditions and Bubble Filling for data correction to maintain training accuracy. LTP is implemented in C++ and integrated into PyTorch. Evaluations on a testbed of 8 worker nodes and one PS ... Comment: This paper will be published at IWQoS 2023. Preview version only ...
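
The record carries no code, but the synchronization behavior the abstract describes can be sketched briefly. The Python snippet below is a hypothetical illustration, not the authors' implementation: it assumes gradients travel as fixed-size chunks that may arrive out of order, that "Early Close" stops waiting once a configurable fraction of chunks has arrived, and that "Bubble Filling" zero-fills the chunks that were lost. The names CHUNK, LOSS_TOLERANCE, and aggregate are invented for this sketch.

    import numpy as np

    # Hypothetical sketch of one LTP-style synchronization step at the PS.
    # Assumptions (ours, not the paper's): gradients are split into fixed-size
    # chunks that arrive out of order; "Early Close" stops waiting once a
    # tolerance fraction has arrived; "Bubble Filling" zero-fills lost chunks.

    CHUNK = 1024          # elements per chunk (illustrative value)
    LOSS_TOLERANCE = 0.9  # minimum fraction of chunks required (illustrative)

    def aggregate(received_chunks, total_chunks):
        """Assemble one worker's gradient from out-of-order chunks.

        received_chunks maps chunk index -> np.ndarray of CHUNK floats;
        indices missing from the dict correspond to packets lost in transit.
        """
        if len(received_chunks) / total_chunks < LOSS_TOLERANCE:
            raise RuntimeError("below tolerance: keep receiving, do not close")
        grad = np.zeros(total_chunks * CHUNK, dtype=np.float32)
        for idx, chunk in received_chunks.items():
            grad[idx * CHUNK:(idx + 1) * CHUNK] = chunk  # place what arrived
        return grad  # remaining zeros are the filled "bubbles"

    # Example: 9 of 10 chunks arrive, out of order; chunk 4 is lost and zero-filled.
    rx = {i: np.ones(CHUNK, dtype=np.float32) for i in (3, 0, 7, 1, 9, 5, 2, 8, 6)}
    g = aggregate(rx, total_chunks=10)

Zero-filling keeps the assembled tensor correctly shaped for averaging across workers, which suggests one plausible reason partial loss need not derail training accuracy; the paper's actual correction scheme may differ.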