Multi-task learning on the edge: cost-efficiency and theoretical optimality

Bibliographic Details
Main Authors: Fakhry, Sami; Couillet, Romain; Tiomoko, Malik
Format: Article (arXiv preprint)
Language: English
Published: arXiv 2021
Online Access: https://dx.doi.org/10.48550/arxiv.2110.04639
https://arxiv.org/abs/2110.04639
Description
Summary: This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) which is (i) theoretically optimal for Gaussian mixtures and (ii) computationally cheap and scalable. Supporting experiments on synthetic and real benchmark data demonstrate that significant energy gains can be obtained with no performance loss.
Comments: 4 pages, 5 figures; code to reproduce the figures is available at: https://github.com/Sami-fak/DistributedMTLSPCA
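For orientation, below is a minimal sketch of supervised PCA in the spirit the summary describes: data are projected onto a low-dimensional, label-informed subspace before a lightweight classifier is applied. It follows the standard SPCA construction (top eigenvectors of X^T H Y Y^T H X, with H the centering matrix and Y the one-hot label matrix) and is an illustrative assumption, not the authors' exact distributed algorithm; the function supervised_pca and the toy data are hypothetical.

    import numpy as np

    def supervised_pca(X, y, k):
        # Illustrative SPCA sketch (not the paper's exact method).
        # X: (n, p) samples; y: (n,) integer class labels; k: target dimension.
        y = np.asarray(y, dtype=int)
        Y = np.eye(y.max() + 1)[y]          # one-hot label matrix, shape (n, c)
        Xc = X - X.mean(axis=0)             # center the data (H X, H = I - 11^T/n)
        M = Xc.T @ Y                        # (p, c); spans the label-aligned subspace
        # Eigenvectors of X^T H Y Y^T H X are the left singular vectors of M.
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        return U[:, :k]                     # (p, k) orthonormal projection basis

    # Toy usage: each node could compute and share only the small (p x k) basis
    # (or the k-dimensional features) instead of raw data, which is where the
    # communication and energy savings of such a scheme would come from.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    y = rng.integers(0, 2, size=200)
    W = supervised_pca(X, y, k=1)           # c classes span at most c-1 directions
    Z = X @ W                               # (200, 1) features for a cheap classifier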