Distributed deep neural network training on edge devices

Daniel Benditkis, Tomer Avidor, Aviv Keren, Neta Shoham, Liron Mor-Yosef, Nadav Tal-Israel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Deep Neural Network (Deep Learning) models have traditionally been trained on dedicated servers, after collecting data from various edge devices and sending it to the server. In recent years, new methodologies have emerged for training models in a distributed manner over edge devices, keeping the data on the devices themselves. This improves data privacy and reduces training costs. One of the main challenges for such methodologies is reducing the communication costs to, and especially from, the edge devices. In this work we compare the two main methodologies used for distributed edge training: Federated Learning and Large Batch Training. For each methodology we examine its convergence rate, communication costs, and final model performance. In addition, we present two techniques for compressing the communication between the edge devices, and examine their suitability for each of the training methodologies.
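To make the setting concrete, here is a minimal sketch of one Federated Averaging round with a compressed upload. The abstract does not name the paper's two compression techniques, so top-k sparsification of the weight delta is used as an illustrative stand-in; the toy linear-regression objective, function names, and hyperparameters are likewise assumptions, not taken from the paper.

```python
import numpy as np

def top_k_sparsify(delta, k):
    """Keep only the k largest-magnitude entries of an update vector.
    Top-k sparsification is one common way to shrink device-to-server
    traffic; it is used here only as an illustrative example."""
    out = np.zeros_like(delta)
    idx = np.argsort(np.abs(delta))[-k:]
    out[idx] = delta[idx]
    return out

def local_update(w, data, lr=0.1, epochs=5):
    """A few steps of local full-batch gradient descent on one device,
    using a toy linear-regression loss in place of a deep network."""
    X, y = data
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_round(w, device_data, k):
    """One Federated Averaging round with compressed uploads: each
    device trains locally and sends a sparsified weight delta, which
    the server averages into the global model."""
    deltas = [top_k_sparsify(local_update(w, d) - w, k) for d in device_data]
    return w + np.mean(deltas, axis=0)

# Three devices with data drawn from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
devices = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    devices.append((X, X @ true_w))

w = np.zeros(3)
for _ in range(100):
    w = federated_round(w, devices, k=2)  # each upload sends 2 of 3 entries
print(np.round(w, 2))  # converges toward true_w
```

Even with each device uploading only two of the three coordinates per round, the averaged model still converges: the coordinate dropped by top-k is simply the one whose residual is currently smallest, and it gets transmitted once the larger residuals have shrunk below it.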

Original language: English
Title of host publication: Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, SEC 2019
Publisher: Association for Computing Machinery, Inc
Number of pages: 3
ISBN (Electronic): 9781450367332
State: Published - 7 Nov 2019
Externally published: Yes
Event: 4th ACM/IEEE Symposium on Edge Computing, SEC 2019 - Arlington, United States
Duration: 7 Nov 2019 - 9 Nov 2019

Publication series

Name: Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, SEC 2019


Conference: 4th ACM/IEEE Symposium on Edge Computing, SEC 2019
Country/Territory: United States


  • Communication compression
  • Deep learning
  • Edge device
  • Federated learning
  • Large batch
  • Neural network


