

Special Issue on Communication-Efficient Distributed Machine Learning

Call for Papers

Machine learning, especially deep learning, has been successfully applied in a wealth of practical AI applications in fields such as computer vision, natural language processing, healthcare, finance, and robotics. With the increasing size of machine learning models (e.g., the BERT-xlarge language model has over 1 billion parameters) and training data sets (e.g., the BDD100K autonomous-driving data set has 120 million images), training deep learning models requires a significant amount of computation and may take days to months on a single GPU or TPU. A recent study from OpenAI reported that the amount of compute used in AI training has been growing exponentially with a 3.4-month doubling time since 2012, far faster than Moore's Law. It has therefore become common practice to use distributed machine learning to accelerate training with multiple processors.
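To put the 3.4-month figure in perspective, a back-of-the-envelope comparison (assuming Moore's Law corresponds to a doubling roughly every 24 months) gives the implied annual growth factors

    2^{12/3.4} \approx 11.5x per year   (AI training compute)
    2^{12/24}  \approx 1.4x per year    (Moore's Law)

so training demand outpaces single-processor improvement by roughly an order of magnitude each year.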

Distributed machine learning typically requires the processors to exchange information repeatedly throughout the training process. With the fast-growing computing power of AI processors such as GPUs, TPUs, and FPGAs, data communication among processors is increasingly the performance bottleneck and, as Amdahl's law suggests, severely limits system scalability. The design of communication-efficient distributed machine learning systems has therefore attracted great attention from both academia and industry. The communication challenges in distributed machine learning can be addressed from several directions, including communication-efficient distributed optimization algorithms, optimization of collective communication primitives, scheduling of computing and communication tasks, optimization of the network stack, network congestion control, and network topology design. This Special Issue aims to provide a venue for exchanging and discussing the technical trends and challenges of communication-efficient distributed machine learning. Both theoretical and system-oriented studies are welcome.
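As a brief illustration of why communication overhead caps scalability (a simplified application of Amdahl's law; the symbols s and p are introduced only for this sketch): if a fraction s of each training iteration is spent in communication that does not shrink as more workers are added, the speedup with p workers is bounded by

    S(p) = 1 / (s + (1 - s)/p) <= 1/s.

For example, with s = 0.1 (10% of iteration time spent on communication), no number of workers can deliver more than a 10x speedup, which is why reducing the communication fraction itself is central to the topics listed below.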

The topics of interest for this special issue include, but are not limited to:

  • Distributed optimization algorithms for machine learning
  • Congestion control in RDMA networks for distributed machine learning
  • Data center networks for distributed machine learning
  • Efficient communication schemes for distributed deep learning, deep reinforcement learning, or federated learning
  • Communication protocols or libraries for distributed machine learning
  • Network topology design for distributed machine learning
  • Convergence analysis of communication-efficient distributed optimization algorithms
  • Benchmarking and evaluation of communication-efficient distributed machine learning systems
  • Scheduling of computing and/or communication tasks in distributed machine learning
  • Scheduling of multiple distributed machine learning jobs
  • Network traffic modeling and performance analysis for distributed machine learning
  • Network science for distributed machine learning
  • Network architecture support for distributed machine learning
  • Coding techniques for distributed machine learning
  • Distributed machine learning at the edge

Papers will be collected through the Manuscript Central system of the IEEE Transactions on Network Science and Engineering and will undergo a rigorous and well-coordinated peer-review process.

Submission Guidelines

Prospective authors are invited to submit their manuscripts electronically, adhering to the IEEE Transactions on Network Science and Engineering guidelines. Note that the page limit is the same as that for regular papers. Please submit your papers through the online system and be sure to select the special issue or special section name. Manuscripts should not have been published previously or be under review elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. If requested, abstracts should be sent by e-mail directly to the Guest Editors.

Important Dates

Manuscripts Due: 1 December 2020
Initial Round Peer Reviews to Authors: 1 February 2021
Revised Manuscripts Due: 15 March 2021
Second-round Reviews to Authors: 15 May 2021
Final Accepted Manuscript Due: 15 June 2021

Guest Editors

Xiaowen Chu (Lead)
Hong Kong Baptist University, Hong Kong, China

Fausto Giunchiglia
University of Trento, Italy

Giovanni Neglia
INRIA, France

David Gregg
Trinity College Dublin, Ireland

Jiangchuan Liu
Simon Fraser University, Canada