
Manuscript Submission Deadline: 15 October 2020

Special Issue on Efficient Network Design for Convergence of Deep Learning and Edge Computing

Call for Papers

With the development of the Internet of Things (IoT) and mobile communication technology, vast amounts of data can be collected from intelligent devices such as smartphones, smart cameras, wearable devices, environmental sensors, household appliances, and vehicles. This massive volume of data has driven progress in artificial intelligence; in particular, the performance of deep neural networks has improved significantly. In turn, artificial intelligence helps IoT and mobile devices make decisions, making them more intelligent.

Nevertheless, alongside these gains in accuracy, the size and computational complexity of deep learning models have grown dramatically. For example, ResNet-50 has roughly 25 million parameters, about 100 MB in 32-bit floating point, so deploying such large deep neural network (DNN) models on end devices is impractical from an efficiency standpoint. Deep learning is also energy-intensive, which is problematic for power-constrained mobile devices.

Cloud computing solutions also have limitations, such as high bandwidth costs, high latency, and unreliable connectivity between end devices and cloud services. Moreover, they require massive transfers of user data, which raises privacy and data-security concerns.

Edge computing is a feasible and promising technique for addressing these challenges. It places a large number of computing nodes near the end devices to meet the high-computation and low-latency requirements of deep learning applications, and it provides additional benefits in terms of bandwidth efficiency, privacy, and scalability. However, edge computing systems are far more resource-constrained than the cloud, so more efficient deep network models are necessary.

The distributed and heterogeneous nature of edge computing systems poses great challenges to the design of efficient neural networks. Current network designs rarely account for the scenarios and frameworks in which a model is to be deployed, and the design of efficient models for edge computing systems has not been treated as a research topic in its own right. Efficient deep neural network design for edge computing scenarios therefore deserves deep investigation. This special issue will bring together academic and industrial researchers to identify and discuss technical challenges and recent results related to efficient neural network design for the convergence of deep learning and edge computing.

The topics of interest for this special issue include, but are not limited to:

  • Compact and high-performing neural network design for edge computing (quantization, pruning, sparsification, knowledge distillation, etc.; a quantization sketch appears after this list)
  • Inference efficiency improvements for edge computing (distributed deployment strategies, end-edge-cloud joint/cooperative inference, task scheduling, intelligent migration, parallelization, intelligent perception of computing resources, distributed deep neural networks, self-adaptive inference mechanisms, etc.)
  • Automated machine learning methods for deep neural networks on edge computing (NAS, on-device NAS, automatic data augmentation, automatic hyperparameter optimization, platform/resource-aware NAS, hardware-software co-search, etc.)
  • Reasonable methods for evaluating the efficiency of neural network algorithms and architectures on edge computing platforms (comprehensively considering factors such as latency, power consumption, model size, memory access volume, computation cost, and resource constraints during inference and training; see the measurement sketch after this list)
  • Neural network training methods for edge computing (distributed training, gradient compression, parameter aggregation, gradient quantization, more efficient exchange of gradient parameters, federated learning, transfer and online learning, etc.; a gradient-compression sketch follows this list)
  • Efficient neural network software infrastructure for edge computing (deep learning frameworks supporting different edge computing architectures and systems, efficiency improvements for deploying deep learning models on edge platforms, etc.)
  • Hardware design for neural network inference and training in edge computing systems (e.g., FPGA, NPU, TPU, GPU, neuromorphic computing circuits, heterogeneous hardware, reconfigurable hardware)
  • Other topics in efficient network design for deep learning and edge computing.
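
To make the first topic concrete, here is a minimal sketch of post-training dynamic quantization using PyTorch's torch.quantization.quantize_dynamic API; the two-layer model is a hypothetical placeholder, not something prescribed by this call.

import os
import torch
import torch.nn as nn

# Hypothetical placeholder model standing in for a network to be deployed at the edge.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Replace the Linear layers' fp32 weights with int8 weights;
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m: nn.Module) -> float:
    # Serialize the state dict to disk to estimate on-device storage cost.
    torch.save(m.state_dict(), "tmp.pt")
    mb = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return mb

print(f"fp32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")

Weight-only int8 quantization of this kind typically cuts storage by roughly 4x for the quantized layers, one of several routes to the compact models this topic targets.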
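Likewise, the evaluation topic can be illustrated with a short sketch that measures two of the listed factors, parameter count and single-input inference latency, on whatever model and device are at hand; the convolutional model here is again a hypothetical stand-in.

import time
import torch
import torch.nn as nn

# Hypothetical stand-in model for an edge workload.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
).eval()
x = torch.randn(1, 3, 224, 224)

n_params = sum(p.numel() for p in model.parameters())

with torch.no_grad():
    for _ in range(10):            # warm-up runs to amortize one-time costs
        model(x)
    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    latency_ms = (time.perf_counter() - t0) / 100 * 1e3

print(f"{n_params} parameters, {latency_ms:.2f} ms per inference")

A rigorous evaluation would of course also account for power consumption, memory traffic, and the resource constraints of the target edge platform, as the topic description notes.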
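Finally, as one example of the gradient-compression ideas in the training topic, the following sketch implements top-k gradient sparsification: each worker transmits only the k largest-magnitude gradient entries and their indices. The function names are ours, chosen for illustration.

import torch

def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    # Keep only the top `ratio` fraction of entries by magnitude.
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = flat.abs().topk(k)
    return flat[indices], indices, grad.shape

def topk_decompress(values, indices, shape):
    # Scatter the transmitted entries back into a dense zero tensor.
    flat = torch.zeros(shape.numel())
    flat[indices] = values
    return flat.reshape(shape)

grad = torch.randn(256, 128)          # a simulated local gradient
values, indices, shape = topk_compress(grad)
restored = topk_decompress(values, indices, shape)
print(f"sent {values.numel()} of {grad.numel()} entries")

In practice such schemes are usually combined with error feedback (accumulating the dropped entries locally) so that the compression does not bias training.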

Papers will undergo a rigorous and well-coordinated peer-review process and will be collected through the Manuscript Central system for IEEE Transactions on Network Science and Engineering.

Submission Guidelines

Prospective authors are invited to submit their manuscripts electronically, adhering to the IEEE Transactions on Network Science and Engineering guidelines. Note that the page limit is the same as that for regular papers. Please submit your papers through the online system and be sure to select the special issue or special section name. Manuscripts should not have been published or be currently under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. If requested, abstracts should be sent by e-mail directly to the Guest Editors.

Important Dates

Manuscripts Due: 15 October 2020
Peer Reviews to Authors: 15 January 2021
Revised Manuscripts Due: 3 March 2021
Second-round Reviews to Authors: 1 May 2021
Final Accepted Manuscript Due: 31 May 2021

Guest Editors

Shiping Wen (Lead)
University of Technology Sydney, Australia

Tingwen Huang
Texas A&M University at Qatar, Qatar

Björn Schuller
Imperial College London, UK / University of Augsburg, Germany

Ahmad Taher Azar
Prince Sultan University, Saudi Arabia / Benha University, Egypt