Special Issue
Call for Papers

Publication Date: Second Quarter 2025
Manuscript Submission Deadline: 15 May 2024 (Extended)

In the conventional source-channel separation framework, the identification, representation, and transmission of information are addressed by rate-distortion theory and channel coding theory. The resulting paradigm of reconstruction-oriented compression and task-agnostic communications has fueled several generations of digital communication systems. The rise of machine-to-machine communications and human-machine interactions calls for a rethinking of this paradigm, as faithful reconstruction is often secondary from the machine perspective. Indeed, task-specific descriptors extracted by machine learning algorithms from latent feature spaces are generally far more compact than their reconstruction-oriented counterparts, while end-to-end trained communication systems can dramatically outperform those based on the source-channel separation architecture under various metrics.

Recent progress in artificial intelligence and information theory has made it possible to identify universal compressed representations that are suitable for a multitude of downstream applications and to develop general-purpose pre-trained models that can realize rich functionalities. These advances open up promising prospects for task-adaptive communications without completely compromising the universal bit interface, but at the same time pose many challenges concerning the design of transceivers and the relevant feedback mechanisms. Generative models have recently taken center stage in machine learning research, and the stochastic nature of generative components in state-of-the-art AI technologies raises a host of new problems for data compression and physical-layer communications. The multiterminal versions of source and channel coding needed for distributed learning and collaborative inference are still at a nascent stage and provide fertile ground for exploration.

This Special Issue focuses on the design of new source and channel coding schemes via machine learning, as well as the analysis of machine learning tasks in the presence of communication bottlenecks or wireless links, in both point-to-point and multiterminal scenarios. It will also feature new system architectures, as well as novel approaches to leveraging existing architectures, to accommodate task-oriented and more advanced task-adaptive communications. Prospective authors are invited to submit original manuscripts on topics including but not limited to:

  • Source and channel coding via and for machine learning
  • Task-oriented joint source-channel coding
  • Multiterminal source and channel coding for distributed learning and collaborative inference
  • Supervised, semi-supervised, and self-supervised training for compression and communication
  • Generative models in source and channel coding
  • System architectures and feedback mechanisms for task-adaptive communications

Submission Guidelines

Prospective authors should prepare their manuscripts following the IEEE JSAC guidelines and submit them through EDAS (coming soon) according to the schedule below.

Important Dates

Manuscript Submission: 15 May 2024 (Deadline Extended)
First Notification: 15 September 2024
Acceptance Notification: 15 January 2025
Final Manuscript Due: 15 February 2025
Publication: Second Quarter 2025

Guest Editors

Jun Chen
McMaster University, Canada

Alexandros G. Dimakis
University of Texas at Austin, USA

Yong Fang
Chang’an University, China

Ashish Khisti
University of Toronto, Canada

Ayfer Özgür
Stanford University, USA

Nir Shlezinger
Ben-Gurion University, Israel

Chao Tian
Texas A&M University, USA