
IEEE CTN
Written By:

Emil Björnson, Linköping University

Published: 31 Jul 2020


CTN Issue: July 2020


A note from the editors:

Summer is usually a time to pause, to reflect, to ponder, and to plan ahead. Even more so in a situation as exceptional as a pandemic. It is therefore fitting that we at CTN also take the opportunity and devote this summer issue to considerations that, while technically grounded in the field of wireless communications, are of a more philosophical nature. And we have found in Prof. Emil Björnson a superb contributor on the matter. The piece below touches on the role of academia in future wireless research, on the importance of models, and on candidate technologies for such research and such models. The issue of modeling, in particular, is of enormous relevance, not only because the scope of a piece of research is only as broad as the validity of the underlying models, but also because the ongoing debates on the application of machine learning have much to do with our ability (or lack thereof) to properly model increasingly complex communication settings.

Enjoy and stay safe.

Muhammad Zeeshan Shakir and Angel Lozano, CTN Editors

The Role of Academia in Beyond-5G Wireless Research

Emil Björnson

Linköping University

Wireless technology is constantly evolving. Even though we tend to focus on the ten-year generational cycles, many of the major steps consist of software and hardware developments that occur in the middle of a generation. If you listen to the marketing and the research papers, 5G is the solution to almost everything. From a technical perspective, this includes huge data speeds, extreme reliability, and support for a massive number of connected things. From a practical perspective, 5G is supposed to be the solution that all the verticals have been waiting for: a platform that can be software-configured to offer connectivity ideally suited for each specific industry or scenario.

In a few years, we will have a more sober view of what 5G really became and what it might evolve into during this decade. Most likely, we will only have taken a few steps towards satisfying future communication needs. In particular, we will be far from satisfying the original targets, and new, unanticipated use cases will arise. One can draw a parallel to the release of 3G in 2001: it was supposed to be the technology that delivered mobile video telephony to the masses [1], but it wasn't until the emergence of application-layer platforms (such as FaceTime in 2010) that this use case took off. By that time, 4G had already been released. With this in mind, I believe that the exploratory 6G research that is now being initiated should not be application-driven, because we cannot know what the future applications will be! There are already plenty of “roadmap” articles speculating about various science-fiction-like 6G applications, but I think the boring truth is that most 6G applications will be roughly the same as those considered in 5G, but with better service requirements in terms of speed, reliability, and availability.

Academic 6G research should look for disruptive new technologies with the potential to greatly improve upon what 5G can achieve in the predictable future. The performance can be measured in different ways and the search often begins at the physical and architectural levels [2], which then dictate which software solutions are required. For example, if we want to greatly increase the peak data speeds, then we need more bandwidth per user, which can be achieved either by reusing existing spectrum more efficiently (which requires interference suppression and spatial multiplexing techniques at the physical layer) or by moving to new frequency ranges (which requires new hardware, deployment strategies, and adaptive beamforming to achieve a sufficient link budget). It is relatively easy to demonstrate such things with simple models that capture the basic phenomena, but it is a completely different thing to make a convincing case for a disruptive performance gain under practical conditions!
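To make the bandwidth trade-off concrete, here is a minimal back-of-the-envelope sketch (in Python) of the Shannon rate B·log2(1+SNR): widening the bandwidth raises the thermal noise power proportionally, so the wider band only pays off once beamforming gain (or a denser deployment) restores the link budget. All numbers below, including the pathloss and the array gain, are illustrative assumptions and not taken from any particular system.

```python
import numpy as np

# Back-of-the-envelope link budget: widening the bandwidth raises the noise power
# proportionally, so the SNR drops unless extra beamforming gain compensates.
# All numbers are illustrative assumptions.

def shannon_rate_bps(bandwidth_hz, tx_power_dbm, pathloss_db, array_gain_db, noise_figure_db=7.0):
    """Shannon rate B*log2(1 + SNR) for a single link."""
    noise_dbm = -174 + 10 * np.log10(bandwidth_hz) + noise_figure_db   # thermal noise + noise figure
    snr_db = tx_power_dbm - pathloss_db + array_gain_db - noise_dbm
    return bandwidth_hz * np.log2(1 + 10 ** (snr_db / 10))

pathloss_db = 110.0    # assumed pathloss of the link
tx_power_dbm = 23.0    # typical uplink transmit power

# 20 MHz without array gain, 400 MHz without array gain, and 400 MHz with a 26 dB
# array gain: the wider band only pays off once the link budget is restored.
for bw, gain in [(20e6, 0.0), (400e6, 0.0), (400e6, 26.0)]:
    rate = shannon_rate_bps(bw, tx_power_dbm, pathloss_db, gain)
    print(f"B = {bw / 1e6:5.0f} MHz, array gain = {gain:4.1f} dB -> {rate / 1e6:7.1f} Mbit/s")
```

The same arithmetic is why the moves to millimeter-wave and, eventually, sub-terahertz bands are inseparable from large arrays and adaptive beamforming.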

We Need to Talk About the Models

During the last decade, the research community often got stuck in the use of “tractable” models that lead to analytical expressions that are easy to interpret and that prove the points the authors want to make. It has been fairly easy to publish such results in scientific journals (which is helpful for those working under publish-or-perish pressure), but the contributions will hardly convince people in the industry. Similar concerns were raised a decade ago [2], when the rhetorical question “Is the physical layer dead?” was asked. The authors of that paper were concerned that academia was drifting too far away from practical systems in its search for elegant theory, thereby making itself irrelevant to the industry. Even if some of the remedies suggested in [2] have been adequately implemented, the fundamental problem still exists.

One example is the literature on millimeter-wave communications, which contains numerous papers that analyze hybrid analog-digital beamforming over flat-fading channels: a set of practically inconsistent assumptions. The main reason for considering the millimeter-wave bands in 5G is that we can access much more bandwidth than in the conventional sub-6 GHz bands, which means that the channels will certainly be frequency-selective. Importantly, the main drawback of hybrid beamforming only materializes in frequency-selective channels, where the analog processing must be the same for the entire band even though the channel varies. Hence, even if an elegant theory can be developed for frequency-flat channels, it is impossible to make a convincing case for using hybrid beamforming unless frequency-selective channels are considered.
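To illustrate why the flat-fading assumption hides the key limitation, here is a small toy sketch (assumed parameters, i.i.d. Rayleigh taps rather than a realistic sparse millimeter-wave channel) that compares the array gain of a phase-only beamformer that could be re-optimized on every OFDM subcarrier, which is effectively what a flat-fading analysis grants, with that of a single analog phase-shifter vector reused across the whole band. Over a frequency-flat channel the two coincide; over a frequency-selective channel the common analog vector only matches the channel in a small fraction of the band.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 64, 256    # antennas and OFDM subcarriers (assumed numbers)

def array_gains(num_taps):
    """Average array gain (normalized so that 1.0 = full gain M) of
    (a) a phase-only beamformer re-optimized on every subcarrier and
    (b) a single phase-only analog vector reused over the whole band."""
    # i.i.d. Rayleigh taps of equal power; num_taps = 1 gives a frequency-flat channel
    h_time = (rng.standard_normal((num_taps, M)) + 1j * rng.standard_normal((num_taps, M))) / np.sqrt(2 * num_taps)
    H = np.fft.fft(h_time, n=N, axis=0)   # N x M frequency response

    # (a) per-subcarrier phase matching: gain (sum_m |H[k,m]|)^2 / M on subcarrier k
    per_subcarrier = np.mean(np.sum(np.abs(H), axis=1) ** 2) / M

    # (b) one analog vector, phase-matched to the middle subcarrier, applied everywhere
    w = np.exp(1j * np.angle(H[N // 2])) / np.sqrt(M)
    wideband = np.mean(np.abs(H @ w.conj()) ** 2)

    return per_subcarrier / M, wideband / M

for taps in [1, 8]:
    a, b = array_gains(taps)
    print(f"{taps}-tap channel: per-subcarrier phase matching ~ {a:.2f} M, one wideband analog vector ~ {b:.2f} M")
```

The point is not that this toy model is accurate (a sparse channel with angular structure would narrow the gap), but that the drawback simply does not appear unless frequency selectivity is modeled.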

The lesson to learn is not that it was wrong to begin the analysis of millimeter-wave communications using the simple flat-fading assumption. It was probably necessary to gain basic insights and know-how when the research topic was new. The real issue is that we never really moved on from this starting point to more realistic models that can provide more accurate insights (some researchers did, but most didn't).

Another example of this issue is the obsession with using the uncorrelated fading model and matched filtering when analyzing Massive MIMO systems. The initial works that made these assumptions were instrumental in demonstrating the gain of having many more antennas than users in cellular networks. The research that followed should have considered more realistic models, but the initial assumptions continued to dominate because they lead to a tractable analysis. The consequence is that the achievable performance over practical channels, which feature spatially correlated fading, has been underestimated. Figure 1 exemplifies this by showing the uplink spectral efficiency that can be achieved when using different numbers of antennas at the base station (the simulation setup is defined in Figure 4.24 in [3] but the exact details are not important here). There is one curve for uncorrelated fading with matched filtering and another curve for spatially correlated fading with optimal spatial processing. The difference between the curves is small when the number of antennas is small; thus, rough but tractable models can be used to analyze such systems (as we have done for decades). However, the figure shows that a proper analysis of Massive MIMO, which is characterized by having at least 64 antennas, requires more detailed modeling. The reason for the large gap in Figure 1 is that so-called pilot contamination and other types of interference behave differently when using practical channel models, which have structural properties that can be exploited in the processing.

Figure 1: Comparison between the uplink spectral efficiency achieved in a cellular setup from [3] with a simple model (uncorrelated fading) and with a more realistic model (correlated fading). The analysis of Massive MIMO, with its hundreds of antennas, requires realistic models to obtain practically relevant results when the number of antennas is large.

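Figure 1 cannot be reproduced without the full multi-cell setup from [3], but the qualitative effect of the modeling choices can be illustrated with a much smaller sketch. The following single-cell, perfect-CSI toy example (all parameters are assumptions, not the values behind Figure 1) compares matched-filter (MR) combining over i.i.d. Rayleigh fading with MMSE combining over spatially correlated fading generated by the Gaussian local scattering model; the correlation gives the channels a structure that the receiver can exploit to suppress interference.

```python
import numpy as np

rng = np.random.default_rng(2)
K, snr, trials = 10, 10.0, 100   # users, per-user SNR (10 dB), channel realizations (assumed)

def local_scattering_R(M, nominal_angle, asd_deg=10.0):
    """Spatial correlation matrix of a half-wavelength ULA under the Gaussian
    local scattering model (a standard approximation, cf. [3])."""
    asd = np.deg2rad(asd_deg)
    diff = np.arange(M)[:, None] - np.arange(M)[None, :]
    return np.exp(1j * np.pi * diff * np.sin(nominal_angle)) * \
           np.exp(-0.5 * (np.pi * diff * asd * np.cos(nominal_angle)) ** 2)

def uplink_se(M, correlated, mmse):
    """Average uplink SE per user with perfect CSI (a deliberate simplification)."""
    angles = rng.uniform(-np.pi / 3, np.pi / 3, K)
    Rs = [local_scattering_R(M, a) if correlated else np.eye(M, dtype=complex) for a in angles]
    Ls = [np.linalg.cholesky(R + 1e-6 * np.eye(M)) for R in Rs]   # factor once per user
    se = 0.0
    for _ in range(trials):
        H = np.column_stack([L @ (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2) for L in Ls])
        V = np.linalg.solve(snr * (H @ H.conj().T) + np.eye(M), H) if mmse else H   # MMSE or MR combining
        for k in range(K):
            v = V[:, k]
            signal = snr * np.abs(v.conj() @ H[:, k]) ** 2
            interference = snr * sum(np.abs(v.conj() @ H[:, j]) ** 2 for j in range(K) if j != k)
            se += np.log2(1 + signal / (interference + np.linalg.norm(v) ** 2)) / (trials * K)
    return se

for M in [16, 64, 256]:
    simple = uplink_se(M, correlated=False, mmse=False)
    detailed = uplink_se(M, correlated=True, mmse=True)
    print(f"M = {M:3d} antennas: MR + uncorrelated ~ {simple:.2f}, MMSE + correlated ~ {detailed:.2f} bit/s/Hz/user")
```

This sketch obviously omits the imperfect CSI and the pilot contamination that create the gap in Figure 1, but it already shows how strongly the conclusions depend on the channel model and the receive processing that one assumes.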

Even though the vast majority of papers used unrealistic models, millimeter-wave and Massive MIMO technologies made it into 5G because they are fundamentally sound techniques. It was the vision put forward by academia that prevailed rather than the analytical theory it developed. However, there were other technologies that were “5G-branded” by academia but didn't live up to expectations. A prime example is non-orthogonal multiple access (NOMA) [4], which is essentially a way to deal with interference over single-antenna channels (by using the code and power domains). Despite the huge research efforts dedicated to NOMA, the method didn't make it into 5G because the spectral efficiency gains that can be achieved over multi-antenna channels (i.e., on top of Massive MIMO) vanish under practical conditions. This is not to say that NOMA lacks potential, but we need to start over by considering other design targets than long-term spectral efficiency. NOMA-like techniques can be relevant when services with heterogeneous requirements need to share the same spectrum [5].

We should not repeat such mistakes in this new decade. Hence, we need to first identify promising new disruptive technologies, possibly using simple models, and then gradually use more realistic modeling and assumptions to validate the initial results. There is no disgrace in observing that something that seemed promising at first didn't perform well under more practical conditions. In fact, we should encourage the publication of such “negative” results! We shouldn't take the correctness of models and results in published papers for granted, or take it personally if someone proves that our hypothesis was wrong. In experimental research, a hypothesis must be confirmed ten times, or more, before it is considered proven. We must be better at acknowledging that slightly different assumptions might lead to different answers to the same question, both when it comes to mathematical analysis and to numerical results. Maybe it is time to demand that the simulation code be published along with every paper that contains numerical results?

Candidate Physical-Layer Technologies for 6G

Several candidate physical-layer technologies for 6G have already been identified [6] and have started to be hyped by the academic research community. Three examples are shown in Figure 2 and are briefly described below.

Figure 2: Three prospective physical-layer technologies for 6G.


User-centric cell-free Massive MIMO: This is a distributed antenna system based on the vision of taking the many antennas of a 5G base station and spreading them over the coverage area. The name includes “Massive MIMO” since the antenna signals are processed in the same way as in that technology [3]; thus, each user is jointly served by its surrounding antennas and no inter-cell interference exists. This is enabled by an edge-cloud architecture in which the antenna signals are co-processed in the cloud. The aim is to improve the uniformity of the data speeds over the coverage area by reducing the pathloss variations and the interference [7], which are useful features in any frequency band. The basic technology features have been established in the literature, but the analysis has thus far been based on simple discrete-time models that implicitly assume perfect synchronization and negligible delay spread, which cannot be achieved in practice. Hence, there is a need for more realistic models in the future.
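As a hedged illustration of the uniformity argument (not of the actual algorithms in [7]), the sketch below drops single-antenna access points and users uniformly in a square, ignores interference and small-scale fading, and compares the interference-free spectral efficiency when each user is served only by its strongest access point (cellular) with that obtained when all access points coherently combine the user's signal (cell-free). All constants are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
L_ap, K, drops = 100, 20, 200     # access points, users per drop, random drops (assumed)
side = 500.0                      # side of the square coverage area in meters (assumed)
p_over_noise = 1e12               # transmit power over noise power, ~120 dB (assumed)

def channel_gain(distance_m):
    """Simple distance-based pathloss with a 10 m height difference (assumed model)."""
    return 1e-3 * np.sqrt(distance_m ** 2 + 10.0 ** 2) ** (-3.76)

se_cellular, se_cellfree = [], []
for _ in range(drops):
    aps = rng.uniform(0, side, (L_ap, 2))
    users = rng.uniform(0, side, (K, 2))
    dist = np.linalg.norm(users[:, None, :] - aps[None, :, :], axis=2)   # K x L distances
    beta = channel_gain(dist)                                            # large-scale channel gains
    # Interference-free SNRs: cellular = strongest AP only; cell-free = coherent
    # combining over all APs collects the sum of the channel gains.
    se_cellular.extend(np.log2(1 + p_over_noise * beta.max(axis=1)))
    se_cellfree.extend(np.log2(1 + p_over_noise * beta.sum(axis=1)))

for name, se in [("Cellular (strongest AP)", se_cellular), ("Cell-free (all APs)", se_cellfree)]:
    print(f"{name:24s}: median = {np.median(se):5.2f}, 5th percentile = {np.percentile(se, 5):5.2f} bit/s/Hz")
```

The lower tail is lifted more than the median, which is the uniformity benefit that the cell-free literature emphasizes; whether it survives interference, fronthaul limitations, and imperfect synchronization is exactly the kind of question that calls for the more realistic models discussed above.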

Reconfigurable intelligent surface: These are large but thin two-dimensional surfaces made of metamaterial that scatter the incident electromagnetic waves in a configurable manner [6]. Different parts of the surface impose different delays on the scattered signal, thereby synthesizing the scattering behavior of an arbitrarily shaped object of the same size. This feature can be utilized for full-duplex relaying, where the surface takes the signal from the transmitter and adjusts the delays to instantly beamform it towards the receiver. A potential benefit over conventional relaying technology is the low-power operation; instead of actively amplifying the signal, a large surface aperture is utilized to passively focus the signal. There are many visions for this technology, particularly for enabling 6G communications above 100 GHz, but it remains to find a truly convincing use case and to demonstrate that real-time reconfiguration is possible [8]. To fully understand how these surfaces behave in practice, we need to start from the first principles of electromagnetics to correctly model the interactions between electromagnetic waves and metamaterials.
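As a minimal sketch of the passive beamforming principle (assuming i.i.d. Rayleigh fading on both hops, no direct path, and continuous phase control, all of which are simplifications), the example below sets each element's phase to cancel the phases of the incident and outgoing channels so that every scattered component adds constructively at the receiver. It recovers the well-known scaling: roughly N for a randomly configured surface versus roughly (π/4)²·N² for an optimized one.

```python
import numpy as np

rng = np.random.default_rng(4)

def ris_gain(N, optimized):
    """Normalized end-to-end channel gain through an N-element reconfigurable surface."""
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # transmitter -> surface
    g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # surface -> receiver
    if optimized:
        theta = -(np.angle(h) + np.angle(g))        # align all scattered paths at the receiver
    else:
        theta = rng.uniform(0, 2 * np.pi, N)        # unconfigured surface: random phases
    return np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2

for N in [64, 256, 1024]:
    random_cfg = np.mean([ris_gain(N, optimized=False) for _ in range(500)])
    optimal_cfg = np.mean([ris_gain(N, optimized=True) for _ in range(500)])
    print(f"N = {N:4d}: random phases ~ {random_cfg:9.1f}  optimized phases ~ {optimal_cfg:9.1f}  (about N vs (pi/4)^2 N^2)")
```

This quadratic scaling is the main argument for the technology, but it presumes perfect channel knowledge and continuous element-wise phase control; assessing what remains under realistic electromagnetic and reconfiguration constraints is where the first-principles modeling mentioned above comes in.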

Holographic radio: This is an attempt to build antenna arrays with an approximately continuous aperture using a limited number of transceiver chains [9]. The arrays can be implemented using metamaterials in a way that is reminiscent of reconfigurable intelligent surfaces, but with the key differences that the transceiver is located behind the surface and that diffraction, rather than scattering, is controlled. As indicated by the word “holographic”, the spatial beamforming is represented by the delay pattern configured on the array surface. This pattern can be recorded in a training phase that resembles holographic imaging [6]. The vision is to achieve arrays with unprecedented spatial resolution, which can be used for spatial multiplexing of the theoretically maximum number of signals per square meter or for precise positioning. Holographic radios also pave the way to implementing much larger arrays than in 5G. To perform a detailed communication-theoretic study of holographic radios, we must use electromagnetic models that accurately take mutual coupling and other hardware properties into account [10].

Final Remarks

There is no guarantee that any of the aforementioned technology candidates will make it into 6G. Maybe I am just biased by my research interests, just as many others who put their favorite topics at the core of their 6G visions. What we truly need to do as a research community is to turn over many stones to find the right technology candidates. For each one, we need to look beyond the initial hype and ask the fundamental scientific questions: 1) What is the closest competing technology, either existing or under development? 2) How much can the performance be improved compared to that technology, and according to what metrics? 3) Are the results reproducible when considering more accurate models than today or different simulation setups? This is how real, reproducible progress is made towards 6G. I sincerely hope that academia will play an essential role along the way.

Acknowledgment

I would like to thank Petar Popovski, Federico Boccardi, Angel Lozano, and Muhammad Zeeshan Shakir for their input.

References

  1. OECD (2004), “Development of Third-Generation Mobile Services in the OECD,” OECD Digital Economy Papers, No. 85, OECD Publishing, Paris, https://doi.org/10.1787/232562017400.
  2. M. Dohler, R. Heath Jr., A. Lozano, C. Papadias, R. Valenzuela, “Is the PHY Layer Dead?,” IEEE Communications Magazine, vol. 49, no. 4, pp. 159-165, 2011.
  3. E. Björnson, J. Hoydis, L. Sanguinetti, “Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency,” Foundations and Trends® in Signal Processing: vol. 11, no. 3-4, pp. 154–655, 2017.
  4. Y. Liu, Z. Qin, M. Elkashlan, Z. Ding, A. Nallanathan, and L. Hanzo, “Nonorthogonal Multiple Access for 5G and Beyond,” Proceedings of the IEEE, vol. 105, no. 12, pp. 2347-2381, 2017.
  5. P. Popovski, K. Trillingsgaard, O. Simeone and G. Durisi, “5G Wireless Network Slicing for eMBB, URLLC, and mMTC: A Communication-Theoretic View,” in IEEE Access, vol. 6, pp. 55765-55779, 2018.
  6. E. Björnson, L. Sanguinetti, H. Wymeersch, J. Hoydis, T. Marzetta, “Massive MIMO is a Reality – What is Next? Five Promising Research Directions for Antenna Arrays,” Digital Signal Processing, vol. 94, pp. 3-20, November 2019.
  7. H. Ngo, A. Ashikhmin, H. Yang, E. Larsson, and T. Marzetta, “Cell-free Massive MIMO versus small cells,” IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1834-1850, 2017.
  8. E. Björnson, Ö. Özdogan, E. Larsson, “Reconfigurable Intelligent Surfaces: Three Myths and Two Critical Questions,” arXiv, abs/2006.03377v1, 2020.
  9. C. Huang, S. Hu, G. Alexandropoulos, A. Zappone, C. Yuen, R. Zhang, M. Di Renzo, M. Debbah, “Holographic MIMO Surfaces for 6G Wireless Networks: Opportunities, Challenges, and Trends,” IEEE Wireless Communications Magazine, to appear.
  10. R. Williams, E. De Carvalho, T. Marzetta, “A Communication Model for Large Intelligent Surfaces,” arXiv, abs/1912.06644v1, 2020.

Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not the IEEE nor the IEEE Communications Society.
