
IEEE CTN
Written By: Jeffrey Andrews

Published: 4 May 2015

CTN Issue: May 2015

A note from the editor:

This month we feature Prof. Jeffrey Andrews, the Cullen Trust for Higher Education Endowed Professor of Engineering at the University of Texas at Austin. Over the last decade he has been one of the leaders in the effort to apply statistical theory to network capacity calculation. In the past few years, the importance of this work has been magnified as we have increasingly relied on Cooper’s famous law, which holds that most capacity increase will come from denser and denser cell deployment, to get us towards the promised land of 5G data rates. But will this law continue to apply as we move towards ever greater “densification”, or will it, as Jeff notes, end up in the same messy briar patch as Moore’s law has for silicon densification? Read on. Though if you are an operator, you may want to sit down first…

Alan Gatherer, Editor-in-Chief

Will Cellular Networks Eventually Become Interference-Overloaded?

Jeffrey Andrews

The vast majority of the increased mobile data throughput we are always hearing about has been enabled by ever-increasing network densification, i.e., adding more base stations (BSs) and access points that have a wired backhaul connection [1].  This trend is set to continue for at least the next decade, primarily through the provisioning of small cells such as pico and femtocells.  What if we ever reached a point where adding more infrastructure did not allow increased wireless network throughput?  This would be comparable to the impending end of "Moore's Law": a cataclysmic event with far-reaching consequences, well beyond our own industry.

Some of my group's well-known recent work has shown that fears of this "interference overload" – where the interference swamps the useful signal in the event of extreme densification – were unfounded.  This follows because, in both a macrocell-only network [2] and an arbitrarily complex HetNet with as many different classes of BSs as you care to model [3], we have shown that the SINR (signal-to-interference-plus-noise ratio) distribution monotonically increases with density, saturating once the noise becomes negligible.  But at least it doesn't decrease.  This means that you can continually densify: the spectral efficiency in a cell stays roughly constant, while the spectral/time resources available increase in proportion to the density.
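
Since this claim is easy to probe numerically, here is a minimal Monte Carlo sketch of a downlink network in the spirit of [2]: BSs drawn as a Poisson point process, nearest-BS association, unit transmit power, and no fading. The specific densities, window size, and noise power below are illustrative assumptions, not values taken from [2].

    import numpy as np

    rng = np.random.default_rng(0)

    def sinr_samples(density, alpha=4.0, noise=1e-9, trials=500):
        """Downlink SINR for a user at the origin; unit TX power per BS."""
        radius = np.sqrt(300 / (np.pi * density))  # window sized for ~300 BSs
        out = []
        for _ in range(trials):
            n = rng.poisson(300)
            if n < 2:
                continue
            # PPP in a disk: BS distances are R*sqrt(U) for uniform U
            r = np.maximum(np.sort(radius * np.sqrt(rng.random(n))), 1e-9)
            signal = r[0] ** -alpha                 # nearest BS serves the user
            interference = np.sum(r[1:] ** -alpha)  # all other BSs interfere
            out.append(signal / (interference + noise))
        return np.array(out)

    # The median SINR barely moves as density grows 100x: with alpha > 2 and
    # negligible noise, densification leaves the SINR distribution intact.
    for lam in (0.001, 0.01, 0.1):
        print(f"density {lam:>5}/m^2: median SINR = "
              f"{np.median(sinr_samples(lam)):.2f}")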

But is this correct?  A few caveats of the above model are important to take note of.  The first is that the network must be "open access": a mobile user can connect to any BS it wants.  If it can't, then statistically speaking, the interference will increase more quickly than the desired signal as the network densifies.  A possibly more fundamental caveat is the nature of signal attenuation: the above results assume "power-law path loss", in which the received power (and interference) decays like d^(-α) over a distance d, where α is called the "path loss exponent".  The above results hold only for α > 2.  Although in ubiquitous use, this path loss model is quite idealized, and in most scenarios the path loss exponent is itself an increasing function of distance.  For example, there could easily be three distinct regimes in a practical environment: a first distance-independent "near field" where α₁ = 0; second, a free-space-like regime where α₂ = 2; and finally some heavily-attenuated regime where α₃ > 3.  Such a situation results even with a simple two-ray ground reflection, with α₃ = 4 in that case.  What happens if densification pushes many BSs into the near field in such a situation?  What are the critical values of the path loss exponents at which cell splitting no longer yields throughput gains?
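
To make the three-regime example concrete, here is a sketch of such a multi-slope path loss function. The breakpoint distances (1 m and 10 m) are illustrative assumptions, not values from any of the cited models; continuity is enforced at each breakpoint.

    import numpy as np

    def multislope_pathloss(d, breaks=(1.0, 10.0), exps=(0.0, 2.0, 4.0)):
        """Piecewise power-law path gain: a flat near field (alpha1 = 0),
        free-space decay (alpha2 = 2), then heavy attenuation (alpha3 = 4,
        as in the two-ray ground-reflection model)."""
        d = np.atleast_1d(np.asarray(d, dtype=float))
        d1, d2 = breaks
        a1, a2, a3 = exps
        near = d <= d1
        mid = (d > d1) & (d <= d2)
        far = d > d2
        g = np.empty_like(d)
        g[near] = d[near] ** -a1                  # flat in the near field
        g[mid] = d1 ** -a1 * (d[mid] / d1) ** -a2
        g[far] = d1 ** -a1 * (d2 / d1) ** -a2 * (d[far] / d2) ** -a3
        return g

    # Flat to 1 m, slope-2 decay to 10 m, slope-4 decay beyond:
    print(multislope_pathloss([0.5, 1.0, 5.0, 10.0, 50.0]))
    # ~ [1.0  1.0  0.04  0.01  1.6e-05]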

To answer such questions, we have extended the above results to a new and more general multi-slope path loss model that captures the aforementioned trends and also matches well several empirical path loss models, such as the Urban Microcell (UMi) models used by 3GPP.  Our mathematical results in [4] show that the SIR (signal-to-interference ratio) monotonically decreases with network density, while the converse is true for the SNR (signal-to-noise ratio), and thus the network coverage probability in terms of SINR is maximized at some finite density.  With ultra-densification (technically, the network density going to infinity), there exists a phase transition in the near-field path loss exponent: if α₁ > 1, unbounded potential throughput can be achieved asymptotically; otherwise, ultra-densification leads in the extreme case to zero throughput!
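
As a rough numerical check (this is not the analysis of [4], which is exact), the Monte Carlo sketch above can be rerun with the multi-slope path loss in place of the single power law; the densities and breakpoints remain illustrative assumptions. The median SIR now degrades as the network densifies, because more and more interferers land in the weakly attenuated regimes within roughly 10 m of the user.

    import numpy as np

    rng = np.random.default_rng(1)

    def pl(d, d1=1.0, d2=10.0, a2=2.0, a3=4.0):
        # compact form of the multi-slope gain above (alpha1 = 0)
        return np.where(d <= d1, 1.0,
               np.where(d <= d2, (d / d1) ** -a2,
                        (d2 / d1) ** -a2 * (d / d2) ** -a3))

    def median_sir(density, trials=500):
        radius = np.sqrt(300 / (np.pi * density))  # window sized for ~300 BSs
        sirs = []
        for _ in range(trials):
            n = rng.poisson(300)
            if n < 2:
                continue
            # guard against d = 0 in the unused np.where branches
            r = np.maximum(np.sort(radius * np.sqrt(rng.random(n))), 1e-9)
            sirs.append(pl(r[0]) / np.sum(pl(r[1:])))
        return np.median(sirs)

    # Median SIR falls as density rises: the opposite of the single-slope
    # invariance, and consistent with coverage peaking at a finite density.
    for lam in (0.001, 0.01, 0.1, 1.0):
        print(f"density {lam:>5}/m^2: median SIR = {median_sir(lam):.3f}")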

In summary, there do appear to be some fundamental physical limits to the amount of densification possible, and they depend heavily on the path loss in the 10-30 meter range.  More work is needed to figure out exactly when these limits kick in across various environments.

  1. http://www.arraycomm.com/technology/coopers-law/
  2. J. G. Andrews, F. Baccelli, and R. K. Ganti, "A Tractable Approach to Coverage and Rate in Cellular Networks", IEEE Trans. on Communications, vol. 59, no. 11, pp. 3122-3134, Nov. 2011.
  3. H. Dhillon, R. K. Ganti, F. Baccelli, and J. G. Andrews, "Modeling and Analysis of K-Tier Downlink Heterogeneous Cellular Networks", IEEE Journal on Sel. Areas in Comm., special issue on Femtocell Networks, vol. 30, no. 3, pp. 550-560, Apr. 2012.
  4. X. Zhang and J. G. Andrews, "Downlink Cellular Network Analysis with Multi-slope Path Loss Models", IEEE Transactions on Communications, available via IEEE Early Access or http://arxiv.org/abs/1408.0549.

Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not upon IEEE or the IEEE Communications Society.
