
IEEE CTN
Written By:

James Won-Ki Hong, IEEE CTN Editor-in-Chief

Published: 2 May 2012

CTN Issue: May 2012

1. Secure Communication in the Low-SNR Regime

Secure transmission of confidential messages is a critical issue in communication systems, especially wireless systems. This article addresses secure communication at low power in a wireless system with multiple transmit and receive antennas in the presence of eavesdroppers. One measure of security is the secrecy capacity, the maximum data rate at which the intended receiver can decode reliably while an eavesdropper learns nothing about the message. The paper derives fundamental limits on the secrecy capacity in the low signal-to-noise ratio regime, characterizing the minimum energy required to send bits both reliably and securely, which is important for conserving the battery life of wireless devices. The paper also identifies the best transmission strategies, showing how to use the multiple antennas effectively and when beamforming, i.e., a single spatial channel rather than multiple spatial channels, is optimal.
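
To make the beamforming-versus-multiplexing comparison concrete, the following sketch (a toy numerical illustration, not the paper's derivation) evaluates the Gaussian MIMO wiretap secrecy rate at low SNR for a rank-one beamforming input covariance and for an isotropic full-rank one. The real-valued channel matrices H_m and H_e, the antenna counts, and the choice of beamforming direction (the dominant eigenvector of H_m^T H_m - H_e^T H_e, a common low-SNR heuristic) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
nt, nr = 4, 2                          # transmit / receive antennas (illustrative)
H_m = rng.standard_normal((nr, nt))    # legitimate (main) channel -- assumed known
H_e = rng.standard_normal((nr, nt))    # eavesdropper channel -- assumed known

def secrecy_rate(Q, snr):
    """Secrecy rate [bits/use] for input covariance Q with tr(Q) = 1 (toy Gaussian model)."""
    rm = np.linalg.slogdet(np.eye(nr) + snr * H_m @ Q @ H_m.T)[1]
    re = np.linalg.slogdet(np.eye(nr) + snr * H_e @ Q @ H_e.T)[1]
    return max(rm - re, 0.0) / np.log(2)

# Beamforming: put all power on the dominant eigenvector of the "channel difference".
_, w_vecs = np.linalg.eigh(H_m.T @ H_m - H_e.T @ H_e)
w = w_vecs[:, -1:]                     # unit-norm beamforming direction
Q_bf  = w @ w.T                        # rank-one covariance, trace 1
Q_iso = np.eye(nt) / nt                # isotropic full-rank covariance, trace 1

for snr_db in (-20, -10, 0):
    snr = 10 ** (snr_db / 10)
    print(f"SNR {snr_db:>4} dB: beamforming {secrecy_rate(Q_bf, snr):.4f} "
          f"vs isotropic {secrecy_rate(Q_iso, snr):.4f} bits/use")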

The paper’s results can be used as fundamental benchmarks to compare against the performance of practical systems. They also can be used to obtain design guidelines for energy-efficient, secure wireless communications, which is of great importance with the increasing use of wireless devices.

For an overview of wireless secure communications, see the articles in the “Special Issue on Information-Theoretic Security,” IEEE Trans. on Inf. Theory, June 2008.

Title and author(s) of the original paper in IEEE Xplore:
Title: Secure Communication in the Low-SNR Regime
Author: Mustafa Cenk Gursoy
This paper appears in: IEEE Transactions on Communications
Issue Date: April 2012

2. Optimization for Time-driven Link Sleeping Reconfigurations in ISP Backbone Networks

Energy efficiency in operational ISP networks has attracted increasing research attention in recent years. Towards this end, network resource optimization through sleeping reconfiguration has been proposed to reduce energy consumption when traffic demands are low. The strategy is to put a subset of network devices into sleep mode when the network is not required to operate at full capacity during off-peak hours. Since most operational backbone networks exhibit regular diurnal traffic patterns, simple time-driven link sleeping reconfigurations can be applied for energy-saving purposes. Instead of relying on complicated “on-the-fly”, reactive network adaptations based on continuous network monitoring, predetermined network reconfigurations can be applied on a daily basis to reduce energy consumption. Such a strategy substantially simplifies the relevant configuration operations from the viewpoint of practical network management.

This paper is the first work to propose a time-driven network topology control scheme that optimizes both the number of links put to sleep and the duration of their sleep period. The algorithm presented in the paper produces a reduced network topology, together with its off-peak enforcement duration, that achieves the same level of performance as the peak-time topology while significantly reducing energy consumption. The basic strategy is to first compute a synthetic traffic matrix (TM) derived from multiple real TMs that capture the actual traffic behavior patterns. To make the synthetic TM robust to traffic uncertainty, the actual traffic matrix data is collected at the same time points on each sampled day. The synthetic TM is used as input to the optimization algorithm, which aims to put the maximum number of links to sleep without sacrificing (i.e., significantly shortening) the duration of their sleep. The unified sleep window within each day is computed in a synchronized way based on a selected expansion point where the overall traffic demand is at its lowest level during the off-peak time.
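
As a rough illustration of the link-selection step only (a simplified sketch, not the authors' algorithm; the toy topology, shortest-path rerouting, and utilization threshold are assumptions):

import networkx as nx

def link_loads(G, demands):
    """Route each demand on a shortest path and accumulate per-link load (toy routing model)."""
    loads = {tuple(sorted(e)): 0.0 for e in G.edges}
    for (src, dst), volume in demands.items():
        path = nx.shortest_path(G, src, dst)
        for u, v in zip(path, path[1:]):
            loads[tuple(sorted((u, v)))] += volume
    return loads

def links_to_sleep(G, offpeak_demands, capacity, max_util=0.5):
    """Greedily put links to sleep while the off-peak demands still fit below max_util."""
    sleeping = []
    # Try the least-loaded links first -- they are the cheapest to reroute around.
    order = sorted(link_loads(G, offpeak_demands).items(), key=lambda kv: kv[1])
    for (u, v), _ in order:
        G.remove_edge(u, v)
        ok = nx.is_connected(G) and all(
            load <= max_util * capacity for load in link_loads(G, offpeak_demands).values())
        if ok:
            sleeping.append((u, v))
        else:
            G.add_edge(u, v)   # sleeping this link would break connectivity or overload others
    return sleeping

# Toy 4-node topology; the demands stand in for the synthetic off-peak traffic matrix.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
demands = {(0, 2): 3.0, (1, 3): 2.0, (0, 3): 1.0}
print(links_to_sleep(G, demands, capacity=10.0))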

The simulations are based on the GEANT network topology and its real traffic traces. According to the experimental results, the proposed scheme achieves up to 18.6% energy savings without any deterioration in traffic performance. The distinct contribution of this work is a practical and efficient energy-saving scheme built on legacy network elements, requiring no hardware upgrades or protocol extensions.

Title and author(s) of the original paper in IEEE Xplore:
Title: Optimization for Time-driven Link Sleeping Reconfigurations in ISP Backbone Networks
Author: Frederic Francois, Ning Wang, Klaus Moessner and Stylianos Georgoulas
This paper appears in: Proc. IEEE/IFIP Network Operations and Management Symposium (NOMS 2012)
Issue Date: April 2012

3. IT Service Outages: Shorter or Fewer?

Maintaining high availability is a sine qua non for today’s IT service providers. Customer demands are on the rise, and service outages in high-profile application areas such as credit card payment systems rapidly hit the news headlines. However, despite all the attention, the most popular concept of service availability is surprisingly crude, and often reduced to just a single figure (such as 99.98%).

This article highlights the importance for IT service managers of looking beyond mere average service outage times and costs. In fact, outage costs often exhibit substantial variance, meaning that a risk analysis based on averages can be misleading. For instance, in the retail business, a single outage during a high-revenue hour might cost as much as a dozen outages during low-revenue hours. Using simulations on existing sets of revenue data, the article argues that the single-figure approach to service availability is inadequate for businesses that want to properly manage the financial risks associated with IT service unavailability; additional information on the average duration and number of outages is also required. Should a person signing a Service Level Agreement at a certain availability level (e.g., 99.98%) prefer that the downtime be distributed over many short outages or fewer, longer ones? The answer depends on the kind of company the person represents.
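
A minimal simulation in the spirit of the article (with made-up hourly revenue figures, not the paper's data set) shows why the same downtime budget of roughly 105 minutes per year, i.e., 99.98% availability, can carry very different financial risk depending on how it is distributed:

import random

random.seed(1)
# Hypothetical hourly revenue profile: quiet nights, a short evening peak.
hourly_revenue = [200] * 8 + [1000] * 10 + [5000] * 3 + [500] * 3   # 24 hourly figures

def outage_cost(n_outages, minutes_each, trials=10000):
    """Mean and 95th-percentile cost when the downtime budget is split into n equal outages.
    Each outage is charged at the revenue rate of the (random) hour in which it starts."""
    costs = []
    for _ in range(trials):
        total = 0.0
        for _ in range(n_outages):
            start_hour = random.randrange(24)
            total += hourly_revenue[start_hour] * minutes_each / 60
        costs.append(total)
    mean = sum(costs) / len(costs)
    p95 = sorted(costs)[int(0.95 * len(costs))]
    return mean, p95

# Same ~105 minutes of yearly downtime, two very different risk profiles.
print("1 x 105 min:", outage_cost(1, 105))
print("21 x 5 min :", outage_cost(21, 5))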

Simply put, companies that face a high fixed cost to restart their main business process should prefer fewer but longer outages. A good example is a physical process, such as a rolling mill, with high working temperatures and supply chains involving thousands of metric tons. Companies whose outage costs increase with outage duration should prefer more but shorter outages. A good example is an ATM service, where a short glitch just makes end users retry, but hours of interrupted service on payday would make them switch to another bank. In the paper, these phenomena are modeled mathematically and an optimal outage length is derived.
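
As a hedged illustration of how such an optimum can arise (a toy cost model, not necessarily the one used in the paper): suppose the SLA fixes the total yearly downtime T, each outage incurs a fixed restart cost F, and the loss from a single outage of duration d grows faster than linearly, say c*d^2. With n = T/d outages, the total cost is C(d) = (T/d)*F + (T/d)*c*d^2 = T*F/d + T*c*d, which is minimized at d* = sqrt(F/c). A large restart cost F (the rolling-mill case) pushes d* up, towards fewer but longer outages, while a steeply duration-dependent loss, i.e., a large c (the ATM case), pushes d* down, towards more but shorter outages.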

Title and author(s) of the original paper in IEEE Xplore:
Title: Optimal IT Service Availability: Shorter Outages, or Fewer?
Author: Ulrik Franke
This paper appears in: IEEE Transactions on Network and Service Management
Issue Date: March 2012

4. Leveraging Local Image Redundancy for Efficient Virtual Machine Provisioning

Image-based provisioning provides a fast and reliable mechanism for handling the demands of Cloud Computing. Typically, a Cloud data center contains a catalog of images in the image library, multiple hypervisors with inexpensive direct attached storage (where the instances are created), and a placement mechanism that allocates and reserves resources. Image-based provisioning is a deployment and activation mechanism that clones a “golden” read-only virtual machine (VM) image residing in the image library to create a new virtual machine instance on a hypervisor. The main steps of the provisioning process are: 1) selection of a hypervisor based on a placement policy; 2) copying of the VM image from a storage server to the direct attached storage of the hypervisor; and 3) image activation to create an instance. The image copy from the storage server to the direct attached storage of the hypervisor is time consuming and network intensive, contributing directly to user-perceived provisioning latency.

This article proposes a mechanism that reduces network bandwidth requirements by efficiently selecting a hypervisor for placement and reconstituting the required image from content already available on its local storage. The proposed system leverages virtual machine image similarity and provisioning frequencies to reduce the volume of data transferred from the storage server to the hypervisor on which the virtual machine is being instantiated. Several situations lead to a significant degree of image similarity: two different images may be created from the same base image, two images may have the same middleware or applications installed, or users may modify configurations and recapture images for later provisioning. In such situations, there are clusters of blocks that are identical across images in the library, and the blocks within a cluster may even be non-contiguous. The image redundancy information is used to supplement capacity-based placement by exploiting the overlap with images already present on direct attached storage to reconstitute a virtual image. The algorithm is implemented in a testbed and also validated using extensive discrete-event simulations based on a library representative of a typical Cloud provider’s catalog. An analytical model and simulations measure the impact of the degree of image similarity, system utilization, hypervisor capacity, and image provisioning frequencies on the expected gain. The system achieves up to an 80% reduction in the amount of data transferred from the storage server to hypervisors and is especially effective for large and highly utilized hypervisor clusters.
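
A minimal sketch of the placement idea (simplified; the block “hashes”, catalog contents, and scoring rule are assumptions, not the authors' implementation): each image is treated as a set of content-addressed blocks, hypervisors with enough free capacity are scored by how many of the requested image's blocks they already hold locally, and only the missing blocks are fetched from the storage server.

def missing_blocks(image_blocks, local_blocks):
    """Blocks of the requested image not already on the hypervisor's local storage."""
    return image_blocks - local_blocks

def place(image_blocks, hypervisors, vm_size_gb):
    """Pick the hypervisor that minimizes the data to transfer, among those with capacity."""
    candidates = [h for h in hypervisors if h["free_gb"] >= vm_size_gb]
    if not candidates:
        raise RuntimeError("no hypervisor has enough capacity")
    return min(candidates, key=lambda h: len(missing_blocks(image_blocks, h["local_blocks"])))

# Toy catalog: block "hashes" are plain strings; a real system would use content digests.
golden_image = {"base1", "base2", "mw1", "app1"}
hypervisors = [
    {"name": "hv-a", "free_gb": 100, "local_blocks": {"base1", "base2", "mw1"}},  # sibling image
    {"name": "hv-b", "free_gb": 500, "local_blocks": {"base1"}},
    {"name": "hv-c", "free_gb": 10,  "local_blocks": set(golden_image)},          # full copy, no room
]
target = place(golden_image, hypervisors, vm_size_gb=40)
print(target["name"], "still needs", missing_blocks(golden_image, target["local_blocks"]))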

Title and author(s) of the original paper in IEEE Xplore:
Title: Leveraging Local Image Redundancy for Efficient Virtual Machine Provisioning
Author: Andrzej Kochut and Alexei Karve
This paper appears in: Proc. IEEE/IFIP Network Operations and Management Symposium (NOMS 2012)
Issue Date: April 2012

5. Using Global Content Balancing to Solve the Broadband Penetration Problem in the Developing World: Case Study, India

This very interesting article is unusual in that it successfully combines technical and business aspects to explain why the general expectation that growing broadband penetration is an inexorable force may not hold in all cases. Readers in the developed world (e.g., the US and EU) and/or in countries that have made broadband deployment a societal goal (e.g., S. Korea) have seen broadband availability, and the consequent growth of the internet-based economy (e.g., online newspaper readership), steadily increase over the past decade. The growth may have been top-down and rapid (as in S. Korea) or market-driven and in fits and starts (as in the US), but the growth has taken place.

This article explains how the high cost of access to non-local content diverts available OPEX into long-haul bandwidth, leaving less for the development of local access networks. This in turn chokes off growth by reducing revenue, which depends to some extent on broadband access, resulting in a vicious circle that limits overall network growth. The model is explained in the context of India, but it is easy to see how the same dynamic may occur elsewhere as the developing world opens up to the global internet. The authors present a content balancing approach (essentially local caching in the form of data-center roll-out) to increase the proportion of funding available for access network growth, hoping this can trigger a virtuous circle of overall network growth. The article discusses various details, such as pricing strategies and expense models, to justify the results.
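
To illustrate the feedback loop the authors describe (a toy budget model with made-up numbers, not the paper's expense model): if a fixed OPEX budget is split between long-haul transit and access build-out, and next year's budget tracks the subscriber base, then raising the local cache hit rate frees budget for access growth and compounds over time.

def simulate(cache_fraction, years=5):
    """Toy model: a fixed share of OPEX pays for long-haul transit of non-local traffic,
    the remainder funds access build-out, and revenue (hence next year's OPEX) tracks subscribers."""
    subscribers, opex = 1.0, 1.0                             # arbitrary starting units
    for _ in range(years):
        transit_cost = 0.6 * opex * (1 - cache_fraction)     # assumed transit cost share
        access_budget = opex - transit_cost
        subscribers *= 1 + 0.3 * access_budget / opex        # assumed growth per unit of investment
        opex = 0.8 * subscribers                             # next year's budget follows revenue
    return subscribers

print("no caching   :", round(simulate(0.0), 2))
print("70% local hit:", round(simulate(0.7), 2))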

Although this article uses the Indian experience as a case study, it is an interesting proxy for potential developments in the rest of the developing world, in places where direct government intervention in broadband development does not take place. Of course, if a government with sufficient money and direct control over policy (e.g., China) decides broadband is a priority, then this issue may not arise. But where investment dollars are allocated based on market forces and public discussion, the sort of deadlock described in this article may very well arise again (e.g., in Africa or the Middle East). If this experiment succeeds, it could create a model for broadband deployment in other places. If it fails, it may well indicate that governments, including those of democratic societies, need to take a less market-driven and more strategic approach to broadband deployment. If this is not done, the globe may well be on its way to another form of haves and have-nots, this time in the domain of internet connectivity: a global digital divide. In either case, the outcome should be watched with interest by communications professionals, since it is likely to impact all of us one way or another.

Title and author(s) of the original paper in IEEE Xplore:
Title: Using Global Content Balancing to Solve the Broadband Penetration Problem in the Developing World: Case Study, India
Author: Ashwin Gumaste, Prasad Gokhale, Tamal Das, M. K. Purohit and Peeyush Agrawal
This paper appears in: IEEE Communications Magazine
Issue Date: May 2012

Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not the IEEE nor the IEEE Communications Society.
