Welcome to the Media Center, where you can find the latest original video content from ComSoc's conferences and events. Featuring keynote speakers, executive forums, workshops, industry panels, and much more from ComSoc's events, including the IEEE Global Communications Conference (GLOBECOM) and the IEEE International Conference on Communications (ICC). These videos bring insights to you when you need them. Your ComSoc membership offers free access to much of this valuable content; simply log in with your IEEE account.
IEEE members and non-members can purchase videos after logging into their IEEE account. If you do not have an IEEE account, click 'Create Account' to create a FREE account and make a purchase.
Augmented reality user experiences are becoming more available to consumers through a diverse set of devices of various form factors, such as heads-up displays, holographic displays, head-mounted devices, and handheld devices. Today, nearly 80% of the world's population uses the compute power and connectivity of their smartphone to access the internet and connect to their network of people and things at any time of day, anywhere in the world. AR and spatial computing technology offer the possibility for users to be proactively presented with additional information about the world (places, things, life) around them, without an explicit, directed human query. In particular, the possible evolution of optical glasses worn by humans to include AR has the potential of eventually becoming the device of choice for the majority of users as their source of infotainment, social connection, education, commerce, health, and other needs. While there are many issues to be solved to make this potential a reality, in this talk we will go through some of the considerations and challenges technologists must meet to make AR glasses a platform for users to enjoy AR experiences ubiquitously. We will explore the various system-level optimizations needed to deliver an intuitive, immersive, always-on AR experience everywhere. These system-level optimizations include power-consumption considerations on the AR glasses, plus the connections to any companion devices on the person and to the cloud, where ultimately most of the information needed by the user lies.
This session will discuss how open-architecture technologies, such as OpenRAN, will be a key enabler of this digital transformation of European economies and societies, allowing network operators to source RAN equipment from a more diverse range of general-purpose processor hardware, software, and radio antenna vendors, each specialising and competing in different parts of the RAN supply chain. OpenRAN also enables networks to be operated in entirely new ways; for example, network automation will drive operational innovation and efficiencies. The disaggregation of the software and hardware layers brings additional flexibility to network operations, allowing new features and capabilities to be introduced simply via software upgrades and enabling the delivery of flexible, high-quality services tailored to customers' specific needs.
TCP/IP is not secure; a fundamental change is required. Single-owner environments (VPNs and firewalls) do not support shared operations and devices. This presentation will examine the fundamental weaknesses of TCP/IP and why we cannot fix the existing infrastructure. It will also consider the protocol requirements for a TCP/IP replacement, taking security and efficiency into account, and how digital rights can be defined, managed, and protected.
Edge devices collect massive amounts of data, opening up new potential for machine learning applications. Machine learning at the edge can benefit from exploiting both data and processing power distributed across many wireless devices, but this brings many new challenges, including the low-latency requirements of learning applications, privacy concerns that prevent data sharing, and the impact of noise and interference on the convergence of the learning process. Overcoming these challenges while meeting the requirements of the machine learning tasks calls for a new paradigm of semantic-oriented communication network design tailored for learning applications. In this talk, I will present recent results on efficient distributed inference and training over wireless networks, taking into account channel impairments and the power and bandwidth limitations of wireless devices, as well as the semantics of the underlying learning tasks. This will involve bringing together novel communication and coding techniques with distributed learning and inference algorithms.
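As a rough illustration of training under channel impairments, the following sketch (my own, not from the talk) runs distributed gradient descent in which the aggregated gradient reaches the server corrupted by additive channel noise, yet the learning process still converges. All data sizes and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: learn w_true from data scattered across K wireless devices.
K, d, n_local = 10, 5, 20
w_true = rng.normal(size=d)
data = []
for _ in range(K):
    X = rng.normal(size=(n_local, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    data.append((X, y))

w = np.zeros(d)
lr, noise_std = 0.05, 0.05  # assumed step size and channel-noise level

for step in range(200):
    # Each device computes a local gradient of its squared loss.
    grads = [X.T @ (X @ w - y) / len(y) for X, y in data]
    # Aggregation over the wireless channel: the server receives the
    # averaged gradients plus additive noise (a crude channel model).
    g = np.mean(grads, axis=0) + noise_std * rng.normal(size=d)
    w -= lr * g

print(np.linalg.norm(w - w_true))  # small residual despite channel noise
```

The point of the sketch is the qualitative one made in the abstract: moderate channel noise perturbs each update but does not prevent convergence, which is why communication and learning can be co-designed rather than treated separately.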
A burgeoning second quantum revolution promises powerful applications of quantum mechanical phenomena discovered and understood throughout the last century. While the biggest impacts seem confined to an undetermined future time frame, some quantum technologies are achieving maturation. We ask a panel of experts about the current and near-term applications of quantum technologies in information and sensing.
COVI-COM intends to leverage technological advancements and techniques in communications and AI to address disruptive as well as routine challenges arising from the global COVID-19 pandemic. The present-day world urgently needs resilient and sustainable solutions that can address the challenges arising from the mandated need for periodic sanitization, intermittent quarantines and lockdowns, and social distancing. This workshop aims to mobilize the global communications and networking community to enable long-term solutions for alleviating the social and economic constraints put in place to battle COVID-19 infections and arrest their spread. The impact of these solutions is projected to be long-term: with no definite cure or treatment in sight, human society is expected to adapt to the new social norms of intermittent lockdowns, quarantine, and social distancing. This workshop encourages the use of machine learning, data mining, network science, communication technologies, and similar techniques to counter the challenges arising from the present COVID-19 pandemic.
Security issues in Internet communication tend not to be subtle mathematical flaws in the cryptography, but instead, broader system issues. For example, humans using the Internet. We have lovely cryptography. We have certificates. We have great protocols for doing authentication. But does that really assure a human that they are talking to what they think they are talking to? What about authenticating people? What kinds of names should people have, so that the name is unique, and someone that wants to talk to a human will know what name to use? What about distributed systems that are provably correct, provided that all the components are doing what they are supposed to be doing, but do not work correctly if some components misbehave? How can we design systems that will be robust despite misbehaving participants? Will digital signatures on data assure us that data that we read on the Internet is true? Is the simple answer to everything that we should blame users if things go wrong, and just complain that users need more training? (hint…no) Or maybe using blockchain everywhere will make everything secure? (hint…no)
At present the O-RAN architecture provides a promising foundation for an open-RAN ecosystem, where, based on the defined functional splits (CU, DU, RU), a multi-vendor solution can theoretically be achieved. This so-called "Wave 1.0" 5G is capable of only basic (coarse) virtualization, while introducing the essential interfaces that enable an open ecosystem: E2 for the control of CU/DU/RU, and A1, O1, and O2 for policy-based management, network configuration, and monitoring. Based on IS-Wireless's analysis and experience (also as an O-RAN member), this state of the art should be upgraded to what we call open-RAN Wave 2.0, in order to allow greater flexibility of functional splits and to better address the challenges of ultra-dense networks. Flexibility of functional splits is essential to adapt open-RAN-based networks to existing infrastructure capabilities, covering not only fronthaul but also midhaul interfaces. Fronthaul here refers mainly to splits beyond option 6, especially the O-RAN 7.2 split, which requires a certain level of capacity that may even quadruple with split 7.1. In the midhaul, e.g. where the CU-CP with the RIC (RAN Intelligent Controller), core, MEC, and application servers are located, infrastructure capacity can also vary. With highly granular network functions packaged as VNFs/CNFs (virtual machines or containers) and a multitude of split options, it becomes easier to tailor the deployment of an open-RAN network to the available fronthaul and to optimize hardware and network costs. Moreover, it is then more convenient to orchestrate such "workloads" (i.e. 5G radio stack functions) across the edge-cloud continuum, including edge micro data centers. In this way, multiple split association types can also be achieved naturally, e.g. split per slice, per UE, or per bearer.
The underlying compute resources can also be utilized more efficiently, as particular workloads can be fitted to a variety of acceleration cards (GPU, FPGA, SmartNIC) or processor architectures (x86, ARM). Such fine-grained, highly composable (orchestrated), disaggregated open RAN can eventually be called open-RAN Wave 2.0, as it enables higher capacities for network operators aiming to address the challenges of ultra-dense networks. Efficient data-driven resource management (of both radio and compute resources), together with novel paradigms such as cell-free (or distributed cell-free massive MIMO) architectures, becomes more straightforward to implement with such improved open-RAN architectures.
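To illustrate how split selection might be tailored to available fronthaul capacity, here is a hypothetical sketch. The split numbering follows common 3GPP/O-RAN terminology, but the bandwidth figures and the `pick_split` helper are made-up illustrations, not values from any specification.

```python
# Illustrative only: split names follow 3GPP/O-RAN numbering, but the
# bandwidth figures below are placeholder assumptions, not spec values.
FRONTHAUL_GBPS_NEEDED = {
    "7.1": 100.0,  # most demanding low-layer split (assumed figure)
    "7.2": 25.0,   # O-RAN's chosen low-layer split (assumed figure)
    "6":   10.0,   # MAC/PHY split (assumed figure)
    "2":   1.0,    # PDCP/RLC split, midhaul-like demand (assumed figure)
}

def pick_split(available_gbps, preference=("7.1", "7.2", "6", "2")):
    """Return the most centralized split the available fronthaul can carry."""
    for split in preference:
        if FRONTHAUL_GBPS_NEEDED[split] <= available_gbps:
            return split
    return None  # even the lightest split does not fit

# Different cells (or slices, UEs, bearers) can be assigned different
# splits, matching the per-slice/per-UE association the text mentions.
print(pick_split(30.0))  # -> "7.2"
print(pick_split(5.0))   # -> "2"
```

The design point is simply that once radio-stack functions are packaged as VNFs/CNFs with multiple split options, an orchestrator can make this kind of capacity-aware choice per deployment rather than fixing one split network-wide.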
5G and Mobile Private Networks are enabling the digital transformation of manufacturing and the factories of the future. It is essential that network operators build 5G right so that it delivers all the key enablers for private networks in Industry 4.0. This talk reviews the key characteristics of Mobile Private Networks and presents real use cases of how companies are now adopting 5G technology to reduce manual processes and enable the highly efficient, connected, and flexible factories of the future.
In the last few years a variety of players have entered the quantum race, ranging from tech giants such as IBM and Google to several small start-up companies, as well as states and governments, with massive public funds to be distributed over the coming years. Standardization efforts are already ongoing, such as the one within the Internet Engineering Task Force (IETF), and the IEEE has become involved in this effort. Within the context of a true quantum revolution, the vision is to build a quantum network infrastructure, also known as the Quantum Internet, to interconnect remote quantum devices so that quantum communications among them are enabled. We will give an overview of the main challenges and open problems arising in the design of a distributed quantum computing architecture. Quantum computing is on the verge of sparking a paradigm shift: software reliant on this nascent technology, one rooted in the physical laws of nature, could soon revolutionize computing forever. We will focus on current quantum computer technology from the hardware and software points of view, providing a detailed roadmap for the coming years. In this context, we will describe specific integrations of classical and quantum computing that represent a major step in accelerating the execution of quantum circuits, i.e. sequences of quantum operations, on real quantum systems.
Lifted by the network-automation mega-trend, a third wave of autonomous computing and networking technology development is rising across the ICT industry. Multiple initiatives from Standards Development Organizations (SDOs), large open source projects, preeminent industry actors, and renowned academic research teams have been launched in recent years and continue to emerge. This phenomenon deserves careful consideration if one wants to avoid the same disillusion as previous attempts at making autonomous networks a reality. While the theoretical and applied research corpus has grown extensively, the real-world, large-scale adoption of autonomous networks has been, in contrast, relatively limited and disappointing. Since autonomous networks continue to fascinate researchers and engineers as a technological area full of potential and promise, the goal of this panel is to take a reality check on where we stand regarding the maturity of autonomous network technologies and what challenges the industry should collectively address to ensure that the promises are met.
Edge computing, as an evolution of cloud computing, brings application hosting from centralized data centers down to the network edge, closer to consumers and the data generated by applications. It is acknowledged as one of the key pillars for meeting the demanding 5G Key Performance Indicators, especially as far as low latency and bandwidth efficiency are concerned. Moreover, edge computing also plays an essential role in the transformation of the telecommunications business, where telecommunications networks are turning into versatile service platforms for industry and other specific customer segments. ETSI ISG MEC is the home of technical standards for edge computing. The group has already published a set of specifications and reports offering fully standardized solutions to support IoT applications in distributed clouds. The emphasis of this talk is the MEC features in support of IoT use cases and requirements, as well as the integration of MEC with the 5G system and the expansion of MEC to edge federation.
Today's mobile phones are far from the mere communication devices they were just fifteen years ago. Equipped with sophisticated sensors and advanced computing hardware, phones can be used to infer users' location, activity, social setting, and more. As devices become increasingly intelligent, their capabilities evolve beyond inferring context to predicting it, and then reasoning and acting upon the predicted context. Information about users' behaviour can also be gathered by means of wearables and IoT devices, as well as by sensors embedded in the fabric of our cities. Inference is not limited to physical context and activities: in recent years mobile phones have increasingly been used to infer users' emotional states. The applications of these techniques are numerous, from positive behavioural intervention to more natural and effective human-device interaction. In this talk, I will discuss the work of my lab in the area of mobile sensing for modelling and predicting human behaviour for social good. I will also discuss our research directions in the broader area of modelling human behaviour and social systems, outlining the open challenges and opportunities.
Federated Learning (FL) and Multi-agent Reinforcement Learning (MARL) are two emerging machine learning paradigms for future intelligent wireless IoT and networked systems. FL is a data-driven supervised machine learning setting in which a central server trains a learning model using remote devices (e.g., sensors, user devices). On the other hand, decentralized MARL schemes, which are based on the interactions of learning agents with their environment, provide suitable frameworks for solving decision and control problems given the heterogeneity of IoT systems. In this talk, I shall discuss example applications, as well as the challenges of employing FL and MARL methods in resource-constrained and unreliable wireless IoT systems and networks. I shall present an FL algorithm that is suitable for a resource-constrained wireless access network, and also a MARL method for a practical wireless edge computing environment. To this end, I shall discuss several of the key open research issues.
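The FL setting described above can be illustrated with a minimal federated-averaging (FedAvg-style) sketch on a toy linear-regression task. This is my own illustration of the general paradigm, not the algorithm presented in the talk; all sizes and rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy FedAvg: a central server coordinates K devices, each holding
# private data for the same regression task. Only model weights are
# exchanged; raw data never leaves a device.
K, d, n_local = 8, 4, 30
w_true = rng.normal(size=d)
clients = []
for _ in range(K):
    X = rng.normal(size=(n_local, d))
    y = X @ w_true + 0.05 * rng.normal(size=n_local)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """Device-side training: a few gradient steps on local data only."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

w_global = np.zeros(d)
for rnd in range(20):  # communication rounds
    # Each device trains locally; the server then averages the models.
    local_models = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)

print(np.linalg.norm(w_global - w_true))  # converges close to w_true
```

The sketch captures the key trade-off the talk targets: more local epochs per round reduce communication (important for resource-constrained wireless access), at the cost of local models drifting apart between averaging steps.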
This talk is an attempt to answer the question "How can intelligent machines communicate efficiently?", which is one of the main goals of so-called "Semantic Communication". I will present joint work with Daniel Bennequin that shows our progress towards a mathematical theory of semantic communication, inspired by the foundational works of Claude Shannon and Alexander Grothendieck. To communicate efficiently we need a language, and this language is intimately related to the goal or task that the semantic source has to fulfil. The second part of the presentation will be devoted to the Carnap and Bar-Hillel language. Using this example, it will be shown why a semantic information measure cannot be a scalar quantity but must instead be a space. We will give some intuition on the construction of such spaces. Finally, we will propose both semantic source coding and semantic channel coding theorems.
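For context, the classical Carnap and Bar-Hillel measures (the scalar quantities whose limitations the talk argues against) can be computed over a toy propositional language: a statement carries more information the more "possible worlds" it rules out. This is an illustrative sketch of those textbook definitions, not material from the talk.

```python
from itertools import product
from math import log2

# A tiny propositional language: each possible world assigns True/False
# to a fixed set of atoms.
ATOMS = ("p", "q", "r")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=3)]

def cont(statement):
    """Carnap/Bar-Hillel content: fraction of worlds the statement excludes."""
    holds = sum(1 for w in WORLDS if statement(w))
    return 1 - holds / len(WORLDS)

def inf(statement):
    """Carnap/Bar-Hillel information: -log2 of the fraction of worlds where it holds."""
    holds = sum(1 for w in WORLDS if statement(w))
    return -log2(holds / len(WORLDS))

p = lambda w: w["p"]
p_and_q = lambda w: w["p"] and w["q"]

print(cont(p), inf(p))              # 0.5, 1.0 bit
print(cont(p_and_q), inf(p_and_q))  # 0.75, 2.0 bits
```

Both measures collapse a statement's meaning to a single number; the talk's thesis is that an adequate semantic measure must instead be a space.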
This industry keynote is on "Convergence and Disaggregation: A Networking Systems Perspective." Bio: As vice president of Strategy and CTO for the ION Division at Nokia, Stephen is responsible for looking at the road ahead and determining which new projects and emerging markets the company should consider for investing its resources. Relying on a background that includes more than 20 years' experience in the telecom and networking industries, he helps develop road maps for the company so it can seize the opportunities that are the best fit. Some of his key successes in this role include driving investments in Packet Core and CDN technology, as well as being an early advisor on the company's work in NFV and SDN.
It is undeniable that artificial intelligence and machine learning algorithms are at the heart of a fast-growing number of new telecommunication technologies. With our panel of experts, we will explore a variety of IP protection strategies that include copyrights, trade secrets and patents, as well as their applicability to data and AI technologies.
With 5G, and complementary technologies such as AI, gaining momentum in vertical industries, and with research on them ongoing, it becomes increasingly important to understand how they are being applied across multiple sectors of industry and society. Insights can often be gained in environments with multiple vertical-ecosystem participants that may have differing perspectives, and these insights may orient part of the next stage of research. Initiatives such as Networld Europe, 5GPPP, PAWR, ENCQOR and others offer opportunities to apply new technologies in many areas, foster the participation of SMEs, and link with research initiatives in various verticals.
This industry keynote is on the Internet Needs More Engineering. Bio: Scott is the Head of the Canadian Centre for Cyber Security. The Cyber Centre is the single unified source of expert advice, guidance, services, and support on cyber security for government, critical infrastructure owners and operators, the private sector, and the Canadian public. Scott began his career at the Communications Security Establishment (CSE) in 1999 and has held various positions, including Assistant Deputy Minister of IT Security, acting Assistant Deputy Minister of Corporate Services and Chief Financial Officer, Director General of Cyber Defence, and a variety of positions of increasing responsibility across CSE, primarily in the Signals Intelligence and IT Security domains. He previously worked at the Privy Council Office as a National Security Policy Advisor in the Security & Intelligence Secretariat. Scott holds a Bachelor of Applied Science in Electronic Systems Engineering, a Bachelor of Science in Computer Science, and a Master of Business Administration.