Welcome to the Media Center, where you can find the latest original video content from ComSoc's conferences and events. Featuring keynote speakers, executive forums, workshops, industry panels, and much more from ComSoc's events, including the IEEE Global Communications Conference (GLOBECOM) and the IEEE International Conference on Communications (ICC), these videos bring insights to you when you need them. Your ComSoc membership offers free access to much of this valuable content; simply log in with your IEEE account.
IEEE members and non-members can purchase videos after logging into their IEEE account. If you do not have an IEEE account, click "Create Account" to create a FREE account and make a purchase.
Non-terrestrial networks (NTN) are complementary to terrestrial networks and are expected to provide eMBB services in areas where terrestrial coverage is limited. In recent years, the non-terrestrial industry has been evolving at an unprecedented speed. Several mega-constellations have been initiated or are planned for the coming years, as the costs of launching rockets and rolling out satellites have fallen dramatically, making it possible to build mega Low Earth Orbit (LEO)/Very Low Earth Orbit (VLEO) constellations. The emergence of low-cost mega LEO/VLEO constellations is likely to be a game changer, since (i) their lower orbit altitude ensures a latency close to that of a global cellular network and (ii) the scale of the constellation provides much higher area capacity than a traditional satellite network. Both are crucial to providing eMBB services. The integration of NTN into terrestrial cellular networks is another important aspect of achieving truly global coverage, since it facilitates seamless roaming between cellular and non-terrestrial networks with a single device. The above-mentioned low latency, high capacity, and seamless roaming will jointly contribute to a better user experience. However, a set of enablers is needed to realize the benefits of LEO/VLEO constellations. In this panel, we invite experts from both industry and academia to shape the future of integrated mega-constellation-based networks, take a deep dive into the pain points of existing solutions, and identify potential research directions.
This talk starts from the philosophy of telecommunications using the classical Shannon channel capacity, discussing all elements related to the capacity, followed by a brief description of methods to achieve capacities above 1 Tbps and the related constraints. The talk then moves to the case where the transmission medium is extended to the quantum channel, followed by the use of the quantum bit (qubit) for both quantum communications and quantum-based security. Recent results of the study in the International Telecommunication Union (ITU) on International Mobile Telecommunications (IMT)-2030 are also introduced, especially on the topics of artificial intelligence/machine learning (AI/ML), quantum radio access networks, and quantum key distribution. The talk then presents the basic concepts of quantum communications/computing and AI/ML, such as the quantum circuit, quantum channels, and types of quantum error correction codes. It also reviews our currently developed quantum channel coding schemes: the quantum [[5,1,3]] perfect codes based on the accumulator and the newly proposed quantum [[12,2,4]] accumulator codes with a higher qubit rate, followed by their simulated and theoretical quantum word error rate (QWER) performance. Both codes are syndrome-based and belong to the class of non-Calderbank-Shor-Steane (non-CSS) codes. The talk also discusses, in brief, the contribution of entanglement to the success of quantum multiple access channels, which is expected to be useful for Radio Access Networks (RAN). Finally, the talk closes with quantum physical layer security and shows an example of quantum key distribution (QKD) for communications between Alice and Bob and their measurements using the quantum gates X and Z.
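For context, the classical Shannon capacity that the talk takes as its starting point is, for a band-limited additive white Gaussian noise channel,

$$C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits/s},$$

where $B$ is the bandwidth in Hz, $S$ the received signal power, and $N$ the noise power. This makes the constraints on reaching capacities above 1 Tbps concrete: they demand some combination of very wide bandwidth, high signal-to-noise ratio, and parallel spatial streams.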
This lecture starts by addressing common myths about data and Artificial Intelligence that explain why knowledge acquisition and knowledge modeling should be a technology consideration. The common myths are: 1) a database is enough to store any type of information; 2) once a database grows beyond 10 TB, one must migrate to Big Data; 3) Data Science is Artificial Intelligence; and 4) adopting data-driven methods and Artificial Intelligence is enough. A discussion of the differences between data, information, and knowledge brings the audience to a common definition of all three. The lecture then delves into taxonomy, ontology, relations, and knowledge graphs, and highlights the difference between ordinary search engines and semantic search engines. Finally, the lecture explains the connection between Natural Language Processing and Knowledge Acquisition.
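As a loose illustration of the lecture's distinction between raw data and modeled knowledge, the sketch below (with entirely invented example facts) represents knowledge as subject-predicate-object triples and answers a query by following relations rather than matching keywords, which is the essence of what separates a semantic search engine from an ordinary one:

```python
# Knowledge as subject-predicate-object triples: the building block
# of a knowledge graph. All facts here are invented for illustration.
triples = [
    ("Router", "is_a", "NetworkDevice"),
    ("Switch", "is_a", "NetworkDevice"),
    ("NetworkDevice", "is_a", "Hardware"),
]

def is_a(entity, category):
    """Semantic query: follow 'is_a' edges transitively, so the answer
    comes from modeled knowledge, not literal string matching."""
    for s, p, o in triples:
        if s == entity and p == "is_a":
            if o == category or is_a(o, category):
                return True
    return False

print(is_a("Router", "Hardware"))  # True, inferred via NetworkDevice
```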
Massive Ultra-Reliable and Low-Latency Communications (mURLLC), which integrates URLLC with massive access, is emerging as a new and important service class in the next generation (6G) for time-sensitive traffic and has recently received tremendous research attention. However, realizing efficient, delay-bounded, and reliable communications for a massive number of user equipments (UEs) in mURLLC is extremely challenging, as it requires simultaneously taking into account the latency, reliability, and massive-access requirements. To support these requirements, the Third Generation Partnership Project (3GPP) has introduced enhanced grant-free (GF) transmission in the uplink (UL), with multiple active configured grants (CGs) for URLLC UEs. With multiple CGs (MCG) in the UL, a UE can use any of these grants as soon as data arrives, while with a single CG (SCG), a UE must wait for the next CG occasion to transmit the packet. In addition, non-orthogonal multiple access (NOMA) has been proposed to synergize with GF transmission to mitigate the serious transmission-delay and network-congestion problems. However, in the GF-NOMA scheme, data is transmitted along with a randomly chosen pilot, which is unknown at the base station (BS) and leads to new research problems. In this talk, machine learning (ML) approaches for mURLLC systems will be presented. Promising research directions and possible ML solutions will also be discussed.
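To make the MCG/SCG difference concrete, here is a minimal simulation sketch under two illustrative assumptions (neither mandated by 3GPP): packets arrive uniformly in time, and multiple grants are evenly staggered within the period:

```python
import random

def avg_wait(period_slots, num_grants, trials=100_000):
    """Average number of slots a packet waits for the next configured
    grant occasion, assuming evenly staggered grants and uniform
    packet arrivals (both illustrative assumptions)."""
    grants = [i * period_slots / num_grants for i in range(num_grants)]
    total = 0.0
    for _ in range(trials):
        arrival = random.uniform(0, period_slots)
        # Wait until the next grant occasion, wrapping into the next period.
        total += min((g - arrival) % period_slots for g in grants)
    return total / trials

print(f"SCG (1 grant):  {avg_wait(10, 1):.2f} slots")   # ~5.00
print(f"MCG (4 grants): {avg_wait(10, 4):.2f} slots")   # ~1.25
```

With a single grant the expected wait is half the CG period, while four staggered grants cut it by a factor of four, which is the latency benefit the enhanced GF design targets.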
Deep learning is a type of machine learning that uses algorithms based on artificial neural networks. In the last decade, deep learning has shown promising results in many application areas, ranging from communication systems and signal processing to computer vision and natural language processing. One of the main drivers of the field's advancement and its numerous applications is the rise of simple application programming interfaces (APIs) for implementing deep learning algorithms, which has democratized the technology and its usage. PyTorch is one of the most popular Python libraries for experimenting with and developing deep learning algorithms. It is developed by Facebook and provides implementations of many state-of-the-art deep learning models. This short course covers the basic concepts, including tensors, automatic differentiation, and techniques for creating simple fully connected neural network layers. Students will then apply these concepts to create a simple neural network model known as the multilayer perceptron (MLP) to solve regression and classification problems. After this short course, students are expected to be able to use PyTorch to solve classification and regression modeling problems with tabular data.
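To give a flavor of what the course covers, here is a minimal PyTorch sketch of an MLP trained on synthetic tabular data; the layer sizes, optimizer, and toy dataset are illustrative choices, not the course's actual materials:

```python
import torch
import torch.nn as nn

# Minimal multilayer perceptron for tabular data (illustrative sizes).
class MLP(nn.Module):
    def __init__(self, in_features, hidden, out_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_features),
        )

    def forward(self, x):
        return self.net(x)

model = MLP(in_features=4, hidden=16, out_features=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()  # binary classification on toy data

X = torch.randn(64, 4)                        # random tabular features
y = (X.sum(dim=1, keepdim=True) > 0).float()  # synthetic binary labels

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()   # automatic differentiation computes the gradients
    optimizer.step()
```

Swapping the loss for `nn.MSELoss` and dropping the thresholded labels turns the same skeleton into a regression example.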
Deep reinforcement learning (DRL) has received much attention and has found successful applications in various important fields, including games, robotics, transportation, and science. Despite its continuing success, DRL still faces several major challenges, including accurate value function estimation, improved sample efficiency, and efficient practical implementation. In this talk, we will present our recent results on tackling these issues in DRL. (i) Using the Boltzmann softmax operator to improve single-agent DRL value function estimates. We show that properly incorporating the softmax operator in continuous control helps smooth the optimization landscape and leads to efficient policy search and optimization. We then present the Softmax Deep Double Deterministic Policy Gradient (SD3) algorithm, which effectively mitigates both overestimation and underestimation bias and outperforms state-of-the-art methods. (ii) Using regularization and softmax for efficient policy search in multi-agent RL (MARL). We first discover a gradient explosion issue suffered by existing methods, which severely affects value function estimation. We then propose RES, a novel softmax- and regularization-based update scheme that penalizes large joint action values deviating from a baseline, and demonstrate its effectiveness in policy learning. (iii) Applying DRL to sustainable computing applications. We develop highly scalable and efficient DRL algorithms for large-scale dockless bike sharing and network optimization problems, which significantly outperform state-of-the-art methods.
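For readers unfamiliar with the operator in (i), here is a minimal sketch of the generic Boltzmann softmax over action values (with made-up Q-values; this is not the full SD3 algorithm):

```python
import numpy as np

def boltzmann_softmax(q_values, beta):
    """Boltzmann softmax operator: a smooth alternative to max over
    Q-values. As beta -> inf it approaches max(q); as beta -> 0 it
    approaches mean(q). The smoothing tempers max-induced bias."""
    w = np.exp(beta * (q_values - np.max(q_values)))  # stabilized exponent
    p = w / w.sum()
    return float(np.dot(p, q_values))

q = np.array([1.0, 2.0, 3.0])
print(boltzmann_softmax(q, beta=10.0))  # ~3.0, close to the max
print(boltzmann_softmax(q, beta=0.1))   # ~2.07, close to the mean
```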
In the past decades, we have seen a drastic evolution of communication networks, from conventional homogeneous computer networks to advanced heterogeneous networks. As the demands on communication systems become more stringent, the problems faced by communication engineers, as well as the solutions to those problems, become more complex. We have seen more and more solutions based on artificial intelligence, machine learning, and deep learning in these systems, motivated by the great success of machine learning algorithms in supporting big data analytics, parameter estimation, and complex decision-making. This talk gives a brief overview of current trends in deploying machine learning algorithms to solve problems and challenges in communication systems. The talk is divided into three parts. First, a general introduction to machine learning and deep learning is given. The second part focuses on using machine learning algorithms to solve various problems in future wireless communication networks. Last but not least, the third part discusses using deep reinforcement learning and deep federated learning to support the operation and services of the Internet of Things (IoT).
This talk will discuss technical challenges and recent results related to deep learning/machine learning in 6G network optimization. The talk is divided into four parts. The first part introduces 6G, deep learning, and machine learning, discusses the 6G mobile network architecture, and outlines the main technical challenges of deep learning/machine learning in 6G network optimization. The second part focuses on the core issues of deep learning/machine learning in 6G network optimization and presents recent research findings that help develop engineering insights. The third part addresses resource allocation and MAC-layer design for deep learning/machine learning in 6G network optimization and identifies some key research problems. The last part summarizes the talk with a future outlook on deep learning/machine learning in 6G network optimization.
Reinforcement learning (RL) is an artificial intelligence approach that enables decision makers to learn and take appropriate actions in a dynamic and unpredictable operating environment. Compared to other artificial intelligence approaches, including supervised and unsupervised learning, RL is distinguished by learning through interaction with the environment. By receiving rewards (or penalties) from the environment, a decision maker can evaluate the appropriateness of its selected action in a particular environment, so no teacher or critic is required to tell it whether an action is appropriate. The fact that RL has outperformed human experts in various computer games, such as the Atari games, has drawn wide interest in exploring and exploiting RL to solve a diverse range of problems and enhance next-generation technologies. This talk covers the fundamental aspects of RL, including the Markov decision process formulation, state-of-the-art models and algorithms, simulation, and open issues. Ultimately, it guides participants in exploring the use of RL to solve the problems and issues at hand.
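As a concrete illustration of learning from rewards without a teacher, here is a minimal tabular Q-learning sketch on a toy Markov decision process; every environment detail is invented for illustration:

```python
import random

# Toy 5-state chain MDP: action 0 moves left, action 1 moves right.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES = 5
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def greedy(qs):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(qs)
    return random.choice([a for a, v in enumerate(qs) if v == best])

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

for _ in range(1000):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: explore occasionally, otherwise exploit.
        a = random.randrange(2) if random.random() < epsilon else greedy(Q[s])
        s2, r, done = step(s, a)
        # The environment's reward plays the role of the critic here.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])  # values grow toward the goal state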
Artificial Intelligence (AI) is an important subject area in computer science and engineering. This talk will introduce the past, present and future of AI and its role in networks, demonstrating its vibrancy through a variety of ideas in intelligence automation, cloud computing and machine learning for networks like the Internet, online social networks, social learning networks and autonomous transport networks.
Unlike previous-generation networks, which were mainly designed to meet the requirements of human-type communications, 5G networks enable the collection of data from machines, with the total number of devices expected to reach about 26 billion in 2026 according to the Ericsson Mobility Report. The next step, in 6G systems, is to enable a new spectrum of control applications based on these data, such as extended reality, remote surgery, and autonomous vehicle platoons. Designing communication systems for control applications requires meeting strict delay and reliability requirements while also addressing the semantics of the control systems. This can only be achieved with a heterogeneous network architecture, including terrestrial communication, satellites, UAVs, and underwater communication, and with higher frequencies, including mmWave, THz, and optical communications, in addition to sub-6 GHz transmission. Altogether, this increases the complexity of the networks while requiring them to adapt to various applications and networks. In the first part of this talk, AI-based communication techniques, technologies, and architectures are introduced by demonstrating the use of extreme value theory, federated learning, and reinforcement learning. In the second part, the fundamental shift away from the Shannon paradigm is introduced. While the Shannon paradigm aims to guarantee the correct reception of every single transmitted bit, irrespective of the meaning those bits convey, communication for control applications focuses on guaranteeing the success of the task execution, such as plant stability for automated production lines or detection accuracy in cooperative vehicle systems. Novel AI-based resource allocation techniques for the joint design of control and communication systems are presented.
In this talk, the speaker begins with a discussion of security and its classical properties, such as confidentiality, availability, and integrity. He then covers traditional security issues, attack planes, and potential impacts. After this introductory material, the speaker discusses physical-layer key generation, recent developments, and testbed-based results.
Cyber systems, including the Internet of Things (IoT), are increasingly being used ubiquitously to vastly improve operational efficiencies and reduce costs in critical areas such as finance, transportation, defense, and healthcare. Over the past two decades, dramatic improvements in computing efficiency and hardware costs have made today's economy ever more digitized. It is important to note that such widespread use of devices to provide various services has generated large amounts of rich user data that needs to be protected. Emerging trends in successful targeted cyber-system breaches show increasing sophistication, with most attacks using intelligence generated through the collection and integration of publicly available data. Such sophisticated attacks can only be thwarted by defense mechanisms that rely on specific actionable intelligence. Although more data from diverse sources is available, such data does not automatically translate into actionable intelligence. In fact, translating large quantities of such diverse datasets into actionable intelligence is a nontrivial process: it involves identifying and integrating useful pieces of information from large quantities of noisy and biased datasets. In this talk, we will discuss some useful deep learning techniques and the various challenges in generating actionable intelligence to thwart such sophisticated targeted attacks.
Quantum Key Distribution (QKD) is a method of key exchange between communicating entities whose security is guaranteed by the fundamental rules of quantum mechanics: any attempt at eavesdropping irreversibly changes the system's properties and can therefore be detected. With the rapid advancement of quantum computing, the security of existing cryptographic methods, which are based on computationally hard problems, will be at stake. In 1984, Charles Bennett and Gilles Brassard proposed the first QKD protocol, known as BB84. The Differential Phase Shift (DPS) and Coherent One-Way (COW) QKD protocols have gained popularity due to their high key rates and ease of practical implementation. Although QKD offers information-theoretic security, imperfections in devices can be exploited for side-channel attacks. The transmitter can be protected by using an isolator, while the Measurement-Device-Independent (MDI) QKD protocol eliminates detector-side side-channel attacks. The QKD nodes, commonly known as Alice and Bob, are connected through a quantum channel reserved for qubit transmission and a classical channel reserved for synchronisation and other post-processing tasks. Key generation involves multiple steps: classical channel authentication, qubit transmission and synchronisation, sifting, error correction, and privacy amplification. Standards development organisations such as ETSI and ITU-T are developing standards for QKD. However, existing QKD solutions are mostly vendor-proprietary with limited interoperability, so there is scope for developing open, standard interfaces among the different horizontal and vertical layers of the quantum communication network stack. Potential uses of QKD systems include strategic communication among government offices, military and defence networks, the healthcare and pharmaceutical sectors, blockchains, the banking, finance and insurance sectors, and telecommunication and IT networks.
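To illustrate the sifting step named above, here is a minimal BB84-style sketch, assuming an idealized noiseless channel with no eavesdropper (all variable names are invented for illustration):

```python
import random

# Basis 0 = rectilinear, 1 = diagonal. Alice sends random bits in
# random bases; Bob measures each qubit in his own random basis.
n = 20
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]
bob_bases   = [random.randint(0, 1) for _ in range(n)]

# Bob gets the correct bit when bases match, a random bit otherwise.
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: over the classical channel the two compare bases (not bits)
# and keep only the rounds where the bases matched.
sifted_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
              if ab == bb]
print(sifted_key)  # on average n/2 bits survive sifting
```

In a real run, error correction and privacy amplification would then follow to remove channel errors and any information leaked to an eavesdropper.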
Recent years have witnessed breakthrough research on quantum computers, which have the potential to break many classical and widely used cryptographic schemes, such as RSA, ECDSA, and ElGamal, among others. All such existing cryptosystems will need to be replaced by quantum-resistant protocols and appropriate architectures to provide the required services. The aim of this talk is to introduce post-quantum cryptographic systems, secure against both quantum and classical computers, which can be broadly classified into five categories: isogeny-based, hash-based, lattice-based, multivariate-based, and code-based cryptosystems. We shall discuss the underlying hard problem for each of these cryptosystems, along with their advantages and limitations. We shall also discuss the various efforts and initiatives currently under way towards the standardization of post-quantum cryptography by organizations such as NIST and ETSI, as well as towards a smooth migration to a quantum-safe world.
In this talk, the speaker begins with a discussion of ARM TrustZone-based mobile peripheral control. Practical attestation for edge devices running compute-heavy machine learning applications is then covered.
To facilitate smart applications such as intelligent manufacturing, energy-constrained devices are interconnected through bandwidth-constrained communication protocols to form the Internet of Things (IoT). Unfortunately, due to these constraints, IoT networks cannot employ conventional security protocols, which makes them vulnerable to security threats. On the one hand, IoT devices are vulnerable to illegal access and inference of sensitive information; on the other, their users are prone to spoofing attacks through which an attacker can feed malicious data to the user. This makes it indispensable to enhance the robustness of IoT networks, especially those used for security-sensitive applications. In this talk, I will discuss two specific networks: (1) the Controller Area Network (CAN), a representative wired IoT network connecting the different electronic control units within an automobile, and (2) the Bluetooth Low Energy (BLE) network, a representative wireless IoT network connecting wearables to the user’s smartphone. For both networks, I will describe newly discovered vulnerabilities that can be exploited by an attacker to launch an attack without being detected by the deployed security mechanisms. I will conclude by discussing the methodologies adopted to mitigate the discovered threats.
Machine learning (ML) and AI will play a key role in the development of 6G networks. Network virtualization and network softwarization in 5G networks can support data-driven, intelligent, and automated networks to some extent, and this trend will grow in 5G-Advanced networks. Radio access network algorithms and radio resource management functions can exploit network intelligence to fine-tune network parameters and reach close-to-optimal performance in 5G networks. In 6G networks, network intelligence is envisioned to be end-to-end, and the air interface is envisioned to be AI-native. User equipment (UE) devices will need to be smarter, environment- and context-aware, and capable of running ML algorithms. With these capabilities on end devices, federated learning is envisioned to be one of the promising solutions to the scalability and trust issues in distributed learning. This talk will focus on the main practical challenges in developing machine learning solutions for 5G use cases and on related 3GPP standardization activities, and will emphasize, with a case study, how deploying these solutions in a real network is much harder than theoretical performance evaluation suggests. Furthermore, the use of federated learning in wireless networks is motivated by example use cases, and the challenges of using federated learning solutions in 6G networks are explained.
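As a sketch of the core idea behind federated learning, the snippet below implements plain federated averaging (FedAvg) of client weight vectors; the numbers are invented, and real deployments add client sampling, secure aggregation, and compression:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: the server combines client model updates,
    weighted by local dataset size, without ever seeing the raw data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three UEs train locally and upload weight vectors (illustrative values).
updates = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
sizes = [100, 300, 600]
global_model = fed_avg(updates, sizes)
print(global_model)  # only model weights leave the devices, never data
```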
Neuromorphic computing moves beyond the neuronal abstraction adopted by conventional neural networks by taking inspiration from the dynamic, sparse, event-driven signaling and processing exhibited by biological neurons. This talk will first present an overview of the state of the art in neuromorphic computing, focusing on motivation, models, and the design of training algorithms. This will be done by distinguishing between deterministic and probabilistic models and by concentrating on principles and intuition. Then, a novel use case for neuromorphic computing in communications will be outlined, namely neuromorphic joint source-channel coding for remote inference over wireless channels. The talk will also discuss the current limitations of the technology and open problems.
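As a taste of the models involved, here is a minimal leaky integrate-and-fire neuron, one of the simplest spiking-neuron abstractions used in neuromorphic computing; the parameters are illustrative:

```python
import numpy as np

def lif(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: integrates input with leakage
    and emits a sparse, event-driven spike train."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:        # event-driven: spike only when the
            spikes.append(1)      # membrane potential crosses threshold
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
print(lif(rng.uniform(0, 0.5, size=20)))  # sparse binary spike train
```

The sparsity of the output is the point: computation and communication happen only at spike events, unlike the dense activations of conventional neural networks.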
Modulation identification and target classification are important functions for intelligent RF receivers. These functions have numerous applications in cognitive radar, software-defined radio, and efficient spectrum management. To identify both communications and radar waveforms, it is necessary to classify them by modulation type. For this, you can extract meaningful features and feed them to a classifier. While effective, this procedure can require significant effort and domain knowledge to yield an accurate identification. A similar challenge exists for target classification. In this workshop, we will demonstrate data synthesis techniques that can be used to train deep learning networks for a range of radar and communications systems, covering:
• Data pre-processing and waveform generation
• Developing a model from a pre-trained network (SqueezeNet) using the Deep Network Designer app
• Deep learning modeling
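The workshop itself uses MATLAB's Deep Network Designer app; as a rough cross-language analogue of the SqueezeNet transfer-learning step, here is a minimal PyTorch sketch (the class count and input shapes are invented):

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from an ImageNet-pretrained SqueezeNet and
# replace its classifier head to predict modulation classes from
# time-frequency images (e.g. spectrograms) of received waveforms.
num_classes = 8  # e.g. number of modulation types; illustrative

model = models.squeezenet1_1(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)

x = torch.randn(4, 3, 224, 224)   # batch of spectrogram "images"
logits = model(x)
print(logits.shape)               # torch.Size([4, 8])
```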