Welcome to the Media Center, where you can find the latest original video content from ComSoc's conferences and events. Featuring keynote speakers, executive forums, workshops, industry panels, and much more from ComSoc's events, including the IEEE Global Communications Conference (GLOBECOM) and the IEEE International Conference on Communications (ICC), these videos bring insights to you when you need them. Your ComSoc membership offers free access to much of this valuable content; simply log in with your IEEE account.
IEEE members and non-members can purchase videos after logging into their IEEE account. If you do not have an IEEE account, click 'Create Account' to create a FREE account and make a purchase.
Massive Ultra-Reliable and Low-Latency Communications (mURLLC), which integrates URLLC with massive access, is emerging as a new and important service class in next-generation (6G) networks for time-sensitive traffic and has recently received tremendous research attention. However, realizing efficient, delay-bounded, and reliable communications for a massive number of user equipments (UEs) in mURLLC is extremely challenging, as the latency, reliability, and massive-access requirements must be met simultaneously. To support these requirements, the 3rd Generation Partnership Project (3GPP) has introduced enhanced grant-free (GF) transmission in the uplink (UL), with multiple active configured grants (CGs) for URLLC UEs. With multiple CGs (MCG) in the UL, a UE can use any of these grants as soon as data arrives, whereas with a single CG (SCG), a UE must wait for the CG period to transmit the packet. In addition, non-orthogonal multiple access (NOMA) has been proposed to synergize with GF transmission to mitigate serious transmission delay and network congestion problems. In the GF-NOMA scheme, however, the data is transmitted along with a randomly chosen pilot that is unknown at the base station (BS), which leads to new research problems. In this talk, machine learning (ML) approaches for mURLLC systems will be presented. Promising research directions and possible ML solutions will also be discussed.
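To make the latency benefit of multiple grants concrete, the following is a minimal Monte Carlo sketch, not taken from the talk or the 3GPP specifications. It assumes grant occasions are equally spaced within the CG period, packet arrivals are uniformly random, and contention and NOMA collisions are ignored; the period T and grant counts are hypothetical values.

```python
# Illustrative sketch of uplink access delay with configured grants (CG).
# Assumptions (not from 3GPP): K grant occasions equally spaced in a period T,
# uniformly random packet arrivals, no contention or collisions.
import random

def mean_wait(period_ms: float, num_grants: int, trials: int = 100_000) -> float:
    spacing = period_ms / num_grants                 # gap between grant occasions
    total = 0.0
    for _ in range(trials):
        arrival = random.uniform(0.0, period_ms)     # packet arrival time
        total += spacing - (arrival % spacing)       # wait until the next occasion
    return total / trials

T = 10.0  # hypothetical CG period in ms
print(f"SCG (1 grant):  ~{mean_wait(T, 1):.2f} ms mean wait")   # ~ T/2
print(f"MCG (4 grants): ~{mean_wait(T, 4):.2f} ms mean wait")   # ~ T/8
```

Under these simplifying assumptions, the mean access delay shrinks roughly in proportion to the number of active grants, which is the intuition behind the MCG enhancement.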
Deep learning is a type of machine learning that uses algorithms based on artificial neural networks. In the last decade, deep learning has shown promising results in many application areas, including communication systems, signal processing, computer vision, and natural language processing. One of the main drivers of the field's advancement and its numerous applications is the rise of simple application programming interfaces (APIs) for implementing deep learning algorithms, which has democratized the technology and its usage. PyTorch, developed by Facebook, is one of the most popular Python libraries for experimenting with and developing deep learning algorithms, and it provides implementations of many state-of-the-art deep learning models. This short course covers the basic concepts, including tensors, automatic differentiation, and techniques for creating simple fully connected neural network layers. Students will then apply these concepts to create a simple neural network model known as a multilayer perceptron (MLP) to solve regression and classification problems. After this short course, students are expected to be able to use PyTorch to solve classification and regression modeling problems with tabular data.
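As a taste of the course material, here is a minimal PyTorch sketch, not the course's own code, that combines tensors, automatic differentiation, and fully connected layers to train a small MLP on a toy regression problem; the data and hyperparameters are purely illustrative.

```python
# Minimal MLP regression sketch in PyTorch (illustrative, not course code).
import torch
import torch.nn as nn

# Toy tabular data: y = 3*x0 - 2*x1 + noise
X = torch.randn(256, 2)
y = 3 * X[:, 0:1] - 2 * X[:, 1:2] + 0.1 * torch.randn(256, 1)

model = nn.Sequential(          # a simple multilayer perceptron (MLP)
    nn.Linear(2, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()             # automatic differentiation
    optimizer.step()

print(f"final MSE: {loss.item():.4f}")
```

Swapping the final layer's output size and the loss (e.g., `nn.CrossEntropyLoss`) turns the same skeleton into a classifier, which is the pattern the course applies to tabular data.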
In the past decades, we have seen a drastic evolution of communication networks, from conventional homogeneous computer networks to today's advanced heterogeneous networks. As the demands on communication systems become more stringent, the problems faced by communication engineers become more complex, and so do their solutions. We have seen more and more solutions in these systems based on artificial intelligence, machine learning, and deep learning, motivated by the great success of machine learning algorithms in supporting big data analytics, parameter estimation, and complex decision-making. This talk gives a brief overview of current trends in deploying machine learning algorithms to solve problems and challenges in communication systems. The talk is divided into three parts. First, a general introduction to machine learning and deep learning is given. The second part focuses on using machine learning algorithms to solve various problems in future wireless communication networks. Last but not least, the third part discusses using deep reinforcement learning and deep federated learning to support the operation and services of the Internet of Things (IoT).
This talk will discuss technical challenges and recent results related to deep learning and machine learning in 6G network optimization. The talk is divided into four parts. The first part introduces 6G, deep learning, and machine learning, discusses the 6G mobile network architecture, and outlines the main technical challenges of applying deep learning and machine learning to 6G network optimization. The second part focuses on deep learning and machine learning for 6G network optimization and presents recent research findings that help develop engineering insights. The third part addresses resource allocation and MAC-layer design with deep learning and machine learning in 6G network optimization and identifies some key research problems. The last part concludes with a future outlook on deep learning and machine learning in 6G network optimization.
In this talk, the speaker begins with a discussion of ARM TrustZone-based mobile peripheral control. Practical attestation for edge devices running compute-heavy machine learning applications is then covered.
Machine learning (ML) and AI will play a key role in the development of 6G networks. Network virtualization and network softwarization solutions in 5G networks can support data-driven, intelligent, and automated networks to some extent, and this trend will grow in 5G-Advanced networks. Radio access network algorithms and radio resource management functions can exploit network intelligence to fine-tune network parameters and reach close-to-optimal performance in 5G networks. In 6G networks, network intelligence is envisioned to be end-to-end, and the air interface is envisioned to be AI-native. User equipment (UE) devices need to be smarter, environment- and context-aware, and capable of running ML algorithms. With these capabilities on end devices, federated learning is envisioned as one of the promising solutions to the scalability and trust issues in distributed learning. This talk will focus on the main practical challenges in developing machine learning solutions for 5G use cases and on related 3GPP standardization activities, and it will use a case study to emphasize how deploying these solutions in a real network is much harder than theoretical performance evaluation suggests. Furthermore, the use of federated learning in wireless networks is motivated through example use cases, and the challenges of applying federated learning solutions in 6G networks are explained.
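To illustrate the idea, below is a minimal sketch of federated averaging (FedAvg), the canonical federated learning algorithm, written here in PyTorch; the toy model, client data, and hyperparameters are hypothetical and not from the talk. The key point is that clients (UEs) train locally and share only model weights, never raw data.

```python
# Minimal FedAvg sketch (illustrative): local training + server-side averaging.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, epochs=1, lr=0.01):
    model = copy.deepcopy(global_model)      # each UE starts from the global model
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(data), targets).backward()
        opt.step()
    return model.state_dict()                # only weights leave the device

def fed_avg(states):
    # Server aggregates by averaging each parameter across clients.
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(4, 1)               # toy global model
clients = [(torch.randn(32, 4), torch.randn(32, 1)) for _ in range(3)]

for round_ in range(5):                      # a few communication rounds
    states = [local_update(global_model, X, y) for X, y in clients]
    global_model.load_state_dict(fed_avg(states))
```

Real deployments add the complications the talk addresses: unreliable wireless links, heterogeneous devices, and non-IID client data.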
Confidential computing based on Fully Homomorphic Encryption (FHE) is gaining attention. FHE enables arbitrary computation on encrypted data and thus preserves the privacy of users' data. Ever since the first lattice-based FHE scheme was proposed by Craig Gentry in 2009, many FHE schemes have been designed; prominent examples include BGV, CKKS, and TFHE. In the meantime, many open-source FHE libraries, such as HElib, SEAL, HEAAN, and PALISADE, have also been developed. These libraries have enabled applications in data analytics and private AI (private inference) based on FHE. However, due to considerable computation and memory overhead, FHE schemes can be orders of magnitude (up to roughly 10^6 times) slower than computation on plaintext data. To address this, several research efforts are devoted to accelerating FHE schemes. This talk covers optimizations in FHE for realizing privacy-preserving machine learning applications efficiently.
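To illustrate what computing on encrypted data means, here is a toy sketch of the Paillier cryptosystem, which is only additively homomorphic; full FHE schemes such as BGV, CKKS, and TFHE support much richer computation. The parameters are for demonstration only, and this scheme is an illustration chosen here, not one covered by the talk.

```python
# Toy Paillier cryptosystem: additively homomorphic only (NOT full FHE),
# shown purely to illustrate computing on encrypted data. Not for real use.
import math
import random

# Two Mersenne primes: certainly prime, but far too structured for real crypto.
p, q = (1 << 31) - 1, (1 << 61) - 1
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)     # Carmichael's lambda(n); Python 3.9+
mu = pow(lam, -1, n)             # modular inverse of lam mod n; Python 3.8+

def encrypt(m: int) -> int:
    r = random.randrange(2, n)   # random blinding factor, coprime with n
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # L(x) = (x - 1) // n, then unblind with mu
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds the underlying plaintexts.
c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 20 + 22
print("Enc(20) * Enc(22) decrypts to", decrypt((c1 * c2) % n2))
```

The server never sees 20, 22, or their sum in the clear; the heavy modular arithmetic in even this toy example hints at why accelerating full FHE is an active research area.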