
IEEE CTN
Written By:

Mona Ghassemian, Chathura Sarathchandra, and Ulises Olvera-Hernandez, InterDigital Europe, London

Published: 29 Mar 2022

CTN Issue: March 2022

A note from the editors:

For the longest time, humans have striven to communicate human emotions. From artistic expressions like works of art, books, movies and audio, to more recently all kinds of audio and visual expression over a variety of communication mechanisms, humans have tried to give the recipient a sense of the overall emotional state of the sender, be it real or fictional. Up to now, this has been a two-dimensional representation (vision and hearing) that the human brain can extrapolate into further emotions or “feelings”. I remember taking 4D rides at Disney where movement and smell were incorporated into the experience, making it “more realistic”; in that particular case the rider is present, and the environment around him or her can be easily engineered and manipulated. But what about long-distance communication of other senses like touch and, more importantly, haptics? These have posed a challenge to researchers and engineers due to the many complexities involved in transmission and then reproduction. “Think about the sensation of searching for your keys in the bottom of a backpack in the dark – you know how they feel without being able to see them and can differentiate them from the other items in the bag,” the authors of this article write. That is indeed a frontier that, if reached, can open an unimaginable new world of possibilities.

In this article, the authors provide a historical progression of sensory communication, discuss the challenges of “getting in touch”, offer a perspective on the applications that can benefit from tactile communication, and explore the world of an “emotional internet”. In the end, “emotional” communication is a multidisciplinary effort; the journey towards 6G will play a strong hand in our ability to construct networks that can deliver the resources needed to make it real and, with it, open up this new world of opportunities.

Miguel Dajer, CTN Editor in Chief

Multi-Modal Communications: Toward a Tactile, Haptic and Emotional Wireless Internet

Mona Ghassemian
InterDigital Europe, London

Chathura Sarathchandra
InterDigital Europe, London

Ulises Olvera-Hernandez
InterDigital Europe, London

Communication theory was defined in 1949 by Claude E. Shannon and Warren Weaver to broadly include all the procedures by which one mind might affect another. The procedures include not only written and oral speech, but also music, the visual arts, theatre and ballet, and all aspects of human behaviour.

Reflecting on the history and evolution of wireless technology to communicate human sensory information modes, our earliest phones first tapped into our sense of hearing. When the technology evolved to send and receive data and stream video, a connection with our sense of sight was established. While many telecoms vendors suggest that wireless communication will never need to involve our senses of taste or smell, the next frontier for sensory connection is, in fact, touch.

Standards development organizations like IEEE have begun work on tactile internet architecture and codec standardization, among other technical activities, to enable a wireless device to transfer the sense of touch and allow remote monitoring and operation. Our teams leading Next Generation Networking research are committed to leading those activities to show that one day wireless networks can be capable of transmitting emotional data between users in real time to communicate human behavior.

Figure 1

It is important to remember that each generation of wireless has been built upon the foundation of earlier generations. For example, when cell phones first emerged on the market, in the era we now call 1G, they were developed with the same essential functionality as landline phones, with added analog radio technology to make them mobile. Then came 2G, which made these calls digital and added SMS functionality and some very early (albeit limited and slow) internet connectivity. 3G added video calling, interactive gaming, and expanded wireless internet connectivity, though at data rates low by today's standards. 4G increased those data rates dramatically, giving us the ability to stream high-definition video seamlessly nearly anywhere and to do things like interactive AR gaming. As we look at 5G technology and toward 6G, the paradigms might significantly change in ways that appeal to our human senses beyond just sight and sound. It is now within reach to envision a wireless internet that goes far beyond the smartphone form factor, toward functionality that includes tactile, haptic, and even emotional sensory capabilities. What will it take to realize this vision? Let's take a look.

Getting in Touch

Before we delve into the technology, it’s important to talk about the sense of touch, because it is a unique sense. The human sense of touch is the only one capable of simultaneous input and output: its sensory neurons can receive the sensation of being touched at the same time they perceive and respond to the sense of touching another object. We can't see our own eyes or smell our own nose, but we can touch while being touched, which makes haptic sensory modalities and feedback very complex.

It's important for us first to define some terms – sense of touch is really about perception, and we’ll discuss two forms here. Haptic perception refers to active touch or the human ability to manipulate things while receiving sensory input about the interaction. Think about the sensation of searching for your keys in the bottom of a backpack in the dark – you know how they feel without being able to see them and can differentiate them from the other items in the bag. Tactile perception, on the other hand, is a more passive concept that refers to the sense of being touched. When someone taps you on the shoulder to get your attention, your perception of that tap is the tactile sense in action.

In a technology scenario, haptic technology works to replicate the haptic sense by providing real-time tactile feedback and force feedback to the user through a variety of sensor-enabled devices, from gloves to body suits to other wearable devices. This technology is geared towards delivering new services and devices that will enrich the user experience, using new sensors and new data modalities. To maintain a human model for remote use, multimodal communication involves the exchange of haptic data (including position, velocity, and interaction forces) and other user modalities (like audio, visual, gestures, head movements and posture, eye contact, facial expressions, and the user’s emotion). Typically, haptic information is composed of two distinct types of feedback: kinaesthetic feedback, which provides information about force, torque, position, and velocity, and tactile feedback, which provides information about surface texture, friction, and the like. The former is perceived by the muscles, joints, and tendons of the body. The tactile internet requires a combination of sensing, communicating, and actuating (or controlling, for tele-operation), together with smart networks and systems.
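
To make the data model concrete, here is a minimal sketch, in Python, of how one time-aligned multimodal frame combining kinaesthetic and tactile feedback might be structured. The class and field names are our own illustrations, not taken from any codec standard:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class KinaestheticSample:
    """Kinaesthetic feedback: perceived by muscles, joints, and tendons."""
    position: List[float]  # end-effector position (x, y, z), metres
    velocity: List[float]  # linear velocity, m/s
    force: List[float]     # interaction force, newtons
    torque: List[float]    # interaction torque, newton-metres

@dataclass
class TactileSample:
    """Tactile feedback: surface properties sensed at the skin."""
    texture: float   # e.g., a normalised roughness coefficient
    friction: float  # coefficient of friction

@dataclass
class MultimodalFrame:
    """One time-aligned frame in a multimodal haptic session."""
    timestamp_us: int                 # capture time, microseconds
    kinaesthetic: KinaestheticSample
    tactile: TactileSample
    audio_chunk: bytes = b""          # optional synchronised audio payload
    video_frame_ref: str = ""         # reference to a synchronised video frame
```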

The sensing part of this technology is rooted in industrial sensor technology. The communication side comes from the evolution, and convergence, of wireless sensor networks and cellular communications technology. This convergence was largely made possible by the emergence of 5G technology and its combination of extremely high availability, reliability, and ultra-low latency.

Because the human body reacts so quickly, the tactile internet requires latency that does not exceed one millisecond to maintain accuracy, because longer latencies induce feelings of cyber-sickness or nausea in users. Achieving one-millisecond latency takes a truly end-to-end journey: from the sensor all the way to the core network, back through the radio, and to the actuator device. That one millisecond is roughly the time our neurological system takes to transfer the sense of touch from the part of the body where it is felt to the brain. Simply put, this level of latency in wireless wasn't a practical reality before 5G.
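
As a back-of-envelope illustration, consider how a one-millisecond budget might be split across that end-to-end path. The per-segment values below are assumptions for this sketch, not measured or standardised figures:

```python
# Illustrative split of a 1 ms end-to-end tactile loop into segments.
# The per-segment values are assumptions, not standardised figures.
BUDGET_US = 1000  # one millisecond, in microseconds

segments_us = {
    "sensor sampling + encoding": 100,
    "uplink radio access":        200,
    "core / edge processing":     200,
    "downlink radio access":      200,
    "actuator decoding + drive":  100,
}

total_us = sum(segments_us.values())
print(f"total: {total_us} us, margin: {BUDGET_US - total_us} us")
assert total_us <= BUDGET_US, "budget exceeded: risk of cyber-sickness"
```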

Uses for a Haptic/Tactile Internet

Unlocking haptic and tactile capabilities reveals some exciting use cases. For example, new feats like remote surgery will become possible; early trial surgeries in China have already been successful. There are also several industrial/robotic control applications, as well as interactive multiplayer gaming, education and edutainment, automotive, athletics training, and more. Beyond these vertical applications, haptics has proven to be very effective for delivering alerts, because the neurons associated with touch respond more quickly than those associated with visual or auditory stimuli. This sort of application has a variety of horizontal uses, such as guiding and alerting visually-impaired or hearing-impaired people, or applications where sights or sounds are not desirable, as in certain military operations.

An Emotional Internet?

As we look even further beyond 5G and toward 6G use cases, these emerging sensory capabilities might expand beyond the sense of touch into emotional sensing and feedback. Use cases related to emotion would operate beyond the human-to-human and human-to-computer interactions we see today. At that stage of evolution, networks will become capable of considering the emotional state of the user based on ambient sensors, wearable or other sensors, and data sources from which information can be collected.

What does this mean? Today, many people wear fitness trackers and devices like the Apple Watch or Fitbit that can measure steps, heart rate, and a handful of other biometric data points. But that's about the extent of it. In the future, wearable sensors may be embedded in garments and could monitor a broader range of data, including:

  • ECG (for heart activity),
  • EEG (for brain signals),
  • EMG (Electromyography, which measures muscle response),
  • BVP (blood volume pulse - which measures heart rate),
  • Blood glucose level,
  • GSR (galvanic skin response, a measure of emotional state. Basically, when we stress, we sweat, and GSR measures that),
  • PPG (Photoplethysmography - a noninvasive technique to measure volumetric changes in blood flow in the skin)

These wearable devices may also include accelerometers to measure body position, angle, and posture. Some sensors may be implantable in situations where biomolecular or chemical sensing is necessary or desirable. Together, these data points can tell us a lot about a person's emotional state.
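
A hypothetical record for one reading from such a garment-embedded sensor suite might look like the sketch below. The field set mirrors the signals listed above, but the names and units are our own illustrations, not taken from any device or standard:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BiosignalSample:
    """One time-stamped reading from a garment-embedded sensor suite."""
    timestamp_ms: int
    ecg_mv: float            # ECG electrode reading, millivolts
    eeg_uv: float            # EEG channel reading, microvolts
    emg_uv: float            # EMG muscle response, microvolts
    bvp: float               # blood volume pulse, arbitrary units
    glucose_mmol_l: float    # blood glucose level
    gsr_microsiemens: float  # galvanic skin response (conductance)
    ppg: float               # photoplethysmography, arbitrary units
    accel_g: Tuple[float, float, float] = (0.0, 0.0, 1.0)  # posture/angle
```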

Beyond the health-related metrics that can be collected by wearable devices, new algorithms will be developed from these data sets to help analyze and assess the emotional state of the user. Layered on top of the biometric data are other data points, like social media information and readings from ambient sensors that gauge a user’s level of movement in a room. For instance, when we are in a depressed mood, we may sit and watch TV longer, but if we're in an elevated mood we may move around more and be prone to getting more exercise. These types of behaviors are measurable, and the next generation of sensory applications, XR, and other intelligent systems will move toward this kind of technology.
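
As a toy illustration of the kind of fusion such algorithms might perform, the sketch below combines skin conductance, heart rate, and an ambient movement index into a single arousal score. The weights and normalisation ranges are invented for the example; a real system would learn them from labelled data:

```python
def estimate_arousal(gsr_microsiemens: float,
                     heart_rate_bpm: float,
                     movement_index: float) -> float:
    """Toy fusion of three signals into a 0..1 arousal score.

    Weights and ranges are illustrative, not from any published model.
    """
    gsr_n = min(gsr_microsiemens / 20.0, 1.0)    # sweat rises under stress
    hr_n = min(max((heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    move_n = min(max(movement_index, 0.0), 1.0)  # ambient activity level
    return 0.4 * gsr_n + 0.4 * hr_n + 0.2 * move_n

# Example: a moderately stressed, lightly moving user.
print(estimate_arousal(gsr_microsiemens=12.0,
                       heart_rate_bpm=95.0,
                       movement_index=0.3))
```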

Example: Haptic and Multimodal Feedback for Elite Athlete Precision Training

Seeking the apex of athletic ability, elite professional athletes can benefit from precision training that collects haptic feedback and emotional data to tailor training to their temperament and enhance results. Furthermore, the COVID-19 pandemic put a pause on in-person sport training and activities, impacting the routines of elite athletes and sporting bodies and forcing many of them to navigate training in a virtual setting.

With the assistance of 5G networks, a coach can remotely and safely communicate and monitor a training session based on data collected from wearable and ambient sensors, and assess performance based on data collected from the athletes prior to training. Furthermore, the ambient or emotional information collected from athletes during the training would enable the coach to adjust an ongoing session and use the data to shape future sets, tailoring training to each athlete’s temperament. Among the myriad examples, running, football, swimming, boxing, and fencing are some of the sports that would benefit from this use case.

For example, Coach Juli sets and monitors a training program for the elite athlete Jenny to qualify for the Olympics, using the multi-modal sensory data collected from wearable devices that are interconnected through the 5G network with the Monitoring and Coaching Unit. The Monitoring and Coaching Unit allows Juli to receive and analyze data from Jenny using techniques like AI and ML. Haptic feedback collected from Jenny’s shoe soles, together with other data points from her smart outfit like muscle fatigue, temperature, blood oxygen level, sweat, and heart rate, can provide information about her performance across different environments and climates for her training program. Furthermore, if an anomaly is detected in the tracked biometrics, Coach Juli can use a predefined haptic signal to alert Jenny to reduce her speed, gradually or instantly, to avoid possible harm to her physical health. Coach Juli can also open an audio channel to speak with the athlete and provide remote instructions.

The mobile network operator (MNO) provides service data flows for multimodal information delivered over one common session to different devices or pieces of user equipment (UE). For example, a runner can receive a haptic alert to her or his haptic glove while audio can be communicated to her or his headset to assist navigation during training or a game.
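
A minimal sketch of that idea follows, assuming a hypothetical session descriptor: each modality in one common session is mapped to its own target device, latency budget, and priority. The field names and QoS values are illustrative, not drawn from any 3GPP specification:

```python
# One multimodal session whose service data flows are steered to
# different UEs with per-modality QoS. Values are illustrative only.
session = {
    "session_id": "training-session-001",
    "flows": [
        {"modality": "haptic", "target_ue": "haptic-glove",
         "latency_budget_ms": 1, "priority": 1},
        {"modality": "audio", "target_ue": "headset",
         "latency_budget_ms": 20, "priority": 2},
        {"modality": "video", "target_ue": "coach-console",
         "latency_budget_ms": 50, "priority": 3},
    ],
}

for flow in session["flows"]:
    print(f'{flow["modality"]:>6} -> {flow["target_ue"]} '
          f'(<= {flow["latency_budget_ms"]} ms, prio {flow["priority"]})')
```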

Figure 2: Example of multimodal feedback for elite athlete training.

Building the System

Not only will user equipment need to evolve, but we will also need new system architecture models and capabilities, based on different service requirements, to accommodate and enable user experiences for these types of use cases. For example, edge computing architecture will bring data closer to the end user to help achieve the latency required to prevent cyber-sickness. Network slicing will improve latency and also help increase the dedicated resources available for these use cases. For example, we will be able to design network slices capable of delivering modalities to satisfy a specific user requirement on the fly, e.g., an “Emotion Slice.” Such a slice would deliver a combination of audio, video, and haptics and provide resources capable of handling these types of data communications, which can increase the reliability and overall performance of the service.
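
A hypothetical “Emotion Slice” descriptor might bundle these parameters as in the sketch below; all field names and values are assumptions for illustration, not a standardised slice template:

```python
# Hypothetical "Emotion Slice" descriptor bundling the resources one
# emotional-communication service needs. Values are assumptions only.
emotion_slice = {
    "slice_name": "emotion-slice-01",
    "modalities": ["audio", "video", "haptics"],
    "edge_compute": True,           # keep processing close to the user
    "max_e2e_latency_ms": 1,        # the haptic loop dominates the budget
    "reliability_target": 0.99999,  # high reliability for the haptic flow
    "dedicated_bandwidth_mbps": 100,
}
```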

The Standards Landscape

As new devices and system architectures evolve to address these new requirements, new interoperability challenges arise. As such, industry standards will need to evolve, and some of this work is already underway. In 3GPP, for instance, the service requirements group (SA1) has introduced several specific multimodality communications features into Release 18 (R18). These features are still being studied, and several use cases have been introduced into that group for study. Elsewhere in 3GPP, in the system architecture group (also for R18), companies have started bringing in study items for multimodality communications. These groups are looking ahead at specific aspects of how the 3GPP system can evolve to deliver multimodality communications that satisfy key use cases, and at how this evolution will affect the standard.

Beyond 3GPP, there are similar initiatives underway in standards bodies including IEEE, IETF, and ITU-T. Historically, the first IEEE activities on the tactile internet began in 2016 with the IEEE 1918.1 working group, following the very first ITU-T reference report on the tactile internet in 2014. Research and industry are still early in the development of these types of standards, but by no means is haptics development new or confined to a single standards body. Thus far, the standards being developed in these bodies focus on haptic technology and have not yet addressed the emotional or brain elements. The expectation is that the roadmaps for several of these standards will address emotions over the coming years, likely as we get closer to more concrete standardization of 6G. It will be exciting to watch.

In addition to shaping technology capabilities, the standards are defining the service requirements as well, with many of the key performance indicators (KPIs) for this technology coming directly from neuroscientists. In 3GPP SA1, for example, service requirements are being explored for various use cases. From telesurgery to gaming to simply alerting pedestrians on the street, KPIs like one-millisecond latency come directly from neuroscientists, biologists, and other individuals who understand what the minimum perceived delay in a tactile or haptic scenario needs to be. While the standards targeting one-millisecond latency are geared towards interactions involving humans, applications involving robotics can operate at much lower latencies, as low as 125 microseconds, levels that humans cannot perceive. Those specifications will need to be written into the standards as well, based on input from the appropriate robotics experts.
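
Expressed as a simple lookup, the perception-driven latency KPIs mentioned above might look like the following sketch; the grouping of scenarios is our own illustrative arrangement of the figures cited in the text:

```python
# The latency KPIs cited above, arranged as a lookup table (microseconds).
LATENCY_KPI_US = {
    "human-in-the-loop haptics (telesurgery, gaming, alerts)": 1000,  # ~1 ms
    "machine-to-machine robotics": 125,  # below human perception
}

for scenario, kpi_us in LATENCY_KPI_US.items():
    print(f"{scenario}: {kpi_us} us")
```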

An Interdisciplinary Effort

As previously discussed, this technology and standards development process is a truly interdisciplinary effort. Networking and wireless technology companies will need to be engaged to specify things like latencies in the core and radio networks, but the engagement won't end there. For tactile internet applications, the effort will require many others beyond the traditional communications, internet and computing technology industries. For example, physicians and medical device companies will need to be involved in both the technology development and the standards development for remote surgery applications, while behavioral psychologists will have to be involved to help draw the most useful information from emotional sensing technologies. Certain applications may require artificial intelligence, not only to process the large amount of human sensory information, but also to help anticipate certain responses, so AI may need to be not only specified, but potentially standardized in some cases.

Looking ahead to the next decade and 6G, the wireless internet experience will be much more complex than it currently is. 6G will bring a range of multi-sensory experiences, and a more autonomous intelligent system that can interact with us far beyond just the sights and sounds we sense today.

Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not IEEE nor the IEEE Communications Society.
