A Digital Finger On A Warm Pulse: Wearables and the future of healthcare

Taken together, blood pressure, glucose and oxygenation levels, sympathetic neural activity (stress levels), skin temperature, exertion levels and geo-location data provide a detailed, in-the-moment picture of one’s physiological status and activity. Today, all of that can be captured by clinical-grade smart monitors (often worn on one’s body) that are used in medical research projects around the world. Subtle changes in patterns over time can provide very early warnings of many diseases and dysfunctional states (see this article in the journal Artificial Intelligence in Medicine).

It is well established that clinical outcomes are highly correlated with timely diagnosis and an efficient differential diagnosis. In the not-too-distant future, your “guardian angel” is probably going to be an artificial intelligence medic (medic-AI). Medic-AI will individualize your precise clinical norms and match them against an ever-growing library of norms harvested from data in the Cloud. You may never experience your first cardiac event because you take the advice of your medic-AI and make subtle modifications to your lifestyle; or, if things do go wrong, the paramedics will arrive well before your event because the medic-AI has already sent them your symptoms! The concept of a machine-run medical future is a hot topic in research at the moment, and Nature’s recent article on wearable electronics reviews technology that hints at this future: wearable sensors that could harvest body data and, in doing so, transform the health care industry altogether.
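To make that idea concrete, here is a minimal sketch, in Python, of the kind of individualized-norm check a medic-AI might run. The vitals, numbers, and alert threshold are purely illustrative assumptions of mine, not clinical values.

```python
import numpy as np

def personal_zscores(history: np.ndarray, today: np.ndarray) -> np.ndarray:
    """Score today's vitals against this person's own history.

    history : (days, n_vitals) array of past daily readings
    today   : (n_vitals,) latest readings
    Returns one z-score per vital; large magnitudes flag departures
    from the wearer's individual norms.
    """
    mu = history.mean(axis=0)
    sigma = history.std(axis=0) + 1e-9  # guard against zero variance
    return (today - mu) / sigma

# Illustrative vitals: resting heart rate (bpm), systolic BP (mmHg),
# glucose (mg/dL), skin temperature (deg C). All numbers are made up.
rng = np.random.default_rng(0)
history = rng.normal([62, 118, 92, 33.5], [3, 6, 8, 0.3], size=(90, 4))
today = np.array([71.0, 133.0, 118.0, 33.6])

z = personal_zscores(history, today)
names = ["heart rate", "systolic BP", "glucose", "skin temp"]
alerts = [n for n, s in zip(names, z) if abs(s) > 2.5]  # example threshold
print(alerts)  # vitals that have drifted from this person's baseline
```

The point of the personal baseline is that a reading which is unremarkable for the population at large can still be a meaningful departure for you.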

In this scenario, the effectors are people and the peripheral nervous system of the Cloud is simply relaying information. There are, however, more direct ways in which your medic-AI could help manage your physiological status. Many chronic conditions today are managed with embedded drug delivery systems, but these need to be coupled with periodic hospital visits for blood tests and status examinations. Wirelessly connecting your embedded health management system (which includes an array of advanced sensors) to your medic-AI could avoid all that. In fact, the health management system could be designed to ensure that a wide range of physiological parameters remain within their normal ranges despite the occasional healthy-living lapse of its host.
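How might an embedded system hold a parameter within its normal range? Below is a toy sketch of the closed-loop idea, assuming a simple proportional rule; every number here is illustrative and nothing in it is clinical guidance.

```python
def dose_adjustment(reading: float, low: float, high: float,
                    gain: float = 0.05) -> float:
    """Proportional correction toward the middle of the target band.

    Returns a signed dose change: zero while the reading sits inside
    the band, otherwise proportional to the error. Purely illustrative.
    """
    target = (low + high) / 2
    if low <= reading <= high:
        return 0.0                      # in range: leave the dose alone
    return gain * (reading - target)    # out of range: correct toward target

# Example: a glucose-like parameter drifting above a 70-140 band.
for reading in (120, 155, 180, 140):
    print(reading, round(dose_adjustment(reading, 70, 140), 2))
```

A real system would, of course, layer safety limits, sensor validation, and clinician oversight on top of anything like this.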

For me as a neuroscientist, the most exciting developments in sensor technology are in the ambulatory measurement of brain activity. Recently, a number of research laboratories have used various methods to measure the brain activity of people listening to multiple talkers in conversation, not unlike a person in the cocktail party scenario. What they have found is nothing short of amazing. Using relatively simple EEG recordings from scalp electrodes, the audio streams of the concurrent talkers, and rather sophisticated machine learning and decoding, these systems are able to detect which talker the listener is attending to. Some research even indicates that the attended talker and their spatial location can be decoded from the EEG signal, and that this process is quite resistant to acoustic clutter in the surrounding environment.
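One widely used approach in this literature is stimulus reconstruction: a linear decoder is trained to reconstruct the attended speech envelope from time-lagged EEG, and at test time the reconstruction is correlated with each talker’s envelope, the best match being the attended one. Here is a minimal sketch of that pipeline in Python with synthetic data; the lag count and ridge penalty are illustrative assumptions.

```python
import numpy as np

def lagged(eeg: np.ndarray, n_lags: int) -> np.ndarray:
    """Stack time-lagged copies of every EEG channel as features."""
    n_t, n_ch = eeg.shape
    X = np.zeros((n_t, n_ch * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_ch:(lag + 1) * n_ch] = eeg[:n_t - lag]
    return X

def train_decoder(eeg, envelope, n_lags=16, ridge=1.0):
    """Ridge-regression map from lagged EEG to the attended envelope."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                           X.T @ envelope)

def attended_talker(eeg, envelopes, w, n_lags=16):
    """Pick the talker whose envelope best matches the reconstruction."""
    recon = lagged(eeg, n_lags) @ w
    corrs = [np.corrcoef(recon, env)[0, 1] for env in envelopes]
    return int(np.argmax(corrs))

# Synthetic check: "EEG" is a noisy, lagged copy of talker 0's envelope.
rng = np.random.default_rng(1)
env0, env1 = rng.standard_normal(2000), rng.standard_normal(2000)
eeg = np.stack([np.roll(env0, k) for k in range(4)], axis=1)
eeg = eeg + 0.5 * rng.standard_normal(eeg.shape)
w = train_decoder(eeg, env0)
print(attended_talker(eeg, [env0, env1], w))  # expect 0
```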

This finding is key, for it shows how we can follow the listener’s intention: where they are directing their attention and how this varies over time. It also provides information we can use to steer the signal processing of a hearing aid toward the spatial location of the attended talker and to enhance the sounds the listener actually wants to hear, effectively defining what is signal and what is noise in an environment full of talkers, of whom only one is of interest at any particular instant.
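A practical wrinkle is that decoding on any single window of EEG is noisy, so the aid should not flip its beam on every decision. Here is a small sketch of one plausible smoothing scheme; the window count and vote threshold are assumptions of mine, not drawn from any particular study.

```python
from collections import deque

class AttentionSteering:
    """Smooth noisy per-window attention decisions before steering.

    The beam (the spatial definition of "signal") only switches when a
    clear majority of recent decoder outputs agree on a new talker.
    """
    def __init__(self, n_windows: int = 8, switch_fraction: float = 0.75):
        self.votes = deque(maxlen=n_windows)
        self.switch_fraction = switch_fraction
        self.current_talker = None

    def update(self, decoded_talker: int) -> int:
        self.votes.append(decoded_talker)
        if self.current_talker is None:
            self.current_talker = decoded_talker
        elif decoded_talker != self.current_talker:
            share = self.votes.count(decoded_talker) / len(self.votes)
            if share >= self.switch_fraction:
                self.current_talker = decoded_talker  # commit the switch
        return self.current_talker

# Feed per-window decoder outputs; the beam follows the stable majority.
steer = AttentionSteering()
for window in [0, 0, 1, 0, 1, 1, 1, 1, 1, 1]:
    beam = steer.update(window)
print(beam)  # settles on talker 1 once enough votes accumulate
```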

Other recent work has demonstrated just how few EEG electrodes are needed to obtain robust signals for decoding once the researchers know what to look for. Furthermore, the recording systems themselves are now sufficiently miniaturized that these experiments can be performed outside the laboratory while the listeners are engaged in real-world listening activities. One group of researchers at Oxford University actually has their listeners cycling around the campus while doing the experiments!
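Choosing which few electrodes to keep is itself a selection problem. One simple, hypothetical way to frame it is greedy forward selection against decoding accuracy; the channel names and scores below are stand-ins, not measured values.

```python
def greedy_channel_subset(channels, score_fn, budget=4):
    """Greedy forward selection: repeatedly add whichever channel most
    improves the decoding score until the electrode budget is reached.

    score_fn(subset) is assumed to run the attention decoder using only
    those channels and return its accuracy (hypothetical interface).
    """
    chosen, remaining = [], list(channels)
    while remaining and len(chosen) < budget:
        best = max(remaining, key=lambda ch: score_fn(chosen + [ch]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy stand-in: pretend electrodes near the temporal lobes score best.
informative = {"T7": 0.9, "T8": 0.85, "Cz": 0.6, "Fz": 0.5, "Oz": 0.3}
score = lambda subset: sum(informative[ch] for ch in subset) / (len(subset) + 2)
print(greedy_channel_subset(informative, score, budget=3))
```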

These developments demonstrate that the bio-sensors necessary to help us develop better hearing devices are, in principle, sufficiently mature to allow for cognitively controlled signal processing that can produce targeted hearing enhancement. This scenario also provides a wonderful example of how the hearing instrument can share its processing load according to the time constraints of each task. Decoding the EEG signals will require significant processing, but that processing is not time critical: a few hundred milliseconds is neither here nor there, a syllable or two in the conversation. The obvious solution is for the cloud to take that processing load and send the appropriate control codes back to the hearing aid, either directly or via its paired smartphone. As the smartphone is listening in on the same auditory scene as the hearing aid, it can also provide another access point for sound data as well as additional and timelier processing capacity for the more time-critical elements.
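In code, that division of labour might look like the sketch below: a slow, latency-tolerant loop that refreshes the control settings, and a fast local loop that processes every audio frame. The functions and message format are illustrative assumptions, not any vendor’s API.

```python
import time

def decode_in_cloud(eeg_window) -> dict:
    """Stand-in for the heavy, latency-tolerant cloud decode.
    A few hundred milliseconds of round trip is acceptable here."""
    time.sleep(0.2)  # simulated network + compute delay
    return {"attended_talker": 1, "beam_azimuth_deg": 45}

def local_fast_path(audio_frame: float, control: dict) -> float:
    """Time-critical per-frame processing stays on the aid or phone:
    gain, compression, and beam steering must run in milliseconds."""
    return audio_frame * 0.8  # placeholder gain stage

# The slow loop updates *how* to listen; the fast loop does the listening.
control = {"attended_talker": 0, "beam_azimuth_deg": -30}
for t, frame in enumerate([1.0, 0.9, 1.1, 1.0, 0.95, 1.05]):
    out = local_fast_path(frame, control)   # every frame, low latency
    if t % 3 == 0:                          # occasional control refresh
        control = decode_in_cloud(None)     # EEG window elided
print(control)
```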

But nobody is going to walk around wearing an EEG cap with wires and electrodes connected to their hearing aid. So how do we take the necessary technology and incorporate it into a socially friendly and acceptable design? We start by examining developments in the worldwide wearables trend and some mid-term technologies that could enter the market as artistic accessories and status symbols.


By Simon Carlile, Ph.D.
