Brain Computer Interfaces and Hearables

The peripheral nervous system conveys afferent (incoming) sensory information about the world to the brain and also carries efferent (outgoing) signals from the brain to the muscles and other effectors of the body. Stimulating the afferent sensory pathways can lead to perceptual experiences (e.g. a cochlear implant delivers electrical current to the auditory nerve and generates a sensation of sound), and likewise, stimulating the efferent system can lead to movements (e.g. stimulating the nerve going to a muscle makes that muscle contract or enables control of a prosthetic limb). 

Since the work of Giovanni Aldini in 1802, researchers have been developing so-called neuroprosthetics – systems and devices aimed at restoring auditory or visual sensory capabilities or at re-establishing motor control after damage to the muscular or nervous systems. While this work now usually involves putting a computer in the loop, the term brain computer interface (BCI) is generally used to refer to a system that measures brain activity and maps that activity onto perceptual inputs, motor outputs or other mental events or intentions.

BCI research began in the early 1970s at the University of California, Los Angeles (UCLA), when Jacques Vidal demonstrated that EEG activity evoked by visual stimulation could be analyzed in real time by a computer and then used by the subject to move a cursor around on a screen to solve a maze. A range of other non-invasive brain imaging techniques have complemented EEG over the years, but with the exception of magnetoencephalography (MEG), such systems have much lower temporal resolution, although they do offer higher spatial resolution. MEG offers excellent temporal resolution and quite good spatial resolution, but it is not a good step forward toward a practical BCI, as the systems cost millions of dollars and require gallons of supercooled liquid helium.

While MEG is unlikely to morph into a viable practical BCI any time soon, it has provided some incredibly important insights into how the brain processes speech and how we are able to hear and isolate the voice of one talker in a complex listening environment (the so-called cocktail party problem). For example, Jonathan Simon and colleagues at the University of Maryland demonstrated that when listening to two concurrent talkers, the focus of a listener's attention modulates the strength of the auditory cortical coding of each talker – acting like a volume control on the cortical processing. In simpler terms, the brain uses the listener's intent (attention) to determine which talker's voice is more important and adjusts the processing accordingly. The listener's attention drives the processing and, using MEG, we can read that intent.

Other recent work listening in on how the brain listens has shown that this can also be achieved using EEG signals! Ed Lalor and colleagues at Trinity College Dublin also asked listeners to focus their attention on one of two concurrent talkers. The audio for each talker, as well as the multichannel EEG, was fed into a machine learning algorithm. The association between the phase of the low-frequency EEG and the overall amplitude envelope of the attended-to talker was used to identify the focus of attention. These so-called cortical speech decoders were able to make predictions after listening to only a minute of speech.
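To make the idea concrete, here is a minimal sketch of an envelope-correlation attention decoder in Python. It is not the Lalor group's pipeline: the sampling rate, the low-pass cutoff, the Hilbert envelope and the simple channel-averaged correlation (standing in for a trained decoder) are all illustrative assumptions.

```python
# Minimal sketch of envelope-based attention decoding (illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 64  # Hz: assume EEG and talker envelopes have been resampled to a common rate

def lowpass(x, cutoff=8.0, fs=FS, order=4):
    """Keep the low-frequency content that tracks the speech envelope."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, x, axis=0)

def speech_envelope(audio, fs_audio, fs_out=FS):
    """Broadband amplitude envelope of one talker, crudely resampled to the EEG rate."""
    env = np.abs(hilbert(audio))
    idx = np.linspace(0, len(env) - 1, int(len(env) * fs_out / fs_audio)).astype(int)
    return lowpass(env[idx])

def decode_attention(eeg, env_a, env_b):
    """Return 'A' or 'B': whichever talker's envelope correlates better with the
    low-frequency EEG, averaged across channels (a crude stand-in for a trained
    cortical speech decoder). eeg is (samples, channels); envelopes are (samples,)."""
    eeg = lowpass(eeg)
    r_a = np.mean([np.corrcoef(eeg[:, c], env_a)[0, 1] for c in range(eeg.shape[1])])
    r_b = np.mean([np.corrcoef(eeg[:, c], env_b)[0, 1] for c in range(eeg.shape[1])])
    return "A" if r_a > r_b else "B"
```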

Lalor and his colleagues also demonstrated that the decoders were even more successful when they were also provided with a representation of the stream of phonetic features for each talker.
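One way to picture this – a hedged sketch rather than the published method – is a regularised regression in which the envelope and a bank of phonetic-feature time series are stacked into a single design matrix. The ridge setup and feature layout below are illustrative assumptions.

```python
# Illustrative sketch: fitting an encoding model from envelope + phonetic features.
import numpy as np

def fit_encoding_model(eeg, envelope, phonetic_features, alpha=1.0):
    """
    eeg:               (samples, channels) EEG
    envelope:          (samples,) talker amplitude envelope
    phonetic_features: (samples, n_features) e.g. indicators of which phonetic
                       features (voicing, place, manner, ...) are active
    Returns ridge-regression weights mapping the stimulus features to the EEG.
    """
    X = np.column_stack([envelope, phonetic_features])   # combined design matrix
    XtX = X.T @ X + alpha * np.eye(X.shape[1])           # regularised normal equations
    return np.linalg.solve(XtX, X.T @ eeg)               # shape: (1 + n_features, channels)
```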

Maarten de Vos (then in Oldenburg and now at Oxford University) and colleagues have shown that cortical speech decoders trained on a range of other talkers could identify the attended-to talker right from the start, although their long-term success was not as good as that of a decoder trained on the attended-to talker's voice in the first place.

Impact for Hearing Aids

What this means for hearing aids is that knowing which talker a listener is focusing on could allow the hearing aid to preferentially process the attended-to talker. Such processing could be as simple as steering a beam former* toward the target talker to increase the signal-to-noise ratio, but whatever the resulting process, the goal is to enhance the target talker's voice over the background and any other competing voices. The issue here, of course, is that to use this approach the hearing aid will also have to parse the speech of the different talkers in order to work out which one the listener is attending to.
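As a rough illustration of what "steering a beam former" involves, here is a minimal delay-and-sum sketch for a small linear microphone array. The array geometry, the frequency-domain delay trick and the steering convention are simplifying assumptions; real hearing-aid beamformers are considerably more sophisticated and adaptive.

```python
# Illustrative delay-and-sum beam former "steered" toward the attended talker.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mics, fs, mic_positions, steer_angle_deg):
    """
    mics:            (samples, n_mics) microphone signals
    fs:              sample rate in Hz
    mic_positions:   (n_mics,) positions along a line, in metres
    steer_angle_deg: target direction relative to broadside
    Sound arriving from the steering direction is time-aligned and sums
    coherently; off-axis sound (other talkers, noise) is attenuated.
    """
    n_samples, n_mics = mics.shape
    theta = np.deg2rad(steer_angle_deg)
    # Relative arrival-time differences for a plane wave from the target direction.
    delays = np.asarray(mic_positions) * np.sin(theta) / SPEED_OF_SOUND
    # Apply compensating fractional-sample delays in the frequency domain.
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(mics, axis=0)
    aligned = spectra * np.exp(-2j * np.pi * freqs[:, None] * delays[None, :])
    return np.fft.irfft(aligned.mean(axis=1), n=n_samples)
```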

We will look at some of the issues this raises for hearing aids in future blogs. Two main issues we will focus on are: (1) enhancing the attended-to talker from the very start of their speech and (2) segregating the multiple talkers in the auditory stream.

While hearing aid wearers are unlikely ever to wear large numbers of EEG electrodes, there is room on each custom ear piece for two or three. Maarten de Vos's recent work using arrays of 10 electrodes placed around the ear has demonstrated that a significant amount of auditory processing information can be captured from this relatively limited array. Likewise, in the last two years more than eight consumer EEG monitoring systems have been released, ranging in cost from $100 to $500 and with between one and 16 electrodes. These commercial offerings are beginning to demonstrate both interesting applications and market traction. More importantly, these same systems hint at a welcoming future for advanced hearable technology and an accepting, if not demanding, attitude towards such technology by users and professionals alike.

*Definition Beam Former: a technical term for a highly directional set of microphones that can be "steered" using various processing algorithms to point in different directions. 

By Simon Carlile, Ph.D.
