In my last article, we explored the impact of the rate of scientific discovery and technology change on research in general and on hearing aid research in particular. Now I want to look more closely at how some of that change will manifest itself in the everyday technologies of tomorrow.
There are two main technological forces at play in this story – computing power and connectivity. These are the backbone from which many other profoundly influential players will derive their power, and if there were only one dominant idea here, it would be ubiquitous computing – a term coined by the brilliant computer scientist Mark Weiser in 1991 in his influential Scientific American article “The Computer for the 21st Century.” As head of the Computer Science Laboratory at Xerox's Palo Alto Research Center, he envisaged a future in which our world was inextricably interwoven with a technological fabric of networked “smart” devices. Such a network, he envisioned, would have the capability to manage our environments from the macro level down to a detailed, individualized one – everything from the power grid to the time and temperature of that morning latte.
But these devices are also inputs to the system – detectors and sensors feeding a huge river of information into the central core of data, or the cloud as we now know it. Many such devices already exist today: we know them as smartphones, wearables and hearables. They provide lifestyle benefits spanning health, professional, personal and entertainment purposes, and their sophistication and bio-monitoring capabilities are continually evolving. Moreover, some of these sensors provide highly detailed knowledge about a consumer’s transactions: a cashless purchase records the person, the time, the place and the goods; so do getting on and off public transport, taking a taxi or an Uber, booking a flight, logging a Facebook post, entering a street address into a maps app, street closed-circuit television security systems, your IP address, cookies and the browser trail.
Notwithstanding the issues of privacy (if indeed that still exists), our devices send data of some form to the cloud, to us and, if they are so designed, to the public. Big Data is here and it’s here to stay, and the revolutionary capabilities we currently see in smartphones, wearables and hearables are only the beginning.
But in looking at the present, I recall back in 1998 when I was fortunate enough to attend the World Wide Web conference where Tim Berners-Lee, the man who invented the World Wide Web while working at CERN in 1989, began promoting the idea of the Semantic Web – a means by which machines can efficiently communicate data with one another. In the ensuing years much work has gone into developing the standards and implementing the systems. In that time, however, two other massive developments have occurred that may overshadow or subsume these efforts. On the one hand, natural language processing has matured, using both text and audio, in the form of Siri, Google Now and Cortana, to mention just a few. On the other hand, driven by huge strides in cognitive neuroscience, processing power and advanced machine learning, we are witnessing a rebirth of Artificial Intelligence (AI) and the promise of so-called Super Intelligence.
So just how can we design listening (hearables) technologies, hearing aids in particular, that can capitalize on these profound developments? Well, let’s take a sneak peek at what a future hearing aid might look like in this brave new world.
Imagine a hearing aid that can listen to the world around the wearer and break down that complex auditory scene into the key relevant pieces – sorting the meaningful from the clutter. A hearing aid that can also tune into the brain activity of the listener, identify the wearer’s focus of attention, and then enhance the information from that source as it is coded by the brain. A hearing aid that, in fact, does not simply allow one to hear better, but is a device that people wear all the time: a communication channel to other people and machines, a source of entertainment, a brain and body monitor that also maps their passage through space. Such a hearable provides support in adverse listening conditions to the normally hearing and the hearing impaired alike – it simply changes modes of operation as appropriate. Hearables of the future will work with both the auditory and neurological systems to create, shall we say, ideal hearing.
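The attention-following behavior described above can be sketched in miniature. The following toy Python example is purely illustrative – the signals, function names and the simple correlation approach are my own assumptions, not any shipping hearing-aid algorithm. It takes two already-separated talker envelopes, correlates each with a (simulated) neural envelope decoded from the listener, and boosts the best-matching talker in the remixed output:

```python
# Toy sketch of attention-driven enhancement. All values and names here are
# illustrative assumptions, not a real device's processing chain.

def correlate(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def enhance_attended(sources, neural_envelope, gain=4.0):
    """Boost the separated source whose envelope best matches the
    listener's decoded neural envelope, then remix all sources."""
    scores = [correlate(s, neural_envelope) for s in sources]
    target = scores.index(max(scores))  # the attended talker
    mix = [
        sum(s[i] * (gain if k == target else 1.0)
            for k, s in enumerate(sources))
        for i in range(len(neural_envelope))
    ]
    return mix, target

# Two competing talkers (amplitude envelopes); the listener attends talker 0,
# so the simulated neural envelope loosely tracks talker_a.
talker_a = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3]
talker_b = [0.2, 0.8, 0.1, 0.9, 0.3, 0.7]
neural = [0.8, 0.2, 0.9, 0.1, 0.6, 0.4]

mix, attended = enhance_attended([talker_a, talker_b], neural)
print(attended)  # 0 -> talker_a is amplified in the remixed output
```

In a real device the source separation and neural decoding steps are of course the hard part; the point of the sketch is only the control loop, in which the brain signal, not a button, selects what gets enhanced.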
Possibly the most surprising thing about this scenario is that, in advanced research laboratories around the world (including Starkey Research), the technologies that would enable such a device are already in the works, and the components for many already exist. Of course, they are not yet developed to provide the level of sophisticated sensing and control this vision requires, nor are they in a form that people can put in their ears. But they do exist, and if we have learned anything from watching the progress of science and technology over the last few decades, their emergence as the Universal Hearable Version 1.0 will happen sooner than we think.