Ubiquitous Computing a.k.a. "The Internet of Things"

In The Fabric of Tomorrow, I spoke briefly about Mark Weiser’s influential article in Scientific American where he coined the term “ubiquitous computing,” now often referred to as the "Internet of Things" (Kevin Ashton, 1999). As with many great ideas, this has a long and illustrious lineage and indeed has continued to evolve. A timeline spanning the 1950s through the 1970s can be viewed here: The Internet of Things History.

Weiser said, “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” This disappearing act is a positive thing, indicative of how seamlessly technology has become an essential part of our everyday lives. These devices have simply become part of the process of engaging in ordinary activities. We would miss them if they were gone, as anyone separated from their smartphone knows, but when present they are invisible because they have become part of us.

This is our goal for hearing technologies: to make hearables as invisible yet essential as a smartphone.

Mark Weiser’s particular goals at Xerox's Palo Alto Research Center (PARC) centered on augmenting human interaction and problem solving through smart devices. He conceived three classes of smart device: wearable devices measured in inches (smart badges), handheld devices the size of a legal pad, and yard-sized devices designed for physical interaction and display (e.g., smart boards). Others have since formulated device classes at the millimeter and sub-millimeter scale, including micro-electro-mechanical systems (MEMS), which are minute, wirelessly enabled sensors; fabrics based on light-emitting polymers and flexible organic devices such as OLED displays; and Clay, ensembles of MEMS devices that can form configurable three-dimensional shapes and act as so-called tangible interfaces that people can interact with.

These latter classes of devices usher in new ways of thinking about how users and devices interact with the environment. Early thinking framed the relationship simply as providing tools for interaction and collaboration, but these latter device categories encourage the idea of not only interaction and collaboration but also influence and control. The focus shifts to the sensor and the identity of that which is sensed. This begins to fill out our analogy of the peripheral nervous system of the Cloud that we explored in The Power of the Cloud, and it also begins to inform how we might exploit these ideas in developing the next generation of hearing technologies.

For example, in Starkey Research we are currently working on a project that combines the listening and analytical capabilities of a smartphone, used to analyze a particular acoustic environment, with the ability to record via Bluetooth the hearing aid settings the user selects in that environment. By uploading that information to the Cloud, we can then “crowd source” user preferences for different environment classifications, thereby enabling the creation of better adaptive presets and controls.

The wireless connection of the smartphone to the hearing aid is only the first step along the road enabled by the "Internet of Things." The hope is for a hearing aid to connect to the phone and to anything the phone can connect to, including the Cloud. In the example above, the hearing aid offloads the environmental-classification processing to the phone, which in turn uploads the data to the Cloud. Offline analysis of the data amassed from a wide range of hearing aid users then provides the associations between environments and settings, that is, the knowledge that can inform our designs. On the other hand, there is no reason, in principle, why the data in the Cloud couldn’t also be used to modify, on the fly, the processing in the hearing instrument.
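The aggregation step described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration rather than Starkey's actual pipeline: the record fields (environment label, gain, noise-reduction strength), the environment names, and the choice of simple averaging as the aggregation rule are all assumptions made for the sake of the example.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical uploaded records: each pairs a phone-side environment
# classification with the hearing aid settings the user chose there.
records = [
    {"environment": "restaurant", "gain_db": 6, "noise_reduction": 0.8},
    {"environment": "restaurant", "gain_db": 8, "noise_reduction": 0.9},
    {"environment": "quiet_room", "gain_db": 2, "noise_reduction": 0.1},
    {"environment": "quiet_room", "gain_db": 4, "noise_reduction": 0.2},
]

def crowd_sourced_presets(records):
    """Aggregate per-environment user settings into candidate presets."""
    by_env = defaultdict(list)
    for r in records:
        by_env[r["environment"]].append(r)
    # Average each setting across all users who labeled that environment.
    return {
        env: {
            "gain_db": mean(r["gain_db"] for r in group),
            "noise_reduction": mean(r["noise_reduction"] for r in group),
        }
        for env, group in by_env.items()
    }

presets = crowd_sourced_presets(records)
print(presets)
```

In a real system the averaging would likely give way to something more robust (outlier rejection, per-user weighting, or a learned model), but the shape of the computation, grouping settings by environment class and distilling them into presets, would be the same.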

The point is that, under the new paradigm, the hearing aid could become a technological instrument that can be updated and modified on the fly using machine-level capabilities or human interaction, or even a combination of the two. The user, the health professional, the manufacturer, and the Cloud can all be leveraged to increase the performance of the hearing aid. In short, the hearing aid itself becomes a source of data that can be used to optimize its own operation or, in aggregate with the Cloud, the operation of whole classes of hearing aids.

The capacity to optimize the instrument at the level of the individual will be dependent, in part, on the nature and quality of the data it can provide, something that will be influenced by the revolution in biosensor technology.


By Simon Carlile, Ph.D.
