In “The Fabric of Tomorrow,” I laid out a rather high-level road map for the developments science and technology will bring about in the future. Now it is time to start digging a bit deeper into how these developments can be leveraged effectively by what we do at Starkey Research.
Let’s start with the Cloud!
First the inputs: Ubiquitous computing and seamless interconnectivity are like the peripheral nervous system to the Cloud. Through them, the Cloud receives all its sensory data — the “afferent” information about the world.
Second the outputs: This peripheral nervous system also takes the “efferent” signal produced by the Cloud and sends it to the machines and displays that show the changes in the world. We will come back to the peripheral nervous system and its sensors and effectors later – for the moment let’s focus on the Cloud.
People’s expectations and predictions about technology are often replete with incorrect assumptions. Take, for example, what many said about the computer:
"I think there is a world market for maybe five computers." -- Thomas Watson, chairman of IBM, 1943
"There is no reason anyone would want a computer in their home." -- Ken Olson, president, chairman and founder of Digital Equipment Corp., 1977.
"640K ought to be enough [memory] for anybody." -- Attributed to Bill Gates, 1981.
As you can see, pessimism is a strong element when considering the future, most likely because the ever-changing currency of science and technology makes the future so hard to foresee. On the other hand, with what we do at Starkey Research, we need to be a little more like these great leaders and temper our enthusiasm so as to properly position our work to deliver 5 or 10 years into the future. In short, pessimism in small doses can help lead to proper perspective. At Starkey Research, we can dream, but we also have to keep in mind that our ultimate goal is to build and deliver real things that solve real problems!
So with that cautionary perspective in mind, what then can we say about the Cloud? Electronics Magazine solicited an article from Gordon Moore in 1965 in which he observed and predicted that the number of components on an integrated circuit would continue to double each year for at least the next 10 years (he later revised the doubling period to two years). Dubbed by Carver Mead as “Moore’s law,” this finding came to represent not only a prediction about the capacity of chip foundries and lithographers to miniaturise circuits but also a general rubric for improvements in computing power (i.e. Moore’s Law V2.0 & V3.0).
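The growth Moore described is just compound doubling, which a few lines of code make concrete. The starting count of 2,000 components below is an illustrative assumption, not a figure from Moore's article:

```python
def moores_law(initial_count: float, years: float, doubling_years: float = 2.0) -> float:
    """Projected component count after `years`, assuming a fixed doubling period."""
    return initial_count * 2 ** (years / doubling_years)

# Illustrative example: a chip with 2,000 components, projected 10 years out
# using the original one-year doubling period from the 1965 article.
print(moores_law(2_000, 10, doubling_years=1.0))  # 2,000 x 2**10 = 2,048,000
```

Ten doublings multiply the count by 1,024, which is why even a small shift in the doubling period (one year versus two) changes the long-run projection enormously.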
The Cloud, while still based on the chips described by Moore’s law, presents as a virtually unlimited source of practical computing power. The single-entity computational behemoths will likely live on in the high security compounds of the world’s defence and research agencies, but for the rest of us, server farms such as those provided by Amazon (AWS), Google (GCE) and Microsoft (Azure) can provide a virtually unlimited source of processing power. No longer are we tied to the capacity of the platform we are using. As long as that platform can connect to the Cloud, the device can share its processing needs with this highly scalable service.
But this comes at a price, and that price is time. Although fast, network communications incur delays from the switching and routing of message packets; the request itself is queued, and the processing takes a finite interval before the results are sent back along the network to the requesting device. At the moment, with a fast network and a modest processing request, the time taken amounts to about the time it takes to blink (~350 ms). For hearing technology this is a very important limitation, as the ear is exquisitely sensitive to changes over time. If processing introduces a delay between hearing a sound and comprehending it, that delay can detrimentally influence not only how the sound is interpreted but also a person’s ability to successfully maintain a conversation. We need to find ways to locally process sounds that are time sensitive and to off-load the processing of sounds where a delay isn’t critical.
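The local-versus-off-load decision described above can be sketched as a simple latency-budget check. This is a hypothetical illustration: the round-trip and on-device figures are assumptions for the sake of the example, not measurements from any real hearing device:

```python
# Assumed, illustrative latency figures (not measured values).
CLOUD_ROUND_TRIP_MS = 350   # network + queueing + compute + return trip
LOCAL_LATENCY_MS = 10       # on-device processing path

def choose_processor(latency_budget_ms: float) -> str:
    """Route a task to the Cloud only when the round trip fits the budget."""
    if CLOUD_ROUND_TRIP_MS <= latency_budget_ms:
        return "cloud"   # delay is tolerable: off-load the heavy lifting
    return "local"       # time-critical: keep processing on the device

print(choose_processor(20))    # live conversation -> 'local'
print(choose_processor(5000))  # slow background analysis -> 'cloud'
```

The point is the asymmetry: conversational audio must stay within tens of milliseconds and so is forced onto the device, while slower tasks can afford the Cloud's ~350 ms round trip.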
And where the Cloud and hearing technology converge is that, as the Cloud continues to digest incredible amounts of data, hearing technology devices will eventually have to work with the Cloud in order to rank sounds. In short, for hearing aids to focus their processing on important sounds and delay the “non-important” sounds a bit, they will need to work both with the Cloud and like the Cloud. But as we can already see with the Cloud, estimating, let alone comprehending, the data currently transmitted across this peripheral nervous system and potentially stored in the Cloud is no mean feat.
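One way to picture this ranking idea is a priority queue: process the most important sounds first and defer the rest. The importance scores below are invented for illustration; in the scenario sketched here, a real system would derive them from the Cloud-scale data described above:

```python
import heapq

def rank_sounds(sounds):
    """Yield (importance, label) pairs from most to least important.

    `sounds` is an iterable of (label, importance) pairs; importance is
    a hypothetical score in [0, 1]. heapq is a min-heap, so importance
    is negated to pop the highest-scoring sound first.
    """
    heap = [(-importance, label) for label, importance in sounds]
    heapq.heapify(heap)
    while heap:
        neg_importance, label = heapq.heappop(heap)
        yield -neg_importance, label

# Illustrative scene: speech outranks traffic, which outranks a steady hum.
observed = [("traffic", 0.4), ("speech", 0.9), ("background hum", 0.1)]
for importance, label in rank_sounds(observed):
    print(f"{importance:.1f}  {label}")
```

A sketch like this only decides the order of processing; deciding *what makes a sound important* in the first place is exactly the Big Data problem the next paragraph raises.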
It’s Big Data! What this implies is that a whole new range of technologies and tools needs to be developed to manipulate that data and quickly, efficiently derive information from it. Big Data and informatics in general have huge implications for how we conceive of managing hearing impairment and delivering technologies to support listening in adverse environments.
Want to read the first installment of Simon's blog series on the future of hearing aids? Click here!