Why are we living in one of the most interesting ages in computer history? As part of our monthly Cafe Sci lecture series, we recently had the pleasure of welcoming John L. Hennessy, former Stanford University president, chairman of Alphabet Inc., and Turing Award winner, for a talk on the future of computing. Learn more about his thoughts on the end of the road for general-purpose computing and where computing could be headed next, specifically toward domain-specific architectures.
According to Hennessy, we are living in one of the most fascinating ages in the history of computer architecture, a time whose potential for change is unmatched by anything since the rise of the microprocessor in the early 1980s.
The Golden Era of Computing
Over the last 40 years, we’ve seen roughly a 40% annual increase in computing power, made possible by two trends in microchip technology: an exponential growth in transistor counts (as observed by Moore’s Law) and a corresponding exponential improvement in energy efficiency (described by Dennard scaling).
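To get a feel for what a sustained 40% annual improvement means, here is a short illustrative calculation (my arithmetic, not a figure from the talk):

```python
import math

def growth_factor(rate: float, years: int) -> float:
    """Total improvement after `years` of compounding at `rate` per year."""
    return (1 + rate) ** years

def doubling_time(rate: float) -> float:
    """Years needed to double performance at a given annual rate."""
    return math.log(2) / math.log(1 + rate)

print(f"{growth_factor(0.40, 40):,.0f}x")  # roughly a 700,000x improvement over 40 years
print(f"{doubling_time(0.40):.1f} years")  # performance doubles about every 2 years
```

Compounding is the whole story here: a seemingly modest 40% per year turns into a performance gain of several hundred thousand times over four decades.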
Times are a-changing
This era of growth has come to an end, and we find ourselves at a turning point in the history of computing: one of diminishing returns in both computing power and energy efficiency. The energy needed per computation is no longer falling as fast as Dennard scaling predicted, and the cost per transistor is no longer dropping as quickly as it once did under Moore’s Law. In fact, Hennessy argues that Moore’s Law was never a physical reality: it was merely a “hope and a prayer […] that set a benchmark for the industry” and played a key role in driving the immense investments in technology we have seen over decades.
Additionally, the focus of applications has shifted: while the desktop dominated until about ten years ago, we now rely mainly on mobile devices and ultra-scale cloud computing, which impose new constraints on computing.
The importance of energy efficiency
The end of Dennard scaling and the resulting diminishing returns in energy efficiency are causing a crisis: processors are hitting their power limits even as energy consumption continues to rise, especially in mobile devices, IoT, and large cloud datacenters. Given that most modern devices run solely on battery, it becomes even clearer that the market should prioritize energy efficiency.
The Road Ahead
As gains in both performance and energy efficiency from parallel processing are inherently limited and multicore scaling has ended, multicore processors cannot be the future. So, what opportunities are left?
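The limit on parallel speedup that Hennessy alludes to is commonly captured by Amdahl’s Law: the serial fraction of a program caps the speedup from adding cores, no matter how many are available. A minimal sketch:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Maximum speedup with `cores` processors when `serial_fraction`
    of the work cannot be parallelized (Amdahl's Law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even a modest 5% serial fraction means 64 cores deliver nowhere near 64x,
# and the speedup can never exceed 1 / 0.05 = 20x regardless of core count.
for n in (2, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 1))
```

With a 5% serial fraction, 64 cores yield only about a 15x speedup, and the curve flattens toward its 20x ceiling from there, which is why simply adding cores stopped being an answer.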
On the hardware side, Hennessy argues that domain-specific architectures (DSAs), as opposed to general-purpose processors, are the only path left: by matching the processor architecture to the application, they run far more efficiently. Examples of DSAs include neural network processors for machine learning and graphics processing units for graphics and virtual reality. Whereas a traditional processor spends a large share of its energy on control rather than on useful computation, a DSA puts far more of its energy into the computation itself.
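A toy model (my illustration, not Hennessy’s, with made-up energy numbers) of why this matters: in a general-purpose core, every operation pays a fixed overhead for fetch, decode, and control; a domain-specific pipeline amortizes that overhead across a whole block of useful work.

```python
import math

OVERHEAD_PER_OP = 9  # hypothetical energy units for fetch/decode/control
WORK_PER_OP = 1      # hypothetical energy units for the arithmetic itself

def general_purpose_energy(num_ops: int) -> int:
    # Each operation individually pays control overhead plus the useful work.
    return num_ops * (OVERHEAD_PER_OP + WORK_PER_OP)

def dsa_energy(num_ops: int, block: int = 256) -> int:
    # A specialized pipeline issues one "instruction" per block of operations,
    # so the control overhead is paid once per block instead of once per op.
    blocks = math.ceil(num_ops / block)
    return blocks * OVERHEAD_PER_OP + num_ops * WORK_PER_OP

ops = 1_000_000
print(general_purpose_energy(ops) / dsa_energy(ops))  # nearly 10x less energy on the DSA
```

The specific numbers are invented, but the structure of the argument is the real one: specialization does not make the arithmetic cheaper, it makes the per-operation control overhead nearly vanish.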
What type of applications should we focus on with DSAs? The most prominent field appears to be Deep Learning. Interestingly, the growth rate of academic papers published in this field parallels the growth in computing power postulated by Moore’s Law. The following graph plots the rate of papers published in Deep Learning against Moore’s Law’s growth curve.
Hennessy is optimistic: “the number of times we have seen AI not make that breakthrough is incredible, but we are beyond this point and at a massive breakthrough now because of the massive amounts of data for training and massive amounts of compute power.”
Hennessy is looking forward to seeing the next generation of computer engineers tackle the challenges he outlined, thereby writing the next chapter of computer history.