Computing Beyond More Moore

Introduction

The development of universal computers based on the von Neumann architecture and general-purpose arithmetic logic units (ALUs) built from complementary metal-oxide-semiconductor (CMOS) logic is a tremendous success story which drove the world into the current digital age. They are used in virtually all technical systems: computing centres, robots, cars, and mobile phones, to name only a few. Computing systems could be built around layers of abstraction, such as processing hardware, low-level programming languages and high-level languages. This allowed hardware developers to improve performance under the hood, as long as they ensured compatibility with the abstraction layer the low-level programming languages were built on. Meanwhile, software developers could come up with algorithms that would not yet run on available hardware within reasonable time, but were known to be executable on future iterations of hardware, motivating further progress even more.

Since the 1960s, Moore’s law, which states that the number of transistors on a chip doubles roughly every two years, together with the semiconductor industry’s deliberately chosen commitment to it, has acted as a market-driven self-fulfilling prophecy. By scaling down transistor sizes, the performance of a few general-purpose chip types – mostly processors and memory – could be increased regularly and sold in huge quantities, which covered the R&D cost of upgrading fabrication facilities and reinforced the scaling cycle. When the chip-making process became too complex to be organised and driven forward by individual companies or national associations, the international semiconductor industry came together in 1998, in a kind of effort not seen in any other industry before, to publish a yearly roadmap that coordinated the efforts of international industry associations to keep up with Moore’s law. This made Moore’s law an ambition or a goal rather than a law of nature.
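As a back-of-the-envelope illustration of why this cadence mattered commercially (the function below is our own sketch, not from the roadmap itself), a doubling every two years compounds very quickly:

```python
def transistor_growth(years):
    """Growth factor of the transistor count after `years`,
    assuming a doubling every two years (Moore's law)."""
    return 2 ** (years / 2)

# One doubling period doubles the count; a decade gives five doublings.
print(transistor_growth(2))   # 2.0
print(transistor_growth(10))  # 32.0
```

Over a single decade, the law predicts a 32-fold increase in transistor count, which is why even small deviations from the two-year cadence were treated as an industry-wide problem.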

Although the semiconductor industry managed to overcome various obstacles in the past, with transistor sizes reaching the physical limit of 2-3 nanometres, Moore’s law in its original sense will come to an end. An example of such an obstacle was the breakdown of Dennard scaling, the observation that as transistors get smaller, their power density stays constant, meaning the operating frequency could be scaled up without higher power consumption. The main solution was switching from single-core to multi-core architectures. This allowed performance to be increased further for parallelizable algorithms without increasing clock frequencies, and it fit into the existing abstraction layers with only minor changes. Today’s graphics processing units (GPUs), with their massively parallel computing architecture, still increase performance based on this concept and have thereby driven the advance of machine learning algorithms, which rely on highly parallelizable multiply-accumulate operations.
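The scaling relations behind Dennard's observation can be sketched as follows (a textbook-style summary under idealized assumptions, not from the original text). With a dimensionless scale factor $\kappa > 1$, linear dimensions, supply voltage $V$ and capacitance $C$ shrink by $1/\kappa$, while the operating frequency $f$ rises by $\kappa$:

```latex
\begin{align*}
P_{\text{transistor}} &\propto C\,V^{2} f
  \;\propto\; \frac{1}{\kappa}\cdot\frac{1}{\kappa^{2}}\cdot\kappa
  \;=\; \frac{1}{\kappa^{2}}, \\[4pt]
A_{\text{transistor}} &\propto \frac{1}{\kappa^{2}}, \qquad
\text{power density} \;=\; \frac{P_{\text{transistor}}}{A_{\text{transistor}}}
  \;\propto\; 1.
\end{align*}
```

Power per transistor falls as fast as its area, so the power density of the chip stays constant; the breakdown came when supply voltages could no longer be lowered in step with the dimensions, mainly due to leakage currents.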

Increasing performance by parallelization alone, however, cannot be the future of computing. On the one hand, there are algorithms and problems, for example the simulation of event-based systems, which in general do not profit from parallelization. On the other hand, with the slowdown of Moore’s law, the price of more parallelization is deploying more computing devices and supplying them with energy. For large computing centres this is not a sustainable path into the future without major breakthroughs in the generation of clean energy. For the growing number of portable computing devices (edge devices), which are limited by the energy constraints of the batteries they run on, this is not a solution either. There is a need to explore different paths towards the future of computing.

Paths towards the Next Generation of Computing

The semiconductor industry acknowledged the need for new paths by changing the name of its roadmap from “International Technology Roadmap for Semiconductors” to “International Roadmap for Devices and Systems”. The new name reflects two of the three approaches that could, individually or in combination, deliver the innovations needed for the future of computing.

The first approach is investigating new devices to replace CMOS logic. This includes the use of other materials (e.g. germanium-based semiconductors), which may even allow the use of other physical effects (e.g. spintronics) for the storage and manipulation of logic states.

The second approach is the development of new system architectures to replace the von Neumann architecture with general-purpose ALUs. The goal here is typically to optimize performance for specific types of calculations. This is generally achieved by trading off the performance of the system as a general-purpose computing device, or by giving up the system’s general-purpose character entirely. This approach has a long history in application-specific integrated circuit (ASIC) and FPGA-based hardware design. While GPUs have long been an example of this approach, there has recently been substantial investment in even more specialized computing chips, both by start-ups and by major chip manufacturers such as Intel with its neuromorphic Loihi chip.

The third approach is coming up with completely new computing concepts that might not perform computation and data processing based on the same atomic operations provided by conventional ALUs. Examples of this are quantum computing, neuromorphic computing and biochemical computing. This is probably the most disruptive of the three approaches, because for some of these concepts only a few of the theoretical foundations, software stacks and hardware technologies from conventional computing are reusable. This results in long development cycles for hardware and software and requires new training for programmers who want to solve their real-world problems with new computing concepts. But it might make problems solvable which were previously not solvable on conventional hardware at all, or not within reasonable time.

The advancement and complexity of the three research fields, individually and combined, make it very difficult to predict the future of computing. With the decline of Moore’s law, past inventions that did not make it to market before are becoming interesting and relevant again. Maybe there will be a technological breakthrough that allows general-purpose ALU-based computers to increase performance further without increasing energy consumption. Maybe one of the new computing concepts proves to have general-purpose character and takes over the general-purpose computer market. Maybe none of the above happens, and the ecosystem of computation becomes more heterogeneous, with a broader range of application-specific computing devices being deployed in computing centres and in portable devices. In any case, these are very exciting times for the next generation of computers.

The aim of our project is to generate an overview of past and recent developments in devices, architectures and concepts for computing, to identify new approaches, and to evaluate their possible target applications, their advantages and disadvantages, as well as their state of the art.
