Analog & asynchronous

Analog computation is often described as the opposite of digital computation: while digital computation works with discrete values, analog computation can compute with quantities that are continuous in time and value. Historically, however, computation systems were called analog when they were built to behave exactly like the system they modelled, meaning they computed by analogy.

Today’s dominantly digital computation creates an abstraction barrier between the primary system being modelled and the computing system. A digital synchronous computer models the quantities of the primary system as discrete in value and in time. For a primary system whose physical quantities are continuous in value and time, there is therefore no direct physical relationship to the modelled quantities. The precision of the model in time and value can be critical for certain computing applications and increases the cost of computation on a digital synchronous computer.
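As a minimal illustration of this double discretization, the following Python sketch samples an assumed continuous physical quantity (a sine wave; the sample rate and bit width are illustrative choices, not taken from any particular system) in time and quantizes it in value:

```python
import numpy as np

# Hypothetical continuous physical quantity: a 50 Hz sine wave.
def physical_quantity(t):
    return np.sin(2 * np.pi * 50.0 * t)

# Discretization in time: sample at a fixed clock rate (value assumed).
f_sample = 1000.0
t = np.arange(0.0, 0.02, 1.0 / f_sample)
samples = physical_quantity(t)

# Discretization in value: quantize to 8 bits.
bits = 8
levels = 2 ** bits
codes = np.round((samples + 1.0) / 2.0 * (levels - 1))   # map [-1, 1] onto integer codes
modelled = codes / (levels - 1) * 2.0 - 1.0              # the values the digital model "sees"

# Higher precision in time (f_sample) or in value (bits) reduces the model
# error, but directly increases the cost of the digital computation.
print("max modelling error:", np.max(np.abs(modelled - samples)))
```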

Analog computation tries to reduce the abstraction depth of digital synchronous computation by modelling the physical quantities of a system more directly with the physical processes and states within the computer. This can bring advantages in speed and power consumption for certain computations, such as integration or differentiation. However, sampling intermediate analog results with high precision for a sequential computation is very challenging with analog memory, which has two major consequences. One is that analog computers are typically built to run a complete program in parallel, which limits the size of a program to the available hardware. The other is that the non-sequential computation forces the removal of abstraction levels, which increases the entanglement between hardware and applications and results in highly application-specific computation systems that cannot be considered general purpose.
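To make this parallel, non-sequential programming model concrete, the sketch below (a digital simulation standing in for what an analog computer performs physically and continuously; all parameters are illustrative) maps a damped oscillator x'' = -c*x' - k*x onto a fixed wiring of one summer and two integrators, so the entire "program" runs in parallel as a feedback loop:

```python
# Illustrative parameters of the modelled system.
k, c = 1.0, 0.1        # spring constant and damping coefficient
x, v = 1.0, 0.0        # initial conditions, set as initial integrator states
dt = 1e-3              # time step of this simulation; a real analog computer
                       # integrates continuously and needs no step size

trajectory = []
for _ in range(20000):
    a = -c * v - k * x   # summing amplifier: computes the acceleration
    v += a * dt          # integrator 1: velocity
    x += v * dt          # integrator 2: position
    trajectory.append(x)

print("final position:", trajectory[-1])
```

The "program" here is the wiring itself: extending it to a larger system of equations requires more physical summers and integrators, which is exactly how the available hardware limits the program size.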

Analog computation is not limited to electronics, its most prominent form, but can also be performed, for example, mechanically or with photonics. Analog electronic computation has advantages in speed, cost, integrability and therefore also parallelisation, but has the disadvantages that it can only integrate and differentiate with respect to time and that its precision is limited by manufacturing accuracy and power consumption. The limited precision, together with the difficulty of making analog computers general-purpose programmable compared to digital synchronous computers, is considered one of the main reasons for the “eclipse” of analog computation when digital computers emerged. Now that Moore’s law has slowed down and applications have emerged that do not require high-precision computation but rely on high parallelisation, such as neural networks, analog computation is becoming of interest again.

Analog Computing Units in Digital Systems

One way to use analog computation in a sequential program is to build synchronous, hybrid analog-digital systems that sample intermediate results not as analog but as digital values. Examples of this are the crossbar computation architectures used for in-memory computing (IMC), where digital inputs are converted to analog signals by digital-to-analog converters (DACs) and analog outputs are converted back to digital signals by analog-to-digital converters (ADCs). The challenge is to keep the data converters from becoming the energy and speed bottleneck of the system. It is also worth mentioning that the analog value does not necessarily have to take the form of a voltage or current; it could also be, for example, a frequency.
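A hedged sketch of that data flow, assuming idealized linear converters and normalized conductances (the names, bit widths and matrix shapes are illustrative, not a specific device model): the matrix-vector multiplication happens in the analog domain as a current summation over the crossbar columns, while the converters frame it digitally.

```python
import numpy as np

def dac(codes, bits=8, v_max=1.0):
    """Idealized DAC: map unsigned integer codes to row voltages."""
    return codes / (2 ** bits - 1) * v_max

def adc(values, bits=8, v_max=1.0):
    """Idealized ADC: quantize analog column outputs back to integer codes."""
    clipped = np.clip(values, 0.0, v_max)
    return np.round(clipped / v_max * (2 ** bits - 1)).astype(int)

rng = np.random.default_rng(0)
G = rng.random((4, 8))                    # assumed normalized crossbar conductances
x_digital = rng.integers(0, 256, size=8)  # digital input vector

v_in = dac(x_digital)                     # digital -> analog at the rows
i_out = G @ v_in                          # per-column current summation: the analog MVM
y_digital = adc(i_out / G.shape[1])       # normalize, then analog -> digital

print(y_digital)
```

Even in this idealized form, every input and output passes through a converter, which illustrates why the converters, rather than the crossbar itself, tend to become the energy and speed bottleneck.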

In this case, analog computation units are integrated into a synchronous digital system like an arithmetic logic unit (ALU) in a CPU. This works under the constraint that the analog computation unit exhibits a deterministic processing delay similar to that of digital circuits, meaning that when the inputs change, the outputs settle to a constant value within a maximum time delay. The analog computation then operates on continuous values, but in discrete time steps. Without this constraint, the result of the computation would depend on the timing of sampling the outputs.
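A purely behavioral sketch of that constraint (the exponential settling model and all constants are assumptions for illustration): choosing a clock period above the worst-case settling delay yields deterministic samples, while sampling earlier yields timing-dependent values.

```python
import numpy as np

TAU = 1e-9            # assumed settling time constant of the analog unit
T_CLOCK = 10 * TAU    # clock period chosen above the worst-case settling delay

def analog_multiply(a, b, t_sample):
    """Behavioral model: the output settles exponentially towards a * b."""
    return a * b * (1.0 - np.exp(-t_sample / TAU))

# Sampling at the clock edge gives a deterministic, settled result ...
print(analog_multiply(0.5, 0.8, T_CLOCK))      # ~0.4
# ... while sampling too early makes the result depend on the sampling time.
print(analog_multiply(0.5, 0.8, 0.5 * TAU))    # intermediate, timing-dependent value
```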

Asynchronous Computing

It is important to differentiate between asynchronous computation and asynchronous hardware. Asynchronous computation can refer to a set of programs that are executed asynchronously, where each program can itself be a sequential synchronous program, or to a single asynchronous program consisting of multiple processes that run asynchronously. Asynchronous hardware refers to hardware design techniques that do not rely on a global clock to ensure synchrony between individual circuits or circuit elements, but instead design circuits that time themselves, which is why asynchronous circuits were referred to as self-timed circuits in early publications. That does not mean that a program running on asynchronous hardware necessarily has to be asynchronous. Examples of hybrid approaches are globally asynchronous locally synchronous (GALS) systems-on-chip (SoC), or architectures found in neuromorphic computing that use fully asynchronous hardware with global program synchronization handshakes to maintain a certain level of synchrony in the executed program.

The most prominent form of asynchronous computation comes from the context of dealing with events independently of a main program. An example is a server that receives computation tasks (programs) from different users as events. The server itself runs a main program that asynchronously receives the events, deals with them by assigning hardware resources, and sends back results without being blocked while the different tasks are running. Here asynchrony exists only between the different programs running on the same system; the programs are independent and can each be a sequential synchronous program. This introduces parallelism between multiple independent programs, while the timing of the events (tasks) itself has no effect on the result of each program. Such systems can also be called event-based.
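A minimal sketch of this pattern using Python's asyncio (the task names, durations and queue-based event delivery are illustrative assumptions): the main program dispatches each incoming task without blocking, while each task is itself an ordinary sequential program.

```python
import asyncio

async def run_task(task_id, duration):
    """Stand-in for a user-submitted program; each task is itself sequential."""
    await asyncio.sleep(duration)          # placeholder for the actual computation
    return f"task {task_id} finished"

async def server(events):
    """Main program: receives task events and dispatches them without blocking."""
    running = []
    while True:
        task_id, duration = await events.get()
        if task_id is None:                # sentinel: no further events
            break
        running.append(asyncio.create_task(run_task(task_id, duration)))
    for result in await asyncio.gather(*running):
        print(result)                      # each result is independent of event timing

async def main():
    events = asyncio.Queue()
    for event in [(1, 0.2), (2, 0.1), (None, 0)]:   # simulated incoming user events
        await events.put(event)
    await server(events)

asyncio.run(main())
```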

When asynchrony is introduced within a program, meaning between multiple concurrent threads or processes of a program that communicate with each other through events, such programs are also often called event-based. As long as such a program is executed in an event-based simulator that runs a single-threaded event loop, the simulated time is independent of the execution time and it is possible to maintain accurate event timing. In this case, encoding information in event timing carries only as much risk as the simulator’s implementation introduces. When the different asynchronous processes of a program are executed on different physical processors that must communicate with each other by events, ensuring deterministic, predictable event timing becomes very challenging, as it depends on many factors, such as process execution times on the processors and the traffic- and mapping-dependent delays of event communication between processors. To avoid the problems of inaccurate event timing, one can introduce program synchronization steps between the asynchronous processes, as is common for neuromorphic hardware, or design programs only with delay-insensitive (non-interfering) event chains, as is common in asynchronous hardware design, for example with communicating hardware processes (CHP).
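A hedged sketch of such a single-threaded event loop (the structure is a generic assumption, not taken from a particular simulator): simulated time only advances when the loop pops the next event, so event timestamps stay exact no matter how long the host takes to execute each handler.

```python
import heapq

class EventSimulator:
    """Minimal discrete-event loop: simulated time is decoupled from execution time."""

    def __init__(self):
        self.now = 0.0
        self._queue = []   # heap of (timestamp, sequence, handler, payload)
        self._seq = 0      # tie-breaker keeps simultaneous events deterministic

    def schedule(self, delay, handler, payload):
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, payload))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, handler, payload = heapq.heappop(self._queue)
            handler(payload)

# Two "asynchronous processes" exchanging timestamped events.
sim = EventSimulator()

def process_a(msg):
    print(f"t={sim.now:.3f}: A received {msg}")
    sim.schedule(0.5, process_b, "reply")   # delay is exact, independent of host speed

def process_b(msg):
    print(f"t={sim.now:.3f}: B received {msg}")

sim.schedule(1.0, process_a, "ping")
sim.run()
```

Once the handlers run on separate physical processors instead of one loop, the exact delays in this sketch are no longer guaranteed, which is precisely where the timing problems described above arise.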

Asynchronous hardware can have an efficiency benefit over synchronous hardware when events are sparse in time, since without a clock no dynamic power is consumed in the absence of events. If the hardware utilization is very high, however, asynchronous hardware can be less efficient due to the hardware overhead needed for handshaking.

For asynchronous hardware and programs without synchronization steps (which would otherwise allow the use of classical sequential programming methods), there is a need for new programming languages and a computation theory that allow information to be encoded not only in the content but also in the timing of events, in order to fully exploit the potential of asynchronous hardware and programs.