Neuromorphic Computing

Neuromorphic computers are inspired by biological neural networks and composed of neurons and synapses. They differ from traditional von Neumann computers by distributing both processing and memory in the neural network instead of separating CPUs and memory. Programs in neuromorphic computers are defined by the neural network structure and parameters rather than explicit instructions. They also use spike-based communication rather than binary encoding of information.


We define neuromorphic computers as non-von Neumann computers whose structure and function are inspired by brains and that are composed of neurons and synapses. Von Neumann computers are composed of separate CPUs and memory units, where data and instructions are stored in the latter. In a neuromorphic computer, on the other hand, both processing and memory are governed by the neurons and the synapses. Programs in neuromorphic computers are defined by the structure of the neural network and its parameters, rather than by explicit instructions as in a von Neumann computer. In addition, while von Neumann computers encode information as numerical values represented in binary, neuromorphic computers receive spikes as input, where the time at which they occur, their magnitude, and their shape can be used to encode numerical information. Binary values can be converted into spikes and vice versa, but the precise way to perform this conversion is still an active area of study in neuromorphic computing.
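For instance, one widely used conversion scheme is rate coding, in which a numerical value sets the firing rate of a spike train. The Python sketch below (all names are illustrative, not from any particular neuromorphic framework) encodes a value in [0, 1] as a stochastic spike train and decodes it back as the mean firing rate.

```python
import numpy as np

def rate_encode(value, n_steps, rng=None):
    """Encode a value in [0, 1] as a spike train of length n_steps,
    where the value sets the per-step firing probability."""
    rng = rng or np.random.default_rng()
    return (rng.random(n_steps) < value).astype(np.uint8)

def rate_decode(spike_train):
    """Recover an estimate of the encoded value as the mean firing rate."""
    return spike_train.mean()

spikes = rate_encode(0.7, n_steps=1000)
print(spikes[:20])          # e.g. [1 1 0 1 1 0 ...]
print(rate_decode(spikes))  # approximately 0.7
```

Temporal coding schemes instead place information in the precise spike times; which scheme works best for a given task is part of the open question noted above.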

Neuromorphic systems are integrated circuits designed to mimic the event-driven computations in a mammalian brain [1]. They enable execution of Spiking Neural Networks (SNNs), which are computation models designed using spiking neurons and bio-inspired learning algorithms [2]. SNNs enable powerful computations due to their spatiotemporal information encoding capabilities [3]. SNNs can implement different machine learning approaches such as supervised learning [4], unsupervised learning [5], reinforcement learning, and lifelong learning.

In an SNN, neurons are connected via synapses. A neuron can be implemented as integrate-and-fire (IF) logic [8], which is illustrated in Figure 1 (left). Here, an input current spike U(t) from a pre-synaptic neuron raises the membrane voltage of a post-synaptic neuron. When this voltage crosses a threshold Vth, the IF logic emits a spike, which propagates to its own post-synaptic neurons. Figure 1 (middle) illustrates the membrane voltage due to input spike trains. The moments of threshold crossing, i.e., the firing times, are illustrated in Figure 1 (right).
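A minimal discrete-time sketch of this behaviour, with a leak term as in the LIF variant of Figure 1, is given below; the parameter values and function names are illustrative.

```python
import numpy as np

def lif_simulate(input_current, v_th=1.0, leak=0.9, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron in discrete time.
    input_current: array of input values U(t), one per time step.
    Returns the membrane voltage trace and the firing times."""
    v, voltages, firing_times = 0.0, [], []
    for t, u in enumerate(input_current):
        v = leak * v + u          # leaky integration of the input current
        if v >= v_th:             # threshold crossing: emit a spike
            firing_times.append(t)
            v = v_reset           # reset the membrane potential after firing
        voltages.append(v)
    return np.array(voltages), firing_times

rng = np.random.default_rng(0)
voltages, firing_times = lif_simulate(rng.random(100) * 0.3)
print(firing_times)  # moments of threshold crossing, as in Figure 1 (right)
```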
Figure 1: A leaky integrate-and-fire (LIF) neuron with current input U(t) (left). The membrane potential of the neuron over time (middle). The spike output of the neuron representing its firing times (right).

SNNs can be implemented on a CPU or a GPU. However, due to the limited memory bandwidth of such devices, the performance of SNNs on them is usually slow and the power overhead is high. In SNNs, neural computations and synaptic storage are tightly integrated. They present a highly distributed computing paradigm that cannot be fully exploited by CPU and GPU devices. Neuromorphic hardware can eliminate the performance and energy bottlenecks of CPUs and GPUs, thanks to its low-power analog and digital neuron designs, distributed in-place neural computation and synaptic storage architecture, and the use of Non-Volatile Memory (NVM) for high-density synaptic storage [9, 10, 11, 12, 13, 14, 15]. Due to its low energy overhead, neuromorphic hardware can implement machine learning tasks on energy-constrained embedded systems and edge devices of the Internet-of-Things (IoT) [16].

Neuromorphic hardware is implemented as a tile-based architecture [17], where tiles are interconnected via a shared interconnect. A tile may include

1) a neuromorphic core, which implements neuron and synapse circuitries,

2) peripheral logic to encode and decode spikes into Address Event Representation (AER), and

3) a network interface to send and receive AER packets from the interconnect (a minimal sketch of an AER packet follows below).

Switches are placed on the interconnect to route AER packets to their destination tiles. Table 1 illustrates the capacity of some recent neuromorphic hardware cores.
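In AER, a spike is communicated not as a value but as a packet identifying which neuron fired and when. The sketch below shows one plausible way a tile's peripheral logic might package spikes; the field names are assumptions for illustration and do not follow any specific chip's packet format.

```python
from dataclasses import dataclass

@dataclass
class AERPacket:
    """An address-event: which neuron spiked, and when (illustrative fields)."""
    neuron_address: int   # address of the spiking neuron on the source tile
    timestamp: int        # time step at which the spike occurred
    dest_tile: int        # destination tile, used by interconnect switches

def encode_spikes(spike_flags, t, routing_table):
    """Turn the spikes fired at time step t into AER packets.
    spike_flags[i] is True if neuron i fired; routing_table maps a
    neuron address to the tile holding its post-synaptic targets."""
    return [AERPacket(i, t, routing_table[i])
            for i, fired in enumerate(spike_flags) if fired]

packets = encode_spikes([True, False, True], t=42,
                        routing_table={0: 3, 1: 0, 2: 1})
print(packets)
```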
Building Blocks

In functional terms, the simplest, most naïve description of the various devices and their functions in the brain includes the following.

1. Somata (also known as neuron bodies), which function as integrators and threshold spiking devices

2. Synapses, which provide dynamical interconnections between neurons

3. Axons, which provide long-distance output connections from a presynaptic to a postsynaptic neuron

4. Dendrites, which provide multiple, distributed inputs into the neurons

Spiking neural networks are a particular type of artificial neural network in which the function of the neurons and the synapses is more closely inspired by biology than in other types of artificial neural networks, such as multilayer perceptrons. The key difference between traditional artificial neural networks and SNNs is that SNNs take timing into account in their operation. Neuron models implemented in SNNs in the literature range from simple integrate-and-fire models, in which charge is integrated over time until a threshold value is reached, to much more complex and biologically plausible models, such as the Hodgkin–Huxley neuron model, which approximates the functionality of specific aspects of biological neurons such as ion channels. Both neurons and synapses in SNNs can include time components that affect their functionality.

Neurons in spiking neural networks accumulate charge over time from either the environment (via input information to the network) or from internal communications (usually via spikes from other neurons in the network). Neurons have an associated threshold value, and when the charge on a neuron reaches the threshold value, it fires, sending communications along all of its outgoing synapses. Neurons may also include a notion of leakage, where accumulated charge that is not above the threshold dissipates as time passes. Furthermore, neurons may have an associated axonal delay, in which outgoing information from the neuron is delayed before it affects its outgoing synapses.

Synapses form the connections between neurons, and each synapse has a pre-synaptic neuron and a post-synaptic neuron. Synapses have an associated weight value, which may be positive (excitatory) or negative (inhibitory). Synapses may have an associated delay value such that communications from the pre-synaptic neuron are delayed in reaching the post-synaptic neuron. Synapses also commonly include a learning mechanism in which the weight value of the synapse changes over time based on activity in the network. Neuromorphic computers often realize a particular fabric of connectivity, but the synapses may be turned on and off to realize a network structure within that connectivity, as sketched below. Furthermore, parameters of the neurons and synapses, such as neuron thresholds, synaptic weights, axonal delays, and synaptic delays, are often programmable within a neuromorphic architecture.
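As a toy illustration of this last point, a fixed connectivity fabric can be modeled as a full weight matrix, with a binary mask selecting which synapses are active; the mask and the parameter values below are illustrative, not taken from any particular device.

```python
import numpy as np

# A fixed 4x4 all-to-all connectivity fabric: a weight exists for every
# neuron pair, whether or not the "program" uses it.
rng = np.random.default_rng(1)
fabric_weights = rng.normal(0.0, 0.5, size=(4, 4))

# The program selects a network structure within the fabric by switching
# synapses on (1) or off (0), and by setting per-neuron parameters.
mask = np.array([[0, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]])              # a simple feed-forward chain
effective_weights = fabric_weights * mask    # only the chain synapses remain
thresholds = np.array([1.0, 0.8, 0.8, 1.2])  # programmable neuron thresholds
print(effective_weights)
```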

Unlike traditional artificial neural networks, in which information is received at the input and then synchronously passed between layers, in SNNs information is propagated asynchronously throughout the network and arrives at different times, even if input information is received at the same time and the SNN is organized into layers, because the delays on each synapse and neuron may differ. This is beneficial for realizing SNNs on neuromorphic hardware, which can be designed to operate in an event-driven or asynchronous manner that fits well with the temporal dynamics of spiking neurons and synapses. An example SNN and how it operates in the temporal domain is shown in the figure. In this example, synapses are depicted with a time delay, and information is communicated by spikes passed throughout the network. The network's operation at time t (left) and time t+1 (right) is depicted, to show how the network's state changes with time.
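To illustrate this asynchronous behaviour, the sketch below runs a small discrete-event simulation in which every spike is queued for delivery after its synaptic delay; the network structure, weights, delays, and thresholds are all illustrative.

```python
import heapq

# Network as structure + parameters: synapses are (pre, post, weight, delay).
synapses = [(0, 2, 0.6, 1), (1, 2, 0.5, 3)]   # two inputs converging on neuron 2
thresholds = {0: 0.5, 1: 0.5, 2: 1.0}
charge = {n: 0.0 for n in thresholds}

# Event queue of (delivery_time, post_neuron, weight); input spikes at t=0.
events = [(0, 0, 1.0), (0, 1, 1.0)]
heapq.heapify(events)

while events:
    t, neuron, w = heapq.heappop(events)
    charge[neuron] += w
    if charge[neuron] >= thresholds[neuron]:  # threshold reached: fire
        charge[neuron] = 0.0                  # reset after firing
        print(f"neuron {neuron} fires at t={t}")
        for pre, post, weight, delay in synapses:
            if pre == neuron:                 # deliver after the synaptic delay
                heapq.heappush(events, (t + delay, post, weight))
```

Because neuron 1's synapse carries a longer delay than neuron 0's, its spike reaches neuron 2 later, so neuron 2 fires only at t=3, once both contributions have arrived.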

Neuromorphic Concepts

The following concepts play an important role in the operation of a system that imitates the brain. It should be noted that the definitions listed below are sometimes used in slightly different ways by different investigators.

Spiking. Signals are communicated between neurons through voltage or current spikes. This communication differs both from that used in current digital systems, in which the signals are binary, and from analogue implementations, which rely on the manipulation of continuous signals. Spiking signals are time-encoded and transmitted via “action potentials”.

Plasticity. A conventional device has a unique response to a particular stimulus or input. In contrast, the typical neuromorphic architecture relies on changing the properties of an element or device depending on its past history. Plasticity is a key property that allows complex neuromorphic circuits to be modified (“learn”) as they are exposed to different signals.

Fan-in/fan-out. In conventional computational circuits, the different elements are generally interconnected by a few connections between the individual devices. In the brain, however, the number of dendrites per neuron is several orders of magnitude larger (e.g., 10,000). Further research is needed to determine how essential this is to the fundamental computing model of neuromorphic systems.

Hebbian learning/dynamical resistance change. Long-term changes in the synapse resistance occur after repeated spiking by the presynaptic neuron. This is also sometimes referred to as spike-timing-dependent plasticity (STDP). An alternative characterization of Hebbian learning is “devices that fire together, wire together”.
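A common formalization is pair-based STDP, in which the weight change decays exponentially with the time difference between pre- and post-synaptic spikes; the constants in the sketch below are illustrative.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight update.
    If the presynaptic spike precedes the postsynaptic one (t_pre < t_post),
    the synapse is potentiated; otherwise it is depressed."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)    # potentiation
    return -a_minus * math.exp(dt / tau)       # depression

print(stdp_dw(t_pre=10, t_post=15))   # pre before post: weight increases
print(stdp_dw(t_pre=15, t_post=10))   # post before pre: weight decreases
```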

Adaptability. Biological brains generally start with multiple connections out of which, through a
selection or learning process, some are chosen and others abandoned. This process may be
important for improving the fault tolerance of individual devices as well as for selecting the most
efficient computational path. In contrast, in conventional computing the system architecture is rigid
and fixed from the beginning.

Criticality. The brain typically must operate close to a critical point at which the system is plastic
enough that it can be switched from one state to another, neither extremely stable nor very volatile.
At the same time, it may be important for the system to be able to explore many closely lying states.
In terms of materials science, for example, the system may be close to some critical state such as a
phase transition.

Accelerators. The ultimate construction of a neuromorphic-based thinking machine requires intermediate steps, working toward small-scale applications based on neuromorphic ideas. Some of these types of applications require combining sensors with some limited computation.
