
Real time Visual Data Processing using Neuromorphic Systems

Neel Ghoshal¹ and B.K. Tripathy²
¹School of Computer Science and Engineering, VIT, Vellore, ²School of Information Technology and Engineering, VIT, Vellore, TN, India
E-mail: [email protected], [email protected]

Abstract:

From visual sensor systems to video object detection, real-time image processing systems have been prevalent and in demand for quite a while. A lot of work has been carried out on different image processing use cases, such as image segmentation and object detection, but implementing these computationally complex algorithms in real-time applications poses significant difficulties and limits the efficiency of the systems concerned. Neuromorphic architectures and their applications provide a new solution to this need, using their parallel and scalable design to pave the way for faster, more efficient and more cohesive solutions to real-time image processing applications. In this chapter we discuss and explain emerging systems and methodologies that apply Neuromorphic architectures to this use case. A pivotal focus of real-time image processing systems is Neuromorphic vision chips and their variations. Part of this deliberation is the current use of concepts such as photonic synapses, which provide the temporary memory storage useful in real-time processing by mimicking the synapses between neurons in our brains. We further elaborate on the use of Neuromorphic VLSI chips and their implementations using novel methods, including 2-D selective attention systems. The discussion also covers Neuromorphic processor architectures and the subtle and elaborate ways in which they solve the problems of efficiency and quality in real-time image processing. Some of the topics covered are hand posture detection, robots and video content retrieval. An analytical approach covers the framing of these systems using Neuromorphic computing capabilities.

Keywords: Visual sensor, Object detection, Neuromorphic architecture, VLSI, Use case,
Content retrieval

1. Introduction

Neuromorphic computing, in its essence, can be thought of as a methodology that captures the nuances of the intricate and detailed structures of the human brain and applies this information to create corresponding hardware and mechanisms in pursuit of parallel, scalable, energy-efficient and fault-tolerant systems. The designs of these systems follow the workings of biological synapses and neurons. A Neuromorphic chip corresponds to the brain’s biological structure by mimicking its components with artificial hardware components, as depicted in figure 1: the neurons generally hold and process data, whereas the synapses allow for the electronic transfer of signal data. These systems follow the idea of “physical modeling” [1].

Given the significant advances being made in this domain, and the need for low-power and sustainable AI, Neuromorphic computing and engineering can prove to be a pivotal stepping stone for artificial intelligence developments [2].
Figure 1: Basic structure of a Neuromorphic chip

The term ‘Visual Data Processing’ can be defined as a threefold methodology: i) mining and obtaining image- or video-based data through sensors, electronic devices, databases etc.; ii) performing analytical and inferential techniques on this data to extract information from it, such as object detection or image classification; after which iii) the inferences are used in real-world applications and technologies. ‘Real-time Visual Data Processing’ refers to carrying out these three steps simultaneously.
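The three stages above can be sketched as a simple composition; the stage functions here are hypothetical placeholders for illustration, not APIs described in this chapter.

```python
def visual_pipeline(acquire, analyze, act):
    """Compose the three stages: data acquisition -> analysis -> application."""
    def run_once():
        data = acquire()           # i) obtain image/video data from a sensor or database
        inference = analyze(data)  # ii) extract information (e.g. detected objects)
        return act(inference)      # iii) use the inference in the application
    return run_once
```

In a real-time system, `run_once` would be invoked continuously as new sensor data arrives, so all three steps effectively execute at once across successive frames.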

2. Background

In recent times, almost every computer-vision-based application or technology is equipped with technical structures to collect data from its environment and surroundings, either for direct use in the embedded application or for a secondary use case in an alternate aspect of the technology. The former falls into the real-time category, as the data collected directly affects the workings of the corresponding embedded system. For a variety of reasons, analysing this data for valuable insights has grown increasingly difficult in recent times; simultaneously, for several applications, the vast amount of available data and the very limited methods available to explore it for inferences mean that much of the extracted data is effectively rendered obsolete [3].

Furthermore, the dynamic processes entailed in these types of systems generally output a huge amount of real-time data, as in weather-based applications, website databases, time-based logging applications and many more. Hence there is a significant need for inferential and managerial applications, as the vast amount of data is incomprehensible in its raw form and, at the same time, cannot be retained indefinitely [4].

The need for a real-time data stream processing system arises because such a system is fast and processes the data in minimal time with low latency [5]. Real-time data analysis systems simultaneously need to be accurate, fault tolerant, precise and energy efficient in order to deal with the problems posed by current technology. Because of this requirement, many researchers and developers have delved into the creation of alternate solutions.

Analysis models based on real-time technology have to be highly efficient and specific to the task assigned to them. Due to the vast amounts of data, the limited availability of highly parallel systems and the uniqueness of the issues being tackled, processing models and the corresponding system architectures need to be aligned to the specific use case and work in coherence with each other. The current technological culture, with its vast amounts of real-time data, calls for enhancements and improvements in processing this critical information [6].

Neuromorphic computing systems provide a unique and elegant solution to these challenges: their inherently asynchronous, event-driven and parallel nature allows the above-mentioned challenges to be addressed in a comprehensive manner.

3. Neuromorphic Vision Sensors

In general, a digital vision sensor is one that possesses an input device, referred to as a camera, which enables the sensor to determine characteristics such as the presence, orientation and accuracy of a target. These sensors contain the camera and controller in a single unit. A vision sensor generally follows one of two models:

a. Monochrome Model: Light passing through the lens of the camera is converted into an electrical signal, typically by a light-receiving element.
b. Colour Model: Here, the light-receiving element has inbuilt characteristics to accommodate coloured images. The intensity of each pixel is categorized via its RGB values, allowing targets to be distinguished even with minimal intensity differences.
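As a toy illustration of how RGB values allow targets to be distinguished even with small intensity differences, the sketch below assigns a pixel to its nearest reference colour. The reference colours and the Euclidean distance metric are illustrative assumptions, not part of any particular sensor's specification.

```python
import numpy as np

def classify_pixel(rgb, references):
    """Assign a pixel to the nearest reference colour by Euclidean distance in RGB space."""
    names = list(references)
    dists = [np.linalg.norm(np.subtract(rgb, references[n], dtype=float)) for n in names]
    return names[int(np.argmin(dists))]
```

Even two targets separated by only one intensity level per channel are separated correctly, because the nearest-neighbour rule needs only a relative, not absolute, difference.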

Digital vision sensors have typically been developed using conventional Complementary Metal-Oxide-Semiconductor (CMOS) imagers or Charge-Coupled Device (CCD) cameras, which are generally coupled to computing substructures that allow vision-based programs and software to run on either serial or parallel architectures [7]. Unfortunately, due to high power consumption, cost, size and other issues, the performance and efficiency of these devices fall short of the necessary metrics. A visual depiction of a digital vision sensor is given in figure 2.

Figure 2: General model of a vision sensor

On the contrary, the methodology behind a Neuromorphic sensor is based on parallel, frame- or event-driven mechanisms and asynchronous activity, mimicking the functionality of analogous biological systems.

Developments in Neuromorphic sensing are varied and have recently seen strides in multiple practical avenues, including a variety of sensors based on biological principles and focused on efficiency and implementation [8]. A depiction of a generalized Neuromorphic vision sensor along with its internal components is given in figure 3.
Figure 3: General model of a Neuromorphic vision sensor

A. DVS: An abbreviation for Dynamic Vision Sensor; a DVS-based vision sensor models a three-layer retina, implementing a simplified photoreceptor-bipolar-ganglion pathway [9].

The general working of this system is that each pixel in the subsystem is sensitive to stimuli in the current scene and can respond to dynamic, transient optical variations with independent spike activations. A DVS-based Neuromorphic chip takes a direct flow of optical information from the image, converts it into spike data and compresses it, after which the processed information is either transmitted or stored, as described in figure 4.

Figure 4: DVS vision sensor
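The per-pixel, change-driven behaviour described above can be sketched in software: an event is emitted only where the log-intensity change since the last event at that pixel exceeds a contrast threshold. This is a frame-based approximation for illustration (real DVS pixels operate asynchronously in continuous time), and the threshold value is an assumption.

```python
import numpy as np

def dvs_events(frames, threshold=0.2):
    """Emit (t, row, col, polarity) events where per-pixel log-intensity change
    since the pixel's last event exceeds the contrast threshold."""
    ref = np.log1p(frames[0].astype(float))   # per-pixel reference log-intensity
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        cur = np.log1p(frame.astype(float))
        diff = cur - ref
        for r, c in zip(*np.where(np.abs(diff) >= threshold)):
            events.append((t, int(r), int(c), 1 if diff[r, c] > 0 else -1))
            ref[r, c] = cur[r, c]             # update the reference only where an event fired
    return events
```

Static regions of the scene produce no events at all, which is the source of the sensor's data compression and low power draw.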

B. ATIS: This sensor works on event-driven technology: it has the intrinsic ability to detect dynamic event changes and to obtain their intensity values. For this reason, two types of AER (Asynchronous Address-Event Representation) events are output independently [8]. In essence, the methodology combines location-based tracking of where an object [10] [11] may be with detection methodologies to infer what the object is [12]. The sensor takes the pixel data and categorizes it into spike events, as depicted in figure 5.
Figure 5: ATIS vision sensor

C. DAVIS: This sensor mechanism can be thought of as an update of, or collaboration with, a DVS system. Essentially, it combines a conventional Active Pixel Sensor (APS) with a DVS-based circuit, capturing both static and dynamic information in a single pixel [13]. Its components and substructures are depicted in figure 6.

Figure 6: DAVIS vision sensor

Hence, systems with embedded Neuromorphic sensor technologies enable a vast array of use cases and specific advantages, combining to achieve performance gains in efficiency, economic feasibility and power consumption.

4. Neuromorphic Vision Chips

Before diving into the understanding and implications of Neuromorphic vision chips, we shall first explain the basic functionality and characteristics of a general vision chip.

A vision chip is generally considered to be a device with embedded circuitry that possesses both image sensing and processing mechanisms. The output of the device is specific to the application the chip is built for and carries information about inferences obtained from the observed scene. The information is generally taken in through a dedicated sensing mechanism, converted by sensor arrays, passed through a parallel processor architecture and finally output in the desired form, as shown in figure 7.

Figure 7: General model of a vision chip

A Neuromorphic vision chip possesses sensors that mimic, and are built in inspiration of, the cerebral substructures of the sensory organs. It is based on analog VLSI (aVLSI) mechanisms that generate spiking outputs, representing the physical and informational data as analogous components of neuro-biological circuits [14]. Variations among Neuromorphic vision chips arise from their implementation methodology, which ranges from frame-driven (FD) to event-driven (ED). An FD Neuromorphic reconfigurable vision chip comprises a high-speed image sensor, a processing-element array and a self-organizing-map neural network to encapsulate the inferences obtained from the data; ED chips, on the other hand, are generally based on AER sensing mechanisms with embedded parallel processing [15]. The general model and structure of a Neuromorphic vision chip is depicted in figure 8.

Figure 8: General model of a Neuromorphic vision chip

The subdivisions of Neuromorphic vision chips can be classified as follows:

4.1 Frame Driven Vision Chips: This vision chip contains a 2-D array of processing elements, each comprising a photodiode-based pixel and embedded processing circuitry, which together function in a SIMD fashion [16]. Recently developed versions possess the ability to reconfigure their hardware dynamically by tethering embedded mechanisms while performing visual image processing algorithms [17]. Advantages of these chips lie in areas such as image detection, processing and physical efficiency [18]. The information flow of a general frame-driven vision chip is depicted below in figure 9.

Figure 9: Frame Driven vision chip

4.2 Event Driven Vision Chips: These chips are based on AER image sensors and possess distributed memories. Images are input directly as pulses or spikes, with the analog value of each pixel encoded as a spike frequency. These chips also perform more efficiently in image processing and conveying tasks [18]. The information flow of an event-driven vision chip is depicted below in figure 10.
Figure 10: Event Driven vision chip
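Encoding a pixel's analog value as a spike frequency, as these chips do, can be illustrated with simple Bernoulli rate coding: each timestep, a pixel spikes with probability proportional to its normalized intensity. The timestep count and maximum rate below are illustrative assumptions.

```python
import numpy as np

def rate_encode(image, t_steps=100, max_rate=0.5, rng=None):
    """Encode normalized pixel intensities (0..1) as Bernoulli spike trains.

    Returns a boolean array of shape (t_steps, *image.shape):
    brighter pixels spike more often, i.e. intensity -> spike frequency."""
    rng = rng or np.random.default_rng(0)
    p = np.clip(image, 0.0, 1.0) * max_rate       # per-timestep spike probability
    return rng.random((t_steps,) + image.shape) < p
```

Counting spikes over the window recovers an estimate of the original intensity, which is how downstream spiking circuitry reads the value back out.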

Some examples of research developments, past and present, in the field of Neuromorphic vision chips include the work of Bartolozzi et al., who developed a robot named iCub, a versatile medium for developing biological and neurological concepts for vision in robotics [19]. Other works involve a chip for controlling vision using event-driven concepts and streams of information [20]. Research has also extended to applications of sensors in chips for planetary landing tasks, specifically focusing on low- and high-level test runs of a Neuromorphic VLSI-based sensor for visual data inferencing [21].

Overall, Neuromorphic chips have the intrinsic capability to enhance applications in multiple avenues of practical implementation, proving to be a valuable asset, specifically for vision-centric tasks.

5. Neuromemristive Systems for Visual Data

Neuromemristive systems can be thought of as a sub-category of Neuromorphic computing systems that primarily use memristors to implement plasticity. The term “plasticity” in the Neuromorphic context refers to the ability of a Neuromorphic system to dynamically grow and re-organize its internal structures in response to external stimuli or pre-programmed inputs.

Memristors allow for the implementation of architectural functionalities such as neuron-inspired logic functions [22] and also find applications that could replace traditional logic gates [23].

Generally, memristive circuits follow the Caravelli-Traversa-Di Ventra equation for the internal memory of the circuit, given by (1):

d/dt X = −aX + (1/b) (I − χρX)⁻¹ ρS      (1)

In the above equation, a, the “forgetting” time-scale constant, should ideally be 0; χ is the ratio of the off and on values of the limit resistances of the memristors; S is the vector of the sources of the circuit; ρ is a projector onto the fundamental loops of the circuit; b has the dimension of a voltage and is associated with the properties of the memristors; and X is the internal state of the memristors, with values lying between 0 and 1 [24].
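A minimal numerical sketch of equation (1) using forward-Euler integration. The projector, source vector and constants used below are illustrative assumptions chosen so the matrix inverse exists; the clipping reflects the constraint that the internal states lie between 0 and 1.

```python
import numpy as np

def simulate_memristive_network(P, S, a=0.1, b=1.0, chi=0.5, dt=1e-2, steps=1000):
    """Euler-integrate dX/dt = -a*X + (1/b) * (I - chi*P*diag(X))^(-1) * P*S.

    P : projector onto the circuit's fundamental loops (rho in the text)
    S : vector of circuit sources; chi : off/on limit-resistance ratio."""
    n = len(S)
    X = np.full(n, 0.5)                    # internal memristor states, in [0, 1]
    I = np.eye(n)
    for _ in range(steps):
        dX = -a * X + (1.0 / b) * np.linalg.solve(I - chi * P @ np.diag(X), P @ S)
        X = np.clip(X + dt * dX, 0.0, 1.0)  # states are bounded by the device physics
    return X
```

With χ < 1 and X bounded in [0, 1], the matrix I − χρX remains invertible, so the solve step is always well defined.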

The philosophy and methodology followed while developing Neuromemristive systems focus primarily on the need to implement neuroplasticity, mimicking the functionality and structure of an analogous biological model. The characteristics sought after are reconfigurability, noise tolerance, reliability and resilience. The memory required to implement these is a natural characteristic of neuromemristive systems.

Hence, methodologies that develop these systems to improve upon conventional STDP learning-rule mechanisms create a pathway toward systems that incorporate the functionality of both Neuromorphic and neuromemristive approaches [25].

STDP is the ability of natural or artificial synapses to change their strength according to the precise timing of individual pre- and/or post-synaptic spikes [26]. The dynamic change of the synaptic weights in this model can be expressed by equation (2) [27].

ΔW(Δt) = A₊ e^(−Δt/τ₊),  if Δt > 0;
ΔW(Δt) = −A₋ e^(Δt/τ₋),  if Δt < 0.      (2)

In the above equation, ΔW(Δt) is the change in synaptic weight as a function of the time difference Δt between the post- and pre-synaptic firing times. A₊ and A₋ are positive constants that scale the magnitude of potentiation and depression respectively, whereas τ₊ and τ₋ are positive time constants that define the breadth of the learning window.
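Equation (2) translates directly into a small weight-update function. The parameter values below are illustrative assumptions (a slight A₋ > A₊ asymmetry is a common convention, not something specified here); the Δt = 0 case is a convention, since equation (2) leaves it undefined.

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair with dt = t_post - t_pre (ms), per eq. (2)."""
    if dt > 0:   # pre fires before post: potentiation, decaying with the gap
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:   # post fires before pre: depression, decaying with the gap
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0   # simultaneous spikes: no change (a convention, not from eq. (2))
```

Note how the time constants set the learning window's breadth: pairs separated by much more than τ produce a negligible weight change.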

Earlier memristive systems were generally implemented either with functionality inspired by CMOS circuits or in processor-based software, which limited improvements in scalability and in other efficiency- and energy-related factors [28]. Hence the need for dynamic, adaptive and efficient systems arose, paving the way for integration into Neuromorphic systems.

Wang et al., in 2018, published work enabling unsupervised pattern classification achieved via fully memristive neural networks. Using silver nanoparticles, they created diffusive memristors acting as artificial neurons [29], and successfully developed these techniques to perform synaptic weight updates and pattern classification.

Similar to the above-mentioned work, pattern classification was also implemented using memristive crossbar circuits by Alibart et al., who through their work were able to calculate the synaptic connection equations [30].

Legenstein et al. published work comprising plasticity functionality based on reward-modulated spike-timing learning theories. They implemented STDP rules that enable neurons to distinguish complex firing patterns of presynaptic neurons, even for data-based standard forms of STDP, without the need for a supervisor that tells the neuron when to spike [31]. The work focused on the ability of the STDP mechanism to act spontaneously and on exploring recurrent networks and their firing chains.

These developments elucidate many features and advantages usable in a variety of real-world applications, but a few challenges have emerged in the development of these systems in recent times, such as the issues faced by redox memristors, which make large-scale systems impractical because a large number of devices is required to emulate a single synapse [32]. Issues also arise in production procedures, as the development of these systems tends to increase circuit footprints, resulting in sub-ideal density [33]. Problems also arise from uncontrolled film thinning, increased variability and electrode shorting at the production stage [34].

7. Visual Data Analysis under Neuromorphic computing

Processing of the data is generally done by analytical models such as a Spiking Neural Network. Before diving into the specific application-based explanations of the most used analytical models, let us understand a few basic terminologies and methodologies.

7.1 Spiking Neural Networks (SNN)

An SNN can be considered a variation of an artificial neural network inspired by the activity and methodology of biological neurons. Unlike traditional artificial neural networks [35], where neuronal activity is represented via continuous signals or equation-based systems, Spiking Neural Networks use discrete spikes or pulses to represent it. In an SNN, the neurons receive input signals from other neurons or external sources and integrate them, often in a recurrent manner. If the integrated signal exceeds a pre-calculated threshold, the neuron emits a spike or pulse, which is then transmitted to other neurons. Further inferences about the input signals and network activity can be obtained by observing the rate and timing of the spikes. One of the most common models for approximating the functionality of a neuron is the Spike Response Model (SRM), owing to its high similarity to a biological neuron [36]. SNNs generally follow a methodology known as WTA, i.e. Winner-Take-All, which algorithmically means that neurons compete for activation and only the one with the highest activation remains active, while the others are not taken into account. The structure and mechanisms of a Spiking Neural Network architecture are depicted briefly in figure 11.

Figure 11: Spiking Neural Network Architecture
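The integrate-threshold-fire cycle described above can be sketched with a minimal leaky integrate-and-fire neuron. The decay factor, threshold and reset value are illustrative assumptions; the SRM mentioned in the text models the post-spike response in more biological detail.

```python
def lif_step(v, i_in, v_thresh=1.0, v_reset=0.0, decay=0.9):
    """One timestep of a leaky integrate-and-fire neuron: returns (new_potential, spiked)."""
    v = decay * v + i_in       # leak the membrane potential, then integrate the input
    if v >= v_thresh:          # threshold crossing emits a spike ...
        return v_reset, True   # ... and the membrane potential resets
    return v, False

def run_lif(inputs):
    """Drive one neuron with a sequence of input currents; return its output spike train."""
    v, train = 0.0, []
    for i_in in inputs:
        v, spiked = lif_step(v, i_in)
        train.append(spiked)
    return train
```

A sustained strong input produces a regular spike train, while a weak input leaks away before reaching threshold and produces no spikes at all, illustrating how the spike rate encodes input strength.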

7.2 Applications
Various applications and techniques have been developed in recent times for the analytical procedures required to infer information from input data. The requirements of the analysis are generally categorized according to the specific need of the Neuromorphic system, from which specifications are developed and frameworks implemented.

7.2.1 Visual Pattern Recognition and Classification

For the specific recognition of real-world visual trends and patterns, Neuromorphic techniques have worked very well in unison with current methodologies, while simultaneously providing newer and broader avenues for further enhancement of these technologies. Work in this domain has used many different algorithms and quantification techniques to encapsulate the necessary information and produce the required output. For example, the work done by Maashri et al. [37] uses the HMAX algorithm, which consists of inputting an image into the system, passing it through a Center-Surround layer, after which the image is scaled and then passed through several convolution and pooling filters. Convolving the image means, briefly, that the image is passed through filters that enhance and/or suppress certain visual characteristics of the image, according to a chosen algorithm; pooling is the procedure of aggregating pixels and keeping the highest values therein.
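Briefly, the convolution and max-pooling operations described above amount to the following generic sketch (these are not the actual HMAX filter banks, just the two primitive operations):

```python
import numpy as np

def convolve2d(img, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image, summing products."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, k=2):
    """Aggregate each k-by-k block, keeping only its highest value."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))
```

The kernel determines which visual characteristic is enhanced or suppressed (edges, blobs, orientations), while pooling discards position detail, which is what gives HMAX-style pipelines their tolerance to small shifts and scalings.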

The HMAX algorithm is capable of high-end functionalities like object recognition, action recognition and face processing; it is described visually in figure 12.

Figure 12: HMAX Algorithm processes

The work of Wu et al. [38] demonstrates the use of the STDP and WTA (Winner Take All) algorithms in the development of SNN substructures, with which they were able to create a pattern recognition application for OCR (Optical Character Recognition). The general structure followed by these systems is described in figure 13.

Figure 13: SNN with respect to OCR
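The WTA competition step used in such SNN substructures can be sketched as follows; this is a plain illustration of the rule, not Wu et al.'s implementation.

```python
import numpy as np

def winner_take_all(potentials):
    """Only the neuron with the highest activation fires; all others are suppressed."""
    spikes = np.zeros_like(potentials, dtype=float)
    spikes[int(np.argmax(potentials))] = 1.0
    return spikes
```

In an OCR setting, each output neuron comes to represent one character class via STDP, and the WTA step ensures exactly one class wins per presented input.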

Other works, like that of Baumgartner et al., show the use of on-chip learning for pattern recognition using DVS sensing and local plasticity rules [39]. Work has also been done on pattern recognition and image processing using Neuromorphic vision chips fabricated with three-dimensional integration technology [40]. RRAM synapses and threshold-controlled neurons have also been experimented with by Jiang et al., who created a system that takes input images from its surroundings and performs classification of the data [41]. Similar work on RRAMs, with a variation using fast weight transfer for real-time online learning, has also been developed [42]. Systems concerning memristor emulator arrays and neuron circuits [43], hybrid memristors and crossbar array structures [44], and Neuromorphic tactile sensing [45] have also been developed for pattern recognition tasks. For image classification tasks, variations of SNNs are used in unison with parameter tweaks and updated models. For example, recent work in image classification by Iakymchuk et al. uses methodologies involving SRMs, PSPs, a simplified spiking neural model and STDP-based memristive plasticity learning to enable successful learning. Improvements have also been made in feedforward networks for SNN-based chips via newer learning algorithms, neuron models, proton-induced single events and content-based retrieval systems [46] [47], and residual SNNs have also been developed [48][49][50][51].

7.2.2 Navigation

Research in the field of navigation control, whether for ground-based vehicles, Unmanned Aerial Vehicles or robot navigation, has been prevalent for a long time. Past and current techniques have widely focused on machine learning algorithms and frameworks that enable these systems to navigate autonomously in complex and dynamic environments. The general model and processes followed by Neuromorphic autonomous navigation systems are described visually in figure 14.

Figure 14: General Model of a Neuromorphic autonomous navigation system

Analogous research has also been conducted on developing these systems in collaboration with Neuromorphic computing techniques. For example, Viale et al. [52] have developed an algorithm they call CarSNN, an event-based autonomous vehicle navigation system; the system receives input from a dual-polarity channel, and its learning rule is STBP [53], i.e. Spatio-Temporal Backpropagation, the methodology behind the Spiking Neural Network used, combining the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD) [54]. Similarly, there have also been advancements using comparable event-driven vision sensors to provide three-times-better speed and power consumption metrics for the control of high-speed unmanned drones [55].

Research concerning the development of UAVs is also being conducted prevalently. For instance, NeoN was developed in 2017, using the novel DANNA (Dynamic Adaptive Neural Network Array) to enable a robot navigation system [56]. Research has also gone into efficient systems capable of conducting the landing mechanisms of an aerial vehicle, i.e. a Loihi Neuromorphic chip embedded in a flying robot [57]. Bhandari et al. have also developed methods for mobile target tracking, utilizing target tracking and action selection via Neuromorphic computing [58].

Works on robot navigation systems have also been carried out by many in collaboration with Neuromorphic computing techniques. Mixed-signal analog/digital Neuromorphic processing has been used to implement the interfacing of ROLLS, a Neuromorphic processor, with a DVS-based vision sensor, with performance results in avoiding obstacles and acquiring targets [59]. Indoor navigation for robots has also been developed by Fuentes et al., who created a ROS (Robot Operating System) based platform that uses the classification output of an AI-edge CNN [60] [61] accelerator for a DVS Neuromorphic system [62]. Furthermore, Tang et al. have created an energy-efficient system capable of deep reinforcement learning, enabling map-less navigation [63].

7.2.3 Motion Detection

Motion detection can be defined as the process of detecting a change in the position or movement of an object in a specified location. Current research into motion detection on Neuromorphic systems is numerous and manifold. A general model typically used while developing Neuromorphic motion detection systems is illustrated in figure 15.

Figure 15: General Model of a Neuromorphic motion detection system
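The detection step at the core of such systems can be sketched with simple frame differencing; the threshold value is an illustrative assumption, and a Neuromorphic implementation would perform this comparison per-pixel and asynchronously rather than on whole frames.

```python
import numpy as np

def detect_motion(prev_frame, cur_frame, threshold=10):
    """Return a boolean mask marking pixels whose intensity changed by more than threshold."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    return diff > threshold
```

The resulting mask can then feed later stages of the model, such as localizing the moving object or estimating its direction of travel.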

For instance, Tsur et al. have developed a Neuromorphic system for motion and direction detection [64]. Work has also been done on motion detection and orientation selectivity using volatile resistive switching memories [65]. A multi-differential approach has also been applied to this use case by creating a model based on the motion-pathway algorithms of biological visual systems [66]. Milde et al. have developed an asynchronous circuit for detecting elementary motion from data obtained via event-based vision sensors [67].

Related, advanced and alternate works have also been conducted in the various subdomains of the analytical component of Neuromorphic systems, a few of which were discussed above. This analytical component is one of the most crucial parts of a Neuromorphic system, essential to efficiently and cohesively processing the data obtained by the system and producing valuable, inferential outputs.

8. Implementations of Neuromorphic visual systems

The implementation of Neuromorphic computing techniques, SNNs and related methodologies requires efficient and cohesive hardware and software substructures able to handle the processing, memory and system needs of various applications. In this section, we shall first cover the methodologies, features, characteristics and mechanisms of Neuromorphic hardware and software.

8.1 Hardware Systems

In this section, a few hardware implementation systems are briefly explained; their features,
integrability and mechanisms are commented upon.

8.1.1 IBM TrueNorth

TrueNorth is a Neuromorphic CMOS integrated circuit produced by IBM in 2014 [68].
It has 4,096 cores, each of which contains 256 programmable simulated neurons. Each of
these neurons in turn possesses 256 programmable synapses, which propagate signals
between neurons. TrueNorth successfully overcomes the limitations of Von Neumann
architectures. It is highly efficient, with a power consumption of 70 milliwatts and a
power density roughly 10,000 times better than that of conventional microprocessors [69].
Neuron emulation in this system uses a Linear-Leak Integrate-and-Fire (LLIF) model [70].
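The LLIF dynamics can be sketched in a few lines: each tick the neuron integrates weighted input, subtracts a constant (linear) leak, and fires and resets when its potential crosses a threshold. The parameter values below are arbitrary illustrations, not TrueNorth's actual configuration.

```python
# Minimal sketch of a linear-leak integrate-and-fire (LLIF) neuron of the
# kind reported for TrueNorth's neuron emulation [70]. All parameters here
# are illustrative, not TrueNorth's real settings.

def llif_run(inputs, weight=3, leak=1, threshold=10):
    """inputs: sequence of 0/1 input spikes, one per tick.
    Returns the output spike train as a list of 0/1 values."""
    v = 0
    spikes = []
    for s in inputs:
        v += weight * s        # integrate weighted synaptic input
        v = max(0, v - leak)   # constant (linear) leak, floored at rest
        if v >= threshold:     # threshold crossing -> emit a spike
            spikes.append(1)
            v = 0              # reset membrane potential
        else:
            spikes.append(0)
    return spikes

train = llif_run([1] * 12)
print(train)  # with these parameters the neuron fires every 5th tick
```

The linear (constant) leak is what distinguishes LLIF from the exponential-leak models more common in software simulators; it maps naturally onto simple digital add/subtract hardware.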

8.1.2 Loihi

Developed by Intel, the Loihi chip is a self-learning Neuromorphic chip. It is estimated
to be roughly 1,000 times more energy efficient than the general-purpose contemporary
hardware that would be required to match its performance. The chip can host popular
machine learning and deep learning frameworks for application-specific usage. It was
fabricated using Intel's 14 nm process and hosts 128 clusters of 1,024 artificial neurons
each, for a total of 131,072 simulated neurons [71]. The chip possesses 130 million
synapses.

8.1.3 SpiNNaker

SpiNNaker can be considered a massively parallel supercomputer architecture. It hosts
about 57,600 processing nodes, each consisting of 18 ARM9 processors (specifically
ARM968) and 128 MB of mobile DDR SDRAM, totaling 1,036,800 cores and over 7 TB of
RAM [72]. Each core is capable of emulating roughly 1,000 neurons cohesively within its
internal architecture [73]. The full machine draws about 100 kW from a 240 V supply and
requires an air-conditioned environment [74].
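The headline capacity figures quoted above can be cross-checked with simple arithmetic; the short sketch below does so, taking the per-node and per-core numbers from [72, 73] as given.

```python
# Cross-checking SpiNNaker's aggregate capacity from its per-node figures.
nodes = 57_600             # processing nodes [72]
cores_per_node = 18        # ARM968 cores per node [72]
neurons_per_core = 1_000   # approximate emulation capacity per core [73]

total_cores = nodes * cores_per_node
total_ram_tb = nodes * 128 / 1024 / 1024  # 128 MB per node, converted to TB

print(total_cores)                     # total ARM cores in the full machine
print(round(total_ram_tb, 2))          # aggregate RAM, just over 7 TB
print(total_cores * neurons_per_core)  # on the order of a billion neurons
```

The last line shows why the full machine is often described as approaching brain-scale simulation capacity: a billion neurons is roughly one percent of the human brain.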

8.2 Software Systems

In this section, a few software systems and frameworks are briefly explained, their features,
components and characteristics are discussed.
8.2.1 NEST

NEST is a software system that is, in essence, a simulator for SNN models, focusing on the
dynamics, size and structure of these systems. It can simulate networks of up to millions
of neurons and billions of synaptic connections. It also provides different variants of
STDP synapse models, while offering an efficient and user-friendly interface for
implementing network structures, which can be either algorithmic or information-based [75].
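The pair-based STDP rule underlying the synapse variants NEST provides can be written down compactly. The sketch below is a plain-Python illustration of the update equation, not NEST's implementation, and all parameter values are arbitrary.

```python
import math

# Pair-based STDP weight update: a pre-synaptic spike followed closely by a
# post-synaptic spike strengthens the synapse (potentiation); the reverse
# ordering weakens it (depression). Amplitudes and time constant are
# illustrative, not NEST defaults.

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # causal pair -> potentiate
    else:
        return -a_minus * math.exp(dt / tau)   # anti-causal pair -> depress

print(stdp_dw(10.0) > 0)   # pre before post: positive weight change
print(stdp_dw(-10.0) < 0)  # post before pre: negative weight change
```

The exponential window means tightly correlated spike pairs dominate learning, which is the property that lets STDP extract temporal structure from event-based sensor data.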

8.2.2 BrainScaleS

BrainScaleS is a Neuromorphic computing system consisting of custom-designed electronic
chips that mimic the behavior of neurons and the synapses between them. It is designed to
perform real-time simulations of large-scale neural networks with up to a billion neurons,
operating across spatial scales that range from individual neurons to large-scale
networks. The system works analogously to functional areas of the brain and also provides
functionality for plasticity [76].

8.2.3 PyNN

PyNN is a simulator-independent language for building spiking neural network models. It
supports a range of simulators, including NEST, Brian and NEURON, and enables users to
write a model once and run it on multiple heterogeneous simulators. The main aim of the
project is to provide a high level of abstraction while still allowing access to the
details of individual neurons and their connections [77].

9. Conclusions

Overall, in this chapter we have started with the basics behind the workings of a
Neuromorphic system. We have discussed the differences between Von Neumann architectures
and Neuromorphic computing, along with their advantages, and then surveyed current
applications and research in the domain of Neuromorphic computing together with some of
the challenges and solutions encountered in this space. We have provided insights into the
mechanisms and inner workings of Neuromorphic vision sensors, their types, features and
methodologies, followed by the principles behind Neuromorphic vision chips, their
variants, how they combine with vision sensors, and the processing methods they follow.
We have then covered the concepts behind Neuromemristive systems and how they give
Neuromorphic systems intrinsic plasticity. After this, we have elaborated on how input
data is analysed within these systems, and how the analytical components and processes
differ across specific visual use cases. Finally, we have discussed how Neuromorphic
systems are implemented and the frameworks available for doing so, from both a hardware
and a software perspective.

10. Future Scope

In the preceding sections of this chapter, many solutions were presented for a wide
variety of real-world applications. These solutions, in turn, open many avenues for future
work in this domain. Further work can be done on Edge Computing devices for small,
application-based usages, and systems can be built for organizational needs and
use-case-specific tasks. Resolution, quality, bandwidth, accuracy, gain factors and
similar characteristics can all be improved. Using current developments, further
advancements can also be made in building visual systems: for example, motion detection
advances can be used to create theft alarm systems for homes, and pattern classification
systems could be used for mask detection tasks at relevant organizations.

References

[1] https://doi.org/10.3389/fnins.2019.00260
[2] Giacomo Indiveri 2021 Neuromorph. Comput. Eng. 1 010401, 10.1088/2634-4386/ac0a5b
[3] D. A. Keim. Information visualization and visual data mining. IEEE Transactions on Visualization
and Computer Graphics, 8(1), 2002, pp. 1-8.
[4] D. A. Keim, F. Mansmann, J. Schneidewind and H. Ziegler, "Challenges in Visual Data Analysis,"
Tenth International Conference on Information Visualisation (IV'06), London, UK, 2006, pp. 9-16, doi:
10.1109/IV.2006.31.
[5] Soumaya, O. U. N. A. C. E. R., et al. "Real-time data stream processing challenges and
perspectives." International Journal of Computer Science Issues (IJCSI) 14.5 (2017): 6-12.
[6] Zheng, Zhigao, et al. "Real-time big data processing framework: challenges and solutions."
Applied Mathematics & Information Sciences 9.6 (2015): 3169.
[7] Science 19 May 2000: Vol. 288. no. 5469, pp. 1189 - 1190 DOI: 10.1126/science.288.5469.1189
[8] F Y Liao, F C Zhou, and Y Chai, Neuromorphic vision sensors: Principle, progress and
perspectives[J]. J. Semicond., 2021, 42(1), 013105. http://doi.org/10.1088/1674-
4926/42/1/013105
[9] Posch C, Serrano-Gotarredona T, Linares-Barranco B, et al. Retinomorphic event-based vision
sensors: bioinspired cameras with spiking output. Proceedings of the IEEE, 2014, 102(10): 1470-
1484.
[10] B.K. Tripathy and V. Raghuveer: An Object Oriented Approach to Improve the Precision of
Learning Object retrieval in a Self-Learning Environment, International Journal of E-learning and
Learning Objects, vol.8, (2013), pp.193-214
[11] B.K. Tripathy and V. R. Raghuveer: Affinity-based learning object retrieval in an e-learning
environment using evolutionary learner profile, Knowledge Management and E-Learning, Volume 8,
Issue 1, March 2016, pp. 182-199
[12] B.K. Tripathy, D. Dutta, Surajit Das and T. Chido: Social internet of things (SIoT): Transforming
small objects to social object, IIMT Research Network, ISBN 878-93-82208-77-8, (2015), pp. 23-27
[13] Berner R, Brandli C, Yang M, et al. A 240 × 180 10 mW 12 μs latency sparse-output vision sensor
for mobile applications. 2013 Symposium on VLSI Circuits, 2013, C186
[14] Volume 10 - 2016 | https://doi.org/10.3389/fnins.2016.00115
[15] Wu, N. Neuromorphic vision chips. Sci. China Inf. Sci. 61, 060421 (2018).
https://doi.org/10.1007/s11432-017-9303-0
[16] Citation: SCIENCE CHINA Information Sciences 61, 060421 (2018); doi: 10.1007/s11432-017-
9303-0
[17] Komuro T, Kagami S, Ishikawa M. A dynamically reconfigurable SIMD processor for a vision chip.
IEEE J Solid-State Circ, 2004, 39: 265–268
[18] Camunas-Mesa L, Zamarreno-Ramos C, Linares-Barranco A, et al. An event-driven multi-kernel
convolution processor module for event-driven vision sensors. IEEE J Solid-State Circ, 2012, 47: 504–
517
[19] Bartolozzi, Chiara, et al. "Embedded neuromorphic vision for humanoid robots." CVPR 2011
workshops. IEEE, 2011.
[20] Vitale, Antonio, et al. "Event-driven vision and control for UAVs on a neuromorphic chip." 2021
IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.
[21] Orchard, Garrick, Chiara Bartolozzi, and Giacomo Indiveri. "Applying neuromorphic vision
sensors to planetary landing tasks." 2009 IEEE Biomedical Circuits and Systems Conference. IEEE,
2009.
[22] Maan, A. K.; Jayadevi, D. A.; James, A. P. (January 1, 2016). "A Survey of Memristive Threshold
Logic Circuits". IEEE Transactions on Neural Networks and Learning Systems. PP (99): 1734–1746.
arXiv:1604.07121. Bibcode:2016arXiv160407121M. doi:10.1109/TNNLS.2016.2547842. ISSN 2162-
237X. PMID 27164608. S2CID 1798273.
[23] James, A.P.; Kumar, D.S.; Ajayan, A. (November 1, 2015). "Threshold Logic Computing:
Memristive-CMOS Circuits for Fast Fourier Transform and Vedic Multiplication". IEEE Transactions on
Very Large Scale Integration (VLSI) Systems. 23 (11): 2690–2694. arXiv:1411.5255.
doi:10.1109/TVLSI.2014.2371857. ISSN 1063-8210. S2CID 6076956.
[24] Caravelli; et al. (2021). "Global minimization via classical tunneling assisted by collective force
field formation". Science Advances. 7 (52): 022140. arXiv:1608.08651. Bibcode:2021SciA....7.1542C.
doi:10.1126/sciadv.abh1542. PMID 28297937. S2CID 231847346.
[25] Covi, Erika, et al. "Spike-driven threshold-based learning with memristive synapses and
neuromorphic silicon neurons." Journal of Physics D: Applied Physics 51.34 (2018): 344003.
[26] Goldberg, David H., Gert Cauwenberghs, and Andreas G. Andreou. "Probabilistic synaptic
weighting in a reconfigurable network of VLSI integrate-and-fire neurons." Neural Networks 14.6-7
(2001): 781-793.
[27] Abbott LF, Nelson SB (2000) Synaptic plasticity: taming the beast. Nat Neurosci 3: 1178–1183
[28] Goi, E., Zhang, Q., Chen, X. et al. Perspective on photonic memristive neuromorphic
computing. PhotoniX 1, 3 (2020). https://doi.org/10.1186/s43074-020-0001-6
[29] Wang, Z., Joshi, S., Savel’ev, S. et al. Fully memristive neural networks for pattern classification
with unsupervised learning. Nat Electron 1, 137–145 (2018). https://doi.org/10.1038/s41928-018-
0023-2
[30] Alibart, F., Zamanidoost, E. & Strukov, D. Pattern classification by memristive crossbar circuits
using ex situ and in situ training. Nat Commun 4, 2072 (2013).
https://doi.org/10.1038/ncomms3072.28 https://doi.org/10.1371/journal.pcbi.1000180
[31] Legenstein R, Pecevski D, Maass W. A learning theory for reward-modulated spike-timing-
dependent plasticity with application to biofeedback. PLoS Comput Biol. 2008 Oct;4(10):e1000180.
doi: 10.1371/journal.pcbi.1000180. Epub 2008 Oct 10. PMID: 18846203; PMCID: PMC2543108.
[32] Adam, G.C., Khiat, A. & Prodromakis, T. Challenges hindering memristive neuromorphic
hardware from going mainstream. Nat Commun 9, 5267 (2018). https://doi.org/10.1038/s41467-
018-07565-4
[33] Xu, C. et al. Design implications of memristor-based RRAM cross-point structures. In 2011
Design, Automation & Test in Europe Conference & Exhibition (DATE) (IEEE, Grenoble, France, 2011).
[34] Baek, I. G. et al. Realization of vertical resistive memory (VRRAM) using cost effective 3D
process. In 2011 IEEE International Electron Devices Meeting (IEDM), pp. 737–740 (IEEE,
Washington, DC, USA, 2011).
[35] B.K. Tripathy and J. Amerada: Soft Computing- Advances and Applications, Cengage Learning
publishers, New Delhi, (2015). ASIN: 8131526194, ISBN-10: 9788131526194
[36] Gerstner W, Kistler WM. Spiking Neuron Models: Single Neurons, Populations, Plasticity.
Cambridge, United Kingdom: Cambridge University Press; 2002, p. 494.
[37] Maashri, Ahmed Al, et al. "Accelerating neuromorphic vision algorithms for
recognition." Proceedings of the 49th annual design automation conference. 2012.
[38] Wu, Xinyu, Vishal Saxena, and Kehan Zhu. "Homogeneous spiking neuromorphic system for real-
world pattern recognition." IEEE Journal on Emerging and Selected Topics in Circuits and Systems 5.2
(2015): 254-266.
[39] Baumgartner, Sandro, et al. "Visual pattern recognition with on on-chip learning: towards a fully
neuromorphic approach." 2020 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE,
2020.
[40] M. Koyanagi et al., "Neuromorphic vision chip fabricated using three-dimensional integration
technology," 2001 IEEE International Solid-State Circuits Conference. Digest of Technical Papers.
ISSCC (Cat. No.01CH37177), San Francisco, CA, USA, 2001, pp. 270-271, doi:
10.1109/ISSCC.2001.912633.
[41] Jiang, Yuning, et al. "Design and hardware implementation of neuromorphic systems with RRAM
synapses and threshold-controlled neurons for pattern recognition." IEEE Transactions on Circuits
and Systems I: Regular Papers 65.9 (2018): 2726-2738.
[42] Kim, Min-Hwi, et al. "A fast weight transfer method for real-time online learning in RRAM-based
neuromorphic system." IEEE Access 10 (2022): 37030-37038.
[43] Ranjan, Rajeev, et al. "Integrated circuit with memristor emulator array and neuron circuits for
biologically inspired neuromorphic pattern recognition." Journal of Circuits, Systems and
Computers 26.11 (2017): 1750183.
[44] Min, Jin-Gi, Hamin Park, and Won-Ju Cho. "Milk–Ta2O5 Hybrid Memristors with Crossbar Array
Structure for Bio-Organic Neuromorphic Chip Applications." Nanomaterials 12.17 (2022): 2978.
[45] Rasouli, Mahdi, et al. "An extreme learning machine-based neuromorphic tactile sensing system
for texture recognition." IEEE transactions on biomedical circuits and systems 12.2 (2018): 313-325.
[46] B.K. Tripathy and V.R. Raghuveer: On demand analysis of learning experiences for adaptive
content retrieval in an eLearning environment, Journal of e-Learning and Knowledge Society,
Italy,vol.11, no.1, (2015), pp.139-156.
[47] V. R. Raghuveer, B. K. Tripathy, T. Singh and S. Khanna, "Reinforcement learning approach
towards effective content recommendation in MOOC environments," 2014 IEEE International
Conference on MOOC, Innovation and Technology in Education (MITE), 2014, pp. 285-289, doi:
10.1109/MITE.2014.7020289.
[48] Yepes, Antonio Jimeno, Jianbin Tang, and Benjamin Scott Mashford. "Improving classification
accuracy of feedforward neural networks for spiking neuromorphic chips." arXiv preprint
arXiv:1705.07755 (2017).
[49] Brewer, Rachel M., et al. "The impact of proton-induced single events on image classification in
a neuromorphic computing architecture." IEEE Transactions on Nuclear Science 67.1 (2019): 108-
115.
[50] Liu, Te-Yuan, et al. "Neuromorphic computing for content-based image retrieval." Plos one 17.4
(2022): e0264364.
[51] Zou, Chenglong, et al. "Residual Spiking Neural Network on a Programmable Neuromorphic
Hardware for Speech Keyword Spotting." 2022 IEEE 16th International Conference on Solid-State &
Integrated Circuit Technology (ICSICT). IEEE, 2022.
[52] Viale, Alberto, et al. "Carsnn: An efficient spiking neural network for event-based autonomous
cars on the loihi neuromorphic research processor." 2021 International Joint Conference on Neural
Networks (IJCNN). IEEE, 2021.
[53] Wu Y, Deng L, Li G, Zhu J, Shi L. Spatio-temporal backpropagation for training high-performance
spiking neural networks. Frontiers in Neuroscience. 2018;12:331
[54] https://doi.org/10.3389/fnins.2018.00331
[55] Vitale, Antonio, et al. "Event-driven vision and control for UAVs on a neuromorphic chip." 2021
IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.
[56] Mitchell, J. Parker, et al. "NeoN: Neuromorphic control for autonomous robotic
navigation." 2017 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS). IEEE,
2017.
[57] Dupeyroux, Julien, et al. "Neuromorphic control for optic-flow-based landing of MAVs using the
Loihi processor." 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.
[58] Bhandari, Subodh, et al. "Tracking of Mobile Targets using Unmanned Aerial Vehicles." AIAA
Guidance, Navigation, and Control Conference. 2012.
[59] Milde, Moritz B., et al. "Obstacle avoidance and target acquisition for robot navigation using a
mixed signal analog/digital neuromorphic processing system." Frontiers in neurorobotics 11 (2017):
28.
[60] Karan Maheswari, Aditya Shaha, Dhruv Arya, B. K. Tripathy and R. Rajkumar: Convolutional
Neural Networks: A Bottom-Up Approach, (Ed: S. Bhattacharyya, A. E. Hassanian, S. Saha and B.K.
Tripathy, Deep Learning Research with Engineering Applications), De Gruyter Publications, (2020),
pp.21-50. DOI: 10.1515/9783110670905-002
[61] S. Bhattacharyya, V. Snasel, A. E. Hassanian, S. Saha and B. K. Tripathy: Deep Learning Research
with Engineering Applications, De Gruyter Publications, (2020). ISBN: 3110670909, 9783110670905.
DOI: 10.1515/9783110670905
[62] Piñero-Fuentes, Enrique, et al. "Autonomous driving of a rover-like robot using neuromorphic
computing." Advances in Computational Intelligence: 16th International Work-Conference on
Artificial Neural Networks, IWANN 2021, Virtual Event, June 16–18, 2021, Proceedings, Part II. Cham:
Springer International Publishing, 2021.
[63] https://doi.org/10.48550/arXiv.2003.01157
[64] Tsur, Elishai Ezra, and Michal Rivlin-Etzion. "Neuromorphic implementation of motion detection
using oscillation interference." Neurocomputing 374 (2020): 54-63.
[65] Wang, Wei, et al. "Neuromorphic motion detection and orientation selectivity by volatile
resistive switching memories." Advanced Intelligent Systems 3.4 (2021): 2000224.
[66] McOwan, Peter W., et al. "A multi-differential neuromorphic approach to motion
detection." International Journal of Neural Systems 9.05 (1999): 429-434.
[67] Milde, Moritz B., et al. "Spiking elementary motion detector in neuromorphic systems." Neural
computation 30.9 (2018): 2384-2417.
[68] Merolla, P. A.; Arthur, J. V.; Alvarez-Icaza, R.; Cassidy, A. S.; Sawada, J.; Akopyan, F.; Jackson, B.
L.; Imam, N.; Guo, C.; Nakamura, Y.; Brezzo, B.; Vo, I.; Esser, S. K.; Appuswamy, R.; Taba, B.; Amir, A.;
Flickner, M. D.; Risk, W. P.; Manohar, R.; Modha, D. S. (2014). "A million spiking-neuron integrated
circuit with a scalable communication network and interface". Science. 345 (6197): 668–73.
[69] https://spectrum.ieee.org/computing/hardware/how-ibm-got-brainlike-efficiency-from-
the-truenorth-chip
[70] "The brain's architecture, efficiency… on a chip"
[71] "Why Intel built a neuromorphic chip"
[72] http://apt.cs.manchester.ac.uk/projects/SpiNNaker/SpiNNchip/
[73] https://www.youtube.com/watch?v=2e06C-yUwlc
[74] http://apt.cs.manchester.ac.uk/projects/SpiNNaker/hardware/
[75] https://www.nest-simulator.org/
[76] https://brainscales.kip.uni-heidelberg.de/
[77] https://neuralensemble.org/PyNN/
