Seminar Report Format
A Seminar Report
Submitted to the APJ Abdul Kalam Technological
University in partial fulfillment of the requirements for
the award of the degree of
Bachelor of Technology
in
Electronics and Communication Engineering
by
Jeeshma Krishnan K
TLY19EC048
CERTIFICATE
him/her under our guidance and supervision. This report in any form has not been
submitted to any other University or Institute for any purpose.
Mr. JITHENDRA K. B.
Head of the Department, ECE
College of Engineering
Thalassery
DECLARATION
I hereby declare that this report on Neuromorphic Computing, submitted for partial
fulfillment of the requirements for the award of the degree, is the result of my own
work carried out under the supervision of my seminar guide.
This submission represents my ideas in my own words, and where ideas or words
of others have been included, I have adequately and accurately cited and referenced
the original sources.
I also declare that I have adhered to ethics of academic honesty and integrity and
have not misrepresented or fabricated any data or idea or fact or source in my
submission. I understand that any violation of the above will be a cause for
disciplinary action by the institute and/or the University and can also invoke penal
action from the sources which have thus not been properly cited or from whom
proper permission has not been obtained. This report has not been previously formed
as the basis for the award of any degree, diploma or similar title of any other
University.
Jeeshma Krishnan K
Thalassery
Date
ABSTRACT
Biological systems hold key answers to many of our current limitations in scaling.
Miniaturization and higher speed were the driving forces of VLSI technology over the
past few decades. As we approach the dead end of Moore's law, the paradigm has
shifted towards intelligent machines, and many efforts are being made to mimic the
common sense observed in animals. Automation and smart devices have gained great
momentum in recent years, but imitating human intelligence in machines with existing
hardware is not viable; intelligence cannot be achieved merely by developing complex
algorithms and implementing them in software. This report brings out the basic
differences between brains and computers, and discusses the complexity of the human
brain and the flaws of current computing architectures. Neuromorphic engineering
emerges as a realistic solution, with architectures and circuit components that resemble
their biological counterparts; this report acts as a primer for the field. Since brain
functionality is analogous in nature, the limitations of purely digital systems are
addressed by the mixed-mode operation of ICs. The modelling of neurons and the
various software and hardware available to realize these morphed architectures are
studied, and the gap between software simulation and hardware emulation, including
FPGA and VLSI implementations, is debated.
Jeeshma Krishnan K
ACKNOWLEDGEMENT
I take this opportunity to express my deepest sense of gratitude and sincere thanks to
everyone who helped me to complete this work successfully. I express my sincere
thanks to Mr. Jithendra K. B., Head of the Department, Electronics and
Communication Engineering, College of Engineering Thalassery, for providing me
with all the necessary facilities and support.
I would like to express my sincere gratitude to the seminar co-ordinator, Assistant
Professor, Department of Electronics and Communication Engineering, College of
Engineering Thalassery, for his support and co-operation. I would like to
place on record my sincere gratitude to my seminar guide Ms. Lakshmi Prabha K
K, Assistant Professor, Department of Electronics and Communication Engineering,
College of Engineering Thalassery for the guidance and mentorship throughout the
course.
Finally, I thank my family and friends who contributed to the successful
completion of this seminar work.
Jeeshma Krishnan K
CONTENTS
Abstract i
Acknowledgement ii
List of Figures iv
List of Tables v
List of Symbols vi
1 Introduction 1
2 Literature Review 2
2.1 section1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.1.1 title 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3 Technology Description 4
4 Conclusion 6
References
LIST OF FIGURES
LIST OF SYMBOLS
Ω Unit of Resistance
c Speed of light
λ Wavelength
δ Delta
Chapter 1
Introduction
Neuromorphic computing is a concept developed by Carver Mead. In neuromorphic
computing, inspiration is taken from the principles of the brain, which are then
mimicked in hardware using knowledge from nanoelectronics and VLSI.
Neuromorphic computing refers to brain-inspired computing structures and algorithms;
it is currently used in a wide variety of applications, including recognition tasks,
decision making, and forecasting. Neuromorphic architectures are needed to deliver
lower energy consumption, exploit potential novel nanostructured materials, and
enhance computation.
“Machine learning” software is used to tackle problems with complex and noisy
datasets that cannot be solved with conventional “non-learning” algorithms. Considerable
progress has been made recently in this area using parallel processors. These methods are
proving so effective that all major Internet and computing companies now have research
groups in “deep learning”, the branch of machine learning that builds tools based on
deep (multilayer) neural networks. Moreover, most major research universities have
machine learning groups in computer science, mathematics, or statistics. Machine
learning is such a rapidly growing field that it was recently called the “infrastructure for
everything.”
Over the years, a number of groups have been working on direct hardware
implementations of deep neural networks. These designs vary from specialized but
conventional processors optimized for machine learning “kernels” to systems that attempt
to directly simulate an ensemble of “silicon” neurons, better known as neuromorphic
computing. While the former approaches can achieve dramatic results, e.g., 120 times
lower power compared with that of general-purpose processors, they are not
fundamentally different from existing CPUs. The latter neuromorphic systems are more
in line with what researchers began working on in the 1980s with the development of
analog CMOS-based devices with an architecture that is modeled after biological
neurons. One of the more recent accomplishments in neuromorphic computing has come
from IBM research, namely, a biologically inspired chip (“TrueNorth”) that implements
one million spiking neurons and 256 million synapses on a chip with 5.5 billion
transistors, with a typical power draw of 70 milliwatts. As impressive as this system is, if scaled up to the
size of the human brain, it is still about 10,000 times too power intensive. Clearly,
progress on improvements in CMOS and in computer hardware more generally will not
be self-sustaining forever. Well-supported predictions, based on solid scientific and
engineering data, indicate that conventional approaches to computation will hit a wall in
the next 10 years. Principally, this situation is due to three major factors: (1) fundamental
(atomic) limits exist beyond which devices cannot be miniaturized, (2) local energy
dissipation limits the device packing density, and (3) overall energy consumption keeps
increasing with no foreseeable limit, which is becoming prohibitive. Novel approaches
and new concepts are needed in order to achieve the goal of developing increasingly
capable computers that consume decreasing amounts of power.
Chapter 2
Literature Review
Neuromorphic architectures are needed to deliver lower energy consumption, exploit
potential novel nanostructured materials, and enhance computation. Neuromorphic
computing systems are aimed at addressing these needs. They will have much lower
power consumption than conventional processors, and they are explicitly designed to
support dynamic learning in the context of complex and unstructured data. Early signs
of this need show up in the Office of Science portfolio with the emergence of
machine-learning-based methods applied to problems where traditional approaches are
inadequate.
In contrast, the brain is a working system that has major advantages in these aspects.
Its energy efficiency is superior by many orders of magnitude. In addition, memory
and processing in the brain are collocated, because the same constituents can take on
different roles depending on a learning process. Moreover, the
brain is a flexible system able to adapt to complex environments, self-programming,
and capable of complex processing. While the design, development, and
implementation of a computational system similar to the brain is beyond the scope of
today’s science and engineering, some important steps in this direction can be taken
by imitating nature.
A major difference is also present at the device level (see Figure 2.3). Classical von
Neumann computing is based on transistors, resistors, capacitors, inductors and
communication connections as the basic devices. While these conventional devices
have some unique characteristics (e.g., speed, size, operation range), they are limited
in other crucial aspects (e.g., energy consumption, rigid design and functionality,
inability to tolerate faults, and limited connectivity). In contrast, the brain is based on
large collections of neurons, each of which has a body (soma), synapses, axon, and
dendrites that are adaptable and fault tolerant. Also, the connectivity between the
various elements in the brain is much more complex than in a conventional
computational circuit.
2.1.3 Performance
The performance gap between neuromorphic and our general silicon machines comes
from a variety of factors, including unclear internal knowledge about the neuronal
model, connection topology, coding scheme, and learning algorithm; a discrete state
space with limited precision; and weak external support in terms of benchmark data
resources, computing platforms, and programming tools, all of which are less
sophisticated than those in machine learning. Currently, researchers in different
sub-domains of neuromorphic computing usually have distinct optimization objectives,
such as reproducing cell-level or circuit-level neural behaviors, emulating brain-like
functionality at the macro level, or simply reducing execution cost from a hardware
perspective. Neuromorphic computing still lags behind machine learning models if we
lack a clear target and consider only application accuracy.
Therefore, we need to rethink the true advantages of the human brain (e.g., strong
generalization, multi-modal processing and association, and memory-based computing)
and the goal of neuromorphic computing, rather than being drawn into a dilemma of
confronting machine learning. We should make efforts to understand and bridge the
current “gap” between neuromorphic computing and machine learning.
The following concepts play an important role in the operation of a system, which imitates the
brain.
Spiking - Signals are communicated between neurons through voltage or current spikes.
This communication is different from that used in current digital systems, in which the
signals are binary, or an analogue implementation, which relies on the manipulation of
continuous signals. Spiking signaling systems are time-encoded and transmitted via
“action potentials”.
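The spike-based signaling described above is often studied with the leaky integrate-and-fire (LIF) neuron model: the membrane potential integrates input current, leaks toward rest, and emits a spike on crossing a threshold. A minimal sketch in Python, with illustrative parameter values (not taken from any particular chip):

```python
# Minimal leaky integrate-and-fire (LIF) neuron. All parameter values
# here are illustrative, chosen only to show the spiking behavior.

def simulate_lif(current, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Return spike times (step indices) for a list of input currents."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        # Leaky integration: v decays toward rest while driven by input.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_th:          # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset        # reset after the "action potential"
    return spikes

print(simulate_lif([1.5] * 100)[:3])   # constant supra-threshold drive -> [21, 43, 65]
```

A sub-threshold input (e.g. a constant 0.5 with these parameters) never crosses the threshold and produces no spikes, which is how the time-encoded, all-or-nothing character of spiking differs from continuous analog signaling.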
Adaptability - Biological brains generally start with multiple connections out of which,
through a selection or learning process, some are chosen and others abandoned. This
process may be important for improving the fault tolerance of individual devices as well
as for selecting the most efficient computational path. In contrast, in conventional
computing the system architecture is rigid and fixed from the beginning.
Criticality - The brain typically must operate close to a critical point at which the
system is plastic enough that it can be switched from one state to another, neither
extremely stable nor very volatile. At the same time, it may be important for the system
to be able to explore many closely lying states. In terms of materials science, for
example, the system may be close to some critical state such as a phase transition.
In functional terms, the simplest, most naive properties of the various devices and their
functions in the brain include the following.
To implement a neuromorphic system that mimics the functioning of the brain requires
collaboration of materials scientists, condensed matter scientists, physicists, systems
architects, and device designers in order to advance the science and engineering of the
various steps in such a system. As a first step, individual components must be
engineered to resemble the properties of the individual components in the brain.
Synapse/Memristor. The synapses are the most advanced elements that have thus far
been simulated and constructed. These have two important properties: switching and
plasticity. The implementation of a synapse is frequently accomplished in a
two-terminal device such as a memristor. This type of device exhibits a pinched
(at V = 0), hysteretic I-V characteristic.
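The pinched hysteretic I-V characteristic mentioned above can be reproduced with a linear ion-drift memristor model: resistance depends on an internal state that drifts with the current through the device. A rough sketch, with purely illustrative parameter values (the drift constant `k` lumps together the physical mobility and geometry terms):

```python
import math

# Linear ion-drift memristor sketch: resistance depends on an internal
# state x in [0, 1], and x drifts with the current. Values are illustrative.

def memristor_iv(v_amp=1.0, freq=1.0, r_on=100.0, r_off=16e3,
                 k=10e3, steps=2000):
    """Return (voltage, current) samples over one period of a sine drive."""
    dt = 1.0 / (freq * steps)
    x = 0.1                                   # doped-region fraction
    vs, cs = [], []
    for n in range(steps):
        v = v_amp * math.sin(2 * math.pi * freq * n * dt)
        r = r_on * x + r_off * (1 - x)        # state-dependent resistance
        i = v / r
        x = min(1.0, max(0.0, x + k * i * dt))  # state drifts with current
        vs.append(v)
        cs.append(i)
    return vs, cs

vs, cs = memristor_iv()
# Whenever v = 0 the current is also 0, regardless of the state, so the
# I-V loop is "pinched" at the origin; equal voltages on the rising and
# falling half-cycles give different currents, which is the hysteresis.
```

Plotting `cs` against `vs` would show the characteristic pinched loop; here the effect is visible numerically because samples at equal voltage carry different currents.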
Axon/Long wire. The role of the axon has commonly (perhaps wrongly) been assumed
simply to provide a circuit connection and a time delay line. Consequently, little
research has been done on this element despite the fact that much of the dissipation may
occur in the transmission of information. Recent research indicates that the axon has an
additional signal-conditioning role.
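The simple delay-line role commonly assumed for the axon, as described above, can be sketched in a few lines; the delay value is illustrative:

```python
from collections import deque

# Axon modeled as a pure delay line: each spike re-emerges after a fixed
# number of time steps. The delay of 3 steps is an illustrative choice.

def axon_delay(spike_train, delay=3):
    buf = deque([0] * delay)   # spikes "in flight" along the axon
    out = []
    for s in spike_train:
        buf.append(s)
        out.append(buf.popleft())
    return out

print(axon_delay([1, 0, 0, 1, 0, 0, 0]))  # -> [0, 0, 0, 1, 0, 0, 1]
```

Any signal-conditioning role of the real axon (attenuation, reshaping of the spike) is deliberately absent from this model, which is exactly the simplification the text calls into question.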
2.3.1 Architecture
Ultimately, an architecture that can scale neuromorphic systems to “brain scale” and
beyond is needed. A brain-scale system integrates approximately 10^11 neurons and
10^15 synapses into a single system. The high-level neuromorphic architecture illustrated in
Figure 1 consists of several large-scale synapse arrays connected to soma arrays such
that flexible layering of neurons (including recurrent networks) is possible and that off-
chip communication uses the address event representation (AER) approach to enable
digital communication to link spiking analog circuits. Currently, most neuromorphic
designs implement synapses and somata as discrete sub-circuits connected via wires
implementing dendrites and axons. In the future, new materials and new devices are
expected to enable integrated constructs as the basis for neuronal connections in
large-scale systems. For this, progress is needed in each of the discrete components, with
the primary focus on identification of materials and devices that would dramatically
improve the implementation of synapses and somata.
One might imagine a generic architectural framework that separates the implementation
of the synapses from the soma in order to enable alternative materials and devices for
synapses to be tested with common learning/spiking circuits (see Figure 6). A
reasonable progression for novel materials test devices would be the following: (1)
single synapse-dendrite-axon-soma feasibility test devices, (2) chips with dozens of
neurons and hundreds of synapses, followed by (3) demonstration chips with hundreds
of neurons and tens of thousands of synapses. Once hundreds of neurons and tens of
thousands of synapses have been demonstrated in a novel system, it may be
straightforward to scale these building blocks to systems competitive with the largest
CMOS implementations.
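The address-event representation (AER) mentioned above replaces dense per-timestep activity with sparse (time, address) events, which is what allows digital links to carry spikes between analog arrays. A toy sketch of the idea (the event format here is an assumption for illustration, not a hardware specification):

```python
# Toy address-event representation (AER): only the addresses of neurons
# that actually spiked are transmitted, tagged with their time step.

def aer_encode(frames):
    """frames: list of per-timestep binary activity vectors -> AER events."""
    return [(t, addr)
            for t, frame in enumerate(frames)
            for addr, spiked in enumerate(frame)
            if spiked]

def aer_decode(events, n_neurons, n_steps):
    """Rebuild the dense activity frames from the sparse event stream."""
    frames = [[0] * n_neurons for _ in range(n_steps)]
    for t, addr in events:
        frames[t][addr] = 1
    return frames

frames = [[0, 1, 0], [0, 0, 0], [1, 0, 1]]
print(aer_encode(frames))   # -> [(0, 1), (2, 0), (2, 2)]
```

Because spiking activity is sparse, the event stream is usually far smaller than the dense frames, which is why AER scales to the synapse and soma arrays described above.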
State-of-the-art neural networks that support object and speech recognition can have
tens of millions of synapses and networks with thousands of inputs and thousands of
outputs. The simple street-scene recognition needed for autonomous vehicles requires
hundreds of thousands of synapses and tens of thousands of neurons. The largest
networks that have been published—using over a billion synapses and a million neurons
—have been used for face detection and object recognition in large video databases.
Fig 2.5: Block diagram of neuromorphic computing
2.3.2 Properties
There are many companies and projects leading applications in this space. For
instance, as part of its Loihi project, Intel has created a chip with 130,000 neurons
and 130 million synapses that excels at self-learning. Because the hardware is
optimised specifically for SNNs, it supports dramatically accelerated learning in
unstructured environments for systems that require autonomous operation and
continuous learning, with extremely low power consumption plus high performance
and capacity.
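The on-chip self-learning mentioned above is commonly realized with spike-timing-dependent plasticity (STDP), where a synapse strengthens when the presynaptic spike precedes the postsynaptic one and weakens otherwise. The following is a generic pair-based STDP rule, not Intel's actual Loihi learning engine; all constants are illustrative:

```python
import math

# Generic pair-based STDP rule. The constants (a_plus, a_minus, tau) are
# illustrative and do not correspond to any particular hardware platform.

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre."""
    if dt > 0:       # pre fired before post: causal pairing, potentiate
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:     # post fired before pre: anti-causal pairing, depress
        return -a_minus * math.exp(dt / tau)
    return 0.0

print(stdp_dw(10) > 0)    # causal pairing gives a positive weight change
print(stdp_dw(-10) < 0)   # anti-causal pairing gives a negative change
```

Because the update depends only on locally observable spike times, rules of this family map naturally onto neuromorphic hardware, which is what makes continuous on-chip learning feasible at low power.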
The global market for neuromorphic computing, which was estimated at $2.3 billion
in the year 2020, is projected to reach a revised size of $10.4 billion by 2027. These
numbers suggest that neuromorphic computers are the way ahead in AI-based
research and development.
The drones' cameras capture good-quality images, but extracting information from the
images in real time is the bottleneck. Drones are connected to Wi-Fi and send images
to the base station; the base station processes the data and sends signals back. This is a
power-hungry process and can be intercepted too. If the drone itself had the capacity to
process the data, it could take immediate action.
D. Space Applications
Even in space, rovers capture images and send them back to the earth station for
processing. If an energy-efficient processing system were installed on board, we would
have Mars rovers and Moon rovers driving on their own.
As we consider the building of large-scale systems from neuron-like building blocks,
a large number of challenges must be overcome. A number of critical issues remain
as we consider the artificial physical implementation of a system that partially
resembles a brain-like architecture, such as:
1. What are the minimal physical elements needed for a working artificial
structure: dendrite, soma, axon, and synapse?
2. What are the minimal characteristics of each one of these elements needed
in order to have a first proven system?
3. What are the essential conceptual ideas needed to implement a minimal
system: spike-dependent plasticity, learning, reconfigurability, criticality, short-
and long-term memory, fault tolerance, co-location of memory and processing,
distributed processing, large fan-in/fan-out, dimensionality? Can we organize
these in order of importance?
4. What are the advantages and disadvantages of a chemical vs. a solid-state
implementation?
5. What features must neuromorphic architecture have to support critical
testing of new materials and building block implementations?
6. What intermediate applications would best be used to prove the concept?
These and certainly additional questions should be part of a coherent approach to
investigating the development of neuromorphic computing systems. The field could also
use a comprehensive review of what has been achieved already in the exploration of
novel materials, as there are a number of excellent groups that are pursuing new materials
and new device architectures. Many of these activities could benefit from a framework
that can be evaluated on simple applications. At the same time, there is a considerable
gap in our understanding of what it will take to implement state-of-the-art applications
on neuromorphic hardware, as compared with accelerators for the development and
training of deep neural networks. Moving neuromorphic hardware out of the research
phase into applications and end use would be helpful. This would require advances
that support training of the device itself and a demonstration of performance above
that of artificial neural networks already implemented in conventional hardware.
These improvements are necessary with regard to both power efficiency and ultimate
performance.
Chapter 3
Results
Chapter 4
Conclusion
The conclusions we derived from our study of neuromorphic systems are as follows:
1. Creating the architectural design for neuromorphic computing requires an
integrative, interdisciplinary approach between computer scientists, engineers,
physicists, and materials scientists.
2. Creating a new computational system will require developing new system
architectures to accommodate all needed functionalities.
3. One or more reference architectures should be used to enable comparisons of
alternative devices and materials.
4. The basis for the devices to be used in these new computational systems requires
the development of novel nano- and meso-structured materials; this will be
accomplished by unlocking the properties of quantum materials based on new
materials physics.
5. The most promising materials require fundamental understanding of strongly
correlated materials, understanding formation and migration of ions, defects and
clusters, developing novel spin based devices, and/or discovering new quantum
functional materials.
6. The development of a new brain-like computational system will not evolve in a
single step; it is important to implement well-defined intermediate steps that give
useful scientific and technological information.
References
[1] Furber, S. (2016). Large-scale neuromorphic computing systems. Journal of
neural engineering, 13(5), 051001.
[2] Schuman, C. D., Potok, T. E., Patton, R. M., Birdwell, J. D., Dean, M. E.,
Rose, G. S., & Plank, J. S. (2017). A survey of neuromorphic computing and
neural networks in hardware. arXiv preprint arXiv:1705.06963.
[3] van De Burgt, Y., Melianas, A., Keene, S. T., Malliaras, G., & Salleo, A.
(2018). Organic electronics for neuromorphic computing. Nature
Electronics, 1(7), 386-397.
[4] Monroe, D. (2014). Neuromorphic computing gets ready for the (really) big
time.
[5] Burr, G. W., Shelby, R. M., Sebastian, A., Kim, S., Kim, S., Sidler, S., ... &
Leblebici, Y. (2017). Neuromorphic computing using non-volatile memory.
Advances in Physics: X, 2(1), 89-124.
[6] Torrejon, J., Riou, M., Araujo, F. A., Tsunegi, S., Khalsa, G., Querlioz,
D., ... & Grollier, J. (2017). Neuromorphic computing with nanoscale
spintronic oscillators. Nature, 547(7664), 428-431.
[7] Roy, K., Jaiswal, A., & Panda, P. (2019). Towards spike-based machine
intelligence with neuromorphic computing. Nature, 575(7784), 607-617.
[8] Esser, S. K., Appuswamy, R., Merolla, P., Arthur, J. V., & Modha, D. S.
(2015). Backpropagation for energy-efficient neuromorphic computing.
Advances in neural information processing systems, 28, 1117-1125.
[9] Pfeil, T., Grübl, A., Jeltsch, S., Müller, E., Müller, P., Petrovici, M. A., ... &
Meier, K. (2013). Six networks on a universal neuromorphic computing
substrate. Frontiers in neuroscience, 7, 11.