
Electronics and Telecommunications Trends

브레인 모사 인공지능 기술
Brain-Inspired Artificial Intelligence

김철호 (C.H. Kim, [email protected]) Senior Researcher/Technical Lead, Cyber Brain Research Section
이정훈 (J.H. Lee, [email protected]) Researcher, Cyber Brain Research Section
이성엽 (S.Y. Lee, [email protected]) Researcher, Cyber Brain Research Section
우영춘 (Y.C. Woo, [email protected]) Principal Researcher, Cyber Brain Research Section
백옥기 (O.K. Baek, [email protected]) Principal Researcher/Research Fellow, Cyber Brain Research Section
원희선 (H.S. Won, [email protected]) Principal Researcher/Section Director, Cyber Brain Research Section

ABSTRACT

The field of brain science (or, more broadly, neuroscience) has long inspired researchers in artificial intelligence (AI). Findings from neuroscience such as Hebb's rule had a profound effect on early AI models, which have since developed into today's state-of-the-art artificial neural networks. However, the recent progress in AI led by deep learning architectures owes more to elaborate mathematical methods and the rapid growth of computing power than to neuroscientific inspiration. Meanwhile, major limitations such as opacity, lack of common sense, narrowness, and brittleness remain unresolved. To address these problems, many AI researchers are turning their attention back to neuroscience for insight and inspiration. Biologically plausible neural networks, spiking neural networks, and connectome-based networks exemplify such neuroscience-inspired approaches. In addition, the more recent field of brain network analysis is unveiling complex brain mechanisms by treating the brain as a dynamic graph model. We argue that progress toward human-level AI, the ultimate goal of the field, can be accelerated by leveraging novel findings about the human brain network.

KEYWORDS Artificial intelligence, Artificial neural network, Deep neural network, Neuroscience, Brain network, Biological plausibility, Spiking neural network, Reservoir computing, Connectome

Ⅰ. Introduction

Neuroscience (brain science) and machine learning have influenced each other since the earliest days of artificial intelligence (AI), and brain science continues to offer guidance on where AI should go next [1].

DOI: https://doi.org/10.22648/ETRI.2021.J.360311 (project 21ZS1100)
This work is licensed under KOGL Type 4 (공공누리 제4유형).
© 2021 Electronics and Telecommunications Research Institute (ETRI)

Today's AI faces two broad classes of problems. First, the computational cost of deep learning is growing at an unsustainable rate, approaching practical computational limits [2]. Second, qualitative weaknesses such as opacity, lack of common sense, narrowness, and brittleness remain unresolved [3,4]. One way to address both is to look again at the brain, which can be modeled at several scales (Fig. 1): as a neuronal network of individual neurons and their interactions (Fig. 1a) [5]; as a regional network of brain areas and the interactions between them (Fig. 1b) [6]; as an anatomical model of neural structure reconstructed by tractography from diffusion MRI (Fig. 1c) [7]; and as a functional model of the dynamic mechanisms of neural activity visualized from fMRI (Fig. 1d) [8]. This article surveys AI techniques inspired by each of these views of the brain.

[Figure 1. Four views of the brain: (a) neuron network — neurons and their interactions; (b) regional network — regions (parcellations of 66 or 1,000 subregions) and their interactions; (c) anatomical model — tractography from diffusion MRI; (d) functional model — active-region visualization from fMRI. Sources: (a) [5] CC BY-SA 4.0, (b) [6] CC BY 3.0, (c) [7] CC BY-SA 4.0, (d) [8] CC BY-NC 2.0, adapted.]

Ⅱ. Deep Neural Networks (DNN)

1. Early Models

A. The First Neuron Model

In 1871, Henry Bowditch observed that nerve responses follow the all-or-none law: a stimulus either elicits a full response or none at all [9]. In 1943, McCulloch and Pitts turned this observation into the first mathematical neuron model: a logical unit that outputs 1 when the weighted sum of its inputs reaches a threshold and 0 otherwise [10].
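As an illustrative sketch (ours, not from the original article), the McCulloch-Pitts unit reduces to a step function over a weighted input sum; with unit weights and a threshold of 2 it implements a logical AND:

```python
import numpy as np

def mcculloch_pitts(x, w, threshold):
    """McCulloch-Pitts unit: output 1 iff the weighted input sum
    reaches the threshold, otherwise 0 (all-or-none response)."""
    return int(np.dot(w, x) >= threshold)

# With unit weights and threshold 2, the unit computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts(np.array([a, b]), np.array([1, 1]), 2))
```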

B. Learning Models of the Neuron

In 1949, Donald Hebb proposed that learning occurs by strengthening the connections between neurons that are active at the same time, a principle often summarized as "neurons that fire together wire together" [11]. Hebb's rule only strengthens connections, however, so weights can grow without bound. Bienenstock, Cooper, and Munro addressed this with the BCM theory, in which a synapse strengthens or weakens depending on postsynaptic activity relative to a sliding threshold [12], and Erkki Oja added a normalizing decay term to Hebb's rule so that a neuron converges to the principal component of its input [13].
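To make the two rules concrete, here is a minimal sketch (function names and data are ours): the plain Hebbian update grows weights without bound, while Oja's decay term keeps them normalized so the neuron extracts the first principal component of its input.

```python
import numpy as np

def hebb_update(w, x, lr=0.01):
    """Plain Hebb rule: dw = lr * y * x; weights can grow without bound."""
    y = w @ x
    return w + lr * y * x

def oja_update(w, x, lr=0.01):
    """Oja's rule: dw = lr * y * (x - y * w); the decay term -y^2 * w
    bounds ||w||, so w converges to the first principal component."""
    y = w @ x
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2)) * np.array([3.0, 1.0])  # variance largest on axis 0
w = rng.normal(size=2)
for x in data:
    w = oja_update(w, x)
print(w / np.linalg.norm(w))  # approx +/-[1, 0], the leading principal direction
```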

C. The Perceptron

In 1958, Frank Rosenblatt combined the McCulloch-Pitts unit with a learning rule to create the perceptron, the first trainable neural network model [14,15]. Minsky and Papert later showed that a single-layer perceptron cannot solve linearly inseparable problems, an analysis that contributed to a long slowdown in neural network research [16].
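A minimal sketch of Rosenblatt's learning rule (toy data and names are ours): on each misclassified example the weights move toward the correct side of the decision boundary.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Rosenblatt's rule: on an error, w += lr * (target - output) * x."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, t in zip(Xb, y):
            out = int(w @ x >= 0.0)
            w += lr * (t - out) * x             # zero change when correct
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # OR gate: linearly separable
y = np.array([0, 1, 1, 1])
w = train_perceptron(X, y)
print([int(w @ np.append(x, 1) >= 0) for x in X])  # [0, 1, 1, 1]
```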
D. Development of Neural Networks

In 1982, John Hopfield introduced a recurrent network that stores patterns as attractors using a Hebb-like rule and retrieves them from partial or noisy cues [17]. Geoffrey Hinton and colleagues followed with the Boltzmann machine, a stochastic network trained to model a probability distribution over its inputs [18]. In the 2000s, Hinton's group made the restricted Boltzmann machine (RBM) practical to train and stacked RBMs into the deep belief network, helping to launch the deep learning era [19].
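A hedged sketch of Hopfield's associative memory [17] (sizes and the toy pattern are ours): a pattern stored by a Hebbian outer product is recovered from a corrupted cue by iterated threshold updates.

```python
import numpy as np

def store(patterns):
    """Hebbian storage: W is the sum of outer products, diagonal zeroed."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10, seed=0):
    """Asynchronous +/-1 updates descend the energy toward an attractor."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = store(pattern[None, :])
cue = pattern.copy()
cue[:2] *= -1                                   # corrupt two bits
print(np.array_equal(recall(W, cue), pattern))  # True: pattern restored
```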

2. Deep Neural Networks

A. Deep Neural Networks

A deep neural network (DNN) is an artificial neural network with many hidden layers. In the 1980s, Hinton and colleagues popularized backpropagation as a practical way to train multilayer networks, and from the 2000s onward, deep architectures trained with backpropagation have driven the current resurgence of AI.

B. Biological Features Reflected in Deep Neural Networks

Several components of modern DNNs echo the brain. Convolutional neural networks (CNNs) mirror the local receptive fields of the visual cortex; long short-term memory (LSTM) adds gated memory reminiscent of working memory [22]; and attention mechanisms select what to process, loosely analogous to selective attention in perception [23]. Still, DNNs lack many abilities the brain shows effortlessly: they suffer catastrophic forgetting when trained sequentially [20], struggle with transfer learning and continual learning, and need far more data than the one-shot or few-shot learning humans perform; meta-learning is one line of work that tries to close this gap [21].

3. Biological Implausibility of Deep Neural Networks

A. Why Biological Plausibility Matters

Backpropagation, the workhorse of deep learning, is widely considered biologically implausible: the brain appears to have no mechanism that implements it directly [24]. Examining where the algorithm departs from biology points both toward more brain-like learning algorithms and toward ways around the limitations of current DNNs.

B. Implausibilities in Backpropagation

1) The weight transport problem

Backpropagation sends errors backward through the transpose of the forward weight matrices, so each neuron would need exact knowledge of its downstream synaptic weights. Biological synapses transmit in one direction and have no access to this information, a mismatch known as the weight transport problem [25,28].
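To illustrate one proposed workaround, here is a hedged single-hidden-layer sketch of feedback alignment [25] (architecture, data, and hyperparameters are our own choices): the backward pass routes the error through a fixed random matrix B instead of the transpose of W2, so no weight transport occurs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 4, 16, 1, 0.05
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback replaces W2.T

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0, 1, 1], [0, 1, 0, 1], [1, 1, 0, 0], [1, 0, 1, 0]], float)
y = np.array([[0], [1], [1], [0]], float)   # toy nonlinear task

for _ in range(5000):
    h = sigmoid(W1 @ X.T)                # forward pass
    out = sigmoid(W2 @ h)
    e = out - y.T                        # output error
    dh = (B @ e) * h * (1 - h)           # error routed through B, not W2.T
    W2 -= lr * e @ h.T
    W1 -= lr * dh @ X
print(np.round(out, 2))                  # outputs drift toward [[0, 1, 1, 0]]
```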

2) The derivative transport problem

Backpropagation also requires each unit to know and transmit the derivative of its activation function evaluated during the forward pass. A biological neuron has no obvious way to compute or communicate such derivatives [26].

3) Linear operations

DNN units compute weighted linear sums followed by simple nonlinearities, whereas real neurons and dendrites perform rich nonlinear integration; theories of how the cortex could approximate error backpropagation must work around this mismatch [27].

C. Other Implausibilities

Beyond backpropagation itself, DNNs rely on globally synchronized updates and high-precision continuous weights, neither of which maps cleanly onto neural tissue.

4. Artificial Neural Networks Reflecting Biological Plausibility

A. Local Learning Algorithms

Backpropagation requires global information: every weight update depends on the full error signal propagated back from the output layer. The brain instead learns mostly from local information available at each synapse. Local learning algorithms therefore replace the global backward pass with updates that each layer or synapse can compute on its own.

Lillicrap et al. showed that a fixed random feedback matrix can replace the transposed forward weights and still support learning, a scheme called feedback alignment [25]; later work demonstrated that such biologically plausible algorithms can scale to large datasets [29] and that learned feedback connections can remove weight transport entirely [30].

B. Quantized Neural Networks

Biological synapses appear to store far less precision than 32-bit floating point; learning may need only a few bits of synaptic precision [31]. Quantized neural networks (QNNs) exploit this observation: TernGrad restricts gradients to three levels to cut communication in distributed training [32], XNOR-Net classifies ImageNet with binary convolutional weights [33], and full 8-bit integer training now reaches high performance at scale [34].
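A minimal sketch of the idea behind binary-weight networks such as XNOR-Net [33] (variable names are ours): each weight tensor is replaced by its sign plus a single scaling factor, leaving roughly one bit of precision per weight.

```python
import numpy as np

def binarize(W):
    """XNOR-Net-style binarization: sign(W) scaled by alpha = mean(|W|),
    the scale that minimizes the L2 error of the 1-bit approximation."""
    alpha = np.abs(W).mean()
    return alpha * np.sign(W), alpha

W = np.random.default_rng(1).normal(size=(3, 3))
Wb, alpha = binarize(W)
print(np.round(W, 2))   # full-precision weights
print(np.round(Wb, 2))  # 1-bit weights times scale alpha
# In training, gradients usually pass through the quantizer unchanged
# (straight-through estimator) so a full-precision copy keeps learning.
```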

C. Hybrid Architectures

Hybrid designs combine a conventional deep network with a small, biologically derived circuit. A neural circuit policy (NCP), modeled on the nervous system of the nematode C. elegans, uses only a handful of control neurons on top of a CNN feature extractor and achieved auditable autonomous lane keeping [35].

Ⅲ. Spiking Neural Networks (SNN)

A spiking neural network (SNN) models neural communication more faithfully than a DNN: instead of exchanging continuous activations at every step, units emit discrete spikes over time, as biological neurons do.

1. Coding in Spiking Neural Networks

Biological neurons communicate through action potentials: brief, all-or-none electrical pulses. Information is carried not by a single value but by a spike train, the temporal sequence of these pulses [36,37] (Fig. 3). An SNN therefore needs an encoding scheme that maps input values onto spike trains. The main schemes are rate coding, which represents a value by the number of spikes per unit time; temporal coding such as time-to-first-spike (TTFS) coding, which represents it by spike latency; burst coding, which uses short high-frequency packets of spikes; and phase coding, which positions spikes relative to a background oscillation [38]. Temporal codes can be remarkably efficient, conveying a value with as little as a single spike [39].
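A hedged sketch of the two most common schemes (names and parameters are ours): rate coding spreads a value over many stochastic spikes, while TTFS coding packs it into the latency of a single spike.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(intensity, n_steps=100):
    """Rate coding: per-step spike probability proportional to intensity,
    so stronger inputs produce more spikes on average."""
    return (rng.random(n_steps) < intensity).astype(int)

def ttfs_encode(intensity, n_steps=100):
    """Time-to-first-spike coding: stronger inputs fire earlier; the value
    is carried entirely by the latency of one spike."""
    train = np.zeros(n_steps, dtype=int)
    train[int((1.0 - intensity) * (n_steps - 1))] = 1
    return train

for x in (0.9, 0.2):
    print(f"x={x}: {rate_encode(x).sum()} rate spikes, "
          f"first TTFS spike at t={ttfs_encode(x).argmax()}")
```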

2. Learning in Spiking Neural Networks

The best-known learning rule for SNNs is spike-timing-dependent plasticity (STDP) [40], a temporally precise refinement of Hebb's rule: if a presynaptic spike shortly precedes a postsynaptic spike, the synapse is strengthened (long-term potentiation, LTP); if it follows, the synapse is weakened (long-term depression, LTD). STDP alone can discover repeating patterns embedded in continuous spike trains [41]. Gradient-based training is also possible: SpikeProp adapts backpropagation to networks of spiking neurons [42].
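A minimal sketch of the pair-based STDP window (the constants are typical illustrative values, not from the original article): the weight change decays exponentially with the pre/post spike interval and flips sign with its order.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre in ms.
    dt > 0 (pre before post) -> potentiation (LTP);
    dt < 0 (post before pre) -> depression (LTD)."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

for dt in (2.0, 10.0, 40.0, -2.0, -10.0):
    print(f"dt = {dt:+6.1f} ms -> dw = {stdp_dw(dt):+.5f}")
```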

3. Outlook for Spiking Neural Networks

SNNs process information sparsely and asynchronously, which makes them efficient for spatio-temporal data streams and a natural fit for neuromorphic hardware [43], although their training methods and accuracy still trail those of DNNs.

Ⅳ. Brain Networks

Network neuroscience studies the brain as a network [44]. Structural connectivity is measured with diffusion tensor imaging (DTI), and neural activity with electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) [45].

1. Brain Network Models

A brain network model represents regions as nodes and their structural or functional connections as edges. Functional edges are typically estimated from fMRI time series by Pearson correlation or by independent component analysis (ICA) [45]. The resulting weighted matrix is usually sparsified by thresholding, and group-level consensus networks are built across subjects; analyzing these graphs is the domain of brain network analysis.
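As a minimal sketch of this pipeline (synthetic time series stand in for fMRI data): correlated signals yield strong Pearson entries, and thresholding keeps only those edges.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_regions = 200, 6
ts = rng.normal(size=(T, n_regions))     # stand-in for regional fMRI signals
ts[:, 1] += 0.8 * ts[:, 0]               # regions 0 and 1 co-fluctuate
ts[:, 4] += 0.8 * ts[:, 3]               # regions 3 and 4 co-fluctuate

fc = np.corrcoef(ts.T)                   # Pearson functional connectivity
np.fill_diagonal(fc, 0.0)
adjacency = (np.abs(fc) > 0.4).astype(int)   # binarize by thresholding
print(adjacency)                         # only edges 0-1 and 3-4 survive
```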

2. Reservoir Computing

A. Overview

Reservoir computing (RC) harnesses the dynamics of a fixed recurrent neural network (RNN) [46]. An input signal drives a large, randomly connected "reservoir," whose rich transient dynamics transform the input; only a simple readout layer is trained, which avoids the cost and instability of training the recurrent weights themselves (Fig. 4).

[Figure 4. Reservoir computing architecture: input weights W_in project the signal into the fixed recurrent reservoir W; only the readout W_out is trained, optionally with error feedback. The ESN and the LSM are the two representative models.]

B. Echo State Networks

The echo state network (ESN), proposed by Herbert Jaeger in 2001, is the classic rate-based RC model [47]. Jaeger formulated the echo state property (ESP): the reservoir state must asymptotically depend only on the input history, not on initial conditions, which in practice is encouraged by scaling the recurrent weights to a spectral radius below one. ESNs satisfying the ESP have been shown to be universal approximators [48].
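A hedged minimal ESN (sizes, scaling, and the toy task are our choices): W_in and W stay fixed, the recurrent matrix is rescaled to spectral radius 0.9 to encourage the ESP, and only W_out is fitted by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(42)
n_res, washout, ridge = 200, 100, 1e-6

W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

def run_reservoir(u):
    """Drive the fixed reservoir; collect states for the trained readout."""
    x, states = np.zeros(n_res), []
    for ut in u:
        x = np.tanh(W_in * ut + W @ x)
        states.append(x.copy())
    return np.array(states)

t = np.linspace(0, 60, 1500)
u, y = np.sin(t), np.sin(t + 0.3)        # task: predict a phase-shifted sine
S, Y = run_reservoir(u)[washout:], y[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)  # ridge fit
print(f"train MSE: {np.mean((S @ W_out - Y) ** 2):.2e}")
```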

C. Liquid State Machines

The liquid state machine (LSM) arose in computational neuroscience as the spiking counterpart of the ESN: its reservoir is a recurrent SNN whose "liquid" dynamics are read out by trained units [49]. The ESN and the LSM were developed independently but share the same principle of computing with a fixed dynamical system.

3. The Brain and Reservoir Computing

A. The Connectome

The connectome is the complete wiring diagram of a nervous system [50]. The connectome of C. elegans, with its 302 neurons, has been fully mapped, and a mesoscale connectome of the mouse brain has been published [51]. Connectomes are far from random: human functional networks show hierarchical modularity [52], interareal cortical networks in the macaque are weighted and directed [53], and long-distance connections are specific and robust features of interareal connectomes [54]. Such modular, small-world topology distinguishes brain networks from the random graphs used in conventional reservoirs.

B. Brain-Network-Based Reservoir Computing

If reservoirs compute through their dynamics, the brain's own wiring is a promising source of prior knowledge. Recent studies build the reservoir directly from a connectome: the adjacency matrix of an empirical brain network replaces the random recurrent weights of an ESN or LSM [55,56]. These brain-network reservoirs perform comparably to random ones on benchmarks such as memory capacity while reflecting biological constraints such as wiring cost [55,56], suggesting that the brain's topology itself encodes useful computational structure for AI.
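In that spirit, a hedged sketch (the modular matrix below is a synthetic stand-in, since we cannot ship DTI data): build the reservoir's recurrent matrix from a brain-like adjacency matrix, rescale it for the ESP, and reuse the ESN readout from the sketch above.

```python
import numpy as np

rng = np.random.default_rng(7)

def modular_adjacency(n=90, modules=3, p_in=0.3, p_out=0.02):
    """Synthetic stand-in for a connectome: dense within modules, sparse
    between them; studies such as [55,56] use empirical matrices instead."""
    labels = np.repeat(np.arange(modules), n // modules)
    P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    A = (rng.random((n, n)) < P).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

A = modular_adjacency()
W = A * rng.normal(size=A.shape)                  # weight only existing edges
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to satisfy the ESP
# W can now replace the random recurrent matrix in the ESN above, preserving
# the brain-like topology while only the readout is trained.
print(f"edge density: {A.mean():.2f}")
```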
Ⅴ. Meaningful Properties of Brain Networks

1. Synchronized Oscillation

Populations of neurons synchronize their firing into rhythmic oscillations. Such synchronized oscillations are thought to orchestrate memory [57], can be analyzed from the perspective of non-linear dynamics [58], and, under the communication-through-coherence hypothesis, provide the routing mechanism by which brain regions exchange information [59]. Current AI architectures have no counterpart to this dynamic coordination.

2. Motifs

A motif is a small connection pattern that recurs in a network more often than chance. Brain networks have characteristic motif distributions [61], and motif structure carries functional meaning: brain network motifs are markers of the loss and recovery of consciousness [60].

3. Dynamic Reconfiguration

Brain networks are not static: their modular organization reconfigures during learning [62], and the rate of switching between network configurations predicts cognitive performance [63]. This flexibility contrasts with the fixed architectures of most artificial networks.

4. Generative Models

Generative models of network neuroscience aim to reproduce brain network topology from simple growth rules, offering compact explanations of how such networks arise [64]; similar generative principles could guide the construction of artificial networks.

Ⅵ. Conclusion

From Hebb's rule to today's architectures, neuroscience has repeatedly seeded advances in AI. DNNs and SNNs already excel at pattern recognition, but cognition in the broader sense remains out of reach, and the limitations discussed above suggest that mathematics and computing power alone may not close the gap. The emerging findings of brain network analysis, from connectome-based reservoirs to oscillations, motifs, and dynamic reconfiguration, offer a fresh source of inspiration on the road toward human-level AI.

Abbreviations

AI Artificial Intelligence
CNN Convolutional Neural Network
DNN Deep Neural Network
DTI Diffusion Tensor Imaging
EEG Electroencephalogram
ESN Echo State Network
ESP Echo State Property
fMRI functional Magnetic Resonance Imaging
ICA Independent Component Analysis
LSM Liquid State Machine
LSTM Long Short Term Memory
QNN Quantized Neural Network
RC Reservoir Computing
RL Reinforcement Learning
RNN Recurrent Neural Network

References

[1] D. Hassabis et al., "Neuroscience-inspired artificial intelligence," Neuron, vol. 95, no. 2, July 2017, pp. 245-258.
[2] N.C. Thompson et al., "The computational limits of deep learning," July 2020, arXiv: 2007.05558.
[3] M.M. Waldrop, "What are the limits of deep learning?," PNAS, vol. 116, no. 4, Jan. 2019, pp. 1074-1077.
[4] D. Heaven, "Deep trouble for deep learning," Nature, vol. 574, no. 7777, Oct. 2019, pp. 163-166.
[5] Wikimedia Commons: Components of neuron, https://commons.wikimedia.org/wiki/File:Components_of_neuron.jpg.
[6] Wikimedia Commons: Connectome extraction procedure, https://commons.wikimedia.org/wiki/File:Connectome_extraction_procedure.jpg.
[7] Wikimedia Commons: The Human Connectome, https://commons.wikimedia.org/wiki/File:The_Human_Connectome.png.
[8] https://www.flickr.com/photos/nihgov/46551667272/.
[9] K. Lucas, "The 'all or none' contraction of the amphibian skeletal muscle fibre," J. Physiol., vol. 38, no. 2-3, 1909, pp. 113-133.
[10] W.S. McCulloch et al., "A logical calculus of the ideas immanent in nervous activity," Bull. Math. Biophys., vol. 5, no. 4, 1943, pp. 115-133.
[11] D.O. Hebb, The Organization of Behavior: A Neuropsychological Theory, Psychology Press, London, UK, 2005.
[12] E.L. Bienenstock et al., "Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex," J. Neurosci., vol. 2, no. 1, 1982, pp. 32-48.
[13] E. Oja, "Simplified neuron model as a principal component analyzer," J. Math. Biol., vol. 15, no. 3, 1982, pp. 267-273.
[14] F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain," Psychol. Rev., vol. 65, no. 6, 1958, pp. 386-408.
[15] F. Rosenblatt, "Principles of neurodynamics: Perceptrons and the theory of brain mechanisms," Cornell Aeronautical Lab, Buffalo, NY, USA, 1961.
[16] M. Minsky and S.A. Papert, Perceptrons: An Introduction to Computational Geometry, MIT Press, London, UK, 2017.
[17] J.J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," PNAS, vol. 79, no. 8, 1982, pp. 2554-2558.
[18] G.E. Hinton and T.J. Sejnowski, "Learning and relearning in Boltzmann machines," Parallel Distrib. Process.: Explor. Microstruct. Cogn., vol. 2, no. 1, 1986, pp. 282-317.
[19] G.E. Hinton, S. Osindero, and Y.W. Teh, "A fast learning algorithm for deep belief nets," Neural Comput., vol. 18, no. 7, July 2006, pp. 1527-1554.
[20] R.M. French, "Catastrophic forgetting in connectionist networks," Trends Cogn. Sci., vol. 3, no. 4, 1999, pp. 128-135.
[21] T. Hospedales et al., "Meta-learning in neural networks: A survey," Nov. 2020, arXiv: 2004.05439.
[22] S. Hochreiter et al., "Long short-term memory," Neural Comput., vol. 9, no. 8, 1997, pp. 1735-1780.
[23] A. Vaswani et al., "Attention is all you need," 2017, arXiv: 1706.03762.
[24] T.P. Lillicrap et al., "Backpropagation and the brain," Nat. Rev. Neurosci., vol. 21, Apr. 2020, pp. 335-346.
[25] T.P. Lillicrap et al., "Random synaptic feedback weights support error backpropagation for deep learning," Nat. Commun., vol. 7, no. 1, Dec. 2016, pp. 1-10.
[26] Y. Bengio et al., "Towards biologically plausible deep learning," Aug. 2016, arXiv: 1502.04156.
[27] J.C.R. Whittington et al., "Theories of error back-propagation in the brain," Trends Cogn. Sci., vol. 23, no. 3, Mar. 2019, pp. 235-250.
[28] A. Tavanaei et al., "Deep learning in spiking neural networks," Neural Netw., vol. 111, Mar. 2019, pp. 47-63.
[29] W. Xiao et al., "Biologically-plausible learning algorithms can scale to large datasets," Dec. 2018, arXiv: 1811.03567.
[30] M. Akrout et al., "Deep learning without weight transport," Jan. 2020, arXiv: 1904.05391.
[31] C. Baldassi et al., "Learning may need only a few bits of synaptic precision," Phys. Rev. E, vol. 93, no. 5, May 2016.
[32] W. Wen et al., "TernGrad: Ternary gradients to reduce communication in distributed deep learning," Dec. 2017, arXiv: 1705.07878.
[33] M. Rastegari et al., "XNOR-Net: ImageNet classification using binary convolutional neural networks," in Computer Vision - ECCV 2016, vol. 9908, Springer, Cham, Switzerland, 2016, pp. 525-542.
[34] Y. Yang et al., "Training high-performance and large-scale deep neural networks with full 8-bit integers," Neural Netw., vol. 125, May 2020, pp. 70-82.
[35] M. Lechner et al., "Neural circuit policies enabling auditable autonomy," Nat. Mach. Intell., vol. 2, Oct. 2020, pp. 642-652.
[36] E.D. Adrian et al., "The impulses produced by sensory nerve endings," J. Physiol., vol. 61, no. 4, 1926, pp. 465-483.
[37] G.Q. Bi et al., "Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type," J. Neurosci., vol. 18, no. 24, 1998, pp. 10464-10472.
[38] W. Guo et al., "Neural coding in spiking neural networks: A comparative study for robust neuromorphic systems," Front. Neurosci., vol. 15, 2021.
[39] S.J. Thorpe, "Spike arrival times: A highly efficient coding scheme for neural networks," Parallel Process. Neural Syst., 1990, pp. 91-94.
[40] D.E. Feldman, "The spike-timing dependence of plasticity," Neuron, vol. 75, no. 4, 2012, pp. 556-571.
[41] T. Masquelier et al., "Spike timing dependent plasticity finds the start of repeating patterns in continuous spike trains," PLoS ONE, vol. 3, no. 1, 2008, e1377.
[42] S.M. Bohte et al., "SpikeProp: Backpropagation for networks of spiking neurons," in Proc. ESANN, Bruges, Belgium, Apr. 2000, pp. 419-424.
[43] A. Kugele et al., "Efficient processing of spatio-temporal data streams with spiking neural networks," Front. Neurosci., vol. 14, 2020.
[44] D.S. Bassett et al., "Network neuroscience," Nat. Neurosci., Mar. 2017.
[45] A. Fornito and E.T. Bullmore, "Connectomic intermediate phenotypes for psychiatric disorders," Front. Psychiatry, Apr. 2012.
[46] M. Lukoševičius, H. Jaeger, and B. Schrauwen, "Reservoir computing trends," KI - Künstliche Intelligenz, vol. 26, May 2012, pp. 365-371.
[47] H. Jaeger, "The 'echo state' approach to analysing and training recurrent neural networks - with an erratum note," GMD Tech. Rep., vol. 148, Bonn, Germany, Jan. 2010.
[48] L. Grigoryeva et al., "Echo state networks are universal," Neural Netw., vol. 108, Dec. 2018, pp. 495-508.
[49] H. Jaeger, W. Maass, and J. Principe, "Special issue on echo state networks and liquid state machines," Neural Netw., vol. 20, no. 3, Apr. 2007, pp. 287-289.
[50] O. Sporns, "The human connectome: Origins and challenges," NeuroImage, vol. 80, Oct. 2013, pp. 53-61.
[51] S.W. Oh et al., "A mesoscale connectome of the mouse brain," Nature, vol. 508, no. 7495, Apr. 2014.
[52] D. Meunier et al., "Hierarchical modularity in human brain functional networks," Front. Neuroinform., vol. 3, 2009.
[53] N.T. Markov et al., "A weighted and directed interareal connectivity matrix for macaque cerebral cortex," Cereb. Cortex, vol. 24, 2014.
[54] R.F. Betzel and D.S. Bassett, "Specificity and robustness of long-distance connections in weighted, interareal connectomes," PNAS, vol. 115, no. 2, May 2018.
[55] F. Damicelli et al., "Brain connectivity meets reservoir computing," bioRxiv, Jan. 2021.
[56] L.E. Suárez et al., "Learning function from structure in neuromorphic networks," bioRxiv, Nov. 2020, doi: 10.1101/2020.11.10.350876.
[57] W. Luo and J.-S. Guan, "Do brain oscillations orchestrate memory?," Brain Sci. Adv., vol. 4, no. 1, Oct. 2018, pp. 16-33.
[58] R. Guevara Erra et al., "Neural synchronization from the perspective of non-linear dynamics," Front. Comput. Neurosci., Oct. 2017.
[59] P. Fries, "Rhythms for cognition: Communication through coherence," Neuron, vol. 88, no. 1, Oct. 2015.
[60] C. Duclos et al., "Brain network motifs are markers of loss and recovery of consciousness," Sci. Rep., vol. 11, Mar. 2021.
[61] O. Sporns et al., "Motifs in brain networks," PLoS Biol., vol. 2, Nov. 2004.
[62] D.S. Bassett et al., "Dynamic reconfiguration of human brain networks during learning," PNAS, May 2011, pp. 7641-7646.
[63] M. Pedersen et al., "Multilayer network switching rate predicts brain performance," PNAS, vol. 115, Dec. 2018.
[64] R.F. Betzel et al., "Generative models for network neuroscience: Prospects and promise," J. R. Soc. Interface, vol. 14, no. 136, June 2017.
