MIT
December 15, 2017
Jeff Hawkins
jhawkins@numenta.com
Have We Missed Half of What the Neocortex Does?
Allocentric Location as the Basis of Perception
1) Reverse engineer the neocortex
- an ambitious but realizable goal
- seek biologically accurate theories
- test empirically and via simulation
2) Enable technology based on cortical theory
- active open source community
- basis for Machine Intelligence
The Cortical Column: A Couple of Thoughts
[Figure: laminar circuit of a cortical column with input at the bottom; excitatory cellular layers shown: L2, L3a, L3b, L4, L5 tt, L5 cc, L5 cc-ns, L6a, L6b, L6 ip, L6 mp, L6 bp, grouped as L2/3, L4, L5, L6]
1) Cortical columns are complex
- Twelve or more excitatory cellular layers
- Two parallel FF pathways
- Parallel FB pathways (not shown)
- Numerous intra- and inter-column connections (not shown)
- Inhibitory neurons/circuits are equally complex
2) The function of a cortical column must also be complex.
3) Whatever a column does applies to everything the cortex does.
L5: Callaway et al., 2015
L6: Zhang and Deschenes, 1997
[Figure: simplified diagram of cortex and thalamus showing two L5 output pathways: a direct output, and an output back to cortex via thalamus (labels: 50%, 10%)]
L5 CTC: Guillery, 1995; Constantinople and Bruno, 2013
Observation:
The neocortex is constantly predicting its inputs.
Research question:
How do networks of neurons, as seen in the neocortex, learn predictive models of the world?
1) How does the cortex learn predictive models of extrinsic sequences?
2) How does the cortex learn predictive models of sensorimotor sequences?
Current research: How do columns compute allocentric location?
- Grid cells in entorhinal cortex solve a similar problem
- Big Idea: cortical columns contain analogs of grid cells and head direction cells
- Starting to understand the function of numerous layers and connections
“Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex”
Hawkins and Ahmad, Frontiers in Neural Circuits, 2016/03/30
- Big Idea: Pyramidal neuron model for prediction
- A single layer network model for sequence memory
- Properties of sparse activations
“A Theory of How Columns in the Neocortex Learn the Structure of the World”
Hawkins, Ahmad, and Cui, Frontiers in Neural Circuits, 2017/10/25
- Extension of sequence memory model
- Big Idea: Columns compute “allocentric” location of input
- By moving sensor, columns learn models of complete objects
Prediction Starts in the Neuron
HTM Neuron Model (pyramidal neuron; Major, Larkum and Schiller, 2013)
Proximal synapses: cause somatic spikes; define the classic receptive field of the neuron.
Distal synapses: cause dendritic spikes; put the cell into a depolarized, or "predictive", state.
Depolarized neurons fire sooner, inhibiting nearby neurons.
A neuron can predict its activity in hundreds of unique contexts.
5K to 30K excitatory synapses
- 10% proximal
- 90% distal
Distal dendrites are pattern detectors
- 8-15 co-active, co-located synapses generate a dendritic spike
- sustained depolarization of the soma
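A minimal sketch may make this concrete. The class, names, and parameter values below are illustrative assumptions, not Numenta's NuPIC implementation; each distal segment recognizes one learned context, and enough co-active synapses on any segment puts the cell into the predictive state:

```python
import random

class HTMNeuron:
    """Sketch: distal dendritic segments let one neuron recognize many contexts."""

    def __init__(self, threshold=10):
        self.segments = []           # each segment: a set of presynaptic cell ids
        self.threshold = threshold   # ~8-15 co-active synapses -> dendritic spike

    def learn_context(self, active_cells, sample_size=20):
        # Grow a new distal segment onto a small subsample of the active cells.
        self.segments.append(set(random.sample(sorted(active_cells), sample_size)))

    def is_predicted(self, active_cells):
        # A dendritic spike on any segment depolarizes the cell ("predictive"
        # state); depolarized cells fire sooner and inhibit their neighbors.
        return any(len(seg & active_cells) >= self.threshold
                   for seg in self.segments)

neuron = HTMNeuron()
context = set(random.sample(range(5000), 100))    # sparse pattern: 100 of 5,000 cells
neuron.learn_context(context)
print(neuron.is_predicted(context))                               # True
print(neuron.is_predicted(set(random.sample(range(5000), 100))))  # almost surely False
```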
Properties of Sparse Activations
Example: One layer of cells, 5,000 neurons, 2% (100) active
1) Representational capacity is virtually unlimited
(5,000 choose 100) ≈ 3 × 10^211
2) Randomly chosen representations have minimal overlap
3) A neuron can robustly recognize an activation pattern by forming 10 to 20 synapses
4) Unions of patterns do not cause errors in recognition
Hypothesis: Cellular layers use unions to represent uncertainty
Hawkins & Ahmad, 2016; Ahmad & Hawkins, 2015
[Figure: a cell robustly recognizes pattern 1 (100 active cells) by forming synapses to a small subsample of its active cells; given a union of patterns 1-10 (1,000 active cells), the cell still robustly recognizes pattern 1]
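The first and third properties are easy to check numerically. This short sketch uses the slide's parameters (5,000 cells, 100 active); the subsample size and threshold are assumed values in the 10-20 synapse range quoted above:

```python
from math import comb

n, a = 5000, 100        # one layer: 5,000 neurons, 2% (100) active

# Property 1: representational capacity.
print(f"C({n},{a}) = {comb(n, a):.2e}")            # ~3e211

# Property 3: robust recognition by subsampling. A cell forms s synapses onto
# the 100 cells of a pattern and fires when >= theta of them are active. The
# chance that an unrelated random pattern of 100 cells reaches the threshold
# is a hypergeometric tail:
s, theta = 20, 10
p_false = sum(comb(s, k) * comb(n - s, a - k)
              for k in range(theta, s + 1)) / comb(n, a)
print(f"false-match probability ~ {p_false:.1e}")  # vanishingly small
```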
A Single Layer Network Model for Sequence Memory
- Neurons in a mini-column learn the same FF receptive field.
- Neurons form distal connections to nearby cells.
[Figure: layer activity with no prediction vs. with predicted input]
(Hawkins & Ahmad, 2016)
(Cui et al, 2016)
- High capacity (learns up to 1M transitions)
- Learns high-order sequences: “ABCD” vs “XBCY”
- Makes simultaneous predictions: “BC…” predicts “D” and “Y”
- Extremely robust (tolerant to 40% noise and faults)
- Learning is unsupervised, continuous, and local
- Satisfies many biological constraints
- Multiple open source implementations (some commercial)
[Animation: at t=0 and t=1, predicted cells fire first and inhibit their neighbors, yielding the next prediction at t=2]
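A minimal sketch of this activation rule, simplified from Hawkins & Ahmad (2016); the data structures and names are my own, not the published code:

```python
def temporal_memory_step(active_columns, predicted_cells, cells_per_column=32):
    """One step of the single-layer sequence memory (simplified sketch).

    Cells are (column, index) pairs. Correctly predicted cells fire first
    and inhibit the rest of their mini-column; an unpredicted column bursts.
    """
    active_cells = set()
    for col in active_columns:
        column_cells = {(col, i) for i in range(cells_per_column)}
        predicted_here = column_cells & predicted_cells
        active_cells |= predicted_here if predicted_here else column_cells
    return active_cells

# Column 7 becomes active while one of its cells, (7, 3), was predicted:
print(temporal_memory_step({7}, {(7, 3)}))   # {(7, 3)}: sparse, context-specific
print(temporal_memory_step({7}, set()))      # all 32 cells burst: input unanticipated
```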
1) How does the cortex learn predictive models of extrinsic sequences?
2) How does the cortex learn predictive models of sensorimotor sequences?
Current research: How do columns compute allocentric location?
- Grid cells in entorhinal cortex solve a similar problem
- Hypothesis: cortical columns contain analogs of grid cells and head direction cells
- Starting to understand the function of numerous layers and connections
“Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex”
Hawkins and Ahmad, Frontiers in Neural Circuits, 2016/03/30
- Pyramidal neuron model
- A single layer network model for sequence memory
- Properties of sparse activations
“A Theory of How Columns in the Neocortex Learn the Structure of the World”
Hawkins, Ahmad, and Cui, Frontiers in Neural Circuits, 2017/10/25
- Extension of sequence memory model
- Big Idea: Columns compute “allocentric” location of input
- By moving sensor, columns learn models of complete objects
How Could a Layer of Neurons Learn a Predictive Model of Sensorimotor Sequences?
[Figure: a sequence-memory layer receives sensor input plus motor-related context, turning it into a memory of sensorimotor sequences]
Hypothesis:
By adding motor-related context, a cellular layer can predict
its input as the sensor moves.
What is the correct motor-related context?
Two Layer Model of Sensorimotor Sequence Memory
With allocentric location input, a column can learn models of complete objects by sensing different locations on the object over time.
[Figure: a sensor feature plus an allocentric location signal (which changes with each movement) feed a sequence-memory layer representing "feature @ location"; a pooling layer above it represents the object, stable over movement of the sensor]
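A toy sketch of the idea, with plain sets standing in for sparse activations; the object names and feature/location labels are invented for illustration:

```python
class TwoLayerColumn:
    """Toy sketch: an input layer codes "feature @ location"; a pooling
    layer forms an object code that stays stable as the sensor moves."""

    def __init__(self):
        self.objects = {}                    # name -> set of (feature, location)

    def learn(self, name, feature_locations):
        self.objects[name] = set(feature_locations)

    def infer(self, sensed):
        # Each (feature, location) pair sensed over time narrows the set of
        # candidate objects; the pooled object code persists meanwhile.
        candidates = set(self.objects)
        for pair in sensed:
            candidates = {o for o in candidates if pair in self.objects[o]}
        return candidates

col = TwoLayerColumn()
col.learn("cup",  [("rim", "top"), ("handle", "side")])
col.learn("bowl", [("rim", "top"), ("smooth", "side")])
print(col.infer([("rim", "top")]))                      # {'cup', 'bowl'}: ambiguous
print(col.infer([("rim", "top"), ("handle", "side")]))  # {'cup'}: resolved by moving
```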
Sensorimotor Inference With Multiple Columns
- Each column has partial knowledge of the object.
- Long-range connections in the object layer allow columns to vote.
- Inference is much faster with multiple columns.
[Figure: three columns, each sensing a feature at a location on the object; their object layers are laterally connected so the columns can vote]
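A sketch of the voting step under the same toy representation; the paper models voting with lateral distal connections, so plain set intersection is only a stand-in:

```python
def vote(column_candidates):
    """Keep only the objects every column's evidence supports; if the
    columns disagree entirely, fall back to the union of candidates."""
    consensus = set.intersection(*column_candidates)
    return consensus if consensus else set.union(*column_candidates)

# Three fingertips touch three parts of one object at the same time:
print(vote([{"cup", "bowl"}, {"cup", "can"}, {"cup", "bowl", "can"}]))  # {'cup'}
```

One column would need several movements to reach the same answer; voting lets multiple columns resolve the object in a single step.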
Objects Recognized By Integrating Inputs Over Time
[Figure: one column integrating feature @ location inputs over successive sensations]
Recognition is Faster with Multiple Columns
[Figure: three columns with feature/location inputs and laterally connected output layers]
YCB Object Benchmark
Yale-CMU-Berkeley (YCB) Object Benchmark (Calli et al., 2017)
- 80 objects designed for robotic grasping tasks
- Includes high-resolution 3D CAD files
Simulation using YCB Object Benchmark
- We created a virtual hand using the Unity game engine
- Curvature-based sensor on each fingertip
- 4096 neurons per layer per column
- 98.7% recall accuracy (77/78 uniquely classified)
- Convergence time depends on the object, the sequence of sensations, and the number of fingers.
[Figures: pairwise confusion between objects after 1, 2, 6, and 10 touches, showing convergence with 1 finger]
Convergence Time vs. Number of Columns
[Figure: convergence time falls as the number of columns increases]
This is why we can infer complex objects in a single grasp or single visual fixation.
1) How does the cortex learn predictive models of extrinsic sequences?
2) How does the cortex learn predictive models of sensorimotor sequences?
Current research: How do columns compute allocentric location?
- Hypothesis: cortical columns contain analogs of grid cells and head direction cells
- Starting to understand the function of numerous layers and connections
“Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex”
Hawkins and Ahmad, Frontiers in Neural Circuits, 2016/03/30
- Pyramidal neuron model
- A single layer network model for sequence memory
- Properties of sparse activations
“A Theory of How Columns in the Neocortex Learn the Structure of the World”
Hawkins, Ahmad, and Cui, Frontiers in Neural Circuits, 2017/10/25
- Extension of sequence memory model
- Big Idea: Columns compute “allocentric” location of input
- By moving sensor, columns learn models of complete objects
Hypothesis: Cortical columns contain analogs of grid cells and head direction cells
Entorhinal Cortex (environments)
[Figure: grid-cell firing maps in three rooms, with locations labeled A, B, C; X, Y, Z; and R, S, T (Stensola, Solstad, Frøland, Moser and Moser, 2012)]
Location
- Encoded by Grid Cells
- Unique to location in room AND room
- Location is updated by movement
Orientation (of head to room)
- Encoded by Head Direction Cells
- Anchored to room
- Orientation is updated by movement
Location and Orientation are both necessary to learn the structure of rooms and predict sensory input.
Cortical Column (objects)
Location
- Unique to location on object AND object
- Location is updated by movement
Orientation (of sensor patch to object)
- Anchored to object
- Orientation is updated by movement
Location and Orientation are both necessary to learn the structure of objects and predict sensory input.
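A one-dimensional sketch of the shared mechanism, path integration: location is updated by movement, with grid-cell-like modules tracking phase at different scales. The module scales and step sizes here are arbitrary illustrative values:

```python
def update_location(phases, movement, scales=(3, 5, 7)):
    """Path-integration sketch: each grid-cell-like module tracks location
    modulo its own scale, and movement advances every module's phase.
    The combination of phases identifies a location over a range far
    larger than any single module's scale."""
    return tuple((p + movement) % s for p, s in zip(phases, scales))

loc = (0, 0, 0)            # anchored to some point in a room, or on an object
for step in (2, 2, 1):     # location is updated by movement, not by sensation
    loc = update_location(loc, step)
print(loc)                 # (2, 0, 5): the phase code for "5 units from the anchor"
```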
Mapping Orientation and Location to a Cortical Column (most complex slide)
[Figure: proposed mapping of the two-stage model onto cortical layers L3, L4, L5a, L5b, L6a, and L6b, with sensation as the input]
Meaning / Operation:
- Orientation: motor updated (HD cell-like)
- Sensation @ Orientation: seq mem
- Feature: pooling
- Location: motor updated (grid cell-like)
- Feature @ Location: seq mem
- Object: pooling
1) A column is a two-stage sensorimotor model for learning and inferring structure.
2) A column usually cannot infer a Feature or Object in one sensation.
- Integrate over time (sense, move, sense, move, sense...)
- Vote with neighboring columns
3) This system is most obvious for touch, but it applies to vision and other sensory modalities.
Because this architecture exists throughout the neocortex, it suggests we learn, infer, and manipulate abstract concepts the same way we manipulate objects in the world.
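A toy sketch of the two-stage inference this mapping proposes; the dictionaries and labels are invented, and plain lookups stand in for the seq-mem and pooling layers:

```python
# Stage 1 pairs a sensation with orientation to yield a feature;
# stage 2 pairs that feature with location to yield the object.
feature_memory = {("edge", "0deg"): "rim", ("edge", "90deg"): "handle"}
object_memory = {"cup":  {("rim", "top"), ("handle", "side")},
                 "bowl": {("rim", "top")}}

def two_stage_inference(sensations):
    # sensations: (sensation, orientation, location) triples gathered over a
    # sequence of movements (sense, move, sense, ...)
    features = {(feature_memory[(s, o)], loc) for s, o, loc in sensations}
    return {obj for obj, pairs in object_memory.items() if features <= pairs}

print(two_stage_inference([("edge", "0deg", "top")]))    # {'cup', 'bowl'}
print(two_stage_inference([("edge", "0deg", "top"),
                           ("edge", "90deg", "side")]))  # {'cup'}
```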
Rethinking Hierarchy
Every column learns complete models of objects. They operate in parallel.
Inputs project to multiple levels at once. Columns operate at different scales of input.
Non-hierarchical connections allow columns to vote on shared elements such as "object" and "feature".
[Figure: classic hierarchy (sensor array -> simple features -> complex features -> objects) vs. proposed (vision and touch sensor arrays project to Regions 1, 2, and 3 at once; every region models objects; non-hierarchical connections link the columns within and across modalities)]
Summary
Goal: Understand the function and operation of the laminar circuits in the neocortex.
Method: Study how cortical columns make predictions of their inputs.
Proposals
1) Pyramidal neurons are the substrate of prediction.
Each neuron predicts its activity in hundreds of contexts.
2) A single layer of neurons forms a predictive memory of high-order sequences.
(sparse activations, mini-columns, fast inhibition, and lateral connections)
3) A two-layer network forms a predictive memory of sensorimotor sequences.
(add motor-derived context and a pooling layer)
4) Columns need motor-derived representations of the location and orientation of the
sensor relative to the object. These are analogous to grid and head direction cells.
5) A framework for the cortical column.
- Columns learn complete models of objects as “features at locations”, using two
sensorimotor inference stages.
6) The neocortex contains thousands of parallel models that resolve uncertainty by
associative linking and/or movement of the sensors.
Open Issues
Behaviors: how are they learned, encoded, and applied to objects?
Detailed model of hierarchy including thalamus
How can the model be applied to "Where" pathways, and how do "What" and "Where"
pathways work together?
Collaborations
There are many testable predictions in this model, a “green field”. We welcome
collaborations and discussions.
We are always interested in hosting visiting scholars and interns.
Numenta Team
Subutai Ahmad
VP Research
Marcus Lewis
Thank You
Editor's Notes
- Slide 2: I am the outlier on the agenda
- Slide 3: Excellent progress, it is accelerating
- Slide 16: 2 min 30 seconds
- Slide 23: Larger axes