
Lecture 10:

Recurrent Neural Networks

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 1 May 2, 2019
Last Time: CNN Architectures

GoogLeNet
AlexNet

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 4 May 2, 2019
Last Time: CNN Architectures

ResNet

SENet

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 5 May 2, 2019
Comparing complexity...

An Analysis of Deep Neural Network Models for Practical Applications, 2017.

Figures copyright Alfredo Canziani, Adam Paszke, Eugenio Culurciello, 2017. Reproduced with permission.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 6 May 2, 2019
Efficient networks...

MobileNets: Efficient Convolutional Neural Networks for Mobile Applications
[Howard et al. 2017]

- Depthwise separable convolutions replace standard convolutions by factorizing them into a depthwise convolution and a 1x1 convolution; this is much more efficient, with little loss in accuracy
- Follow-up MobileNetV2 work in 2018 (Sandler et al.)
- Other works in this space e.g. ShuffleNet (Zhang et al. 2017)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 7 May 2, 2019
Meta-learning: Learning to learn network architectures...
Neural Architecture Search with Reinforcement Learning (NAS)
[Zoph et al. 2016]

- “Controller” network that learns to design a good network architecture (outputs a string corresponding to a network design)
- Iterate:
  1) Sample an architecture from the search space
  2) Train the architecture to get a “reward” R corresponding to accuracy
  3) Compute the gradient of the sample probability, and scale by R to perform a controller parameter update (i.e. increase the likelihood of good architectures being sampled, decrease the likelihood of bad architectures)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 8 May 2, 2019
Meta-learning: Learning to learn network architectures...
Learning Transferable Architectures for Scalable Image
Recognition
[Zoph et al. 2017]
- Applying neural architecture search (NAS) to a
large dataset like ImageNet is expensive
- Design a search space of building blocks
(“cells”) that can be flexibly stacked
- NASNet: Use NAS to find best cell structure
on smaller CIFAR-10 dataset, then transfer
architecture to ImageNet
- Many follow-up works in this
space e.g. AmoebaNet (Real et
al. 2019) and ENAS (Pham,
Guan et al. 2018)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 9 May 2, 2019
Today: Recurrent Neural Networks

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 10 May 2, 2019
“Vanilla” Neural Network

Vanilla Neural Networks

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 11 May 2, 2019
Recurrent Neural Networks: Process Sequences

e.g. Image Captioning


image -> sequence of words

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 12 May 2, 2019
Recurrent Neural Networks: Process Sequences

e.g. Sentiment Classification


sequence of words -> sentiment

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 13 May 2, 2019
Recurrent Neural Networks: Process Sequences

e.g. Machine Translation


seq of words -> seq of words

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 14 May 2, 2019
Recurrent Neural Networks: Process Sequences

e.g. Video classification on frame level

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 15 May 2, 2019
Sequential Processing of Non-Sequence Data

Classify images by taking a series of “glimpses”

Ba, Mnih, and Kavukcuoglu, “Multiple Object Recognition with Visual Attention”, ICLR 2015.
Gregor et al, “DRAW: A Recurrent Neural Network For Image Generation”, ICML 2015
Figure copyright Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra,
2015. Reproduced with permission.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 16 May 2, 2019
Sequential Processing of Non-Sequence Data
Generate images one piece at a time!

Gregor et al, “DRAW: A Recurrent Neural Network For Image Generation”, ICML 2015
Figure copyright Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra, 2015. Reproduced with
permission.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 17 May 2, 2019
Recurrent Neural Network

RNN

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 18 May 2, 2019
Recurrent Neural Network

Key idea: RNNs have an “internal state” that is updated as a sequence is processed

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 19 May 2, 2019
Recurrent Neural Network

We can process a sequence of vectors x by applying a recurrence formula at every time step:

h_t = f_W(h_{t-1}, x_t)

where h_t is the new state, h_{t-1} the old state, x_t the input vector at some time step, and f_W some function with parameters W.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 20 May 2, 2019
Recurrent Neural Network

We can process a sequence of vectors x by applying a recurrence formula at every time step:

h_t = f_W(h_{t-1}, x_t)

Notice: the same function and the same set of parameters are used at every time step.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 21 May 2, 2019
(Simple) Recurrent Neural Network

The state consists of a single “hidden” vector h:

h_t = tanh(W_hh h_{t-1} + W_xh x_t)
y_t = W_hy h_t

Sometimes called a “Vanilla RNN” or an “Elman RNN” after Prof. Jeffrey Elman

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 22 May 2, 2019
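A minimal numpy sketch of one vanilla RNN step, assuming weight matrices Whh, Wxh, Why (and biases) of compatible sizes; the names follow the convention above, but the sizes below are purely illustrative:

import numpy as np

def rnn_step(x, h_prev, Whh, Wxh, Why, bh, by):
    # One step of a vanilla (Elman) RNN: update the hidden state, then read out.
    h = np.tanh(Whh @ h_prev + Wxh @ x + bh)   # h_t = tanh(W_hh h_{t-1} + W_xh x_t)
    y = Why @ h + by                           # y_t = W_hy h_t
    return h, y

# Illustrative sizes: 4-dim inputs, 8-dim hidden state, 4-dim outputs.
H, D, V = 8, 4, 4
rng = np.random.default_rng(0)
Whh, Wxh, Why = rng.normal(size=(H, H)), rng.normal(size=(H, D)), rng.normal(size=(V, H))
bh, by = np.zeros(H), np.zeros(V)
h = np.zeros(H)
for x in rng.normal(size=(3, D)):              # the same weights are reused at every time step
    h, y = rnn_step(x, h, Whh, Wxh, Why, bh, by)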
RNN: Computational Graph

h0 fW h1

x1

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 23 May 2, 2019
RNN: Computational Graph

h0 fW h1 fW h2

x1 x2

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 24 May 2, 2019
RNN: Computational Graph

h0 fW h1 fW h2 fW h3
… hT

x1 x2 x3

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 25 May 2, 2019
RNN: Computational Graph

Re-use the same weight matrix at every time-step

h0 fW h1 fW h2 fW h3
… hT

x1 x2 x3
W

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 26 May 2, 2019
RNN: Computational Graph: Many to Many

y1 y2 y3 yT

h0 fW h1 fW h2 fW h3
… hT

x1 x2 x3
W

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 27 May 2, 2019
RNN: Computational Graph: Many to Many

y1 L1 y2 L2 y3 L3 yT LT

h0 fW h1 fW h2 fW h3
… hT

x1 x2 x3
W

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 28 May 2, 2019
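As a hedged sketch of how the many-to-many graph above turns per-step outputs into a training signal: each output y_t is scored against its target to give a loss L_t, and (as the next slide shows) these are summed into a single loss L. The softmax cross-entropy helper below is illustrative, not part of the original slides:

import numpy as np

def softmax_cross_entropy(scores, target):
    # Numerically stable softmax cross-entropy for one time step.
    scores = scores - scores.max()
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[target]

def sequence_loss(ys, targets):
    # Total loss L is the sum of the per-time-step losses L_t.
    return sum(softmax_cross_entropy(y, t) for y, t in zip(ys, targets))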
RNN: Computational Graph: Many to Many L

y1 L1 y2 L2 y3 L3 yT LT

h0 fW h1 fW h2 fW h3
… hT

x1 x2 x3
W

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 29 May 2, 2019
RNN: Computational Graph: Many to One

h0 fW h1 fW h2 fW h3
… hT

x1 x2 x3
W

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 30 May 2, 2019
RNN: Computational Graph: One to Many

y1 y2 y3 yT

h0 fW h1 fW h2 fW h3
… hT

x
W

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 31 May 2, 2019
Sequence to Sequence: Many-to-one + one-to-many

Many to one: Encode input sequence in a single vector

h0 → fW → h1 → fW → h2 → fW → h3 → … → hT
      x1        x2        x3        (encoder weights W1)

Sutskever et al, “Sequence to Sequence Learning with Neural Networks”, NIPS 2014

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 32 May 2, 2019
Sequence to Sequence: Many-to-one + one-to-many

Many to one: Encode input sequence in a single vector
One to many: Produce output sequence from single input vector

Encoder: h0 → fW → h1 → fW → h2 → fW → h3 → … → hT        (inputs x1, x2, x3; weights W1)
Decoder: hT → fW → h1' → fW → h2' → fW → …  producing y1, y2, …   (weights W2)

Sutskever et al, “Sequence to Sequence Learning with Neural Networks”, NIPS 2014

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 33 May 2, 2019
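A hedged sketch of the many-to-one + one-to-many pattern, reusing the rnn_step helper from the earlier sketch: the encoder compresses the input sequence into its final hidden state, which seeds the decoder. This is an illustration of the idea, not the Sutskever et al. implementation; the greedy one-hot feedback in the decoder is an assumption made for simplicity:

import numpy as np

def encode(xs, h0, enc_params):
    # Many to one: run the encoder RNN over the whole input sequence.
    h = h0
    for x in xs:
        h, _ = rnn_step(x, h, *enc_params)
    return h                                   # a single vector summarizing the input

def decode(h, steps, dec_params, vocab_size):
    # One to many: unroll the decoder from the encoder's final state.
    ys, x = [], np.zeros(vocab_size)           # "start" input (illustrative: all zeros)
    for _ in range(steps):
        h, y = rnn_step(x, h, *dec_params)
        ys.append(y)
        x = np.zeros(vocab_size)
        x[int(np.argmax(y))] = 1.0             # greedily feed the prediction back in
    return ys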
Example:
Character-level
Language Model

Vocabulary:
[h,e,l,o]

Example training
sequence:
“hello”

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 34 May 2, 2019
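A hedged sketch of how the training data on this slide could be encoded, assuming one-hot input vectors over the vocabulary [h, e, l, o]; at each time step the target is simply the next character of “hello”:

import numpy as np

vocab = ['h', 'e', 'l', 'o']
char_to_ix = {c: i for i, c in enumerate(vocab)}

def one_hot(c):
    v = np.zeros(len(vocab))
    v[char_to_ix[c]] = 1.0
    return v

seq = "hello"
inputs  = [one_hot(c) for c in seq[:-1]]       # inputs:  h, e, l, l
targets = [char_to_ix[c] for c in seq[1:]]     # targets: e, l, l, o (the next character)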
Example: Character-level Language Model: Sampling

Vocabulary: [h, e, l, o]

At test-time sample characters one at a time, feed back to model.

(Figure: softmax distribution over [h, e, l, o] at each step; sampled characters: “e”, “l”, “l”, “o”)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 37 May 2, 2019
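A hedged sketch of this test-time sampling loop, reusing rnn_step, vocab, and one_hot from the sketches above and assuming the output scores range over the same 4-character vocabulary; at each step the scores are turned into a softmax distribution, a character is sampled, and it is fed back in as the next input:

import numpy as np

def sample(params, start_char='h', length=4, hidden_size=8):
    h = np.zeros(hidden_size)
    out, c = [], start_char
    for _ in range(length):
        h, scores = rnn_step(one_hot(c), h, *params)
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                       # softmax over the vocabulary
        c = str(np.random.choice(vocab, p=probs))  # sample one character...
        out.append(c)                              # ...and feed it back at the next step
    return ''.join(out)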
Backpropagation through time

Forward through the entire sequence to compute the loss, then backward through the entire sequence to compute the gradient.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 41 May 2, 2019
Truncated Backpropagation through time

Run forward and backward through chunks of the sequence instead of the whole sequence.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 42 May 2, 2019
Truncated Backpropagation through time

Carry hidden states forward in time forever, but only backpropagate for some smaller number of steps.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 43 May 2, 2019
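A hedged sketch of truncated backpropagation through time under these assumptions: forward_backward is a hypothetical helper that runs forward/backward over one chunk and returns (loss, gradients, final hidden state), and the hidden state is carried across chunks while gradients are not:

import numpy as np

def truncated_bptt(sequence, params, hidden_size, chunk_len=25, step_size=1e-2):
    h = np.zeros(hidden_size)                  # hidden state carried forward in time forever
    for start in range(0, len(sequence) - 1, chunk_len):
        chunk = sequence[start:start + chunk_len + 1]
        loss, grads, h = forward_backward(chunk, h, params)  # hypothetical helper
        h = h.copy()                           # "detach": no backprop across chunk boundaries
        for p, g in zip(params, grads):
            p -= step_size * g                 # simple SGD update after each chunk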
min-char-rnn.py gist: 112 lines of Python
(https://gist.github.com/karpathy/d4dee566867f8291f086)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 45 May 2, 2019
(Figure: x → RNN → y)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 46 May 2, 2019
Training progression: at first the samples are a random jumble of characters; train more, train more, train more, and the generated text increasingly resembles English.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 47 May 2, 2019
PANDARUS:
Alas, I think he shall be come approached and the day
When little srain would be attain'd into being never fed,
And who is but a chain and subjects of his death,
I should not sleep.

Second Senator:
They are away this miseries, produced upon my soul,
Breaking and strongly should be buried, when I perish
The earth and thoughts of many states.

DUKE VINCENTIO:
Well, your wit is in the care of side and that.

Second Lord:
They would be ruled after this chamber, and
my fair nues begun out of the fact, to be conveyed,
Whose noble souls I'll have the heart of the wars.

Clown:
Come, sir, I will make did behold your worship.

VIOLA:
I'll drink it.

VIOLA:
Why, Salisbury must find his flesh and thought
That which I am not aps, not a man and in fire,
To show the reining of the raven and the wars
To grace my hand reproach within, and not a fair are hand,
That Caesar and my goodly father's world;
When I was heaven of presence and our fleets,
We spare with hours, but cut thy council I am great,
Murdered and by thy master's ready there
My power to give thee but so much as hell:
Some service in the noble bondman here,
Would show him to her wine.

KING LEAR:
O, if you were a feeble sight, the courtesy of your law,
Your sight and several breath, will wear the gods
With his heads, and my hands are wonder'd at the deeds,
So drop upon your lordship's head, and your opinion
Shall be against your honour.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 48 May 2, 2019
The Stacks Project: open source algebraic geometry textbook

LaTeX source: http://stacks.math.columbia.edu/
The Stacks project is licensed under the GNU Free Documentation License

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 49 May 2, 2019
Generated C code

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 53 May 2, 2019
Searching for interpretable cells

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 56 May 2, 2019
Searching for interpretable cells

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 57 May 2, 2019
Searching for interpretable cells

quote detection cell


Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 58 May 2, 2019
Searching for interpretable cells

line length tracking cell


Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 59 May 2, 2019
Searching for interpretable cells

if statement cell
Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 60 May 2, 2019
Searching for interpretable cells

quote/comment cell
Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 61 May 2, 2019
Searching for interpretable cells

code depth cell

Karpathy, Johnson, and Fei-Fei: Visualizing and Understanding Recurrent Networks, ICLR Workshop 2016
Figures copyright Karpathy, Johnson, and Fei-Fei, 2015; reproduced with permission

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 62 May 2, 2019
Image Captioning

Figure from Karpathy et al, “Deep Visual-Semantic Alignments for Generating Image Descriptions”, CVPR 2015; figure copyright IEEE, 2015. Reproduced for educational purposes.

Explain Images with Multimodal Recurrent Neural Networks, Mao et al.
Deep Visual-Semantic Alignments for Generating Image Descriptions, Karpathy and Fei-Fei
Show and Tell: A Neural Image Caption Generator, Vinyals et al.
Long-term Recurrent Convolutional Networks for Visual Recognition and Description, Donahue et al.
Learning a Recurrent Visual Representation for Image Caption Generation, Chen and Zitnick

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 63 May 2, 2019
Recurrent Neural Network

Convolutional Neural Network

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 64 May 2, 2019
test image

This image is CC0 public domain

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - May 2, 2019
test image

x0 = <START>

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - May 2, 2019
test image

before: h = tanh(Wxh * x + Whh * h)

now:    h = tanh(Wxh * x + Whh * h + Wih * v),  where v is the image feature vector from the CNN

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - May 2, 2019
test image

Sample from y0: “straw”

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - May 2, 2019
test image

The sampled word “straw” is fed back in as x1, producing h1 and y1.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - May 2, 2019
test image

Sample from y1: “hat”

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - May 2, 2019
test image

“hat” is fed back in as x2, producing h2 and y2.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - May 2, 2019
test image

Sample <END> token => finish.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - May 2, 2019
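A hedged sketch of this caption-generation loop, combining the image-conditioned recurrence from the slide above (h = tanh(Wxh x + Whh h + Wih v)) with test-time sampling; the embedding matrix, vocabulary list, and <START>/<END> handling are illustrative assumptions, not the exact model from the papers cited earlier:

import numpy as np

def caption(v, Wxh, Whh, Why, Wih, embed, vocab, max_len=20):
    # v: CNN feature vector; embed: (vocab_size, input_dim) word vectors; vocab: list of words.
    h = np.zeros(Whh.shape[0])
    idx, out = vocab.index('<START>'), []
    for _ in range(max_len):
        x = embed[idx]                              # embedding of the previous word
        h = np.tanh(Wxh @ x + Whh @ h + Wih @ v)    # image-conditioned RNN step
        scores = Why @ h
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        idx = int(np.random.choice(len(vocab), p=probs))  # sample the next word
        if vocab[idx] == '<END>':                   # sampling <END> => finish
            break
        out.append(vocab[idx])
    return ' '.join(out)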
Image Captioning: Example Results

Captions generated using neuraltalk2. All images are CC0 public domain (cat suitcase, cat tree, dog, bear, surfers, tennis, giraffe, motorcycle).

- A cat sitting on a suitcase on the floor
- A cat is sitting on a tree branch
- A dog is running in the grass with a frisbee
- A white teddy bear sitting in the grass
- Two people walking on the beach with surfboards
- A tennis player in action on the court
- Two giraffes standing in a grassy field
- A man riding a dirt bike on a dirt track

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 75 May 2, 2019
Image Captioning: Failure Cases

Captions generated using neuraltalk2. All images are CC0 public domain (fur coat, handstand, spider web, baseball).

- A bird is perched on a tree branch
- A woman is holding a cat in her hand
- A man in a baseball uniform throwing a ball
- A woman standing on a beach holding a surfboard
- A person holding a computer mouse on a desk

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 76 May 2, 2019
Image Captioning with Attention

The RNN focuses its attention at a different spatial location when generating each word.

Xu et al, “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention”, ICML 2015
Figure copyright Kelvin Xu, Jimmy Lei Ba, Jamie Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio, 2015. Reproduced with permission.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 77 May 2, 2019
Image Captioning with Attention

(Figure, built up step by step over slides 78-84:)
- The CNN maps the image (H x W x 3) to a grid of features: L locations, each a D-dimensional vector. These features initialize the hidden state h0.
- From h0 the model computes a1, a distribution over the L locations.
- a1 gives a weighted combination of features, z1 (a D-dimensional vector).
- z1 and the first word y1 are fed into the RNN to produce h1.
- h1 outputs both a2 (a new distribution over locations) and d1 (a distribution over the vocabulary).
- The process repeats: z2 and y2 produce h2, which outputs a3 and d2, and so on.

Xu et al, “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention”, ICML 2015

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 78-84 May 2, 2019
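A hedged sketch of the attention step built up above: from the current hidden state and the L x D feature grid, compute a distribution over the L locations, then a weighted combination z of the features. The dot-product scoring function and the projection Watt are illustrative assumptions (Xu et al. use a small learned network for the scores):

import numpy as np

def soft_attention(h, features, Watt):
    # features: (L, D) grid of CNN feature vectors; Watt: (D, H) projection of the hidden state.
    scores = features @ (Watt @ h)             # one attention score per location, shape (L,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                               # distribution over the L locations
    z = a @ features                           # weighted combination of features, shape (D,)
    return z, a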
Image Captioning with Attention

Soft attention

Hard attention

Xu et al, “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention”, ICML 2015
Figure copyright Kelvin Xu, Jimmy Lei Ba, Jamie Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio, 2015. Reproduced with permission.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 85 May 2, 2019
Image Captioning with Attention

Xu et al, “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention”, ICML 2015
Figure copyright Kelvin Xu, Jimmy Lei Ba, Jamie Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio, 2015. Reproduced with permission.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 86 May 2, 2019
Visual Question Answering

Agrawal et al, “VQA: Visual Question Answering”, ICCV 2015
Zhu et al, “Visual 7W: Grounded Question Answering in Images”, CVPR 2016
Figure from Zhu et al, copyright IEEE 2016. Reproduced for educational purposes.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 87 May 2, 2019
Visual Question Answering: RNNs with Attention

Q: What kind of animal is in the photo?  A: A cat.
Q: Why is the person holding a knife?  A: To cut the cake with.

Zhu et al, “Visual 7W: Grounded Question Answering in Images”, CVPR 2016
Figures from Zhu et al, copyright IEEE 2016. Reproduced for educational purposes.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 88 May 2, 2019
Multilayer RNNs

(Figure: LSTM hidden states stacked along a depth axis as well as unrolled along the time axis)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 89 May 2, 2019
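A hedged sketch of stacking RNN layers in depth, reusing the rnn_step helper from the earlier sketch; at each time step the hidden state of layer l becomes the input to layer l + 1. A vanilla cell is used here purely for brevity, whereas the figure uses LSTM cells:

import numpy as np

def multilayer_rnn(xs, layer_params):
    # layer_params: one (Whh, Wxh, Why, bh, by) tuple per layer.
    hs = [np.zeros(p[0].shape[0]) for p in layer_params]   # p[0] is that layer's Whh
    outputs = []
    for x in xs:                               # time axis
        inp = x
        for l, p in enumerate(layer_params):   # depth axis
            hs[l], y = rnn_step(inp, hs[l], *p)
            inp = hs[l]                        # this layer's state feeds the layer above
        outputs.append(y)                      # read the output from the top layer
    return outputs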
Vanilla RNN Gradient Flow

Bengio et al, “Learning long-term dependencies with gradient descent is difficult”, IEEE Transactions on Neural Networks, 1994
Pascanu et al, “On the difficulty of training recurrent neural networks”, ICML 2013

(Figure: a single vanilla RNN cell: h_{t-1} and x_t are stacked, multiplied by W, and passed through tanh to produce h_t.)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 90 May 2, 2019
Vanilla RNN Gradient Flow

Backpropagation from h_t to h_{t-1} multiplies by W (actually W_hh^T).

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 91 May 2, 2019
Vanilla RNN Gradient Flow

h0 → h1 → h2 → h3 → h4   (inputs x1, x2, x3, x4)

Computing the gradient of h0 involves many factors of W (and repeated tanh).

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 92 May 2, 2019
Vanilla RNN Gradient Flow

Computing the gradient of h0 involves many factors of W (and repeated tanh):
- Largest singular value > 1: Exploding gradients
- Largest singular value < 1: Vanishing gradients

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 93 May 2, 2019
Vanilla RNN Gradient Flow

- Largest singular value > 1: Exploding gradients → Gradient clipping: scale the gradient if its norm is too big
- Largest singular value < 1: Vanishing gradients

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 94 May 2, 2019
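A hedged sketch of gradient clipping as described above: if the overall norm of the gradient exceeds a threshold, scale it back down. The threshold value is illustrative:

import numpy as np

def clip_gradients(grads, max_norm=5.0):
    # Scale the gradient if its norm is too big (controls exploding gradients).
    total_norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    if total_norm > max_norm:
        grads = [g * (max_norm / total_norm) for g in grads]
    return grads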
Vanilla RNN Gradient Flow

- Largest singular value > 1: Exploding gradients → Gradient clipping
- Largest singular value < 1: Vanishing gradients → Change RNN architecture

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 95 May 2, 2019
Long Short Term Memory (LSTM)

(Figure: the vanilla RNN update vs. the LSTM update equations, side by side.)

Hochreiter and Schmidhuber, “Long Short-Term Memory”, Neural Computation 1997

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 96 May 2, 2019
Long Short Term Memory (LSTM)
[Hochreiter et al., 1997]

From the vector from below (x) and the vector from before (h), W computes four gates:
- i: Input gate, whether to write to cell (sigmoid)
- f: Forget gate, whether to erase cell (sigmoid)
- o: Output gate, how much to reveal cell (sigmoid)
- g: Gate gate (?), how much to write to cell (tanh)

(W has shape 4h x 2h: it maps the stacked [h; x] of size 2h to the 4*h gate pre-activations.)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 97 May 2, 2019
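A hedged numpy sketch of one LSTM step following the gate definitions above, assuming (as on the slide) that x and h have the same size h so that W has shape 4h x 2h:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    # W: (4h, 2h), b: (4h,). Stack the previous hidden state and the input.
    hsize = h_prev.shape[0]
    a = W @ np.concatenate([h_prev, x]) + b
    i = sigmoid(a[0 * hsize:1 * hsize])        # input gate: whether to write to cell
    f = sigmoid(a[1 * hsize:2 * hsize])        # forget gate: whether to erase cell
    o = sigmoid(a[2 * hsize:3 * hsize])        # output gate: how much to reveal cell
    g = np.tanh(a[3 * hsize:4 * hsize])        # gate gate: how much to write to cell
    c = f * c_prev + i * g                     # elementwise cell update
    h = o * np.tanh(c)
    return h, c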
Long Short Term Memory (LSTM)
[Hochreiter et al., 1997]

(Figure: the LSTM cell. The gates f, i, g, o are computed from the stacked [h_{t-1}; x_t] through W; then c_t = f ⊙ c_{t-1} + i ⊙ g and h_t = o ⊙ tanh(c_t).)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 98 May 2, 2019
Long Short Term Memory (LSTM): Gradient Flow
[Hochreiter et al., 1997]

Backpropagation from c_t to c_{t-1} is only elementwise multiplication by f; there is no matrix multiply by W.

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 99 May 2, 2019
Long Short Term Memory (LSTM): Gradient Flow
[Hochreiter et al., 1997]

Uninterrupted gradient flow along the cell state: c0 → c1 → c2 → c3

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 100 May 2, 2019
Long Short Term Memory (LSTM): Gradient Flow
[Hochreiter et al., 1997]

Uninterrupted gradient flow: c0 → c1 → c2 → c3, similar to ResNet!

(Figure: the ResNet architecture column shown for comparison: input, 7x7 conv / 2, pool, stacks of 3x3 conv layers, pool, FC 1000, softmax.)

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 101 May 2, 2019
Long Short Term Memory (LSTM): Gradient Flow
[Hochreiter et al., 1997]

Uninterrupted gradient flow: c0 → c1 → c2 → c3 (similar to ResNet!)

In between: Highway Networks
Srivastava et al, “Highway Networks”, ICML DL Workshop 2015

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 102 May 2, 2019
Other RNN Variants

GRU [Learning phrase representations using rnn encoder-decoder for statistical machine translation, Cho et al. 2014]

[LSTM: A Search Space Odyssey, Greff et al., 2015]
[An Empirical Exploration of Recurrent Network Architectures, Jozefowicz et al., 2015]

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 103 May 2, 2019
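A hedged sketch of one GRU step (Cho et al. 2014), reusing the sigmoid helper from the LSTM sketch above; an update gate z and a reset gate r control how much of the previous hidden state is kept versus overwritten by the candidate state (the weight names are illustrative):

import numpy as np

def gru_step(x, h_prev, Wz, Wr, Wh):
    # Each W* maps the stacked [h; x] (or [r*h; x] for Wh) to a vector the size of h.
    z = sigmoid(Wz @ np.concatenate([h_prev, x]))            # update gate
    r = sigmoid(Wr @ np.concatenate([h_prev, x]))            # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h_prev, x]))  # candidate state
    return (1 - z) * h_prev + z * h_tilde                    # interpolate old and new state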
Recently in Natural Language Processing…
New paradigms for reasoning over sequences
[“Attention is all you need”, Vaswani et al., 2017]

- New “Transformer” architecture no longer processes inputs sequentially; instead it can operate over inputs in a sequence in parallel through an attention mechanism
- Has led to many state-of-the-art results and pre-training in NLP; for more, see e.g.
  - “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”, Devlin et al., 2018
  - OpenAI GPT-2, Radford et al., 2019

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 104 May 2, 2019
Summary
- RNNs allow a lot of flexibility in architecture design
- Vanilla RNNs are simple but don’t work very well; it is common to use LSTM or GRU instead: their additive interactions improve gradient flow
- Backward flow of gradients in an RNN can explode or vanish. Exploding is controlled with gradient clipping; vanishing is controlled with additive interactions (LSTM)
- Better/simpler architectures are a hot topic of current research, as well as new paradigms for reasoning over sequences
- Better understanding (both theoretical and empirical) is needed

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 105 May 2, 2019
Next time: Midterm!

Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 10 - 106 May 2, 2019
