Autoencoders — Bits and Bytes of Deep Learning

Vindula Jayawardana
Postgraduate Researcher at UOM | Research Engineer | Writer | Entrepreneur | www.vindulaj.com
Aug 4, 2017 · 5 min read
Deep learning is the application of artificial neural networks (ANNs) to learning tasks that contain more than one hidden layer.
. . .
What’s an Autoencoder?
Despite its somewhat cryptic-sounding name, the autoencoder is a fairly basic machine learning model. Autoencoders (AEs) are a family of neural networks for which the input is the same as the output. They work by compressing the input into a latent-space representation and then reconstructing the output from this representation. In other words, an autoencoder learns an encoding function f that maps an input x to a code h = f(x), and a decoding function g that maps the code back to a reconstruction g(h) ≈ x; training minimizes the reconstruction error between x and g(f(x)).
Autoencoder architecture
Why Autoencoders?
Although practical applications of autoencoders were fairly rare until some time back, today data denoising and dimensionality reduction for data visualization are considered their two main practical applications. With appropriate dimensionality and sparsity constraints, autoencoders can learn data projections that are more interesting than those obtained from PCA or other basic techniques.
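As an illustration of the denoising use case, an autoencoder can be trained to map corrupted inputs back to their clean versions. The following is a minimal sketch, assuming an autoencoder model like the one built later in this post and a training set x_train of clean images scaled to [0, 1]:

import numpy as np

# corrupt the clean inputs with additive Gaussian noise
noise_factor = 0.5  # illustrative value
x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)

# train the autoencoder to reconstruct the clean images
# from their corrupted versions
autoencoder.fit(x_train_noisy, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True)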
Convolutional Autoencoders
The traditional autoencoder architecture does not take into account the fact that a signal can be seen as a sum of other signals. Convolutional Autoencoders (CAEs), on the other hand, use the convolution operator to exploit this observation. The convolution operator allows filtering an input signal in order to extract some part of its content. CAEs learn to encode the input as a set of simple signals and then try to reconstruct the input from them.
Refer to this article for use cases of convolutional autoencoders, with good explanations and examples. We will see a practical example of a CAE later in this post.
Building Autoencoders
We will start with the simplest autoencoder we can build. Later in the post, we will look at more complex use cases of autoencoders through real examples.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras.datasets import mnist
import matplotlib.pyplot as plt

# this is the size of our encoded representations
encoding_dim = 32  # 32 floats -> compression of factor 24.5, assuming the input is 784 floats

# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
# configure our model to use a per-pixel binary crossentropy loss
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

# prepare our input data. We're using MNIST digits, and we're
# discarding the labels since we only need the images
(x_train, _), (x_test, _) = mnist.load_data()
# normalize all values between 0 and 1 and flatten the
# 28x28 images into vectors of size 784
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)

# train our autoencoder for 50 epochs
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))
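To compare the originals with their reconstructions, we can run the encoder and decoder on the test set and plot the results with matplotlib (imported above). The following is a minimal sketch along the lines of the Keras tutorial referenced at the end of this post:

# encode and then decode some digits from the test set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)

n = 10  # how many digits to display
plt.figure(figsize=(20, 4))
for i in range(n):
    # original digit on the top row
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.axis('off')
    # reconstruction on the bottom row
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.axis('off')
plt.show()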
In the resulting plot, the top row shows the original digits and the bottom row the reconstructed digits. As you can see, we have lost some important details in this basic example.
Since our inputs are images, it makes sense to use convolutional neural
networks as encoders and decoders. In practical settings, autoencoders
applied to images are always convolutional autoencoders as they
simply perform much better.
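The convolutional autoencoder can be built along the same lines as before, swapping the Dense layers for convolution, pooling, and upsampling layers. The following is a minimal sketch based on the Keras tutorial referenced at the end of this post; the layer sizes are illustrative:

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

input_img = Input(shape=(28, 28, 1))

# encoder: conv/pool stages shrink the 28x28 image to a 4x4x8 code
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

# decoder: conv/upsampling stages grow the code back to 28x28
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
# no padding here, so 16x16 shrinks to 14x14 before the final upsampling
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

# the MNIST arrays now keep their 2D structure plus a channel axis,
# instead of being flattened to 784-dimensional vectors
x_train = x_train.reshape((len(x_train), 28, 28, 1))
x_test = x_test.reshape((len(x_test), 28, 28, 1))
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test, x_test))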
With the convolutional autoencoder, we get the following input and reconstructed output.
Conclusion
References
1. https://hackernoon.com/autoencoders-deep-learning-bits-1-11731e200694
2. https://blog.keras.io/building-autoencoders-in-keras.html
3. https://www.technologyreview.com/s/513696/deep-learning/