
ARUNACHALA COLLEGE OF ENGINEERING FOR WOMEN, MANAVILAI
DEPARTMENT OF AI & DS

Sub Name: DEEP LEARNING    Sem: V
Sub Code: AD3501           Year: III

UNIT-5 AUTOENCODERS AND GENERATIVE MODELS

PART-A Questions & Answers

1. Define Autoencoders

Autoencoders are a specialized class of artificial neural networks that learn efficient representations of input data without the need for labels; they are designed for unsupervised learning. The essential principle of an autoencoder is learning to compress and effectively represent the input data. This is accomplished using a two-part structure consisting of an encoder and a decoder. The encoder transforms the input data into a reduced-dimensional representation, often referred to as the “latent space” or “encoding”. From that representation, the decoder reconstructs the initial input. The encoding-decoding process forces the network to capture the essential features and meaningful patterns in the data.

2. Give the architecture of Autoencoder in Deep Learning


The general architecture of an autoencoder includes an encoder, decoder, and bottleneck layer.

Figure: Architecture of Autoencoder
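
A minimal sketch of this structure, assuming TensorFlow/Keras; the 784-dimensional (flattened 28x28) input and the layer widths are illustrative assumptions, not from the original:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    # Encoder -> bottleneck -> decoder, mirroring the figure above.
    inputs = tf.keras.Input(shape=(784,))                        # flattened 28x28 image
    encoded = layers.Dense(128, activation="relu")(inputs)       # encoder
    bottleneck = layers.Dense(32, activation="relu")(encoded)    # bottleneck / latent space
    decoded = layers.Dense(128, activation="relu")(bottleneck)   # decoder
    outputs = layers.Dense(784, activation="sigmoid")(decoded)   # reconstruction

    autoencoder = Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")            # reconstruction loss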

3. What are the different ways to constrain the network in autoencoders?


 Keep small Hidden Layers: If the size of each hidden layer is kept as small as possible, then the network is forced to pick up only the representative features of the data, thus encoding the data.
 Regularization: In this method, a loss term is added to the cost function which encourages the network to train in ways other than copying the input (a minimal sketch of this idea follows the list).


 Denoising: Another way of constraining the network is to add noise to the input and teach
the network how to remove the noise from the data.
 Tuning the Activation Functions: This method involves changing the activation functions of various nodes so that a majority of the nodes are dormant, thus effectively reducing the size of the hidden layers.
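
As a rough sketch of the regularization idea above (assuming TensorFlow; the L1 penalty on the latent code and the weight lam are illustrative assumptions):

    import tensorflow as tf

    def regularized_loss(x, x_hat, latent, lam=1e-3):
        # The reconstruction term alone would reward copying the input ...
        reconstruction = tf.reduce_mean(tf.square(x - x_hat))
        # ... so an extra penalty on the latent code encourages other behaviour.
        penalty = tf.reduce_mean(tf.abs(latent))
        return reconstruction + lam * penalty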

4. Give the types of Autoencoders


 Denoising Autoencoder
 Sparse Autoencoder
 Variational Autoencoder
 Convolutional Autoencoder

5. Define Denoising Autoencoder


Denoising autoencoder works on a partially corrupted input and trains to recover the original
undistorted image. As mentioned above, this method is an effective way to constrain the
network from simply copying the input and thus learn the underlying structure and important
features of the data.
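
A minimal sketch of this training setup, assuming NumPy, an array x_train of flattened images scaled to [0, 1], and a compiled model autoencoder such as the one sketched under question 2:

    import numpy as np

    # Corrupt the inputs, but keep the clean images as the training targets.
    noise = 0.2 * np.random.normal(size=x_train.shape)
    x_train_noisy = np.clip(x_train + noise, 0.0, 1.0)

    # The network learns to map noisy inputs back to the undistorted originals.
    autoencoder.fit(x_train_noisy, x_train, epochs=10, batch_size=256)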

6. Define Sparse Autoencoder


This type of autoencoder typically contains more hidden units than the input but only a few are
allowed to be active at once. This property is called the sparsity of the network. The sparsity of
the network can be controlled by either manually zeroing the required hidden units, tuning the
activation functions or by adding a loss term to the cost function.
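
A minimal sketch of the loss-term approach, assuming TensorFlow/Keras; the over-complete hidden size (1024) and the L1 weight (1e-4) are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers, regularizers, Model

    inputs = tf.keras.Input(shape=(784,))
    # More hidden units than inputs, but the L1 activity penalty keeps
    # only a few of them active at once (the sparsity constraint).
    hidden = layers.Dense(1024, activation="relu",
                          activity_regularizer=regularizers.l1(1e-4))(inputs)
    outputs = layers.Dense(784, activation="sigmoid")(hidden)

    sparse_ae = Model(inputs, outputs)
    sparse_ae.compile(optimizer="adam", loss="mse")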

7. Define Variational Autoencoder


A variational autoencoder makes strong assumptions about the distribution of the latent variables and uses the Stochastic Gradient Variational Bayes (SGVB) estimator in the training process. It assumes that the data is generated by a directed graphical model and tries to learn an approximation to the posterior distribution over the latent variables.
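
A minimal sketch of the sampling and KL-divergence pieces used by the SGVB estimator (assuming TensorFlow; mu and log_var would be produced by the encoder):

    import tensorflow as tf

    def sample_latent(mu, log_var):
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients can flow through mu and log_var during training.
        eps = tf.random.normal(shape=tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * eps

    def kl_divergence(mu, log_var):
        # KL(q(z|x) || N(0, I)) term added to the reconstruction loss.
        return -0.5 * tf.reduce_mean(
            tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=1))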

8. Define Convolutional Autoencoder


Convolutional autoencoders are a type of autoencoder that use convolutional neural networks (CNNs) as their building blocks. The encoder consists of multiple layers that take an image or a grid as input and pass it through different convolution layers, forming a compressed representation of the input. The decoder is the mirror image of the encoder: it deconvolves the compressed representation and tries to reconstruct the original image.
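
A minimal sketch, assuming TensorFlow/Keras and 28x28 grayscale inputs; the filter counts are illustrative assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    inputs = tf.keras.Input(shape=(28, 28, 1))
    # Encoder: strided convolutions compress the image (28x28 -> 7x7).
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)
    # Decoder: transposed convolutions mirror the encoder to rebuild the image.
    x = layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)

    conv_ae = Model(inputs, outputs)
    conv_ae.compile(optimizer="adam", loss="mse")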

9. Give the Implementation of Autoencoders


A simple autoencoder can be implemented with two Dense layers: an encoder responsible for condensing the images into a 64-dimensional latent vector, and a decoder tasked with reconstructing the initial image from this latent vector.
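
A minimal sketch matching this description (assuming TensorFlow/Keras and flattened 28x28 images; these specifics are assumptions, not from the original):

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    inputs = tf.keras.Input(shape=(784,))                      # flattened 28x28 image
    latent = layers.Dense(64, activation="relu")(inputs)       # encoder: 64-dim latent vector
    outputs = layers.Dense(784, activation="sigmoid")(latent)  # decoder: reconstruction

    autoencoder = Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")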

10. What are the different ways to constrain regularized autoencoders?


There are other ways to constrain the reconstruction of an autoencoder than imposing a hidden layer of smaller dimension than the input. Regularized autoencoders use a loss function that encourages the model to have properties other than copying its input to its output. We generally find two types of regularized autoencoder: the denoising autoencoder and the sparse autoencoder.

11. Define Generative Adversarial Networks

Generative Adversarial Networks, or GANs, represent a cutting-edge approach to generative modeling within deep learning, often leveraging architectures like convolutional neural networks. The goal of generative modeling is to autonomously identify patterns in input data, enabling the model to produce new examples that plausibly resemble the original dataset.

12. Give the architecture of GAN


A Generative Adversarial Network (GAN) is composed of two primary parts, which are the
Generator and the Discriminator.
Generator Model
A key element responsible for creating fresh, accurate data in a Generative Adversarial Network (GAN) is the generator model. The generator takes random noise as input and converts it into complex data samples, such as text or images.
Discriminator Model
An artificial neural network called a discriminator model is used in Generative Adversarial
Networks (GANs) to differentiate between generated and actual input.
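
A minimal sketch of the two parts (assuming TensorFlow/Keras; the 100-dimensional noise vector and layer sizes are illustrative assumptions):

    import tensorflow as tf
    from tensorflow.keras import layers, Sequential

    # Generator: random noise z -> a fake (flattened 28x28) sample.
    generator = Sequential([
        tf.keras.Input(shape=(100,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(784, activation="tanh"),
    ])

    # Discriminator: a sample -> probability that it is real rather than generated.
    discriminator = Sequential([
        tf.keras.Input(shape=(784,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])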

13. Define Discriminator Loss (J_D)

The discriminator minimizes the negative log-likelihood of correctly classifying both real and generated samples:

J_D = -\frac{1}{m} \sum_{i=1}^{m} \left[ \log D(x_i) + \log\left(1 - D(G(z_i))\right) \right]

This loss incentivizes the discriminator to classify real samples as real (D(x_i) close to 1) and generated samples as fake (D(G(z_i)) close to 0, so that 1 - D(G(z_i)) is close to 1).
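
A minimal sketch of this loss written with binary cross-entropy (assuming TensorFlow; real_output = D(x) and fake_output = D(G(z)) are the discriminator's sigmoid outputs):

    import tensorflow as tf

    bce = tf.keras.losses.BinaryCrossentropy()

    def discriminator_loss(real_output, fake_output):
        real_loss = bce(tf.ones_like(real_output), real_output)    # -log D(x)
        fake_loss = bce(tf.zeros_like(fake_output), fake_output)   # -log(1 - D(G(z)))
        return real_loss + fake_loss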

14. Give application Of Generative Adversarial Networks (GANs)

 Image Synthesis and Generation
 Image-to-Image Translation
 Text-to-Image Synthesis
 Data Augmentation
 Data Generation for Training

15. Advantages of Generative Adversarial Networks (GANs)


The advantages of GANs are as follows:
1. Synthetic data generation: GANs can generate new, synthetic data that resembles some
known data distribution, which can be useful for data augmentation, anomaly detection, or creative applications.
2. High-quality results: GANs can produce high-quality, photorealistic results in image
synthesis, video synthesis, music synthesis, and other tasks.
3. Unsupervised learning: GANs can be trained without labeled data, making them suitable
for unsupervised learning tasks, where labeled data is scarce or difficult to obtain.
4. Versatility: GANs can be applied to a wide range of tasks, including image synthesis, text-
to-image synthesis, image-to-image translation, anomaly detection, data augmentation, and
others.

16. Disadvantages of Generative Adversarial Networks (GANs)


The disadvantages of GANs are as follows:
1. Training Instability: GANs can be difficult to train, with the risk of instability, mode
collapse, or failure to converge.
2. Computational Cost: GANs can require a lot of computational resources and can be slow
to train, especially for high-resolution images or large datasets.
3. Overfitting: GANs can overfit the training data, producing synthetic data that is too similar
to the training data and lacking diversity.
4. Bias and Fairness: GANs can reflect the biases and unfairness present in the training data,
leading to discriminatory or biased synthetic data.
5. Interpretability and Accountability: GANs can be opaque and difficult to interpret or
explain, making it challenging to ensure accountability, transparency, or fairness in their
applications.


PART – B Questions
1. i. Write short notes on Sparse Autoencoders. (7)
   ii. Illustrate Denoising Autoencoders. (6)
2. Discuss Autoencoders. (13)
3. Explain in detail Generative Adversarial Networks.
4. Write in detail about Undercomplete Autoencoders. (13)
5. Explain Regularized Autoencoders. (13)

PART – C Questions
1. Discuss Autoencoders. (15)
2. Explain in detail Generative Adversarial Networks.
3. Write in detail about Undercomplete Autoencoders.
4. Explain Regularized Autoencoders.
5. Assess Independent Component Analysis. (15)
