Unit V – 2 Marks (DL)
DEPARTMENT OF AI & DS
1. Define Autoencoders.
Autoencoders are a specialized class of algorithms that learn efficient representations
of input data without the need for labels; they are a class of artificial neural networks designed for
unsupervised learning. Learning to compress and effectively represent input data without explicit
labels is the essential principle of an autoencoder. This is accomplished using a two-part
structure consisting of an encoder and a decoder. The encoder transforms the input data into a
reduced-dimensional representation, often referred to as the "latent space" or "encoding".
From that representation, the decoder reconstructs the original input. This cycle of encoding and
decoding forces the network to capture meaningful patterns and essential features in the data.
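The encoder–decoder structure above can be sketched with a tiny linear autoencoder; the layer sizes, learning rate, and toy data below are illustrative assumptions, not part of the syllabus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 4-D that actually lie in a 2-D subspace.
basis = rng.normal(size=(2, 4))
X = rng.normal(size=(200, 2)) @ basis

# Encoder: 4-D input -> 2-D latent code; Decoder: 2-D code -> 4-D output.
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

lr = 0.02
for _ in range(2000):
    Z = X @ W_enc          # latent representation (the "encoding")
    X_hat = Z @ W_dec      # reconstruction of the input
    err = X_hat - X        # reconstruction error
    # Gradient descent on the mean squared reconstruction loss.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(mse)
```

Because the latent code is only 2-D, the network cannot simply copy its 4-D input; it must discover the subspace the data actually lives in.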
Denoising: Another way of constraining the network is to add noise to the input and train
the network to remove that noise, so that it must reconstruct the clean data.
Tuning the Activation Functions: This method changes the activation behaviour of
individual nodes so that a majority of the nodes are dormant, effectively reducing the
size of the hidden layers.
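The denoising constraint above can be sketched as follows; the batch shape, noise scale, and helper function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x_clean = rng.normal(size=(5, 8))          # a small batch of clean inputs

# Denoising constraint: corrupt the input, but reconstruct the CLEAN version.
noise = rng.normal(scale=0.3, size=x_clean.shape)
x_noisy = x_clean + noise                  # what the encoder actually sees

def reconstruction_loss(x_hat, target):
    # Mean squared error against the clean input, not the noisy one.
    return float(np.mean((x_hat - target) ** 2))

# An untrained "identity" autoencoder would just pass the noise through,
# so its loss is roughly the noise variance (0.3 ** 2).
loss_identity = reconstruction_loss(x_noisy, x_clean)
print(loss_identity)
```

The key point is the loss target: the network is graded against the clean input, so copying the corrupted input through is penalized and the network must learn the underlying structure.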
10. What are the different ways to constrain the Regularized Autoencoders?
There are other ways to constrain an autoencoder's reconstruction besides imposing a
hidden layer of smaller dimension than the input. Regularized autoencoders use a loss
function that encourages the model to have properties other than merely copying its input to its
output. We generally find two types of regularized autoencoder: the denoising autoencoder and
the sparse autoencoder.
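A sparse autoencoder realizes this with an extra penalty term in the loss; below is a minimal sketch of such a loss, where the random codes, weights, and the L1 penalty coefficient are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(10, 6))
z = np.maximum(0, x @ rng.normal(size=(6, 12)))   # hypothetical ReLU codes
x_hat = z @ rng.normal(scale=0.1, size=(12, 6))   # hypothetical reconstruction

lam = 1e-3                                        # sparsity weight (assumed)
recon = float(np.mean((x_hat - x) ** 2))          # "copy input to output" term
sparsity = float(lam * np.sum(np.abs(z)))         # L1 penalty pushing codes to 0
loss = recon + sparsity
print(loss)
```

The L1 term adds a cost for every non-zero code unit, so the optimizer is rewarded for leaving most hidden units dormant even when the hidden layer is larger than the input.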
Image-to-Image Translation
Text-to-Image Synthesis
Data Augmentation
Data Generation for Training
1. Synthetic data generation: GANs can generate new, synthetic data that resembles real
data distributions, which is useful for data augmentation, anomaly detection, and
creative applications.
2. High-quality results: GANs can produce high-quality, photorealistic results in image
synthesis, video synthesis, music synthesis, and other tasks.
3. Unsupervised learning: GANs can be trained without labeled data, making them suitable
for unsupervised learning tasks, where labeled data is scarce or difficult to obtain.
4. Versatility: GANs can be applied to a wide range of tasks, including image synthesis, text-
to-image synthesis, image-to-image translation, anomaly detection, data augmentation, and
others.
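The adversarial objective behind these advantages can be sketched numerically; the one-layer generator and discriminator below are illustrative assumptions, and the losses are only evaluated, not trained.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Hypothetical one-layer generator and discriminator on 1-D data.
w_g = rng.normal(size=2)           # generator params: noise z -> fake sample
w_d = rng.normal(size=2)           # discriminator params: x -> P(x is real)

def G(z):
    return w_g[0] * z + w_g[1]

def D(x):
    return sigmoid(w_d[0] * x + w_d[1])

x_real = rng.normal(loc=4.0, size=16)     # samples from the "real" data
z = rng.normal(size=16)                   # latent noise fed to the generator

eps = 1e-12  # guards the logs against log(0)
# Discriminator maximizes log D(x) + log(1 - D(G(z))); the generator opposes it.
d_loss = float(-np.mean(np.log(D(x_real) + eps) + np.log(1.0 - D(G(z)) + eps)))
g_loss = float(-np.mean(np.log(D(G(z)) + eps)))   # non-saturating generator loss
print(d_loss, g_loss)
```

Training alternates between lowering `d_loss` (better real/fake detection) and lowering `g_loss` (fakes that fool the discriminator); no labels are needed, which is why GANs suit unsupervised settings.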
PART – B Question
1. i. Write short notes on Sparse Autoencoders. (7)
ii. Illustrate Denoising Autoencoders. (6)
2. Discuss Autoencoders. (13)
3. Explain in detail Generative Adversarial Networks.
4. Write in detail about Undercomplete Autoencoders. (13)
5. Explain Regularized Autoencoders. (13)
PART – C Question
1. Discuss Autoencoders. (15)
2. Explain in detail Generative Adversarial Networks.
3. Write in detail about Undercomplete Autoencoders.
4. Explain Regularized Autoencoders.
5. Assess Independent Component Analysis. (15)