viva
1. Limited Capacity
2. Underfitting
3. Choice of Hyperparameters
1. Max Pooling
2. Average Pooling
3. Global Average Pooling
4. Global Max Pooling
5. Min Pooling
6. Fractional Pooling
7. Stochastic Pooling
8. Adaptive Pooling
1. Convolution
2. Activation
3. Pooling
4. Fully Connected Layer
1. Valid Padding
2. Same Padding
3. Full Padding
The number of filters in a convolutional neural network (CNN) isn't fixed and depends on
several factors. Here are some guidelines (a minimal code sketch follows the list):
1. Early Layers
2. Deeper Layers
3. Task Complexity
4. Architecture Design
5. Available Resources
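As a minimal sketch, assuming TensorFlow/Keras is available (the filter counts 32 -> 64 -> 128 and the 10-class output are illustrative, not prescribed by any particular architecture), early layers use fewer filters and deeper layers use more:
```python
# Common "increase filters with depth" pattern in a small CNN.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                  # e.g. 32x32 RGB images
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # early layer: few filters
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # deeper layer: more filters
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),  # deepest conv layer
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),    # e.g. a 10-class task
])
model.summary()
```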
30. What are the reasons that training data can be limited?
1. The cost of collecting more data.
2. Data collection can require time, money, or human suffering.
31. Explain learning rate.
Learning rate controls the effective capacity of the model in a more complicated way than other hyperparameters: effective capacity is highest when the learning rate is correct, not simply when it is large or small.
The amount by which the weights are updated during training is referred to as the step size or the learning rate.
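A minimal sketch of a single gradient-descent update on a toy function, showing how the learning rate sets the step size (the function and values are made up for illustration):
```python
# One gradient-descent step on f(w) = w**2: the learning rate scales
# how far the weight moves along the negative gradient.
def step(w, lr):
    grad = 2 * w          # derivative of w**2
    return w - lr * grad  # weight update: step size = lr * gradient

w = 1.0
print(step(w, lr=0.1))  # 0.8  : small, stable step toward the minimum at 0
print(step(w, lr=1.5))  # -2.0 : too large, overshoots the minimum
```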
32. What is grid search?
Grid search is used to find the optimal hyperparameters of a model, i.e., the combination of values that results in the most "accurate" predictions; it exhaustively evaluates every combination of the candidate values.
When there are three or fewer hyperparameters, the common practice is to perform grid search.
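A minimal sketch, assuming scikit-learn is available (the SVC estimator and the candidate values are illustrative assumptions):
```python
# Exhaustive grid search over a small hyperparameter grid.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}  # 3 x 3 = 9 combinations
search = GridSearchCV(SVC(), grid, cv=5)  # every combination is cross-validated
search.fit(X, y)
print(search.best_params_, search.best_score_)
```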
33. What are hyperparameters?
Hyperparameters control algorithm behavior, and deep learning algorithms come with many of them.
Hyperparameters are points of choice or configuration that allow a machine learning model to be customized for a specific task or dataset.
34. Define random search.
Random search uses random combinations of hyperparameters.
This means that not all of the parameter values are tried; instead, combinations are sampled for a fixed number of iterations.
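A minimal sketch, assuming scikit-learn and SciPy are available (the estimator and the sampling distributions are illustrative assumptions):
```python
# Random search: sample n_iter random hyperparameter combinations
# instead of trying every point on a grid.
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
dists = {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-3, 1e1)}
search = RandomizedSearchCV(SVC(), dists, n_iter=10, cv=5, random_state=0)
search.fit(X, y)  # only the 10 sampled combinations are evaluated
print(search.best_params_, search.best_score_)
```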
35. What is the main reason why random search finds good solutions faster than grid search?
The main reason is that random search has no wasted experimental runs, unlike grid search: when two values of a hyperparameter would give the same results (because that hyperparameter has little effect), grid search still repeats all the runs that differ only in that value, while random search draws fresh values of every hyperparameter on each trial.
36. When does manual hyperparameter tuning work well?
Manual hyperparameter tuning can work very well when the user has a good starting point, such as one determined by others who have worked on the same type of application and architecture, or when the user has months or years of experience exploring hyperparameter values for neural networks applied to similar tasks.
37. Define precision.
Precision is the ratio of true positives to the total number of predicted positives: Precision = TP / (TP + FP).
A precision score close to 1 signifies that the model produced very few false positives, i.e., almost everything it labeled positive really was positive.
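As a minimal sketch (the labels below are made up for illustration), precision can be computed directly from counts of true and false positives:
```python
# Precision = TP / (TP + FP): of everything predicted positive,
# how much actually was positive.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
print(tp / (tp + fp))  # 3 / (3 + 1) = 0.75
```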
38. What is meant by performance metrics?
To evaluate the performance or quality of a model, different metrics are used, and these metrics are known as performance metrics or evaluation metrics.
These performance metrics help us understand how well our model has performed on the given data.
In this way, we can improve the model's performance by tuning the hyperparameters.
Each ML model aims to generalize well on unseen/new data, and performance metrics help determine how well the model generalizes to the new dataset.
39. Define accuracy.
The accuracy metric is one of the simplest classification metrics to implement, and it is determined as the number of correct predictions divided by the total number of predictions:
Accuracy = (Number of correct predictions) / (Total number of predictions)
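A minimal sketch of the same calculation on made-up labels:
```python
# Accuracy = correct predictions / total predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

correct = sum(t == p for t, p in zip(y_true, y_pred))
print(correct / len(y_true))  # 6 / 8 = 0.75
```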
40. What is meant by a confusion matrix?
A confusion matrix is a tabular representation of the prediction outcomes of a binary classifier, used to describe the performance of the classification model on a set of test data for which the true values are known.
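A minimal sketch, assuming scikit-learn is available (labels are made up for illustration):
```python
# 2x2 confusion matrix for a binary classifier:
# rows = true class, columns = predicted class.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))
# [[3 1]    TN=3, FP=1
#  [1 3]]   FN=1, TP=3
```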
41. Define autoencoders.
An autoencoder is an artificial neural network (ANN) that is trained with backpropagation and sets the target values equal to the input values. It is constructed so that it can perform both data encoding and data decoding in order to reconstruct the original input. (A minimal code sketch follows the lists below.)
Applications of autoencoders:
Data compression
Dimensionality reduction
Image denoising
Feature extraction
Removing watermarks from images
Types of autoencoders:
Convolutional Autoencoders
Sparse Autoencoders
Deep Autoencoders
Contractive Autoencoders
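As a minimal sketch, assuming TensorFlow/Keras is available (the 784-dimensional input, the 32-unit bottleneck, and the random stand-in data are illustrative assumptions), note that the targets passed to fit are the inputs themselves:
```python
# Minimal dense autoencoder: the encoder compresses 784 -> 32,
# the decoder reconstructs 32 -> 784; targets equal the inputs.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)       # encoder
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)  # decoder

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 784).astype("float32")  # stand-in data
autoencoder.fit(x, x, epochs=1, batch_size=32)  # targets == inputs
```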