Module 1: Introduction to Machine Learning

1. What is machine learning? How is it different from human learning?

Machine learning is a type of technology that allows computers to learn from data and improve their performance without being explicitly programmed. It's like teaching a computer to recognize patterns and make decisions based on examples it's given.

The main difference between machine learning and human learning is how
they learn. Humans learn through experience, observation, and instruction,
while machines learn from large amounts of data and algorithms designed
to detect patterns within that data. While humans have limitations in
accessing and processing vast amounts of data manually, machine learning
algorithms can handle huge volumes of data efficiently and automatically
learn from it to make predictions or decisions. So, machine learning is
essentially about teaching computers to learn and improve their
performance by themselves, based on the data they are given.

2. Discuss different types/categories of machine learning with examples.
OR
Discuss supervised/unsupervised/reinforcement learning with examples.

Supervised learning is like teaching with a teacher. The computer is given labeled examples to learn from, where each example has an input (like an image) and a desired output (like a label saying what's in the image). For instance, if we want a computer to learn to distinguish between cats and dogs, we'd give it lots of pictures of cats and dogs, each labeled correctly. Then, the computer learns from these examples and can predict whether a new picture contains a cat or a dog.

Unsupervised learning is more like exploring without a guide. Here, the computer doesn't have labeled examples to learn from. Instead, it's given a bunch of data and needs to find patterns or structures within it on its own. For example, if you give the computer a lot of customer data without telling it which customers bought what, it might try to group customers with similar buying habits together.

Reinforcement learning is like teaching a computer to play a game. The computer interacts with an environment and learns to achieve a goal through trial and error. It gets feedback in the form of rewards or penalties for its actions. For example, in teaching a computer to play chess, it makes moves, and based on whether those moves lead to winning or losing, it adjusts its strategy over time to maximize winning.

Reinforcement learning involves an agent (the computer program), actions (what it can do), rewards (feedback on its actions), and the environment (where it operates). The computer learns by trying different actions and seeing what results it gets, aiming to maximize the total reward it receives over time. This is like how we might learn to play a new game by experimenting with different strategies until we find what works best.

3. Discuss various machine learning tasks with examples.
OR
Discuss supervised/unsupervised learning tasks with examples.
OR
Discuss classification/regression/clustering/dimension reduction tasks with examples.

1. Classification:
Classification algorithms are used when the output variable is categorical,
meaning there are two or more classes or categories.
For example, classifying emails as spam or not spam, predicting whether a
customer will churn or not, or identifying whether a transaction is
fraudulent.
Common classification algorithms include Random Forest, Decision Trees,
Logistic Regression, and Support Vector Machines.
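
As a minimal sketch (assuming scikit-learn is available, with its bundled breast-cancer dataset standing in for a spam/fraud-style binary problem), a classification task might look like this:

```python
# A minimal binary-classification sketch using scikit-learn (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)      # features and 0/1 labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = LogisticRegression(max_iter=5000)          # one of the classifiers listed above
clf.fit(X_train, y_train)                        # learn from labeled examples
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```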

2. Regression:
Regression algorithms are used when there is a relationship between the
input variables and the output variable, which is continuous.
For example, predicting house prices based on features like location, size,
and amenities, forecasting stock prices, or estimating the temperature
based on historical weather data.
Common regression algorithms include Linear Regression, Regression Trees,
Non-Linear Regression, Bayesian Linear Regression, and Polynomial
Regression.
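
A matching regression sketch (again assuming scikit-learn; its bundled diabetes dataset stands in for any continuous target such as a house price):

```python
# A minimal regression sketch using scikit-learn (illustrative only).
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

X, y = load_diabetes(return_X_y=True)            # features and a continuous target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

reg = LinearRegression().fit(X_train, y_train)   # fit a linear model to the training data
print("Test MSE:", mean_squared_error(y_test, reg.predict(X_test)))
```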

3. Clustering:
Clustering is a method of grouping objects or data points based on their
similarities.
For example, clustering customers based on their purchasing behavior to
identify different market segments, grouping similar documents or articles
together in topic modeling, or detecting anomalies in network traffic.
Common clustering algorithms include K-means clustering, Hierarchical
clustering, and DBSCAN (Density-Based Spatial Clustering of Applications
with Noise).
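
A small clustering sketch (assuming scikit-learn; synthetic customer-like points generated with make_blobs, so there are no labels at all):

```python
# A minimal clustering sketch with K-means (illustrative only).
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)   # unlabeled data points
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print(kmeans.labels_[:10])        # cluster assigned to the first 10 points
print(kmeans.cluster_centers_)    # coordinates of the 3 discovered cluster centers
```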

4. Dimensionality Reduction:
Dimensionality reduction is the process of reducing the number of features
in a dataset while retaining as much information as possible.
This can be useful for reducing the complexity of a model, improving the
performance of a learning algorithm, or making it easier to visualize the
data.
Examples of dimensionality reduction techniques include Principal
Component Analysis (PCA), Singular Value Decomposition (SVD), and Linear
Discriminant Analysis (LDA).
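
And a dimensionality-reduction sketch with PCA (assuming scikit-learn and its bundled digits dataset, which has 64 pixel features per image):

```python
# A minimal dimensionality-reduction sketch with PCA (illustrative only).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)          # 1797 images, 64 features each
pca = PCA(n_components=2)                    # keep the 2 directions of largest variance
X_2d = pca.fit_transform(X)

print(X.shape, "->", X_2d.shape)             # (1797, 64) -> (1797, 2)
print("Variance explained:", pca.explained_variance_ratio_.sum())
```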

4. Discuss different issues in machine learning.

1. Inadequate Training Data:
This means not having enough good-quality data to train the machine learning model properly.
If there's not enough data or the data is messy or incorrect, the model won't learn effectively.
It's like trying to learn how to bake a cake with only half the recipe – you won't get the best results.

2. Poor Data Quality:
This refers to data that is messy, incomplete, inaccurate, or otherwise unreliable.
Using poor-quality data can lead to inaccurate predictions or classifications by the model.
It's important to have clean, accurate data to get good results from machine learning.

3. Overfitting and Underfitting:
Overfitting happens when a model learns too much from the training data and doesn't generalize well to new, unseen data.
Underfitting occurs when a model is too simple and fails to capture the underlying patterns in the data.
Both overfitting and underfitting lead to poor performance in making predictions or classifications.

Alongside challenges like inadequate training data, poor data quality, and
overfitting/underfitting, other obstacles in machine learning include
inaccurate recommendations, skill shortages, customer segmentation
complexities, process implementation difficulties, data biases, explainability
limitations, slow results, and irrelevant features, all impacting the efficacy of
machine learning systems.

5. Discuss the key terminologies of machine learning.

1. Gathering Data:
This is the first step in the machine learning process, where you identify and
obtain all the data needed for your project from various sources like files,
databases, or the internet.
The quantity and quality of data play a crucial role in determining the
accuracy of predictions, and having a large and coherent dataset is
important.

2. Data Preparation:
Once you've gathered the data, you need to put it all together and
understand its nature, format, and quality.
This involves addressing issues like missing values, duplicate data, invalid
data, or noise to ensure the data is clean and suitable for analysis.

3. Choosing a Model:
In this step, you select the appropriate machine learning model or algorithm
to analyze your data.
Depending on your project goals, you might choose from various techniques
like classification, regression, clustering, or association analysis.
After selecting the model, you build it using the prepared data and evaluate
its performance.

4. Training:
Training the model involves teaching it to understand patterns, rules, and
features in the data using selected machine learning algorithms.
Data is typically segmented into training, evaluation, and validation sets for
this purpose.

5. Evaluation:
Evaluation is done using evaluation data to measure the efficiency of the
trained model.
Different algorithms have different efficiency measures, such as accuracy,
sensitivity, specificity for classification, or mean squared error for regression.

6. Hyper-parameter Tuning:
Hyper-parameter tuning involves finding the optimal rules and parameters
of the trained model using data.
The primary goal is to increase the model's efficiency by fine-tuning its
parameters.

7. Prediction:
Once the model is trained, evaluated, and tuned correctly, it can be used to
make predictions on new data.
Testing data is used to check the efficiency of the model, and if it performs
well, the model can be deployed in real-world systems.
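
These steps can be sketched end to end in a few lines; the following is only an illustrative outline assuming scikit-learn and one of its bundled datasets, not a prescribed recipe:

```python
# Illustrative end-to-end sketch of the steps above (scikit-learn assumed).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# 1-2. Gather and prepare data (here: a bundled, already-clean dataset).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 3-4. Choose a model and train it.
model = RandomForestClassifier(random_state=0)

# 5-6. Evaluate and tune hyper-parameters with cross-validation.
search = GridSearchCV(model, {"n_estimators": [50, 100], "max_depth": [None, 5]}, cv=3)
search.fit(X_train, y_train)

# 7. Predict on held-out test data.
print("Best params:", search.best_params_)
print("Test accuracy:", accuracy_score(y_test, search.predict(X_test)))
```
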
6. Discuss the steps in developing the machine learning application.
(See the steps described under Q.5 above: gathering data, data preparation, choosing a model, training, evaluation, hyper-parameter tuning, and prediction.)

7. Discuss the applications of Machine Learning.
OR
Discuss any two ML applications in detail.

1. Image Recognition:
Image recognition employs machine learning to identify objects or patterns
within digital images. For instance, facial recognition technology used in
smartphones or security systems learns from large datasets of human faces
to accurately identify individuals. Additionally, in medical imaging, machine
learning algorithms analyze MRI scans or X-rays to aid doctors in diagnosing
diseases like cancer by detecting anomalies that may be overlooked by
human observers.

2. Recommendation Systems:
Recommendation systems utilize machine learning to analyze user
preferences and behaviors, providing personalized suggestions for products,
services, or content. Platforms such as Netflix or Spotify leverage this
technology to recommend movies or songs based on user interactions and
similarities between users. Similarly, e-commerce sites like Amazon utilize
machine learning to suggest products tailored to individual users' browsing
history and past purchases, enhancing user engagement and driving sales.

Module 2: Data Preprocessing

Q.1 What is data preprocessing? What is the need for data preprocessing/preparation?

Data preprocessing is the process of preparing raw data and making it suitable for a machine learning model. It's the first and crucial step when creating a machine learning project because real-world data is often messy, containing missing values, errors, outliers, or inconsistencies. By cleaning and formatting the data, we ensure it meets the requirements of machine learning algorithms, thus increasing the accuracy and efficiency of the model.

The need for data preprocessing arises because real-world data is often dirty,
incomplete, noisy, or inconsistent. For example, attribute values may be
missing, contain errors or outliers, or have discrepancies in codes or names.
Quality decisions in machine learning must be based on quality data, and
preprocessing helps ensure data accuracy, completeness, consistency, and
other quality dimensions. Additionally, different machine learning
algorithms may require data in specific formats, so preprocessing ensures
the data is formatted appropriately for the chosen method.

Q.2 Discuss various steps of data preprocessing.

The main steps of data preprocessing are:

1. Data Cleaning: This step is like giving the data a good scrub. We look
closely at all the information to find any mistakes or problems. For example,
we might notice that some numbers are missing or that there are two
entries for the same thing. We fix these issues so that the data is accurate
and reliable.

2. Data Integration: Think of this like putting together pieces of a puzzle. We gather data from different places, like spreadsheets, databases, or even different websites. Then, we combine all of this information into one big dataset. This helps us see the bigger picture and make better decisions.

3. Data Transformation: Here, we're like wizards transforming data into a
more useful form. Sometimes, data comes in a format that's hard to
understand or work with. So, we change it into a format that's easier to
analyze. For example, we might convert dates into a standard format or turn
text into numbers.

4. Data Reduction: Imagine you have a giant pile of toys, but you only want
to keep the ones you really like. Data reduction is a bit like that. We sift
through all the data and pick out the most important parts. This makes the
dataset smaller and more manageable, but still contains all the key
information we need.

5. Data Discretization: This step is all about organizing data into neat little
groups. Instead of dealing with lots of individual numbers, we group them
together based on certain criteria. For example, we might group people's
ages into categories like "child," "teenager," and "adult." This makes it easier
to analyze and understand the data.

6. Data Normalization: Data normalization is like putting everything on the same scale. Sometimes, data comes in different units or ranges, which can make it hard to compare or analyze. So, we adjust the data to make it uniform. For example, we might scale all the numbers to be between 0 and 1, or we might adjust them to have a mean of 0 and a standard deviation of 1. This makes the data easier to work with and ensures fair comparisons between different variables.
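
A minimal sketch of both options, assuming scikit-learn's MinMaxScaler and StandardScaler and made-up height/income values:

```python
# Scaling to [0, 1] vs. standardizing to mean 0, std 1 (scikit-learn assumed).
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[170.0, 65000.0],
              [160.0, 48000.0],
              [185.0, 90000.0]])           # e.g. height in cm, income in dollars

print(MinMaxScaler().fit_transform(X))     # every column rescaled to the 0-1 range
print(StandardScaler().fit_transform(X))   # every column: mean 0, standard deviation 1
```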

Q.3. How are noise/missing values/categorical values/outliers handled in preprocessing?
OR
Numerical on preprocessing (a sample dataset will be provided and you will be asked to apply any of the above-mentioned steps).

1. Missing Values:
Handling missing values in data preprocessing is crucial to ensure the
quality and reliability of the dataset. One approach to address missing values
is by deleting rows or columns that contain a significant amount of missing
data. If a column has around 70% to 75% of its rows as null values, it might be
prudent to drop the entire column. Similarly, rows with one or more missing
values across columns can also be dropped. However, it's important to
exercise caution and consider the impact on the dataset's integrity. Deleting
rows or columns should only be done if there are enough remaining
samples in the dataset to maintain its overall representativeness. Therefore,
careful consideration is necessary to balance the need for data
completeness with the potential loss of knowledge.
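
For instance, a sketch assuming pandas (column names and values are invented for illustration):

```python
# Dropping vs. imputing missing values with pandas (illustrative only).
import pandas as pd
import numpy as np

df = pd.DataFrame({"age":    [25, np.nan, 47, 31],
                   "income": [50000, 62000, np.nan, 58000],
                   "mostly_empty": [np.nan, np.nan, np.nan, 1.0]})

df = df.drop(columns=["mostly_empty"])           # drop a column that is mostly null
dropped = df.dropna()                            # option 1: drop rows with any missing value
imputed = df.fillna(df.mean(numeric_only=True))  # option 2: impute with the column mean
print(imputed)
```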

2. Categorical Data:
Categorical data refers to information that is grouped into categories or
groups. For instance, if a school or college is collecting details about its
students, the data collected, such as branch, section, or gender, would be
categorized as categorical data. There are two types of categorical data:

1. Nominal Data: This type of data is used to name variables without assigning any numerical value. It's also known as labeled or named data. Nominal data helps in making better conclusions. Examples of nominal data include divisions, genders, etc.

2. Ordinal Data: Variables in ordinal data have natural, ordered categories, but the distances between these categories are not known. Ordinal data is ordered or ranked, but this order doesn't have a standard scale for measuring the difference between values. Examples of ordinal data include rankings, likes/dislikes, and customer satisfaction survey responses. Each of these examples may require different collection and analysis techniques, but they are all considered ordinal data.
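
Categorical columns are usually converted to numbers before modeling; a sketch assuming pandas, with one-hot encoding for a nominal column and an explicit mapping for an ordinal one (the column names are made up):

```python
# Encoding nominal vs. ordinal categorical columns (pandas assumed, toy data).
import pandas as pd

df = pd.DataFrame({"branch":       ["CS", "IT", "CS", "Mech"],          # nominal
                   "satisfaction": ["low", "high", "medium", "high"]})  # ordinal

nominal_encoded = pd.get_dummies(df["branch"], prefix="branch")               # one column per category
ordinal_encoded = df["satisfaction"].map({"low": 0, "medium": 1, "high": 2})  # preserves the order
print(nominal_encoded)
print(ordinal_encoded)
```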

3. Outliers:
Handling outliers in data preprocessing involves addressing values that are
considered to be significantly different from the rest of the data.

1. Detection: Outliers are identified as observations that deviate substantially from the majority of the data. This can be done by standardizing observations and labeling the standardized values outside of a predetermined bound as outliers.

2. Purpose: Outlier detection serves various purposes, such as identifying fraudulent activities or cleaning up data for analysis.

3. Approaches:
Do Nothing/Treat Separately: In some cases, outliers may represent valid
data points that should be analyzed separately.
Imputing/Enforcing Bounds: Outliers can be handled by imputing them
with upper and lower bounds or enforcing constraints to restrict their
impact on the dataset.
Deleting/Let Binning Handle the Problem: Outliers can also be removed
from the dataset, or binning techniques can be applied to group them with
nearby values.
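
As an example of the detection and bounding steps, a z-score sketch (assuming NumPy; the ±3 bound is a common but arbitrary choice, and the data is synthetic):

```python
# Flagging points more than 3 standard deviations from the mean as outliers (NumPy assumed).
import numpy as np

rng = np.random.default_rng(0)
values = np.append(rng.normal(loc=13.0, scale=1.0, size=100), 95.0)   # one extreme value
z_scores = (values - values.mean()) / values.std()

print("Outliers:", values[np.abs(z_scores) > 3])                      # flags the 95.0
lower, upper = values.mean() - 3 * values.std(), values.mean() + 3 * values.std()
print("Capped:", np.clip(values, lower, upper)[-1])                   # "enforcing bounds" option
```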

Q.4 What is dimension reduction? What is the need for dimension reduction?

Dimension reduction is the process of reducing the number of features in a dataset while preserving important information. This is done to simplify the data, improve model performance, and facilitate visualization. Techniques like principal component analysis (PCA), singular value decomposition (SVD), and linear discriminant analysis (LDA) are commonly used for this purpose. Each technique projects the data onto a lower-dimensional space while retaining as much relevant information as possible.

The need for dimension reduction arises because it helps in data compression, reducing storage space and computation time. It also removes redundant features, making the data more manageable and efficient for analysis. However, dimension reduction can lead to some data loss, and techniques like PCA only capture linear correlations between variables, which may not always be sufficient. Additionally, determining the optimal number of principal components to keep can be challenging, and care must be taken to ensure that important information is not discarded during the process.

Q.5 What is the curse of dimensionality?

The curse of dimensionality refers to the deterioration of machine learning algorithm performance as the number of features or dimensions in the dataset increases. It leads to sparsity of data, increased computational complexity, difficulty in visualization, and higher risk of overfitting. This makes it challenging to generalize accurately from the training data to unseen data points and requires careful consideration of feature selection and dimensionality reduction techniques to mitigate its effects.

1. Sparsity of Data: With higher dimensions, data points become sparser, making it challenging for algorithms to generalize effectively from training data to unseen instances, resulting in poorer predictive performance.

2. Increased Computational Complexity: Rising dimensionality leads to exponential growth in computational complexity, causing longer training times, higher memory usage, and increased computational costs, making traditional methods impractical for high-dimensional data.

3. Difficulty in Visualization: High-dimensional data is hard to visualize accurately due to human perceptual limitations, hindering the interpretation of relationships among features and impeding the identification of important patterns.

4. Overfitting Risk: In high-dimensional spaces, overfitting is more likely, as models may capture noise or irrelevant patterns instead of meaningful relationships, resulting in reduced predictive accuracy and reliability on unseen data.

Q.6 Explain the steps of Principal Component Analysis.

Principal Component Analysis (PCA) is a method used for dimensionality reduction, introduced by Karl Pearson. It aims to map data from a higher-dimensional space to a lower-dimensional space while maximizing the variance of the data in the lower-dimensional space.

The steps of Principal Component Analysis are as follows:

1. Construct the Covariance Matrix:
The first step involves constructing the covariance matrix of the data. This
matrix captures the relationships between different features in the dataset.

2. Compute Eigenvectors:
Next, we compute the eigenvectors of the covariance matrix. Eigenvectors
represent the directions of maximum variance in the data.

3. Select Principal Components:
The eigenvectors corresponding to the largest eigenvalues are selected as the principal components. These eigenvectors capture the most significant patterns or directions of variance in the data.

4. Reconstruct Data:
Finally, the selected principal components are used to reconstruct the data
in a lower-dimensional space. By retaining the principal components that
explain the most variance, we can effectively reduce the dimensionality of
the dataset while preserving as much information as possible.

PCA is an exploratory technique that can be used to reduce the number of dimensions in a dataset, find patterns in high-dimensional data, and visualize data of high dimensionality. Example applications of PCA include face recognition, image compression, and gene expression analysis.
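
A compact sketch of the four steps above with NumPy (illustrative only; random data stands in for a real dataset, and two components are kept arbitrarily):

```python
# PCA steps with NumPy: covariance matrix, eigenvectors, component selection, projection.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # 200 samples, 5 features (toy data)
X_centered = X - X.mean(axis=0)                    # center the data first

cov = np.cov(X_centered, rowvar=False)             # step 1: covariance matrix (5x5)
eigenvalues, eigenvectors = np.linalg.eigh(cov)    # step 2: eigenvalues and eigenvectors

order = np.argsort(eigenvalues)[::-1]              # step 3: keep the top-2 components
components = eigenvectors[:, order[:2]]

X_reduced = X_centered @ components                # step 4: project to 2 dimensions
X_approx = X_reduced @ components.T + X.mean(axis=0)   # approximate reconstruction
print(X.shape, "->", X_reduced.shape)
```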

In simple terms, PCA helps in simplifying complex data by identifying its main patterns or directions of variation and representing it in a more compact form, making it easier to analyze and visualize.

Module 3: Learning with Regression

Q1. Compare regression with classification problems.

Regression Problems:
1. Nature of Output: In regression, the output variable is continuous,
meaning it can take any value within a range. For example, predicting house
prices, stock prices, or temperature.
2. Objective: The main goal is to predict a quantity or value based on input
features. It aims to understand the relationship between variables and make
continuous predictions.
3. Evaluation Metrics: Common metrics for regression include Mean Squared
Error (MSE), Root Mean Squared Error (RMSE), and R-Squared, which
measure the accuracy of predictions relative to the actual values.
4. Example: Predicting the price of a house based on features like size,
location, and number of bedrooms.

Classification Problems:
1. Nature of Output: In classification, the output variable is categorical,
meaning it belongs to a specific class or category. For example, classifying
emails as spam or not spam, or identifying different types of animals.
2. Objective: The main goal is to assign a label or category to input data
based on its features. It aims to distinguish between different classes or
categories.
3. Evaluation Metrics: Common metrics for classification include accuracy,
precision, recall, and F1-score, which measure the performance of the model
in correctly classifying instances.
4. Example: Classifying images of animals into categories such as cat, dog, or
bird based on features extracted from the images.

Q2. Explain different types of regression with examples.

Regression analysis is a statistical method used to predict or model the relationship between a dependent variable and one or more independent variables in a dataset. There are different types of regression techniques, each suited to different scenarios:
1. Linear Regression:
i. Simple Linear Regression: Involves finding a linear relationship between
one independent variable (input feature) and one dependent variable
(output).
Example: Predicting the sales of a product based on advertising
expenditure. Here, advertising expenditure is the independent variable, and
sales are the dependent variable.
ii. Multivariate Linear Regression: Extends the concept to include multiple
independent variables.
Example: Predicting the price of a house based on features like square
footage, number of bedrooms, and location. Here, square footage,
bedrooms, and location are independent variables, and house price is the
dependent variable.

2. Logistic Regression:
Used when the dependent variable is categorical, meaning it has discrete
outcomes.
Example: Predicting whether a customer will buy a product (yes/no) based
on factors like age, income, and past purchase history. Here, the dependent
variable is categorical (buy/don't buy), and age, income, and purchase
history are independent variables.
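
A brief sketch contrasting the two (assuming scikit-learn; the numbers are made up, with income given in thousands):

```python
# Linear regression (continuous output) vs. logistic regression (categorical output).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Linear: predict sales (continuous) from advertising expenditure.
ad_spend = np.array([[10], [20], [30], [40], [50]])
sales    = np.array([25.0, 45.0, 62.0, 85.0, 105.0])
lin = LinearRegression().fit(ad_spend, sales)
print("Predicted sales at ad spend 35:", lin.predict([[35]]))

# Logistic: predict buy / don't buy (0 or 1) from age and income (in thousands).
features = np.array([[22, 20], [35, 60], [45, 80], [28, 30], [50, 90]])
bought   = np.array([0, 1, 1, 0, 1])
log = LogisticRegression(max_iter=1000).fit(features, bought)
print("P(buy) for age 40, income 70k:", log.predict_proba([[40, 70]])[0, 1])
```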

Q3. Discuss basic operation of Neural Net.

X1 and X2 are the input neurons, Y is the output neuron, and W1 and W2 are the weighted interconnection links.
Net input calculation: Yin = x1w1 + x2w2
Output: y = f(Yin)

In general, the net input is calculated by:
Yin = x1w1 + x2w2 + ... + xnwn
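
A tiny numeric sketch of this computation (NumPy assumed; the inputs and weights are made-up values, and a sigmoid is used as one possible choice of f):

```python
# Net input Yin = x1w1 + ... + xnwn, followed by an activation function f.
import numpy as np

x = np.array([0.5, 0.8])          # inputs x1, x2
w = np.array([0.4, 0.6])          # weights w1, w2

y_in = np.dot(x, w)               # net input: 0.5*0.4 + 0.8*0.6 = 0.68
y = 1.0 / (1.0 + np.exp(-y_in))   # example activation: sigmoid f(Yin)
print(y_in, y)
```
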
Q4. Compare simple linear regression with multivariate regression.

Simple Linear Regression | Multivariate Regression
1. Involves only one independent variable. | Involves two or more independent variables.
2. Simplest form of regression, with a single predictor. | More complex as it incorporates multiple predictors.
3. Models the relationship between one input and the output. | Models the relationship between multiple inputs and the output.
4. Easier to interpret the relationship graphically. | Interpretation becomes more complex with multiple predictors.
5. Lower risk of overfitting due to fewer variables. | Higher risk of overfitting with more variables to fit.
6. Requires only one feature for analysis. | Requires multiple features, possibly increasing data collection needs.
7. Less flexible in capturing complex patterns. | More flexible, capable of capturing intricate relationships.
8. Simpler data handling and visualization. | Requires more sophisticated data preprocessing and analysis.

Q5. Differentiate between linear and logistic regression.

Linear Regression | Logistic Regression
1. Predicts continuous numeric outcomes. | Predicts categorical outcomes (binary or multi-class).
2. Assumes a linear relationship between variables. | Does not assume a linear relationship; models probabilities.
3. Predicts values within a continuous range. | Predicts probabilities between 0 and 1 for each class.
4. Uses a linear equation to model the relationship. | Uses a logistic (sigmoid) function to model probabilities.
5. Typically uses methods like Mean Squared Error (MSE). | Utilizes methods like Cross-Entropy Loss or Log Loss.
6. Evaluated based on metrics like R-squared or RMSE. | Evaluated based on metrics like accuracy, precision, or recall.
7. Commonly used for predicting quantities, such as sales or temperature. | Commonly used in classification tasks, such as spam detection or medical diagnosis.

Q6. Discuss evaluation metrics of regression models.

1. R Square/Adjusted R Square:
R Square measures the proportion of the variance in the dependent variable
that is predictable from the independent variables.
It's calculated as the square of the correlation coefficient (R), indicating the
goodness of fit.
R Square ranges from 0 to 1, with higher values indicating a better fit.
Adjusted R Square adjusts for the number of predictors and helps prevent
overfitting.
2. Mean Square Error (MSE)/Root Mean Square Error (RMSE):
MSE calculates the average of the squares of the errors, which are the
differences between actual and predicted values.
It provides an absolute measure of the goodness of fit, where lower values
indicate better performance.
RMSE is the square root of MSE, making it easier to interpret as it's in the
same units as the dependent variable.

3. Mean Absolute Error (MAE):
Similar to MSE but calculates the average of the absolute errors rather than squared errors.
MAE provides a direct representation of the average error magnitude.
It treats all errors equally, unlike MSE, which penalizes larger errors more heavily.
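
These metrics can be computed directly; a sketch assuming scikit-learn and made-up actual/predicted values:

```python
# Computing R-squared, MSE, RMSE, and MAE for a set of predictions (scikit-learn assumed).
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

y_true = np.array([3.0, 5.0, 7.5, 9.0, 11.0])   # actual values
y_pred = np.array([2.8, 5.4, 7.0, 9.5, 10.5])   # model predictions

mse = mean_squared_error(y_true, y_pred)
print("R^2 :", r2_score(y_true, y_pred))
print("MSE :", mse)
print("RMSE:", np.sqrt(mse))                    # same units as the dependent variable
print("MAE :", mean_absolute_error(y_true, y_pred))
```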

Q7. List the applications of regression and discuss any one in detail.

1. Forecasting Continuous Outcomes: Predicting values like house prices, stock prices, or sales.
2. Predicting Trends: Anticipating future retail sales, customer behavior, or
user trends on platforms like streaming services.
3. Analyzing Relationships: Understanding connections between variables
in datasets.
4. Predicting Interest Rates and Stock Prices: Using various factors to
forecast financial indicators.
5. Creating Time Series Visualizations: Visualizing data trends over time.

Two applications in detail:

1. Sports Performance Analysis:
Data scientists in sports teams use linear regression to assess how different
training regimens affect player performance.
For instance, they might analyze how weekly yoga and weightlifting
sessions impact a player's scoring.
The regression equation could be: points scored = β0 + β1(yoga sessions) +
β2(weightlifting sessions).
The coefficient β0 represents expected points scored without yoga or
weightlifting.

2. Business Revenue Prediction:
Linear regression helps businesses understand the link between advertising spending and revenue.
The regression equation might be: revenue = β0 + β1(ad spending).
Here, β0 represents the total expected revenue when ad spending is zero.
The coefficient β1 indicates the average change in revenue when ad
spending increases by one unit (e.g., one dollar).
A negative β1 suggests that more ad spending leads to less revenue, while a
positive one indicates the opposite.
Based on β1's value, a company can adjust their ad spending strategy
accordingly.
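
A sketch of reading β0 and β1 off a fitted model (scikit-learn assumed; the spending and revenue figures are invented for illustration):

```python
# Fitting revenue = b0 + b1 * ad_spending and inspecting the coefficients.
import numpy as np
from sklearn.linear_model import LinearRegression

ad_spending = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])   # e.g. thousands of dollars
revenue     = np.array([12.0, 15.5, 18.0, 22.0, 24.5])

model = LinearRegression().fit(ad_spending, revenue)
print("b0 (expected revenue at zero ad spend):", model.intercept_)
print("b1 (revenue change per extra unit of ad spend):", model.coef_[0])
```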

Q8. Discuss any one real life application of NN-based regression.

Application: Predicting Housing Prices

In this application, neural networks are used to predict the prices of houses
based on various features such as location, size, number of bedrooms, and
amenities. Here's how it works:

1. Data Collection:
Data on past house sales are collected, including information like house size,
location, number of bedrooms, and sale prices.

2. Data Preprocessing:
The collected data is cleaned and preprocessed to handle missing values,
outliers, and categorical variables.

3. Feature Engineering:
Relevant features are selected or engineered from the raw data. For
example, creating a new feature like "price per square foot" can help the
model learn better patterns.

4. Neural Network Training:
A neural network model is constructed with input neurons representing the features (e.g., size, location) and an output neuron representing the predicted house price.
The model is trained using historical data, where it learns to map the input features to the corresponding house prices.

5. Model Evaluation:
The trained model is evaluated using a separate test dataset to assess its
performance. Metrics like Mean Squared Error (MSE) or Root Mean Squared
Error (RMSE) are commonly used to evaluate regression models.

6. Prediction:
Once the model is trained and evaluated, it can be deployed to predict the
prices of new houses.
Given the features of a new house (e.g., size, location), the model can provide
an estimate of its selling price.
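
A minimal sketch of such a neural-network regressor (assuming scikit-learn's MLPRegressor; synthetic house-like data stands in for real sales records, with prices in thousands):

```python
# A small neural-network regression sketch for house prices (scikit-learn assumed, toy data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
size = rng.uniform(50, 300, 500)                       # square metres
bedrooms = rng.integers(1, 6, 500)
price = size + 20.0 * bedrooms + rng.normal(0, 10, 500)   # price in thousands (synthetic)

X = np.column_stack([size, bedrooms])
X_train, X_test, y_train, y_test = train_test_split(X, price, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)                 # step 2: preprocessing (scaling)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)          # step 4: training
rmse = mean_squared_error(y_test, model.predict(scaler.transform(X_test))) ** 0.5
print("Test RMSE (thousands):", rmse)                  # step 5: evaluation
```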

Benefits:
Accurate Predictions: Neural networks can capture complex relationships
between the features and house prices, leading to more accurate
predictions.
Adaptability: The model can adapt to changing market conditions and
incorporate new data for continuous improvement.
Decision Support: Predicted prices can assist buyers, sellers, and real estate
professionals in making informed decisions.

Overall, neural network-based regression in predicting housing prices is a valuable tool in the real estate industry, helping stakeholders make data-driven decisions in buying, selling, or investing in properties.

Q9. Compare biological neural networks with artificial neural networks.

Biological Neural Network | Artificial Neural Network
1. Learns through synaptic connections and plasticity. | Learns through mathematical algorithms and optimization techniques.
2. Processing speed varies depending on the complexity of tasks. | Can perform computations much faster than biological counterparts.
3. Limited memory capacity and storage capabilities. | Scalable memory capacity and storage capabilities.
4. Highly adaptable and capable of learning new tasks. | Flexible and can be trained to perform various tasks.
5. Generally more energy-efficient compared to artificial counterparts. | Energy consumption can vary depending on architecture and implementation.
6. Tolerant to faults and capable of self-repair. | Vulnerable to errors and requires error-checking mechanisms.
7. Minimal hardware requirements, primarily biological. | Requires specialized hardware or computational resources.
8. Found in living organisms for various cognitive functions. | Utilized in various fields including AI, pattern recognition, and data analysis.

Q10. Discuss different components of a NN model / any one component (Architectures, Learning Algorithms, Activation Functions) in detail.

Network Architecture:
1. Arrangement of Neurons: Neurons are organized into layers and
interconnected.
2. Layer Types: Different types of layers include input, output, and hidden
layers.
3. Hidden Layers: Intermediate layers between input and output layers
process information.
4. Feedforward vs. Feedback Networks: Networks may be feedforward
(output doesn't influence input) or feedback (output affects input).
5. Recurrent Networks: These include feedback connections, allowing
feedback loops within the network.

Learning Algorithms:
1. Supervised Learning: Learning with teacher guidance using input-output
pairs.
2. Unsupervised Learning: Learning without explicit supervision, where the
network discovers patterns.
3. Training Data: Networks learn from training data to adjust their
parameters.
4. Parameter Learning: Updates network weights to minimize errors.
5. Structure Learning: Focuses on altering the network's architecture based
on performance.

Activation Functions:
1. Determining Neuron Output: Activation functions process neuron inputs
to produce outputs.
2. Types of Functions: Include identity, binary step, sigmoid, and ramp
functions.
3. Identity Function: Maintains the same output as the input.
4. Binary Step Function: Outputs binary values based on a threshold.
5. Sigmoid Function: S-shaped curve used in backpropagation networks for
non-linear transformations.
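
The listed activation functions can be written in a few lines; a sketch with NumPy (the step-function threshold of 0 is an arbitrary illustrative choice):

```python
# Identity, binary step, and sigmoid activation functions (NumPy assumed).
import numpy as np

def identity(x):
    return x                               # output equals input

def binary_step(x, threshold=0.0):
    return np.where(x >= threshold, 1, 0)  # 1 at or above the threshold, 0 below

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # S-shaped curve between 0 and 1

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(identity(x), binary_step(x), sigmoid(x), sep="\n")
```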
