
Deep Learning HA (Blog)


Neural Networks Explained: From Neuron to Network
Neural networks serve as the foundation of deep learning, mimicking the structure and functionality of the human brain. From basic building blocks to intricate architectures, neural networks enable machines to identify patterns, comprehend visual information, and produce text that mimics human writing.

Diagram 1: Biological Neuron vs. Artificial Neuron

In recent years, neural networks have transformed the field of artificial intelligence. They power recommendation systems, voice assistants, self-driving cars, and medical diagnostics, driving the advances in deep learning. In this blog, we'll explain the basics of neural networks in a systematic manner.

Artificial Neuron
At the heart of every neural network is the artificial neuron, which is inspired by the biological neurons in the human brain. A neuron receives multiple input signals, computes a weighted sum, applies an activation function, and generates an output.

A single artificial neuron:

- Receives inputs, such as numbers or features from a dataset.

- Applies a weight to each input.

- Adds a bias term.


- Processes the result through an activation function (like ReLU, sigmoid, or tanh).

- Outputs a value, which becomes an input for the next layer.

Diagram 2: Structure of an Artificial Neuron

Mathematical Representation

1. Expression 1 (Mathematical Notation):

y = f( ∑i=1->n wi·xi + b )

Where:
- xi = the i-th input feature
- wi = weight for input xi
- b = bias term
- f = activation function
- ∑i=1->n = summation over the n inputs
- y = output of the neuron

2. Expression 2 (Programming/Practical Style):

- output = activation( sum(wi * xi) + b )

- This is basically the same as above, but written in more readable, code-style pseudocode for clarity in presentations or programming contexts.
- "Activation" refers to the function f.
- "Output" refers to y.


Layers and Architecture


A neural network consists of multiple layers:

- Input Layer: Accepts the raw data (e.g., pixels, features).

- Hidden Layers: Where the magic happens (data transformations and learning).

- Output Layer: Produces the final result (e.g., a classification label or numeric prediction).

Diagram 3: Simple Neural Network Architecture

The number and organization of hidden layers determine whether a model is considered shallow (with only one or a few hidden layers) or deep (with many hidden layers).
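As a minimal sketch of such an architecture (my own example, not from the original post; the layer sizes are arbitrary), the weight matrices and bias vectors between consecutive layers could be set up with NumPy like this:

```python
import numpy as np

# Example architecture: 4 input features, two hidden layers of 8 neurons, 3 outputs
layer_sizes = [4, 8, 8, 3]

rng = np.random.default_rng(0)
# One weight matrix and one bias vector for each pair of consecutive layers
weights = [rng.standard_normal((n_in, n_out)) * 0.1
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

print([W.shape for W in weights])   # [(4, 8), (8, 8), (8, 3)]
```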

Forward Propagation (Data Flows Forward)


Forward propagation is the process of passing data through the network:

- The input data is passed to the input layer.

- Each hidden neuron calculates a weighted sum of its inputs plus a bias.
- An activation function transforms that sum.
- The result is passed on to the next layer.

This continues until the final layer produces a prediction. It's similar to a chain of mathematical transformations, where each step refines the data.
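A hedged NumPy sketch of this flow (my own illustration; for simplicity it applies ReLU at every layer, including the output) might look like the following:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def forward(x, weights, biases):
    # Pass the input through each layer: weighted sum + bias, then activation
    a = x
    for W, b in zip(weights, biases):
        z = a @ W + b     # weighted sum of inputs plus bias
        a = relu(z)       # activation function transforms the sum
    return a              # the final layer's output is the prediction

# Tiny example: 2 inputs -> 3 hidden neurons -> 1 output
rng = np.random.default_rng(1)
weights = [rng.standard_normal((2, 3)), rng.standard_normal((3, 1))]
biases = [np.zeros(3), np.zeros(1)]
print(forward(np.array([0.5, -1.2]), weights, biases))
```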


Backward Propagation (Backpropagation)


Backward propagation, better known as backpropagation, is the fundamental algorithm used to train neural networks. It's how a neural network refines its predictions by modifying the weights and biases in response to the discrepancy between the predicted and actual outcomes.

After the input is passed through the neural network, it generates an output. The output is then compared to the actual target, and the error is measured using a loss function, such as mean squared error or cross-entropy.

Backpropagation operates in the opposite direction, from the output layer back to the input layer, adjusting the weights to reduce the error.
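To make the idea concrete, here is a hedged, minimal sketch (not the post's own code) of backpropagation and gradient descent for a single linear neuron trained with mean squared error; the data and learning rate are arbitrary toy choices:

```python
import numpy as np

# Toy data generated from the rule y = 2*x1 - 3*x2 + 1
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + 1

w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(200):
    # Forward pass: predictions from the current weights and bias
    y_pred = X @ w + b
    error = y_pred - y
    loss = np.mean(error ** 2)            # mean squared error

    # Backward pass: gradients of the loss with respect to w and b
    grad_w = 2 * X.T @ error / len(y)
    grad_b = 2 * np.mean(error)

    # Update step: move the parameters against the gradient to reduce the error
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)   # should approach [2, -3] and 1
```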

Diagram 4: Forward and Backward Propagation Pipelines

Activation Functions
Activation functions add non-linearity to the model, allowing it to learn complex patterns.

Common types:
- ReLU (Rectified Linear Unit): f(z) = max(0, z)

- Sigmoid: f(z) = 1/(1 + e^(-z))

- Tanh: f(z) = tanh(z)

Without activation functions, a neural network is just a linear model.
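These three functions are simple enough to write directly; here is a small NumPy sketch (my own, not from the original post):

```python
import numpy as np

def relu(z):
    # ReLU: keeps positive values, clips negative values to zero
    return np.maximum(0, z)

def sigmoid(z):
    # Sigmoid: squashes any real value into the range (0, 1)
    return 1 / (1 + np.exp(-z))

def tanh(z):
    # Tanh: squashes any real value into the range (-1, 1)
    return np.tanh(z)

z = np.array([-2.0, 0.0, 2.0])
print(relu(z), sigmoid(z), tanh(z))
```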


Diagram 5: Activation Function Graphs

From Single Neuron to Full Network


A single neuron can only represent basic relationships; combining neurons into layers forms a robust learning system:

- Single Layer: Simple networks that address fundamental problems.

- Multiple Layers: Deep networks, which consist of many hidden layers, can effectively handle complex tasks such as image and speech recognition.

Training the Network (Learning Through Error)


Training modifies the weights and biases to reduce the discrepancies between predicted and actual outcomes; a small worked example follows the list below.

- Forward Propagation: Predict outcomes.

- Loss Calculation: Determine the extent to which predictions deviate from the actual labels.
- Backward Propagation: Compute gradients and adjust weights.
- Repeat: Loop through the data until the model is trained.
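Putting these four steps together, here is a hedged, self-contained toy example (my own sketch, not the post's code) that trains a tiny two-layer network with sigmoid activations on the XOR problem; exact results can vary with the random initialization:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# XOR data: the output is 1 only when exactly one input is 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1, b1 = rng.standard_normal((2, 4)), np.zeros((1, 4))
W2, b2 = rng.standard_normal((4, 1)), np.zeros((1, 1))
lr = 1.0

for epoch in range(10000):
    # 1. Forward propagation: predict outcomes
    h = sigmoid(X @ W1 + b1)
    y_pred = sigmoid(h @ W2 + b2)

    # 2. Loss calculation: mean squared error between prediction and label
    loss = np.mean((y_pred - y) ** 2)

    # 3. Backward propagation: compute gradients and adjust weights
    d_out = 2 * (y_pred - y) * y_pred * (1 - y_pred) / len(X)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)
    # 4. Repeat: loop over the data until the model is trained

print(np.round(y_pred, 2))   # should approach [[0], [1], [1], [0]]
```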

Applications of Neural Networks


Neural networks have dramatically changed many fields:

- Computer Vision: Object recognition, facial detection.

- Natural Language Processing: Chatbots, language translation.

- Healthcare: Medical image analysis, disease prediction.


- Finance: Fraud detection, stock market forecasting.

Diagram 6: Applications of Neural Networks

Conclusion
Neural networks, inspired by the human brain, serve as basic tools in the development of modern artificial intelligence, ranging from a single artificial neuron to complex layered architectures.

Understanding the basics, such as how neurons process inputs, how layers stack together, and how learning happens through forward and backward propagation, lays the foundation for exploring advanced topics like convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers.
