
Association Rules Overview

Association Rules Learning: Need and Application of Association Rules Learning explain from geeksforgeeks

Association Rule Learning is a key unsupervised machine learning technique used to uncover
interesting relationships between variables in large datasets. GeeksforGeeks provides
comprehensive insights into its necessity and applications.​

📌 Need for Association Rule Learning


Association Rule Learning is essential for:​

Discovering Hidden Patterns: It identifies frequent itemsets and uncovers hidden patterns within large datasets, which is crucial for understanding complex data structures.

Market Basket Analysis: Retailers use it to analyze customer purchasing habits by finding associations between different products bought together. For instance, discovering that customers who buy bread often buy butter as well.

Enhancing Decision Making: By revealing associations between variables, businesses can make informed decisions regarding product placements, inventory management, and promotional strategies.

🛠️ Applications of Association Rule Learning


Association Rule Learning has diverse applications across various domains:

Retail and E-commerce: Analyzing purchase patterns to optimize product placement and cross-selling strategies.

Web Usage Mining: Understanding user navigation patterns to improve website structure and user experience.

Healthcare: Identifying associations between symptoms and diseases to aid in diagnosis and treatment planning.

Bioinformatics: Discovering relationships between genes and biological functions, which is vital for research and development.

Intrusion Detection: Detecting unusual patterns that may indicate security breaches or unauthorized access.

These applications demonstrate the versatility of Association Rule Learning in extracting valuable insights from complex datasets.

For a more detailed exploration of Association Rule Learning, including its algorithms like
Apriori and ECLAT, you can refer to the comprehensive guide provided by GeeksforGeeks.


basic concepts of association rule mining

Sure! Here's a breakdown of the basic concepts of Association Rule Mining, which is a key
technique in data mining used to uncover interesting relationships or patterns among items
in large datasets—especially useful in market basket analysis.

🧠 Basic Concepts of Association Rule Mining


1. Itemset
A collection of one or more items.

Example: In a supermarket, {milk, bread, butter} is an itemset.

2. Frequent Itemset
An itemset that appears frequently in the dataset (transactions).

For example, if {milk, bread} appears in 60% of all transactions, it might be
considered a frequent itemset (based on the threshold).

3. Association Rule
A rule is an implication in the form of:

A⇒B

A and B are disjoint itemsets (no common items).

Example: {milk, bread} ⇒ {butter}

4. Support
Indicates how frequently the itemset appears in the dataset.

Formula: Support(A) = (Number of transactions containing A) / (Total transactions)

Higher support means the rule is more relevant statistically.

5. Confidence
Measures the likelihood of itemset B being purchased when A is purchased.

Formula: Confidence(A ⇒ B) = Support(A ∪ B) / Support(A)

6. Lift
Measures how much more likely B is to be bought when A is bought, compared to when
B is bought independently.

Formula: Lift(A ⇒ B) = Confidence(A ⇒ B) / Support(B)

If Lift > 1: Positive association


If Lift = 1: No association
If Lift < 1: Negative association

7. Apriori Principle
If an itemset is frequent, then all of its subsets must also be frequent.

Used to reduce the search space when finding frequent itemsets.
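To make these measures concrete, here is a minimal Python sketch that computes support, confidence, and lift for the rule {milk, bread} ⇒ {butter}. The toy transaction list is illustrative only and is not taken from the text above.

```python
# Toy data to illustrate support, confidence and lift (not from the original text).
transactions = [
    {"milk", "bread", "butter"},
    {"milk", "bread"},
    {"bread", "butter"},
    {"milk", "bread", "butter"},
    {"milk", "butter"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

A, B = {"milk", "bread"}, {"butter"}
sup_A = support(A, transactions)
sup_B = support(B, transactions)
sup_AB = support(A | B, transactions)

confidence = sup_AB / sup_A   # P(B | A)
lift = confidence / sup_B     # > 1 means a positive association

print(f"support(A∪B)={sup_AB:.2f}, confidence={confidence:.2f}, lift={lift:.2f}")
```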

These are the fundamental ideas that make up Association Rule Mining. Would you like to
see a real-life example or visualization of how this works?

yes but in hinglish

Sure! Let's now walk through Association Rule Mining in plain terms, with a simple and relatable example 😄

🧠 Basic Concepts of Association Rule Mining – In Plain Terms
1. Itemset (a group of items)
A group of one or more items.

Example: {bread, butter, milk} is an itemset, meaning a customer bought these three items together.

2. Frequent Itemset (items bought again and again)
Itemsets that appear frequently (again and again) across transactions.

Example: If 70% of customers bought {bread, butter}, it is a frequent itemset.

3. Association Rule (a rule that shows a relation or pattern)
This is a rule of the form:

A ⇒ B

Meaning: if a customer buys A, there is a good chance they will buy B as well.

Example: {bread} ⇒ {butter}

That is, among the people who bought bread, many also took butter.

4. Support (how often an itemset appears)
How often an itemset occurs out of all transactions.

Formula: Support = (Transactions containing A ∪ B) / (Total transactions)

Higher support = more important rule.

5. Confidence (the chance of B appearing along with A)
When A was bought, what was the chance that B was bought too?

Formula: Confidence = Support(A ∪ B) / Support(A)

For example: if the confidence of {bread} ⇒ {butter} is 80%, it means that out of 100 people who bought bread, 80 also bought butter.

6. Lift (how strong the relation actually is)
This checks whether the relation between A and B is stronger than what plain chance would give.

Formula: Lift = Confidence(A ⇒ B) / Support(B)

Lift > 1: Strong (positive) relation

Lift = 1: No relation

Lift < 1: Weak or negative relation

7. Apriori Principle (a shortcut for finding rules)
If a large itemset is frequent, then all of its smaller subsets are frequent too.

This lets us skip unnecessary combinations.

🛒 Example: A Shop Scenario

Suppose 100 people went shopping and:

60 people bought bread

50 people bought butter

40 people bought both bread and butter

Then:

Support = 40 / 100 = 0.40 (i.e. 40%)

Confidence (bread ⇒ butter) = 40 / 60 = 0.666 = 66.6%

Lift = 0.666 / 0.50 = 1.33 (positive relation)
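As a quick check of this arithmetic in Python, using the 100/60/50/40 figures from the example above:

```python
n_total, n_bread, n_butter, n_both = 100, 60, 50, 40

support = n_both / n_total                # 0.40
confidence = n_both / n_bread             # 0.666...
lift = confidence / (n_butter / n_total)  # 1.333...
print(round(support, 2), round(confidence, 3), round(lift, 2))
```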

If you like, I can also put together a small graphical example or show this through Python code. Just let me know!

define it in the form of a table

Sure! Below I have defined the basic concepts of Association Rule Mining in a simple, clear table format, with short explanations 👇

📊 Association Rule Mining – Concepts Table

Concept | Definition | Example
Itemset | A group of one or more items bought together | {milk, bread, butter}
Frequent Itemset | Itemsets that appear again and again in transactions | {milk, bread}, if 60% of customers bought it
Association Rule | A pattern or rule saying: if A was taken, B was also taken | {bread} ⇒ {butter}
Support | The percentage of all transactions in which an itemset appears | Support of {bread, butter} = 40%
Confidence | Given that A was bought, the chance that B was bought too | Confidence = 40 / 60 = 66.6%
Lift | How strong the actual relation is (compares confidence with B's normal support) | Lift = 0.666 / 0.50 = 1.33
Apriori Principle | If an itemset is frequent, all of its smaller subsets are frequent too, which makes the calculation easier | {milk, bread, butter} frequent → {milk, bread} will also be frequent

If you like, I can also create this table as a PDF or an image, or add an example graph on top of it. Just let me know!

explain apriori algorithm from geeks for geeks

Apriori Algorithm - GeeksforGeeks (geeksforgeeks.org)

The Apriori Algorithm is a foundational method in data mining used for discovering
frequent itemsets and generating association rules. Its significance lies in its ability to
identify relationships between items in large datasets, which is particularly valuable in
market basket analysis. ​

🔍 Key Concepts
Frequent Itemsets: Collections of items that appear together frequently in transactions.​

Support: The proportion of transactions that contain a particular itemset.​

Confidence: The likelihood that a transaction containing itemset A also contains itemset
B.​

Lift: The ratio of the observed support to that expected if A and B were independent.​

⚙️ How the Apriori Algorithm Works


1. Initialization: Identify all individual items in the dataset and determine their support.​

2. Prune Step: Eliminate items that do not meet the minimum support threshold.​

3. Join Step: Generate new candidate itemsets by combining the remaining items.​

4. Repeat: Repeat the prune and join steps for larger itemsets until no further frequent
itemsets are found.​

5. Generate Rules: From the frequent itemsets, generate association rules that meet the
minimum confidence threshold.​

🛒 Example
Consider the following transactions:

T1: {Milk, Bread, Butter}

T2: {Bread, Butter}

T3: {Milk, Bread}

T4: {Milk, Butter}

T5: {Bread, Butter}

If we set a minimum support threshold of 60%, the algorithm will identify itemsets like
{Bread, Butter} as frequent since they appear together in 3 out of 5 transactions (60%).​
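The join/prune passes described above can be sketched in a few lines of plain Python on these five transactions. This is a minimal illustration, not GeeksforGeeks' implementation:

```python
from itertools import combinations

transactions = [
    {"Milk", "Bread", "Butter"},
    {"Bread", "Butter"},
    {"Milk", "Bread"},
    {"Milk", "Butter"},
    {"Bread", "Butter"},
]
min_support = 0.6  # 60% threshold from the example

def support(itemset):
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

# Level 1: frequent single items
items = {i for t in transactions for i in t}
L1 = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}

# Level 2: join surviving items into pairs, then prune by support
candidates = {a | b for a, b in combinations(L1, 2)}
L2 = {c for c in candidates if support(c) >= min_support}

print(sorted(tuple(sorted(s)) for s in L2))
# With a 60% threshold this yields [('Bread', 'Butter')], matching the example.
```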

📈 Applications
Market Basket Analysis: Understanding customer purchasing patterns to optimize
product placement and promotions.​

Recommendation Systems: Suggesting products to customers based on their purchase history.

Healthcare: Identifying associations between symptoms and diagnoses.​

For a more detailed explanation and implementation, you can refer to the comprehensive
guide provided by GeeksforGeeks: Apriori Algorithm - GeeksforGeeks


in hinglish

Oops! My apologies: I can only open URLs that you provide directly, and the GeeksforGeeks Apriori Algorithm URL is not accessible right now.

But no worries! I'll explain the Apriori Algorithm for you right here, in very simple language 👇

🧠 Apriori Algorithm – A Simple Explanation
The Apriori algorithm is a frequent pattern mining technique used in cases like market basket analysis. Its main job is to answer:

"Which items are frequently being bought together?"

🔑 Main Concepts

Concept | Meaning
Itemset | A group of items bought together
Support | How often an item or itemset appears in the transactions
Confidence | Given that A was bought, whether B was bought as well
Lift | How much stronger the relation between A and B is than normal

⚙️ How the Apriori Algorithm Works

Step 1: Start with Single Items

Compute the support count of each item.

For example {Milk}, {Bread}, {Butter}.

Step 2: Apply the Support Threshold

Drop the items that appear fewer times than the minimum support.

Step 3: Generate Pair Combinations

Form 2-item combinations from the remaining items.

Example: {Milk, Bread}, {Milk, Butter}, {Bread, Butter}

Step 4: Repeat Step 2

Compute the support of these 2-item combinations as well.

Drop the ones below the minimum support.

Step 5: Make Bigger Sets

Now build 3-item combinations, but only from the items that survived the 2-item stage.

Step 6: Continue Until No More Frequent Sets Appear

Keep repeating the steps as long as new frequent itemsets keep turning up.

Step 7: Generate Association Rules

Build rules from the frequent itemsets you found.

For example {Bread} ⇒ {Butter} or {Milk, Bread} ⇒ {Butter}

🛒 Simple Example
5 transactions:

TID | Items
T1 | Milk, Bread, Butter
T2 | Bread, Butter
T3 | Milk, Bread
T4 | Milk, Butter
T5 | Bread, Butter

Support of {Bread} = 4/5 = 80%

{Bread, Butter} = 3/5 = 60% → frequent itemset

So the rule becomes:

{Bread} ⇒ {Butter}
Confidence = 3/4 = 75%
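If the mlxtend library is available (an assumption; the text above only mentions a Python implementation generically), the same five transactions can be run end to end with its standard apriori/association_rules interface. Check the documentation for your installed version:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["Milk", "Bread", "Butter"],
    ["Bread", "Butter"],
    ["Milk", "Bread"],
    ["Milk", "Butter"],
    ["Bread", "Butter"],
]

# One-hot encode the transactions into a boolean DataFrame.
te = TransactionEncoder()
onehot = te.fit(transactions).transform(transactions)
df = pd.DataFrame(onehot, columns=te.columns_)

# Frequent itemsets with at least 60% support, then rules with >= 70% confidence.
frequent = apriori(df, min_support=0.6, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```

This should list {Bread} ⇒ {Butter} with support 0.6 and confidence 0.75, matching the hand calculation above.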

🎯 Where Is It Used?
Supermarket/Online Store: Product recommendation

Healthcare: Relations between symptoms

Web Usage: Which pages people view together

If you like, I can also create a Python implementation of this algorithm or a flowchart/image for it. Just let me know!

Artificial Neural Network: Need and Application of Artificial Neural Network from geeks for geeks and in hinglish

Sure! Let's understand Artificial Neural Networks (ANN) based on the GeeksforGeeks content, covering their need and applications.

🧠 What Is an Artificial Neural Network (ANN)?

An Artificial Neural Network is a computational model inspired by the neurons of the human brain. It consists of interconnected units (neurons) arranged in layers: an input layer, hidden layers, and an output layer. ANNs are very effective at tasks such as pattern recognition, prediction, and decision-making.

❓ Why Do We Need ANNs?

Complex Data Handling: ANNs help capture complex and nonlinear data patterns that are hard for traditional algorithms.

Learning from Examples: These models learn from examples and can then make predictions on new data.

Adaptability: ANNs can adjust themselves as new data arrives, which improves their performance.

💼 Applications of ANN
According to GeeksforGeeks, some key applications of ANNs are:

1. Pattern Recognition: Recognising images, handwriting, and speech patterns.

2. Data Mining: Discovering hidden patterns and relationships in large datasets.

3. Medical Diagnosis: Predicting diseases from an analysis of symptoms.

4. Stock Market Prediction: Forecasting market trends by analysing financial data.

5. Fraud Detection: Spotting fraudulent activity by identifying unusual patterns.

If you like, I can also provide a simple Python implementation of an ANN or a flowchart. Let me know!


Neural network representation and working


Sure! Let's go through the representation and working of a neural network with a simple explanation.

🧠 Neural Network Representation

A neural network is a layered structure inspired by the neurons of the human brain. It has three main kinds of layers:

1. Input Layer: This layer receives the external data.

2. Hidden Layers: These layers perform complex computations on the input data.

3. Output Layer: This layer gives the final prediction or decision.

Each layer has multiple neurons, connected to one another through connections. Every connection has a weight that defines the strength of the signal.

⚙️ How Does a Neural Network Work?

1. Data Input: The input layer receives the data.

2. Weighted Sum Calculation: Each neuron calculates a weighted sum of its inputs.

3. Activation Function: This function decides whether the neuron activates or not.

4. Output Generation: The final output layer gives the prediction or decision.

A neural network learns through backpropagation, in which the weights are adjusted to minimise the errors.
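A minimal single-neuron sketch of steps 1-4 in Python; the weights, bias, and inputs are made-up illustrative values:

```python
import math

# Step 1: inputs arriving at the neuron (illustrative values)
x = [0.5, 0.2, 0.1]
w = [0.4, 0.7, -0.3]   # one weight per connection
b = 0.1                # bias

# Step 2: weighted sum  z = w1*x1 + w2*x2 + w3*x3 + b
z = sum(wi * xi for wi, xi in zip(w, x)) + b

# Step 3: activation function (sigmoid squashes z into the range 0..1)
output = 1 / (1 + math.exp(-z))

# Step 4: the output is passed on to the next layer (or used as the prediction)
print(round(z, 3), round(output, 3))
```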

If you like, I can also provide a Python implementation of this process or a flowchart. Let me know!


elaborate

My apologies: I can only access URLs that you provide yourself. But don't worry! I'll explain the representation and working of a neural network in detail right here. 👇

🧠 Neural Network Representation – In Detail

A neural network is a graph-based structure inspired by the neurons of the human brain. It has three main parts:

1. Input Layer
This is the first layer of the network.

It accepts real-world data (such as images, numbers, text).

Each input neuron represents one feature.

Example: For an image, the input layer takes the value of each pixel.

2. Hidden Layers
After the input come the hidden layers, where the processing happens.

These layers perform calculations through their neurons.

More complex problems use multiple hidden layers (deep neural networks).

Each neuron is connected to the input neurons and takes their weighted sum.

3. Output Layer
The final prediction or decision comes from here.

Example: For a digit recognition model, the output layer gives a prediction from 0 to 9.

⚙️ Neural Network Working – Step by Step

🔹 Step 1: Data Input
The input layer accepts each feature (value).

For example {x1, x2, x3, ...}

🔹 Step 2: Weighted Sum

Each hidden neuron calculates:

z = w1·x1 + w2·x2 + ... + b

Here w = weights, x = inputs, b = bias

🔹 Step 3: Activation Function

After z, a function is applied (such as ReLU or sigmoid).

It decides whether the neuron activates or not.

It adds non-linearity to the model.

🔹 Step 4: Output Prediction

The output of the last layer is the final result.

In a classification problem: probability values (e.g. 80% cat, 20% dog)

In regression: a numeric value (e.g. 45.2)

🔹 Step 5: Loss Calculation

We compute the difference between the actual output and the predicted output.

The function that measures this is called the loss function.

🔹 Step 6: Backpropagation
The error is propagated backwards and the weights are updated.

The goal is to keep reducing the error.

Algorithms such as gradient descent are used for the optimisation.
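A compact NumPy sketch of steps 1-6 for a single neuron with a sigmoid activation and a squared-error loss. All numbers are illustrative and this is not the exact network described above:

```python
import numpy as np

x = np.array([0.5, 0.2, 0.1])      # Step 1: input features (illustrative)
w = np.array([0.4, 0.7, -0.3])     # weights (illustrative)
b, y_true, lr = 0.1, 1.0, 0.5      # bias, target output, learning rate

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(3):
    z = w @ x + b                               # Step 2: weighted sum
    y_pred = sigmoid(z)                         # Steps 3-4: activation -> prediction
    loss = 0.5 * (y_pred - y_true) ** 2         # Step 5: squared-error loss
    # Step 6: backpropagation for this single neuron (chain rule)
    dloss_dz = (y_pred - y_true) * y_pred * (1 - y_pred)
    w -= lr * dloss_dz * x                      # gradient-descent update of the weights
    b -= lr * dloss_dz                          # ... and of the bias
    print(f"epoch {epoch}: prediction={y_pred:.3f}, loss={loss:.4f}")
```

Running it shows the prediction drifting toward the target and the loss shrinking, which is exactly what the backpropagation step is meant to achieve.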

🧪 A Small Example
Suppose a neural network is recognising the digit "5":

Input layer: 784 neurons (28x28 pixel image)

Hidden layer: 128 neurons (processing)

Output layer: 10 neurons (digits 0 to 9)

If the image is of a "5":

The network takes the input

The hidden layer processes it

The output layer reports:

[0.01, 0.02, 0.05, 0.03, 0.01, 0.91, 0.02, 0.01, 0.02, 0.01]

Max value 0.91 → Prediction: 5
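Picking the predicted digit from such an output vector is just an argmax; for example with NumPy (assuming it is available):

```python
import numpy as np

probs = np.array([0.01, 0.02, 0.05, 0.03, 0.01, 0.91, 0.02, 0.01, 0.02, 0.01])
print(int(np.argmax(probs)))  # index of the largest value -> 5
```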

If you like, I can also prepare a diagram, a code implementation, or a PDF version of this explanation. Just let me know!

Activation Function


Sure! Let's understand activation functions in detail: their role, their types, and how they work in neural networks.

🔍 What Is an Activation Function?
In neural networks, an activation function is a mathematical operation that transforms the output of each neuron. It decides whether the neuron activates or not, and whether its output is passed to the next layer.

The main role of activation functions is to introduce non-linearity, so that the neural network can capture complex patterns and relationships.

⚙️ How Does It Work?

1. Input Processing: Each neuron calculates a weighted sum of its inputs.

2. Activation Function Application: This weighted sum is passed through the activation function.

3. Output Generation: The result of the activation function becomes the neuron's output, which is passed on to the next layer.

🧠 Common Activation Functions

1. Sigmoid Function

Formula: f(x) = 1 / (1 + e^(-x))

Range: 0 to 1

Use Case: Binary classification problems

Pros: Smooth gradient, outputs interpretable as probabilities

Cons: Vanishing gradient problem

2. Tanh (Hyperbolic Tangent) Function

Formula: f(x) = tanh(x)

Range: -1 to 1

Use Case: Hidden layers

Pros: Zero-centered output

Cons: Vanishing gradient problem

3. ReLU (Rectified Linear Unit)

Formula: f(x) = max(0, x)

Range: 0 to ∞

Use Case: Hidden layers in deep learning models

Pros: Computationally efficient, mitigates vanishing gradient

Cons: Dying ReLU problem (neurons can become inactive)

4. Leaky ReLU

Formula: f(x) = x if x > 0, αx if x ≤ 0

Range: -∞ to ∞

Use Case: Alternative to ReLU to prevent dying neurons

Pros: Allows a small gradient when x < 0

Cons: The choice of α can be arbitrary

5. Softmax Function

Formula: f(x_i) = e^(x_i) / Σ_j e^(x_j)

Range: 0 to 1 (outputs sum to 1)

Use Case: Multi-class classification problems

Pros: Provides a probability distribution over classes

Cons: Computationally intensive for a large number of classes
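A short NumPy sketch of these five functions. These are the straightforward textbook definitions; the max-subtraction in softmax is a common numerical-stability detail, not something stated above:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    e = np.exp(x - np.max(x))   # subtract the max for numerical stability
    return e / e.sum()

z = np.array([-2.0, 0.0, 3.0])
print(sigmoid(z), relu(z), softmax(z), sep="\n")
# The softmax output sums to 1, matching the "outputs sum to 1" property above.
```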

📊 Comparison Table

Activation Function | Range | Use Case | Pros | Cons
Sigmoid | 0 to 1 | Binary classification | Smooth gradient | Vanishing gradient
Tanh | -1 to 1 | Hidden layers | Zero-centered output | Vanishing gradient
ReLU | 0 to ∞ | Hidden layers in deep networks | Computationally efficient | Dying ReLU problem
Leaky ReLU | -∞ to ∞ | Alternative to ReLU | Prevents dying neurons | α value selection is arbitrary
Softmax | 0 to 1 | Multi-class classification | Probability distribution over classes | Computationally intensive

📚 Additional Resources
Activation functions in Neural Networks - GeeksforGeeks

Understanding Activation Functions in Depth - GeeksforGeeks

Choosing the Right Activation Function for Your Neural Network - GeeksforGeeks

If you like, I can also provide a Python implementation of these activation functions or a visual diagram. Let me know!

