ADDA Notes
Uploaded by Kundan Bharti

Algorithm:

An algorithm is a set of instructions or a step-by-step procedure for solving a problem or
accomplishing a task. It is a well-defined computational procedure that takes some value, or
set of values, as input and produces some output. Algorithms are used extensively in
computer science and various fields of engineering, mathematics, and everyday life.
Characteristics:
1. Well-defined
Algorithms must have clear, unambiguous instructions. Each step must be precisely specified.
2. Finite
Algorithms must have a finite number of steps. They cannot go on indefinitely.
3. Effective
Algorithms must be effective in solving the problem they are designed for. This means they
should produce the correct output for any valid input and should do so in a reasonable amount
of time.
4. Deterministic
Given the same input, an algorithm should always produce the same output.
5. Generalizable
Algorithms should be applicable to a range of inputs, not just specific cases.
6. Optimizable
Algorithms can often be optimized for efficiency, reducing the time or resources required to
solve a problem.

Analysis of Algorithms
Analysis of algorithms is a fundamental concept in computer science that involves studying
the performance characteristics of algorithms. This analysis aims to understand how an
algorithm behaves in terms of its time complexity, space complexity, and other relevant
metrics. The goal is to predict and compare the efficiency of different algorithms for solving
the same problem.
Here are some key aspects of analyzing algorithms:
Time complexity
Time complexity refers to the amount of time an algorithm takes to complete as a function of
the size of its input. It is often expressed using big O notation, which describes the upper
bound on the algorithm's running time in terms of the input size. Analyzing time complexity
helps in understanding how the algorithm's performance scales with increasing input size.
Space Complexity: Space complexity refers to the amount of memory an algorithm requires
to execute as a function of the input size. It also uses big O notation to describe the upper
bound on the algorithm's memory usage. Analyzing space complexity helps in evaluating the
algorithm's memory requirements and potential limitations.
Worst-case, Best-case, and Average-case Analysis: Algorithms can perform differently
depending on the characteristics of the input data. Analyzing the worst-case, best-case, and
average-case scenarios helps in understanding the algorithm's behavior under different
conditions. While worst-case analysis provides an upper bound on performance, best-case
and average-case analysis give insights into the algorithm's typical behavior.
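To make the three cases concrete, here is a small sketch (our own illustration, not part of the original notes) using linear search, whose best case is O(1) and whose worst case is O(n):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

data = [7, 3, 9, 1, 5]
# Best case: the target is the first element -> 1 comparison, O(1).
print(linear_search(data, 7))   # 0
# Worst case: the target is absent -> n comparisons, O(n).
print(linear_search(data, 4))   # -1
```

The average case falls between the two: for a target equally likely to be at any position, about n/2 comparisons are made, which is still O(n).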
Algorithm Design Goals
The three basic design goals that one should strive for in a program are:
1. Try to save time
2. Try to save space
3. Try to save face (i.e., make the program robust, so it does not fail or produce wrong results)

Space-Time Trade-off

Space-time trade-off is a concept in computer science and algorithm design that involves
making a decision between the usage of space (memory) and time (computation) to achieve a
desired outcome or optimize the performance of an algorithm.
Here's how it works:
1. Space (Memory) Consideration: Some algorithms can be optimized to use less memory
by recomputing values rather than storing intermediate results or data structures. However,
reducing memory usage might lead to an increase in the time required to execute the algorithm.
2. Time (Computation) Consideration: Conversely, optimizing an algorithm for faster
execution may require the use of additional memory. For example, caching previously
computed results to avoid redundant calculations can speed up execution but may increase
memory usage.
3. Trade-off Analysis: When designing algorithms, developers must consider the trade-off
between space and time. By analyzing the requirements of the problem, they can decide
whether to prioritize minimizing memory usage, minimizing execution time, or finding a
balance between the two.
4. Examples:
- In dynamic programming, storing previously computed results (memoization) can reduce
computation time but increases memory usage.
- In data compression algorithms, using more memory for compression tables can reduce
the time required to compress or decompress data.
- In database systems, indexing techniques trade off space to store indexes against the time
required to search and retrieve data efficiently.
5. Optimization: Depending on the specific requirements of the problem and the constraints
of the computing environment, developers may choose different trade-offs. Optimization
techniques aim to find the most efficient balance between space and time, often by analyzing
the characteristics of the problem, benchmarking different implementations, or using
profiling tools to identify bottlenecks.

6. Real-world Applications: Space-time trade-offs are prevalent in various fields of
computer science, including algorithms, data structures, database systems, and optimization
problems. By carefully considering these trade-offs, developers can design efficient solutions
that meet the performance requirements of their applications while optimizing resource
usage.
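As a small illustrative sketch of the trade-off (the function names are our own), the Fibonacci computation below spends O(n) extra memory on a cache to cut exponential time down to linear — the memoization example mentioned above:

```python
from functools import lru_cache

def fib_plain(n):
    # No extra memory, but exponential time: subproblems are recomputed.
    if n < 2:
        return n
    return fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # O(n) extra memory for the cache buys linear time.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))  # 832040, each subproblem computed only once
```

Calling `fib_plain(30)` performs over a million recursive calls, while `fib_memo(30)` performs only 31 distinct ones — the classic time saving bought with space.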
Asymptotic Analysis
Asymptotic analysis studies an algorithm's performance with respect to its input size. We
consider the worst-case, average-case, and best-case scenarios; most commonly, the worst
case is analyzed.
Asymptotic Notations
A few standard notations are used to express an algorithm's complexity:
 Big O Notation (O):
Big O notation defines an upper bound on a function. It tells us the maximum rate at which
an algorithm's running time can grow as the input size increases.
Example: O(n²) means the algorithm's running time is at most proportional to n² for
large n.

 Omega Notation (Ω):
Omega notation defines a lower bound on a function. It tells us the minimum rate at which
an algorithm's running time will grow as the input size increases.
Example: Ω(n) means the algorithm's running time is at least proportional to n for
large n.
 Theta Notation (Θ):
Theta notation defines a tight bound on a function. It tells us the exact rate at which an
algorithm's running time grows as the input size increases.
Example: Θ(n log n) means the algorithm's running time is proportional to n log n, both
above and below, for large n.
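A quick way to see these growth rates concretely (an illustrative sketch of ours, not from the notes) is to count the basic operations performed by a single loop versus a nested pair of loops:

```python
def count_ops_linear(n):
    ops = 0
    for _ in range(n):      # body runs n times -> Theta(n)
        ops += 1
    return ops

def count_ops_quadratic(n):
    ops = 0
    for _ in range(n):
        for _ in range(n):  # inner body runs n times per outer pass -> Theta(n^2)
            ops += 1
    return ops

print(count_ops_linear(100))     # 100
print(count_ops_quadratic(100))  # 10000
```

Doubling n doubles the first count but quadruples the second, which is exactly what the notations Θ(n) and Θ(n²) predict.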

Divide and Conquer

Divide and conquer is a popular algorithmic paradigm that involves
breaking down a problem into smaller, more manageable subproblems, solving each
subproblem independently, and then combining the solutions to the subproblems to solve the
original problem. This approach typically follows three steps: divide, conquer, and combine.
Here's how it applies to some classic algorithms:
1. Max-Min Problem:
- Divide: Divide the array of numbers into two halves.
- Conquer: Recursively find the maximum and minimum elements in each half.
- Combine: Compare the maximum and minimum elements from the two halves to find the
overall maximum and minimum of the entire array.
2. Merge Sort:
- Divide: Divide the array into two halves.
- Conquer: Recursively sort each half of the array using merge sort.
- Combine: Merge the sorted halves to produce the final sorted array.

3. Binary Search:
- Divide: Divide the sorted array into two halves.
- Conquer: Compare the target value with the middle element of the array. If the target
value is equal to the middle element, the search is successful. If the target value is less than
the middle element, recursively search the left half of the array. If the target value is greater
than the middle element, recursively search the right half of the array.
- Combine: This step isn't explicitly required in binary search, as the search terminates
once the target element is found or when the search space is reduced to an empty array.
In each of these algorithms, the problem is divided into smaller subproblems, which are then
solved independently. The solutions to these subproblems are then combined to solve the
original problem. This divide-and-conquer strategy often leads to efficient algorithms for a
wide range of problems.
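The Max-Min steps described above can be sketched in Python (an illustration of ours; `max_min` is a hypothetical helper name). Splitting in half and comparing only across the combine step is what reduces the comparison count versus a naive scan:

```python
def max_min(arr, lo, hi):
    """Return (maximum, minimum) of arr[lo..hi] by divide and conquer."""
    if lo == hi:                      # one element: it is both max and min
        return arr[lo], arr[lo]
    if hi == lo + 1:                  # two elements: a single comparison
        return (arr[hi], arr[lo]) if arr[lo] < arr[hi] else (arr[lo], arr[hi])
    mid = (lo + hi) // 2
    max1, min1 = max_min(arr, lo, mid)        # conquer the left half
    max2, min2 = max_min(arr, mid + 1, hi)    # conquer the right half
    return max(max1, max2), min(min1, min2)   # combine the two halves

arr = [12, 3, 45, 7, 19, 2]
print(max_min(arr, 0, len(arr) - 1))  # (45, 2)
```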

Examples: Binary Search, Merge Sort, and Quick Sort

### Binary Search
def binary_search(arr, target):
    l, r = 0, len(arr) - 1
    while l <= r:
        mid = (l + r) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            l = mid + 1
        else:
            r = mid - 1
    return -1

# Example usage:
print(binary_search([1, 2, 3, 4, 5, 6, 7, 8, 9], 5))  # Output: 4
### Merge Sort
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        L, R = arr[:mid], arr[mid:]
        merge_sort(L)
        merge_sort(R)
        i = j = k = 0
        # Merge the two sorted halves back into arr
        while i < len(L) and j < len(R):
            if L[i] < R[j]:
                arr[k] = L[i]
                i += 1
            else:
                arr[k] = R[j]
                j += 1
            k += 1
        arr[k:] = L[i:] + R[j:]  # Copy any leftover elements

# Example usage:
arr = [38, 27, 43, 3, 9, 82, 10]
merge_sort(arr)
print(arr)  # Output: [3, 9, 10, 27, 38, 43, 82]
### Quick Sort
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    L = [x for x in arr if x < pivot]
    M = [x for x in arr if x == pivot]
    R = [x for x in arr if x > pivot]
    return quick_sort(L) + M + quick_sort(R)

# Example usage:
print(quick_sort([3, 6, 8, 10, 1, 2, 1]))  # Output: [1, 1, 2, 3, 6, 8, 10]
UNIT – 2
Greedy Strategy
The greedy method is a strategy, used alongside paradigms such as Divide and Conquer, for
solving problems. It is used for optimization problems, i.e., problems in which we want either
a maximum or a minimum result. Let us understand it through a few terms.
The greedy method is the simplest and most straightforward approach. It is not a single
algorithm but a technique. Its defining feature is that decisions are made on the basis of the
currently available information, without worrying about what effect those decisions may have
in the future.
This technique basically determines a feasible solution, which may or may not be optimal. A
feasible solution is a subset that satisfies the given criteria. The optimal solution is the best,
most favorable of those solutions. If more than one solution satisfies the given criteria, all of
them are considered feasible solutions, while the optimal solution is the best solution among
them.

### Features of the Greedy Method:

- Simple Decision Making: At each step, whatever choice looks best is selected.
- Local Optimum: Each decision is based on a local optimum, i.e., the choice that looks
best at that particular moment.
- No Future Concerns: No thought is given to future consequences; the focus is only on
the current information.

### Example Problem: Activity Selection

Suppose there are multiple activities, each with a given start and finish time, and only one
activity can take place at a time. The goal is to select the maximum number of activities that
can be attended.
Greedy Approach: At each step, choose the activity that finishes earliest, since this leaves
the most time for the remaining activities.
def activity_selection(start, end):
    # Assumes activities are sorted by finish time
    n = len(start)
    selected_activities = [0]  # Always select the first activity
    last_selected = 0

    for i in range(1, n):
        if start[i] >= end[last_selected]:
            selected_activities.append(i)
            last_selected = i

    return selected_activities

start_times = [1, 3, 0, 5, 8, 5]
end_times = [2, 4, 6, 7, 9, 9]
print(activity_selection(start_times, end_times))  # Output: [0, 1, 3, 4]

Advantages:
- Simple to implement
- Fast and efficient for many problems
Disadvantages:
- Does not give the optimal solution for every problem
- The problem's properties must be checked before applying the greedy strategy

Optimal Merge Patterns

The optimal merge patterns problem is a classic example of the greedy strategy, used to
merge sorted files at minimum cost. The problem arises when we have to combine multiple
sorted files into a single sorted file while minimizing the cost of each merge step.
Problem Statement:
We are given multiple sorted files and the size of each file. We have to merge these files so
that the total merge cost is minimum. The cost of a merge is defined as the sum of the sizes
of the files being merged.
Greedy Approach:
The greedy strategy suggests that at each step we merge the two files of smallest size, since
this keeps the immediate cost minimal and can also keep the cost of future merges low.

### Steps:
1. Initialize a Min-Heap: Start with a min-heap into which the sizes of all files are
inserted.
2. Merge the Two Smallest Files: Extract the two smallest elements (sizes) from the heap,
merge them, and insert the combined size back into the heap.
3. Repeat Until One File Remains: Repeat step 2 until only one element (the combined
size of all files) remains in the heap.
### Example:
Suppose we are given 5 files with sizes: [20, 30, 10, 5, 30]
Step-by-Step Execution:
1. Initial Min-Heap:
Heap: [5, 20, 10, 30, 30]
2. First Merge:
- Merge sizes 5 and 10.
- Cost = 5 + 10 = 15
- Insert combined size 15 back into the heap.
Heap: [15, 20, 30, 30]
Total cost: 15
3. Second Merge:
- Merge sizes 15 and 20.
- Cost = 15 + 20 = 35
- Insert combined size 35 back into the heap.
Heap: [30, 30, 35]
Total cost: 15 + 35 = 50
4. Third Merge:
- Merge sizes 30 and 30.
- Cost = 30 + 30 = 60
- Insert combined size 60 back into the heap.
Heap: [35, 60]
Total cost: 50 + 60 = 110
5. Fourth Merge:
- Merge sizes 35 and 60.
- Cost = 35 + 60 = 95
- Insert combined size 95 back into the heap.
Heap: [95]
Total cost: 110 + 95 = 205
Final total merge cost = 205

### Python Implementation:

import heapq

def optimal_merge(files):
    heapq.heapify(files)
    total_cost = 0

    while len(files) > 1:
        first = heapq.heappop(files)
        second = heapq.heappop(files)
        cost = first + second
        total_cost += cost
        heapq.heappush(files, cost)

    return total_cost

files = [20, 30, 10, 5, 30]
print(optimal_merge(files))  # Output: 205

Huffman Coding

Huffman Coding is a popular greedy algorithm used for data compression. It generates
variable-length prefix codes so that frequently occurring characters are assigned shorter
codes and rarely occurring characters are assigned longer codes. In this way, the overall
data size is reduced.

Problem Statement:

We are given a set of characters and their frequencies. We have to generate prefix-free
binary codes for these characters so that the total size of the encoded data is minimum.

Greedy Strategy:

In Huffman Coding, the greedy strategy is applied as follows:

1. Treat each character as a node carrying its frequency.
2. At each step, merge the two nodes with the lowest frequencies into a new internal
node whose frequency is the sum of the two merged nodes' frequencies.
3. Repeat this step until only one node (the root of the Huffman Tree) remains.

Step-by-Step Explanation:

1. Initialize Priority Queue: Insert all characters and their frequencies into a priority
queue (min-heap).
2. Build Huffman Tree: At each step, extract the two nodes with the lowest frequencies
and merge them into a new internal node whose frequency is the sum of the two; then
insert this new node back into the priority queue.
3. Generate Codes: Generate prefix codes from the Huffman Tree by assigning "0" to
each left edge and "1" to each right edge.
Example:
Suppose we are given the following characters and their frequencies:
Character Frequency
a 5
b 9
c 12
d 13
e 16
f 45

Step-by-Step Execution:

1. Initial Priority Queue:

[(5, 'a'), (9, 'b'), (12, 'c'), (13, 'd'), (16, 'e'), (45, 'f')]

2. First Merge:
o Merge nodes with frequencies 5 ('a') and 9 ('b').
o Create new node with frequency 14.

[(12, 'c'), (13, 'd'), (14, None), (16, 'e'), (45, 'f')]

3. Second Merge:
o Merge nodes with frequencies 12 ('c') and 13 ('d').
o Create new node with frequency 25.

[(14, None), (16, 'e'), (25, None), (45, 'f')]

4. Third Merge:
o Merge nodes with frequencies 14 and 16 ('e').
o Create new node with frequency 30.

[(25, None), (30, None), (45, 'f')]

5. Fourth Merge:
o Merge nodes with frequencies 25 and 30.
o Create new node with frequency 55.

[(45, 'f'), (55, None)]

6. Final Merge:
o Merge nodes with frequencies 45 ('f') and 55.
o Create new root node with frequency 100.

[(100, None)]
After the Huffman Tree is built, the codes are generated:

Character Code
a 1100
b 1101
c 100
d 101
e 111
f 0
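The procedure can be sketched in Python with `heapq` (a hedged illustration; `huffman_codes` and its internals are our own names, not from the notes). A unique tie-breaker is stored in each heap entry so that tuple comparison stays well defined. On the frequencies above, it reproduces the code table exactly:

```python
import heapq

def huffman_codes(freqs):
    """Build prefix codes from a {character: frequency} map."""
    # Each heap entry: (frequency, tie_breaker, tree). Leaves are single
    # characters; internal nodes are (left, right) pairs.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    codes = {}
    def walk(node, code):
        if isinstance(node, str):            # leaf: record its code
            codes[node] = code or "0"
        else:                                # internal node: 0 left, 1 right
            walk(node[0], code + "0")
            walk(node[1], code + "1")
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({'a': 5, 'b': 9, 'c': 12, 'd': 13, 'e': 16, 'f': 45})
print(codes)
# {'a': '1100', 'b': '1101', 'c': '100', 'd': '101', 'e': '111', 'f': '0'}
```

Note that Huffman codes are not unique — swapping the 0/1 labels on any internal node gives an equally optimal code — but the code lengths (and hence the compressed size) are the same for every optimal tree.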

Minimum Spanning Trees

A Minimum Spanning Tree (MST) is a tree that connects all the vertices of a connected graph
without any cycles and with the minimum possible total edge weight. Several algorithms
exist for finding an MST, of which Prim's Algorithm and Kruskal's Algorithm are the most
popular.
Minimum Spanning Tree (MST) Definitions:
- Connected Graph: A graph in which a path exists from every vertex to every other vertex.
- Tree: A connected acyclic graph.
- Spanning Tree: A subgraph of a graph that includes all the vertices of the original
graph.
- Minimum Spanning Tree: A spanning tree whose total edge weight is minimum.

### Prim's Algorithm:

Prim's Algorithm is a greedy approach for finding an MST. At each step, it adds a new
vertex to the MST set via a minimum-weight edge.
Steps of Prim's Algorithm:
1. Choose a Starting Vertex: Pick any vertex as the start.
2. Select the Minimum-Weight Edge: Among the edges connected to the vertices
already in the MST, select the minimum-weight edge leading to a vertex not yet in the MST.
3. Add to the MST Set: Add the vertex at the other end of the selected edge to the MST set.
4. Repeat Until the MST is Complete: Repeat steps 2 and 3 until the MST is
complete.
#### Example:
Consider a graph with the following vertices and edges:
Vertices: A, B, C, D, E
Edges:
(A, B, 4), (A, C, 3), (B, C, 6), (B, D, 2), (B, E, 3), (C, D, 1), (D, E, 5)

Step-by-Step Execution:
1. Choose a Starting Vertex: We can start from any vertex; let us choose A.
2. Select the Minimum-Weight Edge:
- Among the edges connected to A, select the minimum-weight edge (A, C, 3).
- MST Set: (A, C)
3. Add to the MST Set:
- Among the edges adjacent to C, select the minimum-weight edge (C, D, 1).
- MST Set: (A, C, D)
4. Repeat:
- Among the edges adjacent to D, select the minimum-weight edge (D, B, 2).
- MST Set: (A, C, D, B)
5. Final MST:
- Among the remaining edges, select the minimum-weight edge (B, E, 3).
- MST Set: (A, C, D, B, E)
The final MST is: (A, C), (C, D), (D, B), (B, E) with total weight 3 + 1 + 2 + 3
= 9.
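The walkthrough above can be sketched in Python using a min-heap of frontier edges (our own illustrative implementation; the adjacency-list layout is an assumption, not from the notes):

```python
import heapq

def prim_mst(graph, start):
    """graph: {vertex: [(weight, neighbour), ...]}; returns (edges, total)."""
    visited = {start}
    frontier = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(frontier)
    mst_edges, total = [], 0
    while frontier and len(visited) < len(graph):
        w, u, v = heapq.heappop(frontier)   # cheapest edge out of the MST set
        if v in visited:
            continue                        # would form a cycle, skip it
        visited.add(v)
        mst_edges.append((u, v, w))
        total += w
        for w2, nxt in graph[v]:            # grow the frontier from v
            if nxt not in visited:
                heapq.heappush(frontier, (w2, v, nxt))
    return mst_edges, total

graph = {
    'A': [(4, 'B'), (3, 'C')],
    'B': [(4, 'A'), (6, 'C'), (2, 'D'), (3, 'E')],
    'C': [(3, 'A'), (6, 'B'), (1, 'D')],
    'D': [(2, 'B'), (1, 'C'), (5, 'E')],
    'E': [(3, 'B'), (5, 'D')],
}
print(prim_mst(graph, 'A'))  # edges (A,C), (C,D), (D,B), (B,E); total 9
```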

Kruskal's Algorithm:
Kruskal's Algorithm is also used to find an MST, but it works by sorting the edges first.
Steps of Kruskal's Algorithm:
1. Sort Edges by Weight: Sort all the edges of the graph by weight.
2. Select the Minimum-Weight Edge: From the sorted edges, select the minimum-weight
edge, provided it does not create a cycle.
3. Add to the MST Set: Add the selected edge to the MST set if no cycle is formed.
4. Repeat Until the MST is Complete: Repeat steps 2 and 3 until the MST is
complete.

Example:
Consider a graph with the following vertices and edges:
Vertices: A, B, C, D, E
Edges:
(A, B, 4), (A, C, 3), (B, C, 6), (B, D, 2), (B, E, 3), (C, D, 1), (D, E, 5)

Step-by-Step Execution:
1. Sort Edges by Weight:
Sorted Edges: (C, D, 1), (B, D, 2), (B, E, 3), (A, C, 3), (A, B, 4), (D, E, 5), (B, C, 6)
2. Select the Minimum-Weight Edge:
- Select (C, D, 1).
- MST Set: (C, D)
3. Add to the MST Set:
- Select (B, D, 2).
- MST Set: (C, D, B)
4. Repeat:
- Select (B, E, 3).
- MST Set: (C, D, B, E)
5. Final MST:
- Select (A, C, 3).
- MST Set: (A, C, D, B, E)
The final MST is: (C, D), (B, D), (B, E), (A, C) with total weight 1 + 2 + 3 + 3 = 9.
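Kruskal's algorithm can be sketched with a simple union-find structure for cycle detection (an illustrative version of ours; `kruskal_mst` and `find` are our own names):

```python
def kruskal_mst(vertices, edges):
    """edges: list of (u, v, weight); returns (mst edges, total weight)."""
    parent = {v: v for v in vertices}   # each vertex starts in its own set

    def find(x):
        # Find the set representative, compressing the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):   # edges by weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # different components: no cycle
            parent[ru] = rv             # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total

vertices = ['A', 'B', 'C', 'D', 'E']
edges = [('A', 'B', 4), ('A', 'C', 3), ('B', 'C', 6), ('B', 'D', 2),
         ('B', 'E', 3), ('C', 'D', 1), ('D', 'E', 5)]
print(kruskal_mst(vertices, edges))  # 4 edges, total weight 9
```

The sort makes Kruskal's algorithm O(E log E); the union-find check keeps each cycle test nearly constant time.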
Knapsack Problem

The knapsack problem is an optimization problem that can be solved using the greedy
strategy. In this problem we are given a knapsack (bag) of fixed capacity into which items
must be placed. Each item has an associated weight and value, and we must select a subset
of the items to put in the knapsack so that the total value is maximized without exceeding the
knapsack's capacity.

Types of Knapsack Problems:

1. 0/1 Knapsack Problem: Each item is either taken whole or not at all.
2. Fractional Knapsack Problem: Items may be taken fractionally (in partial amounts)
so that the total value is maximized.

Greedy Approach for the Fractional Knapsack Problem:

For the fractional knapsack problem, the greedy approach yields an optimal solution. It
works as follows:

1. Calculate Value per Unit Weight: For each item, compute its value per unit weight,
i.e., value / weight.
2. Sort Items: Sort the items by value per unit weight in descending order, so that the
items with the highest value per unit of weight come first.
3. Add Items to the Knapsack: Go through the items in sorted order and add them to
the knapsack. If an item fits within the remaining capacity, add it whole; otherwise, add
just the fraction of it that fills the remaining capacity.

Example:

Consider a knapsack with a capacity of 50 units, and the following item weights and
values:
Item Weight Value
A 10 60
B 20 100
C 30 120

Step-by-Step Execution:

1. Calculate Value per Unit Weight:

A: 60 / 10 = 6
B: 100 / 20 = 5
C: 120 / 30 = 4

2. Sort Items by Value per Unit Weight (Descending):

A: 6, B: 5, C: 4
Sorted Order: A, B, C
3. Add Items to the Knapsack:
o Start with item A (weight 10, value 60). Add full item A to the knapsack.
o Knapsack Capacity: 50 - 10 = 40 units remaining.
o Next, add item B (weight 20, value 100). Add full item B to the knapsack.
o Knapsack Capacity: 40 - 20 = 20 units remaining.
o Finally, add item C (weight 30, value 120). Only add the fraction of item C that
fits in the remaining capacity (20 units).
o Fraction to add: 20/30 = 2/3
o Value to add: 2/3 × 120 = 80
o Knapsack Capacity: 20 - 20 = 0 units remaining.
Final Solution:

Total value in knapsack = 60 (item A) + 100 (item B) + 80 (fraction of item C) = 240
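The greedy procedure can be sketched as follows (our own illustration; `fractional_knapsack` is a hypothetical helper name). On the example above it returns 240:

```python
def fractional_knapsack(items, capacity):
    """items: list of (name, weight, value); returns the maximum total value."""
    # Sort by value per unit weight, highest ratio first.
    items = sorted(items, key=lambda it: it[2] / it[1], reverse=True)
    total_value = 0.0
    for name, weight, value in items:
        if capacity == 0:
            break
        take = min(weight, capacity)          # whole item if it fits, else a fraction
        total_value += value * take / weight  # proportional share of the value
        capacity -= take
    return total_value

items = [('A', 10, 60), ('B', 20, 100), ('C', 30, 120)]
print(fractional_knapsack(items, 50))  # 240.0
```

Note this greedy choice is optimal only for the fractional variant; the 0/1 knapsack problem requires dynamic programming instead.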

Graph
Graphs are fundamental data structures used to represent relationships between objects. A
graph consists of a set of vertices (also called nodes) and a set of edges that connect pairs of
vertices. Graphs are widely used in computer science, mathematics, and various fields for
modeling and solving problems involving networks, connections, and relationships.
Here are some key concepts related to graphs:
Vertices (Nodes): Vertices are the fundamental units of a graph and represent the entities
being connected. In a graph, vertices are typically denoted by symbols or labels.
Edges: Edges are the connections between pairs of vertices in a graph. They represent
relationships or interactions between the entities represented by the vertices. An edge can be
directed (having a specific direction from one vertex to another) or undirected (bi-directional,
with no specified direction).
Types of Graphs:
 Directed Graph (Digraph): A graph in which edges have a direction associated with
them.
 Undirected Graph: A graph in which edges have no direction, and they simply
connect pairs of vertices.
 Weighted Graph: A graph in which each edge has an associated weight or cost.
 Cyclic Graph: A graph containing at least one cycle, where a cycle is a path that
starts and ends at the same vertex.

Graph Traversal:
Depth-First Search (DFS): A graph traversal algorithm that explores as far as possible along
each branch before backtracking.
Breadth-First Search (BFS): A graph traversal algorithm that explores all the vertices in the
nearest neighborhood before moving to the vertices at the next level.
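Both traversals can be sketched in a few lines (illustrative code of ours; the small adjacency-list graph is a made-up example):

```python
from collections import deque

def bfs(graph, start):
    """Visit vertices level by level using a queue."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

def dfs(graph, start, visited=None):
    """Explore as deep as possible along each branch before backtracking."""
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for nbr in graph[start]:
        if nbr not in visited:
            order.extend(dfs(graph, nbr, visited))
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'C']
```

Note the only structural difference: BFS uses a FIFO queue, while DFS uses the call stack (or an explicit LIFO stack), which is what produces the two different visit orders.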
Applications:
 Graphs are used to model various real-world scenarios, including social networks,
road networks, computer networks, and biological networks.
 Graph algorithms are used for tasks such as finding shortest paths, detecting cycles,
determining connectivity, and clustering.
Trees
Trees are a special type of graph that is extensively used in computer science and related
fields for organizing hierarchical data and representing relationships between objects. A tree
is a connected graph with no cycles, meaning there is exactly one path between any two
vertices. Trees consist of nodes (also called vertices) and edges, with the following
properties:
1. Root: The root of a tree is the topmost node and serves as the starting point for traversing
the tree. Every tree has exactly one root node.
2. Parent and Child Nodes: Nodes in a tree are connected by edges, and each node (except
the root) has a parent node. Nodes directly connected to a parent node are called child nodes.
3. Leaf Nodes: Leaf nodes are nodes with no children. They are the terminal nodes in a tree,
meaning they have no further descendants.
4. Subtrees: A subtree is a portion of a tree that is itself a tree. Any node in a tree can be
considered the root of a subtree formed by its descendants.
5. Depth and Height: The depth of a node in a tree is the length of the path from the root to
that node. The height of a tree is the maximum depth of any node in the tree.
6. Binary Trees: A binary tree is a tree in which each node has at most two children: a left
child and a right child. Binary trees are widely used in computer science for efficient data
storage and searching.
7. Binary Search Trees (BST): A binary search tree is a binary tree in which the key value of
each node is greater than all keys in its left subtree and less than all keys in its right subtree.
BSTs are commonly used for fast searching, insertion, and deletion of data.
8. Balanced Trees: Balanced trees are trees in which the heights of the left and right subtrees
of any node differ by at most one. Examples include AVL trees, red-black trees, and B-trees.
9. Traversal: Tree traversal is the process of visiting all the nodes in a tree in a systematic
order. Common traversal algorithms include in-order, pre-order, post-order, and level-order
traversals.
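As a small illustration of points 6, 7, and 9 above (our own sketch, with hypothetical helper names), here is a binary search tree whose in-order traversal yields the keys in sorted order:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert key, preserving the BST property (left < node < right)."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    """In-order traversal: left subtree, node, right subtree."""
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []

root = None
for key in [50, 30, 70, 20, 40, 60, 80]:
    root = insert(root, key)
print(inorder(root))  # [20, 30, 40, 50, 60, 70, 80]
```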
Recursion
1. Tower of Hanoi
2. Fibonacci series
3. Factorial
4. Sum of n numbers
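Each of the four classics above has a natural recursive sketch (our own illustrations):

```python
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

def fibonacci(n):
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

def sum_n(n):
    return 0 if n == 0 else n + sum_n(n - 1)

def hanoi(n, src, aux, dst, moves):
    # Move n disks from src to dst, using aux as the spare peg.
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)  # move n-1 disks out of the way
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)  # move n-1 disks on top of it

print(factorial(5))   # 120
print(fibonacci(7))   # 13
print(sum_n(100))     # 5050
moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(len(moves))     # 7 moves = 2^3 - 1
```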
Heuristics
A heuristic is a practical, rule-of-thumb approach that trades guaranteed optimality for
speed, producing a good (though not necessarily best) solution quickly.
Brute Force Approach
The brute force approach systematically tries all possible candidates and picks the best one;
it is simple and always correct, but often too slow for large inputs.
