Assignment - 1
AIM : Implement A* Approach For Any Suitable Application.
OBJECTIVE :
• To understand the basic concept of A* approach.
• To implement A* algorithm for shortest path finding application.
SOFTWARE REQUIREMENTS :
• Linux Operating System
• Python compiler
MATHEMATICAL MODEL :
Consider a set S consisting of all the elements related to the program. The
mathematical model is given below:
S = {s, e, X, Y, Fme, DD, NDD, Mem_shared}
Where,
s = Initial State
e = End State
X = Input. Here it is map size, start point, finish point.
Y = Output. Here the output is the time required to generate the route, the actual route, and the map.
Fme = Algorithms/Functions used in the program,
e.g. {nextdistance(), estimate(), pathFind()}
DD = Deterministic Data
NDD = Non-deterministic Data
Mem_shared = Memory shared by the processors.
THEORY :
A* Algorithm :
A* is the most widely used approach for pathfinding. A* makes use of the cost function g(n) along
with a heuristic h(n); the evaluation function f'(n) represents the total estimated cost. Below is the
classic representation of the A* algorithm.
g(n) is the total distance it has taken to get from the starting position to the current location. h'(n) is
the estimated distance from the current position to the goal destination/state; a heuristic function is
used to estimate how far it is to reach the goal state. f'(n) is the sum of g(n) and h'(n) and is the
current estimated shortest path. f(n) is the true shortest path, which is not discovered until the A*
algorithm has finished.
In the A* approach, paths are not duplicated; they simply remain as paths that the algorithm hasn't
explored yet. A* works by maintaining an open set/open list: the collection of nodes the algorithm
already knows how to reach (and at what cost) but has not yet tried expanding. At each iteration the
algorithm chooses a node to expand from the open set: the one with the lowest f value, where f is
the sum of the cost the algorithm already knows it takes to get to the node (g) and the algorithm's
estimate of how much it will cost to get from the node to the goal (h, the heuristic). A* Search also
makes use of a priority queue, which makes it quite similar to the Uniform Cost Search algorithm.
ALGORITHM :
1. Create a search graph G, consisting solely of the start node, n0. Put n0 on a list called OPEN.
2. Create a list called CLOSED that is initially empty.
3. If OPEN is empty, exit with failure.
4. Select the first node on OPEN, remove it from OPEN, and put it on CLOSED. Call this node n.
5. If n is a goal node, exit successfully with the solution obtained by tracing a path along the
pointers from n to n0 in G. (The pointers define a search tree and are established in Step 7.)
6. Expand node n, generating the set M of its successors that are not already ancestors of n in G.
Install these members of M as successors of n in G.
7. Establish a pointer to n from each of those members of M that were not already in G (i.e., not
already on either OPEN or CLOSED). Add these members of M to OPEN. For each member, m, of
M that was already on OPEN or CLOSED, redirect its pointer to n if the best path to m found so far
is through n. For each member of M already on CLOSED, redirect the pointers of each of its
descendants in G so that they point backward along the best paths found so far to those descendants.
8. Reorder the list OPEN in order of increasing f values. (Ties among minimal f values are resolved
in favor of the deepest node in the search tree.)
9. Go to Step 3.
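The steps above can be sketched in Python for a grid map. This is a minimal illustration, not the full assignment program; the grid representation, 4-way movement, and Manhattan-distance heuristic are our own assumptions.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2-D grid; cells with value 1 are obstacles.
    Heuristic h: Manhattan distance (admissible for 4-way moves)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]   # the OPEN list as (f, g, node)
    parent = {start: None}               # pointers for tracing the path (Step 5)
    best_g = {start: 0}
    while open_heap:                     # Step 3: OPEN empty -> failure
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                 # Step 5: trace pointers back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # Step 6: expand n
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):  # Step 7: better path found
                    best_g[nxt] = ng
                    parent[nxt] = node
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```

The heap keeps OPEN ordered by f automatically, which replaces the explicit reordering in Step 8.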
CONCLUSION :
Thus, we have implemented the A* algorithm for a shortest path finding application.
ASSIGNMENT
OBJECTIVE :
• To understand the basics of Constraint satisfaction problem.
• To understand the concept of N queen problem and backtracking.
• To implement Constraint Satisfaction Problem – N queen using Backtracking.
SOFTWARE REQUIREMENTS :
• Linux Operating System
• C++ compiler
MATHEMATICAL MODEL :
Consider a set S consisting of all the elements related to the program. The
mathematical model is given below:
S = {s, e, X, Y, Fme, DD, NDD, Mem_shared}
Where,
s = Initial State
e = End State
X = Input. Here it is an n*n matrix.
Y = Output. Here the output is the time required to place n queens on the chessboard.
Fme = Algorithms/Functions used in the program,
e.g. {issafe(), iscolumnsafe(), isLeftdiagonalsafe()}
DD = Deterministic Data
NDD = Non-deterministic Data
Mem_shared = Memory shared by the processors.
THEORY :
Constraint satisfaction problems (CSPs) are mathematical questions defined as a set of objects
whose state must satisfy a number of constraints or limitations. CSPs represent the entities in a
problem as a homogeneous collection of finite constraints over variables, which is solved by
constraint satisfaction methods. CSPs are the subject of intense research in both artificial
intelligence and operations research, since the regularity in their formulation provides a common
basis to analyze and solve problems of many seemingly unrelated families. CSPs often exhibit high
complexity, requiring a combination of heuristics and combinatorial search methods to be solved in
a reasonable time. The Boolean satisfiability problem (SAT), the satisfiability modulo theories
(SMT) and answer set programming (ASP) can be roughly thought of as certain forms of the
constraint satisfaction problem.
Examples of simple problems that can be modeled as a constraint satisfaction problem include:
• Eight queens puzzle
• Map coloring problem
• Sudoku, Crosswords, Futoshiki, Kakuro (Cross Sums), Numbrix, Hidato and many other
logic puzzles These are often provided with tutorials of ASP, Boolean SAT and SMT solvers.
In the general case, constraint problems can be much harder, and may not be expressible in
some of these simpler systems.
The n-queens puzzle is the problem of placing n queens on an n×n chessboard such that no two
queens attack each other. Given an integer n, print all distinct solutions to the n-queens puzzle. Each
solution contains distinct board configurations of the n-queens' placement; the solutions are
permutations of [1, 2, 3, ..., n], where the number in the ith place denotes that the queen of the
ith column is placed in the row with that number. For example, the figure below represents the
chessboard [3 1 4 2].
The N-Queens problem is that of placing N chess queens on an N×N chessboard so that no two
queens attack each other. For example, the following is a solution for the 4-Queens problem.
The expected output is a binary matrix which has 1s for the squares where queens are placed. For
example, the following is the output matrix for the above 4-Queens solution.
{ 0, 1, 0, 0}
{ 0, 0, 0, 1}
{ 1, 0, 0, 0}
{ 0, 0, 1, 0}
Backtracking Algorithm
The idea is to place queens one by one in different columns, starting from the leftmost column.
When we place a queen in a column, we check for clashes with already placed queens. In the
current column, if we find a row for which there is no clash, we mark this row and column as part
of the solution. If we do not find such a row due to clashes then we backtrack and return false.
1) Start in the leftmost column.
2) If all queens are placed,
return true.
3) Try all rows in the current column. Do the following for every tried row:
a) If the queen can be placed safely in this row, then mark this [row,
column] as part of the solution and recursively check if placing the queen
here leads to a solution.
b) If placing the queen in [row, column] leads to a solution, then return
true.
c) If placing the queen doesn't lead to a solution, then unmark this [row,
column] (backtrack) and go to step (a) to try other rows.
4) If all rows have been tried and nothing worked, return false to trigger
backtracking.
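The backtracking steps can be sketched in Python (the lab program itself is written in C++; this is a minimal illustration and the function names are our own):

```python
def solve_n_queens(n):
    """Return all solutions; each solution is a list in which the index is
    the column and the value is the row of the queen in that column."""
    solutions = []
    placement = []                      # placement[c] = row of queen in column c

    def is_safe(row, col):
        # Clash check against every already-placed queen: same row or diagonal.
        for c, r in enumerate(placement):
            if r == row or abs(r - row) == abs(c - col):
                return False
        return True

    def place(col):
        if col == n:                    # all queens placed -> record a solution
            solutions.append(placement[:])
            return
        for row in range(n):            # try every row in the current column
            if is_safe(row, col):
                placement.append(row)   # mark [row, col] as part of the solution
                place(col + 1)
                placement.pop()         # backtrack: unmark and try the next row

    place(0)
    return solutions
```

For n = 4 this finds the two classic solutions; for n = 8 it finds all 92.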
CONCLUSION :
Thus, we have successfully implemented Constraint Satisfaction Problem – N queen using
Backtracking.
ASSIGNMENT No:
THEORY:
Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are
evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used
commonly for machine playing of two-player games (Tic-tac-toe, Chess, Go, etc.). It stops
completely evaluating a move when at least one possibility has been found that proves the move to
be worse than a previously examined move. Such moves need not be evaluated further. When
applied to a standard minimax tree, it returns the same move as minimax would, but prunes away
branches that cannot possibly influence the final decision
The initial call starts from A. The value of alpha here is -INFINITY and the value of beta is
+INFINITY. These values are passed down to subsequent nodes in the tree. At A the maximizer
must choose the max of B and C, so A calls B first.
• At B, the minimizer must choose the min of D and E and hence calls D first.
• At D, it looks at its left child which is a leaf node. This node returns a value of 3. Now the
value of alpha at D is max( -INF, 3) which is 3.
• To decide whether it's worth looking at its right node or not, it checks the condition
beta <= alpha. This is false since beta = +INF and alpha = 3. So it continues the search.
• D now looks at its right child, which returns a value of 5. At D, alpha = max(3, 5), which is 5.
The value of node D is now 5.
• D returns a value of 5 to B. At B, beta = min(+INF, 5), which is 5. The minimizer is now
guaranteed a value of 5 or less. B now calls E to see if it can get a lower value than 5.
• At E, the values of alpha and beta are not -INF and +INF but instead -INF and 5, respectively,
because the value of beta was changed at B and that is what B passed down to E.
• Now E looks at its left child, which is 6. At E, alpha = max(-INF, 6), which is 6. Here the
condition beta <= alpha becomes true: beta is 5 and alpha is 6. Hence E breaks out of the loop and
returns 6 to B.
• Note how it did not matter what the value of E's right child is. It could have been +INF or
-INF; it still wouldn't matter. We never even had to look at it, because the minimizer was
guaranteed a value of 5 or less. As soon as the maximizer saw the 6, it knew the
minimizer would never come this way, because it can get a 5 on the left side of B. This way
we didn't have to look at that 9 and hence saved computation time.
• E returns a value of 6 to B. At B, beta = min(5, 6), which is 5. The value of node B is also 5.
PSEUDOCODE:
function minimax(node, depth, isMaximizingPlayer, alpha, beta):
    if node is a leaf:
        return static evaluation of node
    if isMaximizingPlayer:
        bestVal = -INFINITY
        for each child of node:
            value = minimax(child, depth+1, false, alpha, beta)
            bestVal = max(bestVal, value)
            alpha = max(alpha, bestVal)
            if beta <= alpha:
                break
        return bestVal
    else:
        bestVal = +INFINITY
        for each child of node:
            value = minimax(child, depth+1, true, alpha, beta)
            bestVal = min(bestVal, value)
            beta = min(beta, bestVal)
            if beta <= alpha:
                break
        return bestVal
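The pseudocode can be made runnable in Python for game trees written as nested lists, where a leaf is an integer static evaluation. This tree representation is our own simplification for illustration.

```python
import math

def minimax(node, depth, is_max, alpha, beta):
    """Minimax with alpha-beta pruning over a nested-list game tree."""
    if isinstance(node, int):           # leaf: return its static evaluation
        return node
    if is_max:
        best = -math.inf
        for child in node:
            best = max(best, minimax(child, depth + 1, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:           # prune the remaining children
                break
        return best
    best = math.inf
    for child in node:
        best = min(best, minimax(child, depth + 1, True, alpha, beta))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best
```

On the tree from the worked example (D's leaves 3 and 5, E's leaves 6 and 9), E is cut off after its first leaf and the root value is 5, exactly as traced above.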
Hill climbing is a local search technique. For example, hill climbing can be applied to the traveling
salesman problem: it is easy to find a solution that visits all the cities, but it will be very poor
compared to the optimal solution. The algorithm starts with such a solution and makes small
improvements to it, such as switching the order in which two cities are visited. Eventually, a much
better route is obtained.
Hill climbing is used widely in artificial intelligence, for reaching a goal state from a starting node.
Choice of next node and starting node can be varied to give a list of related algorithms.
MATHEMATICAL DESCRIPTION:
Hill climbing attempts to maximize (or minimize) a function f(x), where the x are discrete states.
These states are typically represented by vertices in a graph, where edges in the graph encode
nearness or similarity of states. Hill climbing will follow the graph from vertex to vertex, always
locally increasing (or decreasing) the value of f, until a local maximum (or local minimum) xm is
reached. Hill climbing can also operate on a continuous space: in that case, the algorithm is called
gradient ascent (or gradient descent if the function is minimized).
PROBLEMS IN HILL CLIMBING :
1. LOCAL MAXIMA
A problem with hill climbing is that it will find only local maxima. Unless the heuristic is convex,
it may not reach a global maximum. Other local search algorithms, such as stochastic hill climbing,
random walks and simulated annealing, try to overcome this problem. This problem of hill
climbing can also be mitigated by using the random-restart hill climbing technique.
2. RIDGES
A ridge is a curve in the search space that leads to a maximum, but the orientation of the ridge
compared to the available moves used to climb is such that each move leads to a smaller point.
In other words, each point on a ridge looks to the algorithm like a local maximum, even though
the point is part of a curve leading to a better optimum.
3. PLATEAU
Another problem with hill climbing is that of a plateau, which occurs when we get to a "flat" part
of the search space, i.e. we have a path where the heuristics are all very close together. This kind of
flatness can cause the algorithm to cease progress and wander aimlessly.
PSEUDOCODE
Hill Climbing Algorithm
currentNode = startNode;
loop do
    L = NEIGHBORS(currentNode);
    nextEval = -INF;
    nextNode = NULL;
    for all x in L
        if (EVAL(x) > nextEval)
            nextNode = x;
            nextEval = EVAL(x);
    if nextEval <= EVAL(currentNode)
        // Return current node since no better neighbors exist
        return currentNode;
    currentNode = nextNode;
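The pseudocode above translates directly into Python as steepest-ascent hill climbing. The neighbor and evaluation functions here are our own toy example, not part of any assignment program.

```python
def hill_climb(start, neighbors, evaluate):
    """Steepest-ascent hill climbing: repeatedly move to the best neighbor
    until no neighbor improves on the current node (a local maximum)."""
    current = start
    while True:
        best_next, best_val = None, float("-inf")
        for candidate in neighbors(current):    # scan all neighbors L
            val = evaluate(candidate)
            if val > best_val:
                best_next, best_val = candidate, val
        if best_next is None or best_val <= evaluate(current):
            return current                      # no better neighbor exists
        current = best_next
```

Maximizing f(x) = -(x - 3)^2 over the integers with neighbors x-1 and x+1, the climb from any start converges to the single maximum at x = 3; with a multimodal f it could instead stop at a local maximum, which is exactly the limitation discussed above.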
Title: For Bubble Sort and Merge Sort, based on existing sequential algorithms, design
and implement parallel algorithms utilizing all available resources.
Aim: Understand Parallel Sorting Algorithms like Bubble sort and Merge Sort.
Prerequisites:
Student should know basic concepts of Bubble sort and Merge Sort.
Objective: Study of Parallel Sorting Algorithms like Bubble sort and Merge
Sort
Theory:
i) What is Sorting?
Sorting is a process of arranging elements in a group in a particular order, i.e., ascending
order, descending order, alphabetic order, etc.
Bubble Sort
The idea of bubble sort is to compare two adjacent elements. If they are not in the
right order, switch them. Do this comparing and switching (if necessary) until the end of the
array is reached. Repeat this process from the beginning of the array n times.
Parallel bubble sort can be implemented as a pipeline.
Let local_size = n / no_proc. We divide the array into no_proc parts, and each process
executes bubble sort on its part, including comparing its last element with the
first element belonging to the next process.
Implement the inner loop as (instead of j < i):
for (j = 0; j < n-1; j++)
For every iteration of i, each process needs to wait until the previous process
has finished that iteration before starting.
We coordinate using a barrier.
1. For k = 1 to n do
2. If k is even then
3. for i = 0 to (n/2)-1 do in parallel
4. If A[2i] > A[2i+1] then
5. Exchange A[2i] ↔ A[2i+1]
6. Else
7. for i = 0 to (n/2)-2 do in parallel
8. If A[2i+1] > A[2i+2] then
9. Exchange A[2i+1] ↔ A[2i+2]
10. Next k
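The phases above (odd-even transposition sort) can be simulated sequentially in Python. All compare-exchanges inside one phase touch disjoint pairs, which is what a real parallel implementation exploits; this sketch only mirrors the logic, it is not itself a parallel program.

```python
def odd_even_transposition_sort(a):
    """Sequential simulation of parallel odd-even transposition sort.
    Runs n phases; within a phase every listed pair is independent."""
    n = len(a)
    for k in range(1, n + 1):
        if k % 2 == 0:
            pairs = range(0, n - 1, 2)  # even phase: (A[2i], A[2i+1])
        else:
            pairs = range(1, n - 1, 2)  # odd phase: (A[2i+1], A[2i+2])
        for i in pairs:                 # "do in parallel" in the pseudocode
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

After n phases the array is guaranteed sorted, since each element can move at most one position per phase toward its final place.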
Merge Sort
• Collects sorted list onto one processor
• Merges elements as they come together
• Simple tree structure
• Parallelism is limited when near the root
Theory:
To sort A[p .. r]:
1. Divide Step
If a given array A has zero or one element, simply return; it is already sorted. Otherwise,
split A[p .. r] into two subarrays A[p .. q] and A[q + 1 .. r], each containing about half of the
elements of A[p .. r]. That is, q is the halfway point of A[p .. r].
2. Conquer Step
Conquer by recursively sorting the two subarrays A[p .. q] and A[q + 1 .. r].
3. Combine Step
Combine the elements back in A[p .. r] by merging the two sorted subarrays A[p .. q] and A[q
+ 1 .. r] into a sorted sequence. To accomplish this step, we will define a procedure MERGE
(A, p, q, r).
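The divide, conquer, and combine steps can be written directly in Python using the same inclusive indices p..r (a sequential sketch; the parallel version distributes the recursive calls across processors):

```python
def merge(A, p, q, r):
    """MERGE(A, p, q, r): merge sorted subarrays A[p..q] and A[q+1..r]."""
    left, right = A[p:q + 1], A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # Take from left while right is exhausted or left's head is smaller.
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]
            i += 1
        else:
            A[k] = right[j]
            j += 1

def merge_sort(A, p, r):
    if p < r:                    # more than one element: divide
        q = (p + r) // 2         # q is the halfway point of A[p..r]
        merge_sort(A, p, q)      # conquer the left half
        merge_sort(A, q + 1, r)  # conquer the right half
        merge(A, p, q, r)        # combine the two sorted halves
```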
Example:
Laboratory Practice – I BE (Comp Engg)
1. Procedure parallelMergeSort
2. Begin
3. Create processors Pi where i = 1 to n
4. if i > 0 then receive size and parent from the root
5. receive the list, size and parent from the root
6. endif
7. midvalue = listsize / 2
8. if both children are present in the tree then
9. send midvalue to the first child
10. send listsize - midvalue to the second child
11. send list, midvalue to the first child
12. send list from midvalue, listsize - midvalue to the second child
13. call mergelist(list, 0, midvalue, list, midvalue+1, listsize, temp, 0, listsize)
14. store temp in another array list2
15. else
16. call parallelMergeSort(list, 0, listsize)
17. endif
18. if i > 0 then
19. send list, listsize to the parent
20. endif
21. end
Conclusion: Thus, we have implemented Parallel Bubble Sort and Merge Sort.
AIM: Implement Best First Search Algorithm
OBJECTIVES:
• To understand the basic concept of BFS technique.
• To implement Best First Search Algorithm for path finding application.
SOFTWARE REQUIREMENTS :
• Linux Operating System
• Python compiler
MATHEMATICAL MODEL :
Consider a set S consisting of all the elements related to the program. The
mathematical model is given below:
S = {s, e, X, Y, Fme, DD, NDD, Mem_shared}
Where,
s = Initial State
e = End State
X = Input. Here it is map size, start point, finish point.
Y = Output. Here the output is the time required to generate the route, the actual route, and the map.
Fme = Algorithms/Functions used in the program,
e.g. {nextdistance(), estimate(), pathFind()}
DD = Deterministic Data
NDD = Non-deterministic Data
Mem_shared = Memory shared by the processors.
THEORY:
In BFS and DFS, when we are at a node, we can consider any of the adjacent nodes as the next
node. So both BFS and DFS blindly explore paths without considering any cost function. The idea
of Best First Search is to use an evaluation function to decide which adjacent node is most
promising and then explore it. Best First Search falls under the category of Heuristic Search or
Informed Search.
We use a priority queue to store the costs of nodes. So the implementation is a variation of BFS;
we just need to change the Queue to a PriorityQueue.
pq initially contains S.
We remove S from pq and add the unvisited neighbors of S to pq.
pq now contains {A, C, B} (C is put before B because C has a lower cost).
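The priority-queue variation of BFS can be sketched in Python. The graph and cost values below are our own example, chosen so that cheaper nodes are expanded first as in the pq trace above.

```python
import heapq

def best_first_search(graph, costs, start, goal):
    """Greedy Best First Search: always expand the node with the lowest
    evaluation cost. `graph` maps node -> neighbors, `costs` node -> cost."""
    pq = [(costs[start], start)]        # priority queue ordered by cost
    visited = set()
    order = []                          # expansion order, for inspection
    while pq:
        _, node = heapq.heappop(pq)     # cheapest node in pq
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        if node == goal:
            return order
        for nb in graph[node]:          # add unvisited neighbors to pq
            if nb not in visited:
                heapq.heappush(pq, (costs[nb], nb))
    return None
```

Swapping the heap for a plain FIFO queue would turn this back into ordinary BFS, which is the only implementation difference noted above.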
ANALYSIS :
• The worst-case time complexity for Best First Search is O(n log n), where n is the number of
nodes. In the worst case, we may have to visit all nodes before we reach the goal. Note that the
priority queue is implemented using a Min (or Max) Heap, and insert and remove operations take
O(log n) time.
• Performance of the algorithm depends on how well the cost or evaluation function is
designed.
CONCLUSION : Thus, we have implemented the Best First Search Algorithm.