DAA Assignment 3
1.
a) An algorithm is a finite set of instructions that, if followed, accomplishes a particular task.
Properties of an Algorithm
• All algorithms must satisfy the following criteria:
1. Input. Zero or more quantities are externally supplied.
2. Output. At least one quantity is produced.
3. Definiteness. Each instruction is clear and unambiguous.
4. Finiteness. If we trace out the instructions of an algorithm, then for all cases, the
algorithm terminates after a finite number of steps.
5. Effectiveness. Every instruction must be very basic so that it can be carried out, in principle,
by a person using only pencil and paper.
2.
3.a)
O – NOTATION (Upper Bound)
• The O-notation (pronounced as big "oh") is used to measure the performance of an algorithm,
which depends on the volume of input data.
• The O-notation defines the order of growth of an algorithm: as the input size increases,
the performance varies.
• The function f(n) = O(g(n)) (read as "f of n is big oh of g of n") iff there exist two positive
constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }.
Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the
worst-case complexity of an algorithm.
OMEGA NOTATION (Ω)
• The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist two positive
constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0. Ω(g(n)) = { f(n) : there exist
positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }.
• Omega notation represents the lower bound of the running time of an algorithm.
• Thus, it provides the best-case complexity of an algorithm.
b)
4.a)
Time Complexity
• The time T (P) taken by a program P is the sum of the compile time and the run (execution) time.
• The compile time does not depend on the instance characteristics. (also we may assume that a
compiled program will be run several times without recompilation)
• The run time is denoted by tp(instance characteristics).
• If we know the characteristics of the compiler to be used, we could proceed to determine the
number of additions, subtractions, multiplications, divisions, compares, loads, stores, and so on, that
are performed when the code for P is used on an instance with characteristic n.
Space Complexity
The space needed by each of these algorithms is seen to be the sum of the following components:
• A fixed part that is independent of the characteristics (e.g., number, size) of the inputs and
outputs. This part typically includes the instruction space (space for the code), space for simple
variables and fixed-size component variables (aggregates), space for constants, and so on.
• A variable part that consists of the space needed by component variables whose size depends
on the particular problem instance being solved, the space needed by referenced variables (to the
extent that this depends on instance characteristics), and the recursion stack space (which also
depends on the instance characteristics).
• The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics), where c is a constant.
• When analyzing the space complexity of an algorithm, we concentrate solely on estimating
Sp(instance characteristics).
• For any given problem, we need first to determine which instance characteristics to use to
measure the space requirements.
Algorithm Sum
Algorithm Sum(a, n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
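As a concrete illustration, here is a minimal C rendering of the pseudocode above (an illustrative sketch: 0-indexed arrays, with step counts in the comments following the usual one-step-per-statement convention):

#include <stdio.h>

/* Sum of n array elements. Step count: the assignment to s (1),
   the loop test (n+1), the loop body (n), and the return (1)
   give 2n + 3 steps, i.e., O(n) time. Space: n words for a[]
   plus one word each for n, i, and s, so S(Sum) = n + 3. */
float sum(const float a[], int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void)
{
    float a[] = {1.0f, 2.0f, 3.0f, 4.0f};
    printf("%.1f\n", sum(a, 4)); /* prints 10.0 */
    return 0;
}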
b)
See the answer to 3(a) above: O-notation gives the asymptotic upper bound, and hence the
worst-case complexity, of an algorithm.
5.
Designing an algorithm typically involves several structured steps to ensure that the solution is
effective, efficient, and correct:
1. Understand the problem completely (inputs, outputs, constraints).
2. Decide on the computational means and whether an exact or an approximate solution is required.
3. Choose an appropriate design technique (divide and conquer, greedy, dynamic programming, etc.).
4. Specify the algorithm in pseudocode or as a flowchart.
5. Prove the algorithm's correctness.
6. Analyze its time and space complexity.
7. Implement and test it.
6.
7.a. Time Complexity
• Best case: If the algorithm finds the element at the very first comparison, this is referred to
as the best case.
• Worst case: If the algorithm finds the element at the end of the search, or if the search
fails, the algorithm is in its worst case; it requires the maximum number of steps that can
be executed for the given parameters.
• Average case: The analysis of average-case behavior is more complex than best-case and
worst-case analysis and depends on the probability distribution of the input data. As the volume
of input data increases, the average-case behavior approaches the worst-case behavior. In the
average case, the searched element is located somewhere between the first and the last positions.
• Asymptotic notation introduces terminology that enables us to make meaningful
statements about the time and space complexities of an algorithm. The functions f and g are
nonnegative functions.
• Big-O notation represents the upper bound of the running time of an algorithm.
Thus, it gives the worst-case complexity of an algorithm.
• Suppose we have a program with step count 3n + 2. We can write it as f(n) = 3n + 2, and we
say that 3n + 2 = O(n), that is, of the order of n, because f(n) ≤ 4n (3n + 2 ≤ 4n) for all n ≥ 2.
• Another example: suppose f(n) = 10n² + 4n + 2.
• We say that f(n) = O(n²) since 10n² + 4n + 2 ≤ 11n² for n ≥ 2.
• But we cannot say that f(n) = O(n), since 10n² + 4n + 2 is never less than or equal to cn for
all sufficiently large n; that is, 10n² + 4n + 2 ≠ O(n). We can, however, say that f(n) = O(n³),
since f(n) can be bounded by cn³; for example, 10n² + 4n + 2 ≤ 10n³ for n ≥ 2.
• The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist two positive
constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
• Omega notation represents the lower bound of the running time of an algorithm.
• Thus, it provides the best-case complexity of an algorithm.
• Suppose we have a program with step count 3n + 2; we can write it as f(n) = 3n + 2.
• We say that 3n + 2 = Ω(n), that is, of the order of n, because f(n) ≥ 3n (3n + 2 ≥ 3n) for all
n ≥ 1.
• If a program has a constant step count, say 5, then we say that it has order Ω(1) (constant).
• If f(n) = 3n + 5, then f(n) = Ω(n) and Ω(1), but we cannot express its time complexity as Ω(n²).
• If f(n) = 5n² + 8n + 2, then f(n) = Ω(n²), Ω(n), and Ω(1).
• The asymptotic upper bound provided by O-notation may or may not be asymptotically
tight.
• The bound 2n² = O(n²) is asymptotically tight, but the bound 2n² = O(n³) is not.
• We use o-notation to denote an upper bound that is not asymptotically tight.
• We formally define o(g(n)) ("little-oh of g of n") as the set
o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such
that 0 ≤ f(n) < cg(n) for all n ≥ n0 }.
• For example, 2n = o(n²), but 2n² ≠ o(n²).
• The definitions of O-notation and o-notation are similar.
• The main difference is that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ cg(n) holds for some
constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < cg(n) holds for all constants c > 0.
• Intuitively, in o-notation, the function f(n) becomes insignificant relative to g(n) as n
approaches infinity.
Definition
• ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that
0 ≤ cg(n) < f(n) for all n ≥ n0 }.
• We use ω-notation to denote a lower bound that is not asymptotically tight.
• The main difference with Ω is that the Ω bound holds for some constant c, whereas the ω
bound holds for all constants c > 0.
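A convenient way to check these definitions is the standard limit characterization (assuming the limit exists):
    f(n) = o(g(n))  iff  lim (n→∞) f(n)/g(n) = 0
    f(n) = ω(g(n))  iff  lim (n→∞) f(n)/g(n) = ∞
For example, 2n = o(n²) because 2n/n² = 2/n → 0, while 2n² ≠ o(n²) because the ratio tends to 2, not 0.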
11. Merge Sort is a divide-and-conquer algorithm used to sort an array or a list of elements. It
works by recursively dividing the list into two halves, sorting each half, and then merging the sorted
halves back together to produce a fully sorted list.
Time Complexity:
• Best Case: O(n log n)
• Worst Case: O(n log n)
• Average Case: O(n log n)
Example: sort [38, 27, 43, 3, 9, 82, 10].
• First split the array into [38, 27, 43, 3] and [9, 82, 10].
• Split [38, 27, 43, 3] into [38, 27] and [43, 3], then split [38, 27]:
• Left: [38]
• Right: [27]
• Now both are single elements, so they are "sorted."
• Similarly, split [43, 3]:
• Left: [43]
• Right: [3]
• These are already sorted as well.
• Now split [9, 82, 10] into:
• Left: [9]
• Right: [82, 10]
• Split [82, 10] into:
• Left: [82]
• Right: [10]
• Both are sorted.
• Merging back up: [38] and [27] merge to [27, 38]; [43] and [3] merge to [3, 43]; these two
merge to [3, 27, 38, 43]. On the right, [82] and [10] merge to [10, 82], which merges with [9] to
give [9, 10, 82]. The final merge produces [3, 9, 10, 27, 38, 43, 82].
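A minimal C sketch of merge sort (one standard top-down formulation, assuming 0-based indices and a temporary buffer; the function names are illustrative):

#include <stdio.h>
#include <string.h>

/* Merge the sorted halves a[lo..mid] and a[mid+1..hi] using tmp[]. */
static void merge(int a[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(&a[lo], &tmp[lo], (size_t)(hi - lo + 1) * sizeof(int));
}

/* Recursively split, sort each half, then merge the two halves. */
static void merge_sort(int a[], int tmp[], int lo, int hi)
{
    if (lo >= hi) return;               /* one element: already sorted */
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);
    merge_sort(a, tmp, mid + 1, hi);
    merge(a, tmp, lo, mid, hi);
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10}, tmp[7];
    merge_sort(a, tmp, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]); /* 3 9 10 27 38 43 82 */
    printf("\n");
    return 0;
}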
Quick Sort Time Complexity:
• Best & Average Case: O(n log n)
• Worst Case: O(n²) (if the pivot choice is poor, e.g., always picking the smallest or largest
element)
• Space Complexity: O(log n) for the recursion stack (average case)
Now, all subarrays are sorted. The final sorted array is:
Sorted Output: [1, 5, 7, 8, 9, 10]
1) A minimum spanning tree is a spanning tree, but has weights or lengths
associated with the edges, and the total weight of the tree (the sum of the
weights of its edges) is at a minimum.
A Minimum cost spanning tree is a subset T of edges of G such that all the vertices remain
connected when only edges in T are used, and sum of the lengths of the edges in T is as
small as possible. Hence it is then a spanning tree with weight less than or equal to the
weight of every other spanning tree.
4)
a) The greedy knapsack is an algorithm that makes a locally optimal choice at each stage in the
hope that this will eventually lead to the best overall decision.
Given n inputs and a knapsack or bag, each object i is associated with a weight wi and a
profit pi. If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed in the knapsack, it earns a profit pi·xi.
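For illustration, a minimal C sketch of the greedy fractional knapsack (an assumption here: items are already sorted by non-increasing pi/wi, which is the order the greedy method uses; the instance in main is a classic textbook example, not the one from part b):

#include <stdio.h>

/* Greedy fractional knapsack: items must already be sorted by
   non-increasing p[i]/w[i]. Fills x[] with fractions taken and
   returns the total profit earned. */
double greedy_knapsack(int n, double m, const double p[], const double w[], double x[])
{
    double profit = 0.0, cap = m;
    for (int i = 0; i < n; i++) x[i] = 0.0;
    for (int i = 0; i < n && cap > 0.0; i++) {
        x[i] = (w[i] <= cap) ? 1.0 : cap / w[i];  /* whole item or a fraction */
        profit += p[i] * x[i];
        cap -= w[i] * x[i];
    }
    return profit;
}

int main(void)
{
    /* Classic instance n=3, m=20, p=(25,24,15), w=(18,15,10),
       listed here in p/w order: 24/15=1.6, 15/10=1.5, 25/18≈1.39. */
    double p[] = {24, 15, 25}, w[] = {15, 10, 18}, x[3];
    printf("profit = %.1f\n", greedy_knapsack(3, 20.0, p, w, x)); /* 31.5 */
    return 0;
}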
b) n=4
m=15
P=[10,5,7,11]
W=[3,4,3,5]
dp[i][w] = dp[i−1][w] (item i excluded)
dp[i][w] = max(dp[i−1][w], dp[i−1][w−W[i]] + P[i]) if W[i] ≤ w (take the better of excluding
and including item i)
Since the total weight 3 + 4 + 3 + 5 = 15 exactly equals the capacity m = 15, the optimal
solution selects all four items, for a total profit of 10 + 5 + 7 + 11 = 33.
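A minimal C sketch of the tabulation for this recurrence (assumptions: items are 1-indexed, and dp[i][w] is the best profit using the first i items within capacity w):

#include <stdio.h>

#define N 4
#define M 15

static int max(int a, int b) { return a > b ? a : b; }

int main(void)
{
    int P[N + 1] = {0, 10, 5, 7, 11};   /* profits, 1-indexed */
    int W[N + 1] = {0, 3, 4, 3, 5};     /* weights, 1-indexed */
    int dp[N + 1][M + 1] = {0};         /* row 0 = no items = profit 0 */

    for (int i = 1; i <= N; i++)
        for (int w = 0; w <= M; w++) {
            dp[i][w] = dp[i - 1][w];                      /* exclude item i */
            if (W[i] <= w)                                 /* include item i */
                dp[i][w] = max(dp[i][w], dp[i - 1][w - W[i]] + P[i]);
        }

    printf("optimal profit = %d\n", dp[N][M]); /* 33: all four items fit */
    return 0;
}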
5)
a)Binary search is an efficient algorithm for finding an element in a sorted array. It works
by repeatedly dividing the search interval in half. If the value of the search key is less than
the item in the middle of the interval, the search continues on the left half; otherwise, it
continues on the right half.
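A minimal C sketch of the iterative version (assuming a 0-indexed sorted array; returns -1 when the key is absent):

#include <stdio.h>

/* Iterative binary search on a sorted array: O(log n) comparisons. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo+hi)/2 */
        if (a[mid] == key) return mid;
        if (key < a[mid]) hi = mid - 1; /* continue in the left half  */
        else              lo = mid + 1; /* continue in the right half */
    }
    return -1;                          /* key not present */
}

int main(void)
{
    int a[] = {3, 9, 10, 27, 38, 43, 82};
    printf("%d\n", binary_search(a, 7, 27)); /* prints 3 */
    return 0;
}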
function GreedyAlgorithm(Input):
    Solution = {}
    while (not TerminationCondition(Solution, Input)):
        Candidate = SelectCandidate(Input)
        if (FeasibilityCheck(Solution, Candidate)):
            Solution = Solution ∪ {Candidate}
        Remove Candidate from Input
    return Solution
6)
a) The Partition Exchange Sort is an algorithm that falls under the class of
divide-and-conquer algorithms. The main idea behind this algorithm is to partition the
array into two subarrays based on a pivot element, and then exchange elements to
ensure that the elements on the left side of the pivot are less than or equal to the pivot,
and those on the right are greater than or equal to the pivot.
Partitioning process:
1. Start with two pointers: one at the beginning (`i = -1`) and the other scanning through
the array (`j = 0 to 6`).
2. For each element `arr[j]`:
- If `arr[j] <= pivot (48)`, increment `i` and swap `arr[i]` with `arr[j]`.
- If `arr[j] > pivot`, do nothing and move to the next element.
After partitioning, the pivot (`48`) ends up in its correct position (index 7).
Here, all elements are already less than the pivot, so no swaps are needed except for
putting the pivot in its correct place at the end of the array.
Partitioning process:
- Start with `i = -1` and `j = 0 to 5`.
- For each element `arr[j]`, if `arr[j] <= 20`, increment `i` and swap `arr[i]` and `arr[j]`.
Partitioned array:
[12, 20, 35, 23, 45, 34, 24]
After partitioning, the pivot `20` is at position 1. Now we have two subarrays:
- Left subarray `[12]` (already sorted)
- Right subarray: `[35, 23, 45, 34, 24]`
Pivot: `24`.
Partitioning process:
- Start with `i = -1` and `j = 0 to 3`.
- If `arr[j] <= 24`, increment `i` and swap `arr[i]` with `arr[j]`.
Partitioning process:
- Start with `i = -1` and `j = 0 to 1`.
- If `arr[j] <= 35`, increment `i` and swap `arr[i]` with `arr[j]`.
Final sorted array: `[12, 20, 23, 24, 34, 35, 45, 48]`.
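The partitioning process described above corresponds to the Lomuto scheme; a minimal C sketch (assuming the last element of each subarray is chosen as the pivot, as in the trace):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: pivot = arr[hi]; i trails the region of elements
   <= pivot while j scans. Returns the pivot's final index. */
static int partition(int arr[], int lo, int hi)
{
    int pivot = arr[hi], i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (arr[j] <= pivot)
            swap(&arr[++i], &arr[j]);
    swap(&arr[i + 1], &arr[hi]);        /* put the pivot in its place */
    return i + 1;
}

static void quicksort(int arr[], int lo, int hi)
{
    if (lo < hi) {
        int p = partition(arr, lo, hi);
        quicksort(arr, lo, p - 1);      /* elements left of the pivot  */
        quicksort(arr, p + 1, hi);      /* elements right of the pivot */
    }
}

int main(void)
{
    int arr[] = {35, 12, 23, 45, 34, 24, 20, 48};
    quicksort(arr, 0, 7);
    for (int i = 0; i < 8; i++) printf("%d ", arr[i]); /* 12 20 23 24 34 35 45 48 */
    printf("\n");
    return 0;
}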
7 a) In general, greedy algorithms have five pillars:
1. A candidate set, from which a solution is created
2. A selection function, which chooses the best candidate to be added to the solution
3. A feasibility function, that is used to determine if a candidate can be used to contribute to
a solution
4. An objective function, which assigns a value to a solution, or a partial solution
5. A solution function, which indicates when we have discovered a complete solution
Eg: in the fractional knapsack problem, the candidates are the objects, the selection function
picks the object with the largest profit/weight ratio, the feasibility function checks that the
capacity is not exceeded, and the objective function is the total profit earned.
8 a) There are n jobs to be processed on a machine.
• Each job i has a deadline di ≥ 0 and a profit pi ≥ 0.
• The profit pi is earned iff the job is completed by its deadline.
• A job is completed if it is processed on the machine for one unit of time.
• Only one machine is available for processing jobs.
• Only one job is processed at a time on the machine.
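A minimal C sketch of the usual greedy solution (assumptions: jobs are pre-sorted by non-increasing profit, and each selected job is placed in the latest free slot on or before its deadline; the instance in main is illustrative):

#include <stdio.h>

struct Job { int id, deadline, profit; };

/* Greedy job sequencing: jobs[] must be sorted by non-increasing
   profit. slot[t] records which job runs in unit-time slot t. */
int job_sequencing(const struct Job jobs[], int n, int maxd)
{
    int slot[16], total = 0;            /* assumes maxd < 16 */
    for (int t = 0; t <= maxd; t++) slot[t] = -1;
    for (int i = 0; i < n; i++)
        for (int t = jobs[i].deadline; t >= 1; t--)   /* latest free slot */
            if (slot[t] == -1) {
                slot[t] = jobs[i].id;
                total += jobs[i].profit;
                break;
            }
    return total;
}

int main(void)
{
    /* Illustrative instance, already sorted by profit: */
    struct Job jobs[] = {{1, 2, 100}, {3, 2, 27}, {4, 1, 25}, {2, 1, 19}};
    printf("profit = %d\n", job_sequencing(jobs, 4, 2));
    /* prints 127: job 3 runs in slot 1, job 1 in slot 2 */
    return 0;
}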
b) A minimum cost spanning tree (defined in question 1 above) is a subset T of the edges of G
such that all the vertices remain connected when only edges in T are used, and the sum of the
lengths of the edges in T is as small as possible.
Algorithm (Kruskal):
float Kruskal(int E[][], float cost[][], int n, int t[][2])
{
    int parent[n];
    Construct a min-heap out of the edge costs;
    for (i = 1; i <= n; i++)
        parent[i] = -1;   // each vertex starts in a different set
    i = 0;
    mincost = 0;
    while ((i < n-1) && (heap not empty))
    {
        Delete a minimum-cost edge (u, v) from the heap and re-heapify;
        j = Find(u); k = Find(v);   // find the sets containing u and v
        if (j != k)
        {
            i++;
            t[i][1] = u;
            t[i][2] = v;
            mincost += cost[u][v];
            Union(j, k);
        }
    }
    if (i != n-1) printf("No spanning tree\n");
    else return mincost;
}
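The Find and Union operations used above can be realized with a simple disjoint-set forest; a minimal C sketch (parent[v] = -1 marks a set root, matching the initialization in the pseudocode; no rank or path-compression heuristics, kept minimal on purpose):

#include <stdio.h>

#define MAXV 100
int parent[MAXV];          /* parent[v] == -1 means v is a set root */

/* Find: follow parent links up to the root of v's set. */
int find_set(int v)
{
    while (parent[v] != -1)
        v = parent[v];
    return v;
}

/* Union: attach root k's tree under root j. */
void union_sets(int j, int k)
{
    parent[k] = j;
}

int main(void)
{
    for (int v = 0; v < 6; v++) parent[v] = -1;
    union_sets(find_set(0), find_set(1));
    union_sets(find_set(1), find_set(2));
    printf("%d\n", find_set(2) == find_set(0)); /* 1: same component */
    return 0;
}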
9)
10 a) The time complexity of the program is Θ(sn), where s is the number of terms in the result
set. Hence we can write the worst-case complexity as O(n²).
b)Principle of Optimality
An optimal sequence of decisions has the property that whatever the initial state and decisions are,
the remaining decisions must constitute an optimal decision sequence with regard to the state
resulting from the first decision.
Answer:
2.Apply branch and bound to 0/1 knapsack problem and elaborate it? [8M]
Answer:
Step 2: Branching
Step 3: Bounding
● For each node (subproblem), calculate the upper bound on the maximum value
achievable from that node:
○ If including the item in the knapsack would exceed the capacity, exclude that
item.
○ If including the item does not exceed the capacity, add its value and subtract
its weight from the remaining capacity.
○ Use the greedy strategy to fill the remaining capacity with the most valuable
items (based on value-to-weight ratio).
Step 4: Pruning
● If the upper bound at a node is less than the best solution found so far, prune that
branch (i.e., do not explore it further).
● If the upper bound is greater than the best solution, continue branching.
Step 5: Termination
● The algorithm terminates when all nodes have been either explored or pruned.
● The best solution found during the exploration is the optimal solution.
3.Explain the method of reduction to solve TSP problem using branch and bound?
[12M]
Answer:
1. Initial Setup:
○ Start with the cost matrix (or distance matrix), where the element at position
(i, j) represents the cost (distance) of traveling from city i to city j.
○ The goal is to find the shortest possible cycle that visits each city exactly once
and returns to the starting city.
2. Reduction Techniques: The key idea is to reduce the cost matrix to help compute a
lower bound on the optimal solution. These reductions help eliminate unpromising
branches early in the search (a minimal code sketch of this reduction appears after this list).
○ Row Reduction: For each row of the cost matrix, subtract the smallest value
in the row from every element in that row.
■ This step ensures that every row has at least one zero, which is useful
for bounding.
○ Column Reduction: For each column of the cost matrix, subtract the
smallest value in the column from every element in that column.
■ Similarly, this step ensures that every column has at least one zero.
3. The resulting matrix after these reductions is called the reduced cost matrix. After
these reductions, the sum of all the minimum values from each row and column gives
the lower bound (L) on the cost of the optimal tour.
4. Branching (Exploring Partial Solutions):
○ The search space for TSP consists of all possible permutations of cities, and
the goal is to explore this space efficiently. In the Branch and Bound
approach, we create a search tree where each node represents a partial tour.
○ At each node, we decide which cities are still to be visited, and we explore
further branches by choosing the next city to visit.
○ We branch out by considering all possible next cities to visit from the current
city, generating new subproblems. For example, if the salesman is currently at
city i and we still need to visit cities j, k, and l, we generate three
branches, one for each possible next city (i.e., j, k, and l).
5. Bounding:
○ For each node in the search tree, we calculate a lower bound (cost of the
best possible solution that could be obtained from this partial tour). This lower
bound is calculated based on the reduced cost matrix, which gives an
approximation of the minimum cost to complete the tour from the current
node.
○ If the lower bound of a node exceeds the current best solution (known as the
best bound or best tour found so far), we prune that branch (i.e., stop
exploring that subproblem) because it cannot lead to a better solution.
○ If a complete tour is found (i.e., a leaf node in the search tree), we check if the
total cost is less than the current best solution, and if so, update the best
solution.
6. Reduction After Each Branch:
○ After each branching step, the reduced cost matrix is updated to reflect the
new partial solution. This update involves adjusting the matrix to account for
the fixed decisions (i.e., cities already visited) and recalculating the lower
bound.
○ As the search tree grows, the reduced cost matrix is recalculated and used to
prune branches that cannot possibly lead to a better solution than the current
best.
7. Termination:
○ The algorithm continues branching and bounding until all branches are either
explored or pruned. The best solution found during the search is the optimal
solution to the TSP.
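As an illustration of steps 2-3, a minimal C sketch of the row/column reduction that yields the initial lower bound (assumptions: a 4-city instance, with INF marking the forbidden i→i entries; only the bounding step, not the full search):

#include <stdio.h>

#define N 4
#define INF 1000000

/* Reduce each row and column of cost[][] so every row and column
   contains a zero; the sum of the amounts subtracted is a lower
   bound on the cost of any complete tour. */
int reduce(int cost[N][N])
{
    int bound = 0;
    for (int i = 0; i < N; i++) {           /* row reduction */
        int m = INF;
        for (int j = 0; j < N; j++) if (cost[i][j] < m) m = cost[i][j];
        if (m > 0 && m < INF) {
            bound += m;
            for (int j = 0; j < N; j++)
                if (cost[i][j] < INF) cost[i][j] -= m;
        }
    }
    for (int j = 0; j < N; j++) {           /* column reduction */
        int m = INF;
        for (int i = 0; i < N; i++) if (cost[i][j] < m) m = cost[i][j];
        if (m > 0 && m < INF) {
            bound += m;
            for (int i = 0; i < N; i++)
                if (cost[i][j] < INF) cost[i][j] -= m;
        }
    }
    return bound;
}

int main(void)
{
    int cost[N][N] = {{INF, 10, 15, 20},
                      {10, INF, 35, 25},
                      {15, 35, INF, 30},
                      {20, 25, 30, INF}};
    printf("lower bound = %d\n", reduce(cost)); /* prints 70 */
    return 0;
}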
4. Explain the principles of FIFO branch and bound? [8M]
Answer:
A)
● A lower bound for a problem is the worst-case running time of the best possible algorithm
for that problem.
● To prove a lower bound of Ω(n lg n) for sorting, we would have to prove that no algorithm,
however smart, could possibly be faster, in the worst case, than n lg n.
● The Decision Tree method is a common technique used to establish lower bounds on the
time complexity of algorithmic problems.
● It provides a framework for proving that a particular problem or computational task cannot
be solved more efficiently than a certain lower bound.
● The key idea is to construct a decision tree that represents different possible executions of
an algorithm and analyze the height of this tree to determine the lower bound.
● This method is particularly useful for establishing lower bounds in the context of
comparison-based sorting and searching algorithms.
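For comparison-based sorting the argument runs as follows (a standard derivation; lg denotes log base 2). A decision tree for sorting n elements must have at least n! leaves (one per permutation), and a binary tree of height h has at most 2^h leaves, so:
    2^h ≥ n!  ⇒  h ≥ lg(n!) ≥ (n/2)·lg(n/2) = Ω(n lg n)
Hence every comparison sort makes Ω(n lg n) comparisons in the worst case.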
B)
6. Briefly explain the FIFO branch and bound solution with example? [12M]
Answer:
● Branch and Bound is another method to systematically search a solution space.
● Just like backtracking, we will use bounding functions to avoid generating subtrees that do
not contain an answer node.
● The term “branch and bound” refers to all state space search methods in which all children
of the E-node are generated before any other live node can become the E-node.
● In branch and bound terminology, FIFO search is a BFS-like state space search: the list of
live nodes is a first-in-first-out (FIFO) list, i.e., a queue.
● LIFO search is a D-search-like state space search: the list of live nodes is a last-in-first-out
(LIFO) list, i.e., a stack.
● Bounding functions are used to help avoid the generation of subtrees that do not contain
an answer node.
● In both LIFO and FIFO branch and bound, the selection rule for the next E-node is rigid
and blind.
● The selection rule for the next E-node does not give any preference to a node that has a
very good chance of getting the search to an answer node quickly.
● The search for an answer node can be sped up by using an “intelligent” ranking function
ĉ(·) for live nodes.
● The next E-node is selected on the basis of this ranking function.
7. Briefly explain the LC branch and bound solution with example? [12M]
Answer:
● The difficulty with using either of these “ideal” cost functions is that computing the cost of a
node will usually involve a search of the subtree x for an answer node.
● Hence, by the time the cost of a node is determined, that subtree has been searched and
there is no need to explore x again.
● Therefore the search algorithm usually ranks nodes based only on an estimate ĝ(·) of their
cost.
Let ĝ(x) be an estimate of the additional effort needed to reach an answer node from x. Node
x is assigned a rank using function ĉ(.) such that
ĉ(x) = f(h(x)) + ĝ(x)
where h(x) is the cost of reaching x from the root and f(·) is any nondecreasing function.
● Hence the next node to be selected will be the one with the least ĉ(x) value; hence the name
LC (least cost) search.
● In ĉ(x), if ĝ(x) = 0 and f(h(x)) is the level of x, then LC search generates nodes by level, i.e., it
transforms into BFS (FIFO).
● In ĉ(x), if f(h(x)) = 0 and ĝ(x) ≥ ĝ(y) whenever y is a child of x, then LC search transforms
into D-search (LIFO).
8. State 0/1 knapsack problem and design an algorithm of LC Branch and Bound and find
the solution for the knapsack instance with any example? [12M]
Answer:
● Given:
○ A set of n items, each with a weight wi and a value vi, for i = 1, 2, …, n.
○ A knapsack with a capacity W.
● Objective:
○ Find a subset of items such that the total weight does not exceed W, and the
total value is maximized.
Branch and Bound (BB) is an algorithmic technique used to solve combinatorial optimization
problems, such as the 0/1 knapsack problem. It systematically explores all possible subsets
of items, pruning branches of the search tree that cannot lead to a better solution than the
current best one.
Key Concepts:
● Bounding function: Provides an estimate of the best possible solution that can be
obtained from a given node (partial solution). The bounding function is used to prune
branches.
● Node: A partial solution to the problem, represented by selecting some items and
excluding others.
● Pruning: When the bound indicates that a branch cannot lead to an optimal solution,
the branch is discarded.
Steps to Solve the 0/1 Knapsack Problem Using Branch and Bound:
Step-by-Step Execution:
1. Initialization:
○ Sort items by value-to-weight ratio:
■ Item 1: 60/10=6
■ Item 2: 100/20=5
■ Item 3: 120/30=4
○ Sorted by descending ratio: [Item 1, Item 2, Item 3]
2. Root Node:
○ No items are selected, profit = 0, weight = 0.
○ The bound is calculated using the fractional knapsack approach, and the root
node is added to the queue.
3. Branching:
○ For each node, two child nodes are generated (one with the current item
included and one with it excluded).
○ The algorithm proceeds by exploring these branches, calculating bounds at
each step, and pruning if necessary.
4. Solution:
○ The maximum profit found is 220, which corresponds to selecting items 2 and
3 (values 100 and 120) with a total weight of 50.
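The bound used at each node can be sketched in C for this instance (an illustrative sketch: bound(level, profit, weight) computes the greedy fractional fill of the remaining capacity with the items from index level onward, items pre-sorted by ratio):

#include <stdio.h>

#define N 3
/* Instance from above, already sorted by value/weight ratio. */
int wt[N]  = {10, 20, 30};
int val[N] = {60, 100, 120};
int W = 50;

/* Upper bound for a node: the profit accumulated so far plus a
   greedy fractional fill of the remaining capacity. */
double bound(int level, int profit, int weight)
{
    double b = profit;
    int cap = W - weight;
    for (int i = level; i < N; i++) {
        if (wt[i] <= cap) { cap -= wt[i]; b += val[i]; }
        else { b += (double)val[i] * cap / wt[i]; break; } /* take a fraction */
    }
    return b;
}

int main(void)
{
    /* Root node: nothing chosen yet.
       60 + 100 + (20/30)*120 = 240, which exceeds the optimum 220,
       so the root is not pruned. */
    printf("root bound = %.1f\n", bound(0, 0, 0)); /* prints 240.0 */
    return 0;
}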
Answer:
● The most straightforward way to solve the puzzle problem is to search the state space for the
goal state and use the path from the initial state to the goal state as an answer.
● There are 16! different arrangements of the tiles on the frame. Only half of them are
reachable from any given initial state.
● If we number the frame positions 1 to 16, position i is the frame position containing the tile
numbered i in the goal arrangement. Position 16 is the empty spot.
● Let position(i) be the position number, in the initial state, of the tile numbered i.
● Then position(16) will denote the position of the empty spot.
10. Apply the branch-and- bound technique in solving the travelling salesman problem?
[12M]
Answer:
• The traveling salesperson problem finds application in a variety of situations.
• Suppose we have to route a postal van to pick up mail from mailboxes located at n
different sites.
• An n + 1 vertex graph can be used to represent the situation.
• One vertex represents the post office from which the postal van starts and to which it must
return.
• Edge (i, j) is assigned a cost equal to the distance from site i to site j.
• The route taken by the postal van is a tour, and we are interested in finding a tour of
minimum length.
Basic Traversal and Search Techniques. Backtracking
1. Explain any one application of backtracking with an example?
[8M]
One of the well-known applications of backtracking is solving the N-Queens problem.
2. Describe in detail the 8-queens problem using backtracking?
[8M]
Backtracking is used to place queens one by one in different columns and check for possible
conflicts. If placing a queen in a column violates the constraints, the algorithm backtracks
and tries a different column.
Step-by-Step Explanation:
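A minimal C sketch of this backtracking placement (one assumption: queens are placed row by row, one per row, rather than column by column; the logic is symmetric):

#include <stdio.h>
#include <stdlib.h>

#define N 8
int col[N];   /* col[r] = column of the queen placed in row r */

/* A queen at (r, c) is safe if no queen in an earlier row attacks it. */
int safe(int r, int c)
{
    for (int i = 0; i < r; i++)
        if (col[i] == c || abs(col[i] - c) == r - i)
            return 0;   /* same column or same diagonal */
    return 1;
}

/* Place queens row by row; backtrack when no column of row r is safe. */
int place(int r)
{
    if (r == N) return 1;               /* all queens placed */
    for (int c = 0; c < N; c++)
        if (safe(r, c)) {
            col[r] = c;
            if (place(r + 1)) return 1;
            /* otherwise: undo is implicit, try the next column */
        }
    return 0;                           /* no safe column: backtrack */
}

int main(void)
{
    if (place(0))
        for (int r = 0; r < N; r++)
            printf("row %d -> column %d\n", r, col[r]);
    return 0;
}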
● To compute the spanning tree with minimum cost, the following considerations are
taken:
● Let G = (V, E) be an undirected connected graph with vertex set V and edge set E.
● A subgraph t = (V, E′) of G is a spanning tree of G iff t is a tree.
● A tree is defined to be an undirected, acyclic and connected graph (or more simply, a
graph in which there is only one path connecting each pair of vertices).
● Assume there is an undirected, connected graph G.
● A spanning tree is a subgraph of G, is a tree, and contains all the vertices of G.
● A minimum spanning tree is a spanning tree, but has weights or lengths associated
with the edges, and the total weight of the tree (the sum of the weights of its edges)
is at a minimum.
● Kruskal’s Algorithm
● Prim’s Algorithm
10. Determine Sum of subsets problem?
1 a)
2) NP-complete problems
A decision problem E is NP-complete if every problem in the class NP is polynomial-time reducible
to E. The Hamiltonian cycle problem, the decision versions of the TSP and the graph coloring
problem, as well as literally hundreds of other problems are known to be NP-complete.
NP-hard problems
3)
4)
Class P
P is the set of all decision problems solvable by deterministic algorithms in polynomial time.
Example: the Minimum Spanning Tree problem is in the class P.
The class NP
NP stands for Nondeterministic Polynomial. NP is the set of all decision problems solvable by
nondeterministic algorithms in polynomial time.
A problem that is NP-Complete has the property that it can be solved in polynomial time iff all
other NP-Complete problems can also be solved in polynomial time. If an NP-Hard problem can be
solved in polynomial time, then all NP-Complete problems can be solved in polynomial time.
All NP-Complete problems are NP-Hard, but not all NP-Hard problems are NP-Complete.
5) NP-complete problems: see the definition in question 2 above.
NP-hard problems
Optimization problems whose decision versions are NP-complete are called NP-hard.
6)