Solution First Internal

Exam No: ____________________

GANPAT UNIVERSITY
B. TECH SEM-VI (COMPUTER ENGINEERING)
FIRST INTERNAL EXAMINATION – FEBRUARY 2025
2CEIT606: Artificial Intelligence

TIME: 1 Hour TOTAL MARKS: 20


Instructions: 1) Figures to the right indicate full marks.
2) Be precise and to the point in your answer.
3) Open Book Exam.
4) The text just below the marks indicates the Course Outcome number (CO), followed by the
Bloom's taxonomy level of the question, i.e., R: Remembering, U: Understanding, A: Applying,
N: Analyzing, E: Evaluating, C: Creating.

Q.1 Consider the search space depicted in the figure below. S is the initial state. G1 and G2 are two states that satisfy the goal test. The cost of moving between states is indicated by the numerical values on the connecting edges. The estimated cost to the goal is reported inside the states. Use alphabetical order of nodes to break ties. Which goal state is reached if you perform A* (graph) search? Provide a detailed step-by-step explanation, including the Open List and Closed List at each iteration. [10] 2A

[Figure: search space for Q.1]
Answer: A* (graph) search reaches goal state G1. [Each correct iteration: 1 mark] [Final answer, goal state G1 reached: 1 mark]
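For illustration, the following is a minimal Python sketch of A* graph search with explicit Open and Closed lists, f = g + h ordering, and ties on f broken alphabetically by node name. Because the exam's figure is not reproduced here, the graph, edge costs, and heuristic values in the sketch are placeholders only; substituting the values from the figure yields the iteration-by-iteration trace expected in the answer.

# Minimal A* (graph) search sketch. The exam's actual graph is in the figure,
# so the adjacency list and heuristic table below are placeholders.
import heapq

graph = {                     # hypothetical adjacency list: node -> [(neighbour, edge cost)]
    'S': [('A', 2), ('B', 3)],
    'A': [('G1', 4)],
    'B': [('G2', 4)],
    'G1': [], 'G2': [],
}
h = {'S': 5, 'A': 3, 'B': 4, 'G1': 0, 'G2': 0}   # heuristic estimates (written inside the states)
goals = {'G1', 'G2'}

def a_star(start='S'):
    # Open list ordered by (f, node), so ties on f are broken alphabetically.
    open_list = [(h[start], start, 0, [start])]   # (f, node, g, path)
    closed = set()
    while open_list:
        f, node, g, path = heapq.heappop(open_list)
        if node in closed:
            continue
        print(f"expand {node}: f={f}, Open={sorted(n for _, n, _, _ in open_list)}, Closed={sorted(closed)}")
        if node in goals:
            return node, g, path
        closed.add(node)
        for nbr, cost in graph[node]:
            if nbr not in closed:
                heapq.heappush(open_list, (g + cost + h[nbr], nbr, g + cost, path + [nbr]))
    return None

print(a_star())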

Q.2 Consider the following sliding puzzle problem: [05] 4E
State Representation: 'W' (White tile), 'B' (Black tile), '_' (Empty space).
Moves:
1. Slide: A tile can move into an immediately adjacent empty cell.
2. Hop: A tile can jump over one adjacent tile into an empty cell immediately beyond.
Initial and Goal States:
Position       1   2   3   4   5
Initial State  _   B   W   W   B    (_BWWB)
Goal State     W   W   _   B   B    (WW_BB)
Heuristic Function:
h(n) = max(0, p1 − 1) + max(0, p2 − 2)
where n is the current state, and p1 and p2 are the 1-indexed positions (from left to right) of the two White (W) tiles in the current state, sorted so that p1 ≤ p2.
Using the Best First Search algorithm, determine whether this heuristic is admissible for the given puzzle. Provide a detailed justification for your answer.
Answer: The given heuristic function is admissible; Best First Search guided by it reaches the goal state. [Search tree with heuristic function values: 4 marks] [Heuristic function is admissible: 1 mark]
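To illustrate the expected search tree, here is a minimal Python sketch of greedy Best First Search on this puzzle, expanding the state with the lowest h(n) first. It assumes each move (slide or hop) costs 1, which the question does not state explicitly; the state strings and helper names are illustrative.

# Greedy Best First Search on the W/B tile puzzle, using the given heuristic
# h(n) = max(0, p1-1) + max(0, p2-2). States are 5-character strings over {W, B, _}.
from heapq import heappush, heappop

INITIAL, GOAL = "_BWWB", "WW_BB"

def h(state):
    p1, p2 = sorted(i + 1 for i, t in enumerate(state) if t == 'W')  # 1-indexed W positions
    return max(0, p1 - 1) + max(0, p2 - 2)

def successors(state):
    """Slide moves (tile adjacent to the blank) and hop moves (tile two cells away, jumping over one tile)."""
    blank = state.index('_')
    for src in (blank - 1, blank + 1, blank - 2, blank + 2):
        if 0 <= src < len(state):
            s = list(state)
            s[blank], s[src] = s[src], '_'
            yield ''.join(s)

def best_first(start=INITIAL, goal=GOAL):
    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        hn, state, path = heappop(frontier)        # expand the lowest-h node
        print(f"expand {state}  h={hn}")
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

print(" -> ".join(best_first()))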

Q.3 Consider the following two Boolean functions: [05] 3C
1. Function F₁:
F₁(x₁, x₂, x₃) = 1 if at least two of the inputs are 1, and 0 otherwise.
2. Function F₂:
F₂(x₁, x₂) = x₁ ∧ ¬x₂
For each of these functions, do the following:
Step 1: Construct Truth table of function.
Step 2: Try to implement the function using a McCulloch-Pitts neuron. If this model cannot
implement the function, explain why.
Step 3: If the McCulloch-Pitts neuron fails, try using a single perceptron. If it still fails,
explain the reasons.
Answer:
Function F₁: "At Least Two Inputs Are 1"
• MP Neuron Implementation [Marks: 2]: A single MP neuron works here.
• Threshold: 2
Explanation: The neuron fires (outputs 1) if the sum of its inputs is at least 2, exactly matching F₁.
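A short Python check of this MP neuron (unit weights, threshold 2) against the truth table of F₁; the function names below are illustrative, a minimal sketch rather than part of the answer key.

# McCulloch-Pitts neuron for F1: all inputs count equally and the neuron fires
# when the input sum reaches the threshold of 2, i.e. when at least two inputs are 1.
from itertools import product

def mp_neuron(inputs, threshold=2):
    return 1 if sum(inputs) >= threshold else 0

for x1, x2, x3 in product([0, 1], repeat=3):
    f1 = 1 if (x1 + x2 + x3) >= 2 else 0        # definition of F1
    print((x1, x2, x3), "F1 =", f1, "neuron =", mp_neuron((x1, x2, x3)))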
Function F₂: F₂(x₁, x₂) = x₁ ∧ ¬x₂
• MP Neuron Implementation [Marks: 1]: With uniform weights, the neuron computes g(x) = x₁ + x₂ and therefore cannot differentiate between (x₁, x₂) = (1, 0) (which should output 1) and (x₁, x₂) = (0, 1) (which should output 0), since both cases yield the same sum.
• Single Perceptron Implementation [Marks: 2]:
A single perceptron allows different weights for each input. By choosing a positive weight for x₁ and a negative weight for x₂ (for example, w₁ = 1 and w₂ = −1) along with an appropriate bias (w₀ = −0.5), the perceptron can correctly compute the function:
• For (1, 0): −0.5 + 1×1 + (−1)×0 = 0.5 (exceeds 0, output 1)
• For (0, 1): −0.5 + 1×0 + (−1)×1 = −1.5 (does not exceed 0, output 0)
• For (0, 0): −0.5, and for (1, 1): −0.5 + 1×1 + (−1)×1 = −0.5 (neither exceeds 0, output 0, as required)
Thus, while the MP neuron without adjustable weights fails to implement the function because it cannot treat the inputs differently, a single perceptron with adjustable weights implements it successfully.
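A short Python check of this perceptron against the truth table of F₂, using the weights quoted above (w₁ = 1, w₂ = −1, w₀ = −0.5) and the strict "exceeds 0" firing rule from the worked cases; a minimal sketch with illustrative function names.

# Single perceptron for F2(x1, x2) = x1 AND (NOT x2), with the weights used in the
# answer above: w1 = 1, w2 = -1, bias w0 = -0.5; output 1 when the weighted sum
# exceeds 0 (as in the two worked cases).
from itertools import product

def perceptron(x1, x2, w1=1.0, w2=-1.0, w0=-0.5):
    return 1 if (w0 + w1 * x1 + w2 * x2) > 0 else 0

for x1, x2 in product([0, 1], repeat=2):
    f2 = 1 if (x1 == 1 and x2 == 0) else 0      # truth-table value of x1 AND NOT x2
    print((x1, x2), "F2 =", f2, "perceptron =", perceptron(x1, x2))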

----------------------------------------------END OF PAPER--------------------------------------------

