
TOC DEC 19 SOLVED

Q.NO QUESTION
1A Find a DFA that accepts only the language of all strings with b as the second letter over the
alphabet Σ = {a, b}.
ANS: Let's denote the states as q0, q1, q2, and q3.
• q0: Initial state (no symbol has been read yet)
• q1: Exactly one symbol has been read (it may be 'a' or 'b')
• q2: Accepting state (the second symbol was 'b')
• q3: Dead (trap) state (the second symbol was 'a', so the string can never be accepted)
The transitions can be defined as follows:
• From q0:
• On 'a' or 'b', move to q1 (the first symbol may be anything)
• From q1:
• On 'b', move to q2 (the required second symbol has been found)
• On 'a', move to q3 (the second symbol is wrong)
• From q2:
• On any input, stay in q2 (the condition has already been met)
• From q3:
• On any input, stay in q3 (the string can no longer be accepted)
Now, the DFA transition table:

State | a  | b
→ q0  | q1 | q1
  q1  | q3 | q2
 *q2  | q2 | q2
  q3  | q3 | q3

Here, q0 is the initial state (marked →) and q2 is the only accepting state (marked *). This DFA
accepts exactly the strings whose second letter is 'b'; strings of length less than two never
reach q2 and are therefore rejected.
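As a quick check (my own addition, not part of the exam answer), here is a small Python sketch that simulates this DFA on a few sample strings; the transition table is written out as a dictionary.

# Simulate the 4-state DFA described above; q2 is the only accepting state.
DFA = {
    ("q0", "a"): "q1", ("q0", "b"): "q1",   # the first letter may be anything
    ("q1", "a"): "q3", ("q1", "b"): "q2",   # the second letter must be 'b'
    ("q2", "a"): "q2", ("q2", "b"): "q2",   # accepted, stay accepting
    ("q3", "a"): "q3", ("q3", "b"): "q3",   # dead state
}
ACCEPTING = {"q2"}

def accepts(s):
    state = "q0"
    for ch in s:
        state = DFA[(state, ch)]
    return state in ACCEPTING

print([w for w in ["ab", "abba", "ba", "aab", "b"] if accepts(w)])   # ['ab', 'abba']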
2A Explain Chomsky classification of grammars
ANS: Noam Chomsky, a renowned linguist and computer scientist, classified grammars into four types
known as Chomsky's hierarchy. These types are categorized based on their generative power or
expressive capability in generating languages.
Type 0: Unrestricted or Recursively Enumerable (Turing Machine)
• Description: These grammars generate languages that can be recognized by a Turing
machine. They have no restrictions on the rules.
• Formalism: There are no limitations on the production rules.
• Example: Every language that can be recognized by some Turing machine (that is, every
recursively enumerable language) is a Type 0 language.
Type 1: Context-Sensitive (Linear Bounded Automaton)
• Description: These grammars generate languages that can be recognized by linear
bounded automata, where productions must respect the context of the string being
generated.
• Formalism: Productions must be of the form αAβ → αγβ, where A is a non-terminal, α
and β are strings (possibly empty), and γ is a non-empty string.
• Example: Natural languages often fit into Type 1 grammars due to their context-
sensitive rules.
Type 2: Context-Free (Pushdown Automaton)
• Description: These grammars generate languages that can be recognized by pushdown
automata, where productions have a more straightforward structure than Type 1.
• Formalism: Productions must be of the form A → β, where A is a non-terminal and β is a
string (possibly empty) that can include terminals and non-terminals.
• Example: Many programming language constructs are described by context-free
grammars.
Type 3: Regular (Finite State Automaton)
• Description: These grammars generate languages that can be recognized by finite state
automata. They are the simplest and most restrictive type.
• Formalism: Productions must be of the form A → aB or A → a, where A and B are non-
terminals, and a is a terminal symbol.
• Example: Regular expressions and simple pattern matching often fall into this category.
Each type in Chomsky's hierarchy represents a different level of generative power. Moving from
Type 3 up to Type 0, the class of languages that can be generated grows strictly larger
(Type 3 ⊂ Type 2 ⊂ Type 1 ⊂ Type 0), but the grammars become harder to analyse and to parse.
This hierarchy is fundamental in understanding the capabilities and limitations of different
classes of formal languages.

2B Consider the grammar G = ({S, A}, {a, b}, P, S), where the set of productions P consists of:


S -> aAS | a
A -> SbA | SS | ba

FIND
a) Leftmost derivation
b) Rightmost derivation
c) Parse tree
for the string “ aabbaa ”.
ANS: Let's find the leftmost derivation, the rightmost derivation, and the parse tree for the
string "aabbaa" using the given grammar.
The grammar:
S -> aAS | a
A -> SbA | SS | ba

Leftmost Derivation:
At each step, replace the leftmost non-terminal.
S => aAS => aSbAS => aabAS => aabbaS => aabbaa
(productions used, in order: S -> aAS, A -> SbA, S -> a, A -> ba, S -> a)

Rightmost Derivation:
At each step, replace the rightmost non-terminal.
S => aAS => aAa => aSbAa => aSbbaa => aabbaa
(productions used, in order: S -> aAS, S -> a, A -> SbA, A -> ba, S -> a)

Parse Tree:
The parse tree has root S with children a, A, S. The A node has children S, b, A; its S child
derives a, and its A child has children b, a. The rightmost S child of the root derives a.
Reading the leaves from left to right gives a a b b a a, so the tree yields the string "aabbaa".
Each internal node of the tree corresponds to one step of the derivation, and the leaves are the
terminal symbols of the string.
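As a quick sanity check (my own addition, not part of the exam answer), the small Python sketch below replays the leftmost derivation by string replacement; the list of steps is simply the sequence of productions used.

# Replay the leftmost derivation of "aabbaa".
# At each step the first occurrence of the expanded non-terminal is replaced,
# which in this derivation is also the leftmost non-terminal.
steps = [("S", "aAS"), ("A", "SbA"), ("S", "a"), ("A", "ba"), ("S", "a")]
sentential = "S"
print(sentential)
for lhs, rhs in steps:
    sentential = sentential.replace(lhs, rhs, 1)
    print("=>", sentential)
assert sentential == "aabbaa"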
2C Find a reduced grammar equivalent to the grammar given below
S -> AB|CA
A -> a
B -> BC|AB
C -> aB|b
ANS: To reduce the grammar, first remove the non-generating symbols and then remove the
unreachable symbols.
Given Grammar:
• S→AB ∣ CA
• A→a
• B→BC ∣ AB
• C→aB ∣ b
Step 1: Eliminate Non-Generating Symbols
A symbol is generating if it can derive some string of terminals.
• A→a, so A is generating.
• C→b, so C is generating.
• S→CA, and both C and A are generating, so S is generating.
• B is not generating: both of its productions, B→BC and B→AB, contain B itself, so B can
never derive a terminal string.
Remove B together with every production in which B appears:
• S→CA
• A→a
• C→b
Step 2: Eliminate Non-Reachable Symbols
Starting from the start symbol S:
• S→CA, so C and A are reachable.
• A→a and C→b, so the terminals a and b are reachable.
All remaining symbols are reachable, so nothing further is removed.
Reduced Grammar:
• S→CA
• A→a
• C→b
This reduced grammar generates the same language as the original one; in fact the language is
the single string "ba" (S ⇒ CA ⇒ bA ⇒ ba).
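The same reduction can be carried out mechanically. Below is a short Python sketch (my own illustration, not part of the exam answer) that computes the generating symbols and then the symbols reachable from S for this grammar; the variable names such as grammar and g1 are arbitrary.

# Reduce a grammar by removing non-generating and then unreachable symbols.
grammar = {
    "S": ["AB", "CA"],
    "A": ["a"],
    "B": ["BC", "AB"],
    "C": ["aB", "b"],
}
terminals = {"a", "b"}

# Step 1: find all generating symbols (symbols that can derive a terminal string).
generating = set(terminals)
changed = True
while changed:
    changed = False
    for head, bodies in grammar.items():
        if head not in generating and any(all(sym in generating for sym in body) for body in bodies):
            generating.add(head)
            changed = True

# Keep only productions made entirely of generating symbols.
g1 = {head: [body for body in bodies if all(sym in generating for sym in body)]
      for head, bodies in grammar.items() if head in generating}

# Step 2: find all symbols reachable from the start symbol S.
reachable, frontier = {"S"}, ["S"]
while frontier:
    head = frontier.pop()
    for body in g1.get(head, []):
        for sym in body:
            if sym not in reachable:
                reachable.add(sym)
                if sym in g1:
                    frontier.append(sym)

reduced = {head: bodies for head, bodies in g1.items() if head in reachable and bodies}
print(reduced)   # {'S': ['CA'], 'A': ['a'], 'C': ['b']}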

Q3 A Convert the following CFG to CNF:


S -> aSa | bSb | a | b | aa | bb
ANS: To convert a context-free grammar (CFG) to Chomsky Normal Form (CNF), each production in
the grammar must adhere to specific rules:
1. Start by introducing new variables for terminals (if needed).
2. Eliminate ε-productions (productions that derive the empty string).
3. Remove unit productions (productions of the form A → B).
4. Convert every remaining production so that its right-hand side is either exactly two
variables or a single terminal.
Given Grammar:
• S→aSa ∣ bSb ∣ a ∣ b ∣ aa ∣ bb
Step 1: Introduce New Variables for Terminals
Introduce the new variables A→a and B→b so that terminals occurring in longer right-hand sides
can be replaced by variables.
Step 2: Eliminate ε-productions
There are no ε-productions in the given grammar.
Step 3: Remove Unit Productions
There are no unit productions in the given grammar.
Step 4: Convert Productions to CNF
In every right-hand side of length two or more, replace each terminal by its new variable, and
break right-hand sides of length three into two productions:
1. S→aSa becomes S→AX with X→SA
2. S→bSb becomes S→BY with Y→SB
3. S→aa becomes S→AA, and S→bb becomes S→BB
4. S→a and S→b already consist of a single terminal, so they remain unchanged.
The grammar in CNF is:
• S→AX ∣ BY ∣ AA ∣ BB ∣ a ∣ b
• X→SA
• Y→SB
• A→a
• B→b
Every production now has either exactly two variables or a single terminal on its right-hand
side, so the grammar is in Chomsky Normal Form.
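As a quick check (my own addition, not required by the question), the following Python sketch verifies that every production of the converted grammar has the CNF shape A→BC or A→a:

cnf = {
    "S": ["AX", "BY", "AA", "BB", "a", "b"],
    "X": ["SA"],
    "Y": ["SB"],
    "A": ["a"],
    "B": ["b"],
}
variables, terminals = set(cnf), {"a", "b"}

def in_cnf(body):
    # A body is valid if it is a single terminal or exactly two variables.
    return (len(body) == 1 and body in terminals) or \
           (len(body) == 2 and all(sym in variables for sym in body))

print(all(in_cnf(body) for bodies in cnf.values() for body in bodies))   # True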

Q3 C For the grammar given, find L(G).


S -> a | Sa | b | bS
ANS: The language generated by a grammar G, denoted L(G), is the set of all terminal strings
that can be derived from the start symbol. Let's determine L(G) for the given grammar.
Given Grammar:
• S → a ∣ Sa ∣ b ∣ bS
Analysis of the Grammar:
• The production S → Sa appends an 'a' on the right and can be applied any number of times.
• The production S → bS prepends a 'b' on the left and can be applied any number of times.
• A derivation terminates with S → a or S → b.
Generating Strings:
Some strings that can be generated:
1. a (from S → a)
2. b (from S → b)
3. aa (S ⇒ Sa ⇒ aa)
4. ba (S ⇒ bS ⇒ ba)
5. bb (S ⇒ bS ⇒ bb)
6. baa (S ⇒ Sa ⇒ bSa ⇒ baa)
Note that a string such as "ab" cannot be generated: every 'b' is introduced to the left of S and
every 'a' to the right of S, so in every derived string all b's come before all a's.
Conclusion:
L(G) = { b^m a^n ∣ m, n ≥ 0 and m + n ≥ 1 }, i.e. every non-empty string consisting of a
(possibly empty) block of b's followed by a (possibly empty) block of a's. As a regular
expression, L(G) = b*(a + b)a*.
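To double-check this description of L(G), here is a small brute-force Python sketch (my own addition, not part of the exam answer): it enumerates every terminal string of length at most 5 derivable from the grammar and compares the result with the non-empty strings matching b*a*.

import re
from itertools import product

RULES = ["a", "Sa", "b", "bS"]
LIMIT = 5

def derivable(limit):
    seen, strings, frontier = {"S"}, set(), ["S"]
    while frontier:
        form = frontier.pop()
        if "S" not in form:
            strings.add(form)
            continue
        i = form.index("S")                      # each sentential form contains at most one S
        for rhs in RULES:
            new = form[:i] + rhs + form[i + 1:]
            if len(new.replace("S", "")) <= limit and new not in seen:
                seen.add(new)
                frontier.append(new)
    return strings

expected = {"".join(t) for n in range(1, LIMIT + 1) for t in product("ab", repeat=n)
            if re.fullmatch(r"b*a*", "".join(t))}
print(derivable(LIMIT) == expected)              # True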

Q4 A Construct a PDA accepting L = { aⁿbᵐaⁿ ∣ m, n ≥ 1 } by a null stack. Is it deterministic?


ANS: To construct a PDA accepting L = { aⁿbᵐaⁿ ∣ m, n ≥ 1 } by the null (empty) stack, the idea
is to push one stack symbol for each leading 'a', read the block of b's without touching the
stack, and pop one stack symbol for each trailing 'a'. The stack becomes empty exactly when the
number of trailing a's equals the number of leading a's.
PDA M = ({q0, q1, q2}, {a, b}, {A, Z}, δ, q0, Z, ∅), where Z is the initial stack symbol.
Transitions:
1. δ(q0, a, Z) = {(q0, AZ)} (push an A for the first 'a')
2. δ(q0, a, A) = {(q0, AA)} (push an A for every further leading 'a')
3. δ(q0, b, A) = {(q1, A)} (on the first 'b' move to q1; this forces n ≥ 1 and m ≥ 1)
4. δ(q1, b, A) = {(q1, A)} (read the remaining b's without changing the stack)
5. δ(q1, a, A) = {(q2, ε)} (on the first trailing 'a', pop one A)
6. δ(q2, a, A) = {(q2, ε)} (pop one A for every further trailing 'a')
7. δ(q2, ε, Z) = {(q2, ε)} (when every A has been popped, erase Z; the stack is now empty and
the input is accepted)
Is it deterministic?
Yes. For every combination of state, input symbol and stack-top symbol there is at most one
move, and the only ε-move (rule 7) is defined for stack top Z, for which no input move is
defined in q2, so the ε-move never competes with an input move. Moreover, L has the prefix
property (no proper prefix of a string in L is itself in L), so acceptance by empty stack is
compatible with determinism. Hence this PDA is deterministic and accepts L = { aⁿbᵐaⁿ ∣ m, n ≥ 1 }
by a null stack.
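As an informal check (my own addition, not part of the exam answer), the Python sketch below mirrors the PDA with an explicit list used as the stack: it pushes a marker per leading 'a', skips the b-block, pops a marker per trailing 'a', and accepts when the input ends exactly as the stack empties.

def accepts(w):
    stack, i, n = [], 0, len(w)
    while i < n and w[i] == "a":            # phase 1: push an A for every leading a
        stack.append("A")
        i += 1
    if not stack or i == n or w[i] != "b":  # need n >= 1 and at least one b
        return False
    while i < n and w[i] == "b":            # phase 2: read the b-block (m >= 1)
        i += 1
    while i < n and w[i] == "a" and stack:  # phase 3: pop an A for every trailing a
        stack.pop()
        i += 1
    return i == n and not stack             # "empty stack" exactly at the end of the input

print([w for w in ["aba", "aabbaa", "aabaa", "ab", "abaa"] if accepts(w)])
# ['aba', 'aabbaa', 'aabaa']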

Q4 B Construct a PDA equivalent to the following context free grammar


S -> aAA
A -> aS|bS|a
ANS: To construct a PDA equivalent to the grammar S → aAA, A → aS ∣ bS ∣ a, we use the standard
construction in which the PDA simulates leftmost derivations of the grammar on its stack. The
PDA has a single state, starts with S on the stack, replaces a variable on top of the stack by
one of its right-hand sides using an ε-move, and pops a terminal from the stack when it matches
the current input symbol. Acceptance is by empty stack.
PDA M = ({q}, {a, b}, {S, A, a, b}, δ, q, S, ∅), with transitions:
• δ(q, ε, S) = {(q, aAA)} (expand S)
• δ(q, ε, A) = {(q, aS), (q, bS), (q, a)} (expand A, choosing one of its productions
non-deterministically)
• δ(q, a, a) = {(q, ε)} (match and consume the terminal a)
• δ(q, b, b) = {(q, ε)} (match and consume the terminal b)
This PDA operates as follows:
• It starts with the start symbol S on the stack.
• Whenever a variable (S or A) is on top of the stack, it is replaced by the right-hand side of
one of its productions without reading any input.
• Whenever a terminal is on top of the stack, it must match the next input symbol; the symbol
is read and the terminal is popped.
• A string is accepted when the entire input has been read and the stack is empty, which
happens exactly when the string has a derivation from S.
For example, on the input "aaa" the PDA performs: S is replaced by aAA, 'a' is matched, A is
replaced by a, 'a' is matched, A is replaced by a, 'a' is matched; the stack is now empty and
the input is exhausted, mirroring the derivation S ⇒ aAA ⇒ aaA ⇒ aaa.
This PDA accepts by empty stack exactly the language generated by the grammar S → aAA and
A → aS ∣ bS ∣ a. (Unlike the PDA of Q4 A, this one is inherently non-deterministic, since A has
three alternative productions.)
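The construction can also be checked by simulation. The Python sketch below (my own illustration, not part of the exam answer) explores the configurations of this single-state PDA by depth-first search, using the grammar symbols themselves as the stack alphabet and accepting by empty stack.

GRAMMAR = {"S": ["aAA"], "A": ["aS", "bS", "a"]}

def accepts(w):
    def run(i, stack):
        if not stack:
            return i == len(w)                   # empty stack: accept iff the input is used up
        top, rest = stack[0], stack[1:]
        if top in GRAMMAR:                       # epsilon-move: expand the variable on top
            return any(run(i, rhs + rest) for rhs in GRAMMAR[top])
        return i < len(w) and w[i] == top and run(i + 1, rest)   # match a terminal
    return run(0, "S")

print([w for w in ["aaa", "aaaaaa", "abaaaa", "aa", "aab"] if accepts(w)])
# ['aaa', 'aaaaaa', 'abaaaa']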

Q5 A Obtain a Turing machine to accept the language L = { 0ⁿ1ⁿ2ⁿ ∣ n ≥ 1 }.

ANS: To accept L = { 0ⁿ1ⁿ2ⁿ ∣ n ≥ 1 }, the Turing machine repeatedly matches one 0 with one 1
and one 2: it replaces the leftmost unmarked 0 by X, scans right to the leftmost unmarked 1 and
replaces it by Y, scans right to the leftmost unmarked 2 and replaces it by Z, and then returns
to the left. When no unmarked 0 remains, it checks that no unmarked 1 or 2 remains either.
Turing Machine Description:
States:
1. q0 (mark the next 0, or start the final check if all 0's are marked)
2. q1 (move right to the leftmost unmarked 1 and mark it)
3. q2 (move right to the leftmost unmarked 2 and mark it)
4. q3 (move back left to the X just written)
5. q4 (final check that only marked symbols remain)
6. qaccept, qreject
Transitions:
1. In q0: on 0, write X, move right, go to q1; on Y, move right, go to q4 (no unmarked 0 is
left); on any other symbol (including blank), reject.
2. In q1: on 0 or Y, move right; on 1, write Y, move right, go to q2; on any other symbol,
reject.
3. In q2: on 1 or Z, move right; on 2, write Z, move left, go to q3; on any other symbol,
reject.
4. In q3: on 0, 1, Y or Z, move left; on X, move right, go to q0.
5. In q4: on Y or Z, move right; on blank, go to qaccept; on any other symbol (an unmatched 0,
1 or 2), reject.
This Turing machine marks one 0, one 1 and one 2 in every round, always checking that the
symbols appear in the order 0's, then 1's, then 2's. If the input is of the form 0ⁿ1ⁿ2ⁿ with
n ≥ 1, every symbol is eventually marked and the machine reaches the accepting state qaccept.
Otherwise, one of the checks fails and it reaches the rejecting state qreject.
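The same marking strategy can be expressed compactly in Python (my own sketch, using a list as the tape rather than a formal transition table):

def accepts(w):
    tape = list(w)
    pos = 0                                       # first cell not yet marked X
    if not tape or tape[0] != "0":
        return False                              # n >= 1, so the string must start with 0
    while pos < len(tape) and tape[pos] == "0":
        tape[pos] = "X"                           # mark the leftmost unmarked 0
        j = pos + 1
        while j < len(tape) and tape[j] in "0Y":  # move right over 0's and Y's
            j += 1
        if j == len(tape) or tape[j] != "1":
            return False
        tape[j] = "Y"                             # mark the matching 1
        k = j + 1
        while k < len(tape) and tape[k] in "1Z":  # move right over 1's and Z's
            k += 1
        if k == len(tape) or tape[k] != "2":
            return False
        tape[k] = "Z"                             # mark the matching 2
        pos += 1                                  # return left and move just past the X's
    return all(c in "YZ" for c in tape[pos:])     # only marked 1's and 2's may remain

print([w for w in ["012", "001122", "000111222", "0120", "012012", "0112"] if accepts(w)])
# ['012', '001122', '000111222']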

Q5 B Design a TM to find the one's complement of a binary number.


ANS: To design a Turing machine (TM) that computes the one's complement of a binary number, we
can create a TM that traverses the input tape, replacing each '0' with '1' and each '1' with '0'.
Turing Machine Description:
States:
1. q0 (working state: scan right, flipping each bit)
2. qaccept (halting state)
Transitions:
1. In q0: on 0, write 1, move right, stay in q0 (flip 0 to 1)
2. In q0: on 1, write 0, move right, stay in q0 (flip 1 to 0)
3. In q0: on blank, go to qaccept (the whole number has been processed)
Operation:
• The head starts on the leftmost bit of the binary number in state q0.
• In q0 the machine flips the current bit (0 becomes 1, 1 becomes 0) and moves one cell to the
right.
• When it reads the blank cell after the last bit, it halts in qaccept.
When the machine halts, the tape holds the one's complement of the input: every 0 has become 1
and every 1 has become 0. For example, the input 10110 is rewritten as 01001. Reaching the
accepting state indicates the completion of the one's complement operation on the binary number.
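The same single-pass idea can be written as a small Python sketch (my own addition) that treats a list as the tape and uses the string "B" for the blank symbol:

def ones_complement(bits):
    tape = list(bits) + ["B"]            # "B" stands for the blank cell after the number
    head = 0
    while tape[head] != "B":             # state q0: flip the current bit and move right
        tape[head] = "1" if tape[head] == "0" else "0"
        head += 1
    return "".join(tape[:-1])            # the tape now holds the one's complement

print(ones_complement("10110"))          # 01001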
Q5 C Explain the following:
1)Turing Machine with stay-option
2) Multiple Tapes Turing Machine
ANS: 1) Turing Machine with Stay-Option:
A regular Turing machine allows movement of its read/write head in either left or right direction
on its tape, symbol by symbol. However, in a Turing machine with a stay-option, the read/write
head has the additional capability to stay in its current position without moving left or right.
How it Works:
In a Turing machine with a stay-option, when the machine reads a symbol, it has the choice to
stay in the same position, move left, or move right. This provides greater flexibility in designing
algorithms or computations, allowing the machine to choose to not move while performing
operations.
Application:
The stay-option in a Turing machine can be useful in certain scenarios where movement
restriction or specific operations are required. For example, it could be employed in algorithms
or tasks where certain symbols or conditions need to be preserved or accessed multiple times
without shifting the read/write head.
2) Multiple Tapes Turing Machine:
A Multiple Tapes Turing machine is a variation of a standard Turing machine equipped with
multiple tapes instead of a single tape. Each tape operates independently and has its own
read/write head.
Structure:
• Multiple Tapes: The Turing machine has more than one tape, each with its own set of
symbols.
• Read/Write Heads: Each tape has its own read/write head responsible for reading
symbols from and writing symbols to that specific tape.
• States and Transitions: The machine operates based on its current state, symbol read
from each tape, and transition function, allowing it to change states, write symbols, and
move the read/write heads independently on each tape.
Functionality:
• Parallel Processing: Multiple Tapes Turing machines can perform multiple tasks or
computations simultaneously, as each tape can be used for different purposes or parts
of a problem.
• Efficiency rather than extra power: A multi-tape Turing machine accepts exactly the same
class of languages as a single-tape Turing machine; any multi-tape machine can be simulated by
a single-tape machine with at most a quadratic slow-down. The benefit of multiple tapes is
therefore convenience and speed, not additional computational power.
• Complexity Analysis: Multiple Tapes Turing machines are used in complexity theory to
explore the complexity of problems and their solvability within computational models.
Application:
These machines find application in various theoretical computer science studies, particularly in
exploring computational complexity, language recognition, and algorithmic analysis. They're
employed as a tool to understand the computational limits of problems and models in computer
science. However, in practice, single-tape Turing machines are often used due to their simplicity
and equivalence in computational power.