Theory of Computation Long Type of Questions-1
Automata
1.Finite State Automata
2.Push Down Automata
3.Linear Bounded Automata
4.Turing Machine
Grammar
1.Regular Grammar
2.Context Free Grammar
3.Context Sensitive Grammar
Language
1.Regular Language
2.Context Free Language
3.Context Sensitive Language
4.Recursive Language
5.Recursively Enumerable Language
6.Beyond RE
Chomsky Normal Form
𝛅 | ϵ | a | b
q | ∅ | {p} | ∅
4. Design a DFA over the alphabet {0,1} that accepts all strings with an even number of 1's. (3)
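For illustration, here is a minimal Python sketch of such a DFA (two states tracking the parity of the 1's seen so far; the state names are just labels chosen here):
```
# A DFA over {0, 1} that accepts exactly the strings containing an even number of 1's.
# Two states: 'even' (start state, accepting) and 'odd'. Reading a 1 toggles the
# state; reading a 0 leaves it unchanged.
TRANSITIONS = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd', '0'): 'odd',   ('odd', '1'): 'even',
}

def accepts(s: str) -> bool:
    state = 'even'                       # start state
    for ch in s:
        state = TRANSITIONS[(state, ch)]
    return state == 'even'               # accept iff the number of 1's is even

# Quick check: '', '11' and '1010' are accepted; '1' and '10' are rejected.
assert accepts('') and accepts('11') and accepts('1010')
assert not accepts('1') and not accepts('10')
```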
5.
𝛅 | ϵ | a | b | c
*r | ∅ | ∅ | ∅ | ∅
i. Compute the ϵ-closure of each state. (3)
ϵ(p) = {p,q,r}
ϵ(q) = {q}
ϵ(r) = {r}
ii. Give the set of all strings of length 3 or less accepted by the automaton. (2)
Strings of length 3 or less accepted by the automaton:
a, c, aa, ab, ac, ca, cb, cc, ba, bb, bc, cca, ccb, ccc, aaa, aab, aac, aba, abb, abc, baa, bab,
bac, bba, bca, bcb, bcc, bbb, bbc, caa, cab, cac, cba, cbb, cbc.
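The ε-closures in part (i) can be computed by a simple reachability search over the ε-transitions alone. Here is a minimal Python sketch; the ε-arc table it uses (p has ε-moves to q and r, while q and r have none) is an assumption reconstructed from the closures given in part (i):
```
# ε-closure by graph search over ε-transitions only.
# Assumption: the ε-arcs are p --ε--> q and p --ε--> r, with no ε-moves out of
# q or r; this is what the closures in part (i) imply.
EPS = {'p': {'q', 'r'}, 'q': set(), 'r': set()}

def eps_closure(state):
    closure, stack = {state}, [state]
    while stack:
        for nxt in EPS[stack.pop()]:
            if nxt not in closure:
                closure.add(nxt)
                stack.append(nxt)
    return closure

print(eps_closure('p'))   # {'p', 'q', 'r'}
print(eps_closure('q'))   # {'q'}
print(eps_closure('r'))   # {'r'}
```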
7. Consider the regular expression (0+1)* 10(0+1)* + (0+1)* 11(0+1)*. Use the distributive
law to simplify the expression. (4)
Ans:
(0+1)* 10(0+1)* +(0+1)* 11(0+1)*
(0+1)* (10(0+1)* + 11(0+1)*)
(0+1)* (10+11)(0+1)*
(0+1)* 1(0+1)(0+1)*
This is the simplified form; note that (0+1)(0+1)* cannot be replaced by (0+1)*, because the original expression does not match the string 1.
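The identity can be sanity-checked by brute force over short strings. Here is a minimal Python sketch using the standard re module, with + written as | in Python regex syntax:
```
import re
from itertools import product

# The original expression and the simplified form, with '+' written as '|'.
ORIGINAL   = re.compile(r'(0|1)*10(0|1)*|(0|1)*11(0|1)*')
SIMPLIFIED = re.compile(r'(0|1)*1(0|1)(0|1)*')

# Compare the two expressions on every binary string of length <= 10.
for n in range(11):
    for bits in product('01', repeat=n):
        s = ''.join(bits)
        assert bool(ORIGINAL.fullmatch(s)) == bool(SIMPLIFIED.fullmatch(s)), s
print('The two expressions agree on all strings of length <= 10.')
```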
The pumping lemma is a theorem in formal language theory that can be used to prove that a
given language is not regular. The theorem states that for any regular language, there exists a
positive integer n such that for any string x in the language with length at least n, there exist
strings u, v, and w such that:
● x = uvw
● |uv| ≤ n
● |v| > 0
● for all i ≥ 0, uv^iw ∈ L
In other words, the pumping lemma states that for any regular language, it is possible to "pump"
a middle section of a string any number of times and still end up with a string that is in the
language.
The pumping lemma follows from the pigeonhole principle. Let L be a regular language and let M be a DFA for L with n states. If x is in L and |x| ≥ n, then while reading the first n symbols of x, M must visit some state twice. Write x = uvw, where v is the non-empty substring read between the two visits to that repeated state; then |uv| ≤ n and |v| > 0. Because reading v returns M to the same state, deleting v or repeating it any number of times does not change the state M ends in, so uv^iw is accepted for every i ≥ 0.
The pumping lemma is a useful tool for proving that a given language is not regular. It is often
used in conjunction with other techniques, such as the Myhill–Nerode theorem, to prove that a
given language is not regular.
Some languages are not regular because they cannot be pumped. For example, the language
of all strings of balanced parentheses is not regular: in a sufficiently long string of the form
(^n )^n, the pumped substring falls entirely within the opening parentheses, and repeating it
produces a string that is no longer balanced.
The pumping lemma is often stated with an explicit pumping length p: for any regular language L, there exists a constant p (the "pumping length") such that any string s in L with a length of at least p can be divided into three parts, s = xyz, satisfying the following conditions:
● |xy| ≤ p
● |y| > 0
● for all i ≥ 0, xy^iz ∈ L
Here is an example of how the pumping lemma can be used to prove that the language L =
{0^n1^n | n ≥ 0} (the language of all strings consisting of any number of 0's followed by an equal
number of 1's) is not regular:
Assume, for contradiction, that L is regular and let p be its pumping length.
1. Choose the string s = 0^p1^p, which is in L and has length at least p.
2. By the lemma, s can be written as s = xyz with |xy| ≤ p and |y| > 0.
3. Since the first p symbols of s are all 0's and |xy| ≤ p, the substring y consists only of 0's.
4. Pump the string by taking i = 2: the string xy^2z contains more 0's than 1's, so it is not in L.
5. This contradicts the pumping lemma.
Hence, L cannot be a regular language based on the pumping lemma.
This proof shows that the language L = {0^n1^n | n ≥ 0} is not regular by using the pumping
lemma. Similar arguments can be applied to prove that other languages are not regular as well.
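The argument can be illustrated concretely. Here is a minimal Python sketch (a demonstration for one small value of p, not a proof) that checks every decomposition of 0^p 1^p allowed by the lemma and shows that pumping with i = 2 always leaves the language:
```
# Demonstration for one small pumping length p: every decomposition s = xyz of
# 0^p 1^p with |xy| <= p and |y| > 0 leaves L when pumped with i = 2.
def in_L(s: str) -> bool:
    n = len(s) // 2
    return s == '0' * n + '1' * n

p = 5
s = '0' * p + '1' * p

for xy_len in range(1, p + 1):              # |xy| <= p
    for y_len in range(1, xy_len + 1):      # |y| > 0
        x, y, z = s[:xy_len - y_len], s[xy_len - y_len:xy_len], s[xy_len:]
        # y lies within the first p symbols, so it consists only of 0's;
        # pumping it up adds 0's without adding 1's.
        assert not in_L(x + y * 2 + z)
print('No allowed decomposition of 0^p 1^p survives pumping with i = 2.')
```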
Suggestions:
Here is a brief comparison of DFAs (deterministic finite automata) and NFAs
(nondeterministic finite automata), along with an example.
The main difference between DFAs and NFAs is that a DFA has exactly one next
state for each combination of current state and input symbol, while an NFA can
have zero, one, or several next states (and may also move on ε without
consuming input). Despite this, the two models recognize exactly the same class
of languages, the regular languages: NFAs are usually easier to design, while
DFAs are easier to implement directly.
Here is an example of a DFA and an NFA over the same alphabet {a, b}:
```
DFA:
Start state: q0
Input symbols: a, b
Transitions:
q0 -> q1 on a
q0 -> q2 on b
q1 -> q2 on a
q2 -> q2 on a, b
Accepting state: q2
NFA:
Start state: q0
States: q0, q1, q2
Input symbols: a, b
Transitions:
q0 -> q1, q2 on a
q0 -> q2 on b
q1 -> q2 on a
q2 -> q2 on a, b
Accepting state: q2
```
As you can see, the DFA and the NFA use the same states and alphabet, but the
NFA allows more than one next state for a given input symbol: from q0 on input
a it may move to either q1 or q2. This does not make NFAs more powerful,
however. Every NFA can be converted into an equivalent DFA by the subset
construction, so DFAs and NFAs recognize exactly the same languages.
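Here is a minimal Python sketch of the subset construction applied to the example NFA above (using the transition table assumed in that example):
```
from itertools import chain

# NFA from the example above (transition table as assumed there).
NFA = {
    ('q0', 'a'): {'q1', 'q2'}, ('q0', 'b'): {'q2'},
    ('q1', 'a'): {'q2'},
    ('q2', 'a'): {'q2'},       ('q2', 'b'): {'q2'},
}
START, ACCEPTING, ALPHABET = 'q0', {'q2'}, 'ab'

def subset_construction():
    """Build an equivalent DFA whose states are sets of NFA states."""
    start = frozenset({START})
    dfa, todo = {}, [start]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for ch in ALPHABET:
            T = frozenset(chain.from_iterable(NFA.get((q, ch), ()) for q in S))
            dfa[S][ch] = T
            todo.append(T)
    return dfa

for S, moves in subset_construction().items():
    mark = '*' if S & ACCEPTING else ' '     # '*' marks accepting DFA states
    print(mark, set(S), {ch: set(T) for ch, T in moves.items()})
```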
DFAs and NFAs are both important tools in computer science. DFAs are used
directly in lexical analyzers and other software tools, while NFAs commonly
appear as an intermediate representation when regular expressions are compiled
for pattern matching.
https://youtu.be/h4v7x0IMhtI
Neso Academy
4. Give an algorithm to minimize a DFA, and minimize the given DFA.
There are several algorithms for minimizing a DFA. One of the most common is
Hopcroft's algorithm, which starts from the two-block partition {accepting
states, non-accepting states} and repeatedly refines it until every block
contains only states that are indistinguishable from one another; each final
block then becomes one state of the minimized DFA.
```
Initial DFA (accepts strings over {0, 1} that end in 1):
q0 -> q1 on 0, q0 -> q2 on 1
q1 -> q1 on 0, q1 -> q2 on 1
q2 -> q1 on 0, q2 -> q2 on 1
q2 is accepting

Minimized DFA:
{q0, q1} -> {q0, q1} on 0, {q0, q1} -> {q2} on 1
{q2}     -> {q0, q1} on 0, {q2}     -> {q2} on 1
{q2} is accepting
```
The initial partition is {q2} (accepting) and {q0, q1} (non-accepting). On both
inputs, q0 and q1 move into the same blocks, so {q0, q1} is never split and its
two states are merged. The minimized DFA has only 2 states, compared to the
original 3, which gives a smaller transition table and faster recognition.
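Here is a minimal Python sketch of minimization by partition refinement (Moore's version of the idea; Hopcroft's algorithm refines the partition more efficiently), applied to the 3-state DFA above:
```
# DFA minimization by partition refinement: split blocks until no block contains
# two states whose successors land in different blocks.
DELTA = {                                   # the 3-state DFA from the example above
    ('q0', '0'): 'q1', ('q0', '1'): 'q2',
    ('q1', '0'): 'q1', ('q1', '1'): 'q2',
    ('q2', '0'): 'q1', ('q2', '1'): 'q2',
}
STATES, ALPHABET, ACCEPTING = {'q0', 'q1', 'q2'}, '01', {'q2'}

def minimize():
    partition = [ACCEPTING, STATES - ACCEPTING]   # initial split: accepting vs. not
    while True:
        block_of = {q: i for i, block in enumerate(partition) for q in block}
        refined = []
        for block in partition:
            groups = {}                           # group states by successor blocks
            for q in block:
                key = tuple(block_of[DELTA[(q, ch)]] for ch in ALPHABET)
                groups.setdefault(key, set()).add(q)
            refined.extend(groups.values())
        if len(refined) == len(partition):        # nothing was split: fixed point
            return refined
        partition = refined

print(minimize())   # [{'q2'}, {'q0', 'q1'}]: two states, matching the example
```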
https://youtu.be/kFJaUtkn9wo
```
State | Input | Next states
-------|--------|------------
q0 | a | {q1, q2}
q0 | b | q2
q1 | a | q2
q2 | a, b | q2
```
This table gives the transition function δ of the NFA: it maps a state and one
input symbol to a set of next states; for example, δ(q0, a) = {q1, q2}. The
extended transition function δ̂ generalizes this to whole strings: δ̂(q, ε) = {q},
and δ̂(q, wa) is the union of δ(p, a) over all states p in δ̂(q, w). For instance,
δ̂(q0, ab) = δ(q1, b) ∪ δ(q2, b) = {q2}.
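Here is a minimal Python sketch of this extended transition function, computed directly from the table above:
```
# Extended transition function for the NFA in the table above.
DELTA = {
    ('q0', 'a'): {'q1', 'q2'}, ('q0', 'b'): {'q2'},
    ('q1', 'a'): {'q2'},
    ('q2', 'a'): {'q2'},       ('q2', 'b'): {'q2'},
}

def extended_delta(state, w):
    """Set of states reachable from `state` after reading the whole string w."""
    current = {state}
    for ch in w:
        current = set().union(*(DELTA.get((q, ch), set()) for q in current))
    return current

print(extended_delta('q0', 'a'))    # {'q1', 'q2'}
print(extended_delta('q0', 'ab'))   # {'q2'}
```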
* A derivation begins with the start symbol of the grammar.
* Each subsequent step replaces one nonterminal in the current sentential form
using one of the grammar's productions.
* The derivation ends when the sentential form consists only of terminal symbols.
```
Grammar:
S → AB
A → a
B → b

Parse Tree:
      S
     / \
    A   B
    |   |
    a   b
```
Derivation:
```
S ⇒ AB ⇒ aB ⇒ ab
(apply S → AB, then A → a, then B → b)
```
The grammar above is in Chomsky Normal Form (CNF): every production is either of
the form A → BC, with exactly two nonterminals on the right-hand side, or A → a,
with a single terminal. CNF grammars are a useful tool for analyzing and proving
results in the theory of computation, and they are used in many areas of
computer science, including compilers, parser generators, and parsing algorithms.
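Here is a minimal Python sketch that checks whether a grammar's productions are in CNF; it assumes the simple encoding used here, where nonterminals are single uppercase letters and terminals are single lowercase letters:
```
# Check whether every production is in Chomsky Normal Form: A -> BC (exactly two
# nonterminals) or A -> a (exactly one terminal). Assumption: nonterminals are
# single uppercase letters and terminals are single lowercase letters.
GRAMMAR = {'S': ['AB'], 'A': ['a'], 'B': ['b']}   # the example grammar above

def is_cnf(grammar):
    for bodies in grammar.values():
        for body in bodies:
            two_nonterminals = len(body) == 2 and body.isupper()
            one_terminal = len(body) == 1 and body.islower()
            if not (two_nonterminals or one_terminal):
                return False
    return True

print(is_cnf(GRAMMAR))                     # True
print(is_cnf({'S': ['aB', 'bA', '']}))     # False: aB, bA and ε are not CNF forms
```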
In a DPDA, for each combination of current state, input symbol (or ε) and
top-of-stack symbol there is at most one possible move. This makes DPDAs easier
to implement than NPDAs, but it also limits their expressive power: they accept
only the deterministic context-free languages.
NPDAs, on the other hand, can have several possible moves for the same
combination of current state, input symbol and top-of-stack symbol. This makes
NPDAs strictly more powerful than DPDAs, but also more difficult to implement.
```
Start state: q0
Stack symbols: A, B
Transitions:
```
In this example, the DPDA starts in state q0. When it reads an 'a', it moves to
state q1 and pushes an 'A' onto the stack. When it reads a 'b', it moves to
state q1 and pushes a 'B' onto the stack. When it reaches the end of the input
string, it moves to state q2 and pops symbols off the stack. If the stack can be
emptied, the DPDA accepts the input string; otherwise it rejects it.
```
Start state: q0
Input symbols: a, b
Stack symbols: A, B
Transitions:
```
In this example, the NPDA starts in state q0. When it reads an 'a', it can move
to either state q1 or state q2 (a nondeterministic choice) and pushes an 'A'
onto the stack. When it reads a 'b', it can likewise move to either state and
pushes a 'B' onto the stack. When it reaches the end of the input string, it
moves to state q2 and pops symbols off the stack. The NPDA accepts the input
string if at least one sequence of choices empties the stack; otherwise it
rejects it.
An NPDA is more powerful than a DPDA because it can accept languages that no
DPDA can, such as the language of even-length palindromes {ww^R | w ∈ {a, b}*}:
recognizing it requires "guessing" the midpoint of the input, which only
nondeterminism allows. However, an NPDA is also more difficult to implement,
because a simulator has to keep track of all the possible configurations.
10. Is it true that a nondeterministic PDA is more powerful than a deterministic
PDA? Justify.
```
(current state, input symbol, top of stack) → (new state, new top of stack)
```
The current state is the state of the PDA before the transition. The input
symbol is the next symbol in the input string (or ε if no input is consumed).
The top of stack is the symbol at the top of the PDA's stack before the
transition. The new state is the state of the PDA after the transition. The new
top of stack is the string of stack symbols (possibly empty) that replaces the
old top-of-stack symbol.
```
CFG:
S → aB | bA | ε
A → a | b
B → a | b
```
GNF (every production has the form A → aα, where a is a terminal and α is a
string of nonterminals; S → ε is kept because ε is in the language and S does
not appear on any right-hand side):
```
S → aB | bA | ε
A → a | b
B → a | b
```
PDA (one state, accepting by empty stack, with the start symbol S initially on
the stack):
```
Initial state: q
Transitions:
(q, a, S) → (q, B)
(q, b, S) → (q, A)
(q, ε, S) → (q, ε)
(q, a, A) → (q, ε)
(q, b, A) → (q, ε)
(q, a, B) → (q, ε)
(q, b, B) → (q, ε)
```
The PDA accepts the input string if its stack is empty after the entire input
string has been read.
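Here is a minimal Python sketch that simulates this one-state PDA with acceptance by empty stack, exploring the nondeterministic choice between the ε-move on S and reading an input symbol:
```
# Simulate the one-state PDA above, accepting by empty stack.
# A configuration is (remaining input, stack); the stack top is the first character.
RULES = {          # (input symbol or '' for an ε-move, stack top) -> pushed string
    ('a', 'S'): 'B', ('b', 'S'): 'A', ('', 'S'): '',
    ('a', 'A'): '',  ('b', 'A'): '',
    ('a', 'B'): '',  ('b', 'B'): '',
}

def accepts(w: str) -> bool:
    todo, seen = [(w, 'S')], set()          # start with S on the stack
    while todo:
        rest, stack = todo.pop()
        if (rest, stack) in seen:
            continue
        seen.add((rest, stack))
        if not rest and not stack:          # all input read and stack empty: accept
            return True
        if not stack:
            continue
        top, below = stack[0], stack[1:]
        if ('', top) in RULES:              # ε-move
            todo.append((rest, RULES[('', top)] + below))
        if rest and (rest[0], top) in RULES:
            todo.append((rest[1:], RULES[(rest[0], top)] + below))
    return False

for s in ['', 'a', 'aa', 'ab', 'ba', 'bb', 'aab']:
    print(repr(s), accepts(s))
# Accepted: '', 'aa', 'ab', 'ba', 'bb' -- exactly the strings generated by the grammar.
```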