TOC Notes and Short Notes SEE

What is the Universal Turing Machine?
Turing was inspired by the idea of connecting multiple Turing machines. He asked himself whether a
universal machine could be constructed that could simulate other machines, and he named this machine
the Universal Turing Machine.

A Universal Turing Machine, in more specific terms, can imitate the behavior of an arbitrary Turing
machine over any collection of input symbols. Therefore, it is possible to create a single machine to
calculate any computable sequence.

The input of a UTM includes:

The description of a machine M on the tape

The input data

The UTM can then simulate M on the rest of the input tape's content. As a result, a Universal Turing
Machine can simulate any other machine.

Creating a general-purpose Turing machine (UTM) is a more difficult task. Once a Turing
machine's transitions are defined, the machine is restricted to performing a specific type of
computation.

We can create a universal Turing machine by modifying our basic Turing machine model.
For even simple behavior to be simulated, the modified Turing machine must have a huge
number of states. We modify our basic model by doing the following:

Increase the number of read/write heads

Increase the number of input tape dimensions

Increase the memory space

The UTM would include three pieces of data for the machine it is simulating:

A basic description of the machine

The contents of the machine tape

The internal state of the machine.

The Universal machine would simulate the machine by checking the tape input and the machine's
state.

It would drive the simulated machine by modifying its state in response to the input. This is like one
computer running another computer.

[Figure: Schematic diagram of a Universal Turing Machine]

Construction of UTM
Constructing a Universal Turing Machine (UTM) involves creating a theoretical model capable of
simulating any Turing machine. Here are the fundamental components and steps:

1. Understanding Turing Machines: Familiarize yourself with Turing machines and their
components, including states, transitions, tape, and the finite control.

2. Defining the UTM: Define the structure of your UTM. It consists of a finite control (state
transition function), an infinite tape (memory), and a tape head (read/write mechanism).

3. Encoding Turing Machines: Develop a scheme for encoding the configurations of other
Turing machines onto the UTM's tape. This encoding should include the states, tape contents,
and positions of the tape head.

4. Designing the Transition Function: Develop rules for the UTM's finite control to interpret and
execute the instructions encoded on the tape. These rules should allow the UTM to simulate
the behavior of any given Turing machine.

5. Implementing the Transition Logic: Write code or describe algorithms that enable the UTM
to interpret the encoded instructions, move the tape head, update the tape contents, and
transition between states according to the rules (a minimal sketch of this appears after this list).

6. Testing and Verification: Test the UTM by simulating various Turing machines and verifying
that it behaves correctly according to the rules of Turing machines. Ensure that the UTM can
simulate any Turing machine's behavior accurately.

7. Optimization and Refinement: Refine the design and implementation of the UTM to improve
efficiency, readability, and usability.

8. Documentation: Document the design, implementation details, and usage instructions of the
UTM for future reference and understanding.
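
To make steps 3-5 concrete, here is a minimal Python sketch (not part of any standard): a Turing machine is
encoded as plain data, and one generic interpreter plays the role of the UTM by running any machine encoded
this way. The dictionary keys and the function name simulate_encoded are illustrative choices.

def simulate_encoded(machine, tape_input, max_steps=10_000):
    """Run an encoded machine on tape_input; return (accepted, tape)."""
    tape = dict(enumerate(tape_input))            # sparse tape; unwritten cells are blank
    state, head, blank = machine["start"], 0, machine["blank"]
    for _ in range(max_steps):                    # guard against non-halting runs
        if state in machine["accept"]:
            return True, tape
        key = (state, tape.get(head, blank))
        if key not in machine["delta"]:           # no applicable rule: halt and reject
            return False, tape
        state, write, move = machine["delta"][key]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step budget exhausted (the machine may not halt)")

# Example encoded machine: flips every bit of a binary string, then accepts.
bit_flipper = {
    "start": "q0",
    "accept": {"qa"},
    "blank": "_",
    "delta": {
        ("q0", "0"): ("q0", "1", "R"),
        ("q0", "1"): ("q0", "0", "R"),
        ("q0", "_"): ("qa", "_", "R"),
    },
}

accepted, tape = simulate_encoded(bit_flipper, "1011")
print(accepted, "".join(tape[i] for i in sorted(tape) if tape[i] != "_"))  # True 0100
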
Multitape Turing Machine
A Multitape Turing Machine (MTM) is an extension of the classical Turing machine, which is a
fundamental model of computation. The classical Turing machine has a single tape and a single
read/write head, while a multitape Turing machine has multiple tapes, each with its own
independent read/write head. This allows for more complex operations and can make certain
types of computation more efficient.
Formal Definition
A multitape Turing machine can be formally defined as a 7-tuple M = (Q, X, ∑, B, δ, q0, F), where:
Q: A finite set of states.

X: The tape alphabet, which includes the symbols that can be written on the tapes.
∑: A finite set of input symbols (a subset of X).
B: A special blank symbol, part of the tape alphabet X (but not of ∑), used to denote empty cells on the tapes.
δ: The transition function, which can be defined as:
δ : Q × X^K → Q × (X × {Left_shift, Right_shift})^K

Here, K is the number of tapes, and the function determines the machine's behavior based
on the current state and the symbols under each tape head.
q0: The initial state from which the machine starts its operation.
F: A set of accepting (final) states where the machine halts if it reaches any of these states.
Working of a Multitape Turing Machine
The operation of a multitape Turing machine involves reading symbols from multiple tapes
simultaneously, performing computations based on the transition function, and then writing
symbols back onto the tapes. Each tape has its own head that can move independently, allowing
for parallel processing of information.
1. Initialization: The input string is provided on the first tape, while all other tapes are initially
blank. The heads of all tapes start at the leftmost position.
2. Reading and Transition: At each step, the machine reads the symbols under the heads of all
tapes. Based on these symbols and the current state, the transition function δ dictates the
next state, the symbols to be written on the tapes, and the movement of the heads (left or
right).
3. Computation: The machine continues to process the input until it reaches one of the
accepting states, at which point it halts. The result of the computation can be found on one
or more of the tapes.
Example: Binary Addition with a 3-Tape Turing Machine
Consider a 3-tape Turing machine designed to perform binary addition. The input is given in the
form w1#w2, where w1 and w2 are binary numbers, and the result of the addition is printed on
the third tape.

Steps:
1. Copy w2 to Tape 2: The machine first copies the second number w2 from Tape 1 to Tape 2.

2. Erase w1: The machine then erases w1 from Tape 1 to prepare for further operations.
3. Initialize Carry Bit: The carry bit c is initialized to 0 in the finite control.
4. Position Heads: The heads on all tapes are positioned at the beginning.
5. Binary Addition:
If both current cells s1 and s2 on Tape 1 and Tape 2 contain binary digits, the machine
performs a full-adder operation using s1, s2, and the carry bit c.

The sum bit is written on Tape 3, and the carry bit is updated.

The heads move one position to the right.


6. Handling Blank Symbols: If one of the cells is blank, it is treated as 0, and the addition
continues.
7. Final Carry Check: If both cells are blank, the machine checks the carry bit. If it’s 1, it writes
1 on Tape 3. The machine then halts.
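
The procedure above can be mirrored directly in software. The following Python sketch keeps the three tapes
as lists and walks all heads rightward, applying a full adder at each step. Following the step-by-step
description, the bits are assumed to be stored least-significant digit first; the function and variable names
are illustrative.

BLANK = "_"

def three_tape_add(w1: str, w2: str) -> str:
    """Simulate the 3-tape addition; return the contents of tape 3."""
    tape1 = list(w1)                  # tape 1 holds w1
    tape2 = list(w2)                  # step 1: w2 copied onto tape 2
    tape3 = []                        # tape 3 collects the result
    carry = 0                         # step 3: carry bit kept in the finite control
    head = 0                          # step 4: all heads start at the left end
    while True:
        s1 = tape1[head] if head < len(tape1) else BLANK
        s2 = tape2[head] if head < len(tape2) else BLANK
        if s1 == BLANK and s2 == BLANK:        # step 7: final carry check
            if carry:
                tape3.append("1")
            break
        b1 = int(s1) if s1 != BLANK else 0     # step 6: a blank cell is treated as 0
        b2 = int(s2) if s2 != BLANK else 0
        total = b1 + b2 + carry                # step 5: full-adder operation
        tape3.append(str(total % 2))           # sum bit goes to tape 3
        carry = total // 2                     # carry bit is updated
        head += 1                              # heads move one position to the right
    return "".join(tape3)

# 101 (least-significant first) is 5, 11 is 3; their sum 8 is 0001 in the same form.
print(three_tape_add("101", "11"))   # 0001
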
Expressive Power and Simulation
The expressive power of a multitape Turing machine is equivalent to that of a single-tape Turing
machine, meaning that anything computable by a multitape machine can also be computed by a
single-tape machine. However, the multitape machine can often do so more efficiently.
Specifically, a single-tape Turing machine can simulate a multitape machine, albeit with
potentially greater time complexity.
Regular Languages and Turing Machines
Regular Languages and Turing Machines are fundamental concepts in the field of theoretical
computer science, particularly in automata theory and formal language theory. Understanding
their relationship and differences is crucial for grasping the hierarchical nature of computational
models.
Regular Languages
Regular languages are the simplest class of languages recognized by finite automata, which are
abstract machines with a finite number of states. A regular language can be described by regular
expressions, finite automata (deterministic or non-deterministic), or regular grammars.
1. Definition:
A language L over an alphabet ∑ is called a regular language if there exists a deterministic
finite automaton (DFA), non-deterministic finite automaton (NFA), or a regular expression
that recognizes L.
2. Finite Automata:
Deterministic Finite Automaton (DFA): A DFA is a 5-tuple M = (Q, ∑, δ, q0, F), where
Q is a finite set of states, ∑ is the input alphabet, δ : Q × ∑ → Q is the transition
function, q0 is the initial state, and F is the set of accepting states.
Non-deterministic Finite Automaton (NFA): An NFA is similar to a DFA, but the
transition function δ can move to multiple states simultaneously or even transition
without consuming any input (via epsilon transitions).
3. Regular Expressions:
A regular language can also be described using regular expressions, which are algebraic
expressions involving union, concatenation, and the Kleene star (denoting zero or more
repetitions).
4. Properties:
Closure Properties: Regular languages are closed under operations such as union,
intersection, complementation, concatenation, and the Kleene star.
Decidability: Membership, emptiness, and equivalence of regular languages are
decidable problems.
Pumping Lemma: The pumping lemma provides a necessary condition for a language to
be regular, useful for proving that certain languages are not regular.
5. Examples:
The set of strings over {0, 1} containing an even number of 0s.
The set of strings over {a, b} that do not contain the substring "bb".
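
The 5-tuple definition above can be instantiated directly for the first example language: strings over {0, 1}
containing an even number of 0s. The Python sketch below is illustrative; the state names q_even and q_odd
are arbitrary.

def make_even_zeros_dfa():
    """DFA 5-tuple (Q, Sigma, delta, q0, F) for 'even number of 0s'."""
    Q = {"q_even", "q_odd"}
    Sigma = {"0", "1"}
    delta = {
        ("q_even", "0"): "q_odd",    # reading a 0 flips the parity
        ("q_even", "1"): "q_even",   # reading a 1 leaves it unchanged
        ("q_odd", "0"): "q_even",
        ("q_odd", "1"): "q_odd",
    }
    q0 = "q_even"                    # zero 0s seen so far counts as even
    F = {"q_even"}
    return Q, Sigma, delta, q0, F

def dfa_accepts(dfa, w: str) -> bool:
    _, Sigma, delta, state, F = dfa
    for ch in w:
        if ch not in Sigma:          # symbols outside the alphabet are rejected
            return False
        state = delta[(state, ch)]
    return state in F

dfa = make_even_zeros_dfa()
print(dfa_accepts(dfa, "1001"))   # True: two 0s
print(dfa_accepts(dfa, "0111"))   # False: one 0
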
Turing Machines
Turing machines (TM) represent a much more powerful class of computational models compared
to finite automata. They are capable of recognizing a broader class of languages, known as
recursively enumerable languages, and can simulate any computation that can be performed by
a digital computer.
1. Definition:
A Turing machine is a 7-tuple M = (Q, ∑, Γ, δ, q0, B, F), where:
Q: Finite set of states.
∑: Input alphabet (excluding the blank symbol).
Γ: Tape alphabet, containing ∑ and the blank symbol B.
δ: Transition function, δ : Q × Γ → Q × Γ × {Left, Right}.
q0: Initial state.
B: Blank symbol, representing the empty cells on the tape.
F: Set of accepting states.
2. Operation:
The Turing machine reads and writes symbols on an infinite tape and can move its head left
or right. It transitions between states according to the transition function δ, based on the
current state and the symbol under the tape head.
3. Types of Languages:
Recursively Enumerable Languages: These are languages that a Turing machine can
recognize. If a string belongs to the language, the Turing machine will eventually halt
and accept it.
Recursive Languages: These are a subset of recursively enumerable languages where
the Turing machine will halt on every input, either accepting or rejecting it.
4. Computational Power:
Turing machines are more powerful than finite automata and can recognize any language
that a finite automaton or pushdown automaton can recognize. Moreover, Turing machines
can also recognize languages that are not regular or context-free, such as the language
{a^n b^n c^n | n ≥ 1}, which cannot be recognized by a finite automaton or a pushdown
automaton (a sketch of such a machine appears after this list).
5. Relation to Regular Languages:
Regular languages, being the simplest class, are a subset of the languages recognizable by
Turing machines. A Turing machine can simulate a finite automaton, thus recognizing any
regular language. However, the converse is not true; finite automata cannot recognize non-
regular languages.
6. Decidability:
Turing machines can solve more complex problems than finite automata, but they also
encounter undecidable problems. For example, the halting problem (determining whether a
Turing machine will halt on a given input) is undecidable.
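
To see how a transition function of the form δ : Q × Γ → Q × Γ × {Left, Right} recognizes
{a^n b^n c^n | n ≥ 1}, here is an illustrative Python sketch of one such machine: each pass marks one a (as X),
one b (as Y) and one c (as Z), and when no a remains the machine checks that only markers are left. The state
names and marker symbols are arbitrary choices of this sketch.

delta = {
    ("q0", "a"): ("q1", "X", "R"),   # mark an a, go look for a b
    ("q0", "Y"): ("q4", "Y", "R"),   # no a left: verify the rest
    ("q1", "a"): ("q1", "a", "R"),
    ("q1", "Y"): ("q1", "Y", "R"),
    ("q1", "b"): ("q2", "Y", "R"),   # mark a b, go look for a c
    ("q2", "b"): ("q2", "b", "R"),
    ("q2", "Z"): ("q2", "Z", "R"),
    ("q2", "c"): ("q3", "Z", "L"),   # mark a c, walk back left
    ("q3", "a"): ("q3", "a", "L"),
    ("q3", "b"): ("q3", "b", "L"),
    ("q3", "Y"): ("q3", "Y", "L"),
    ("q3", "Z"): ("q3", "Z", "L"),
    ("q3", "X"): ("q0", "X", "R"),   # back at the marked boundary: start the next pass
    ("q4", "Y"): ("q4", "Y", "R"),
    ("q4", "Z"): ("q4", "Z", "R"),
    ("q4", "_"): ("qa", "_", "R"),   # only markers remained: accept
}

def accepts(w: str) -> bool:
    """Run the machine on w; a missing transition means reject."""
    tape = dict(enumerate(w))
    state, head = "q0", 0
    while state != "qa":
        rule = delta.get((state, tape.get(head, "_")))
        if rule is None:
            return False
        state, write, move = rule
        tape[head] = write
        head += 1 if move == "R" else -1
    return True

print(accepts("abc"), accepts("aabbcc"), accepts("aabbc"))  # True True False
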
Non-Deterministic Turing Machines (NDTM)
A Non-Deterministic Turing Machine (NDTM) is a theoretical model of computation that
extends the concept of a classical (deterministic) Turing machine by allowing multiple possible
transitions from a given state. This feature introduces non-determinism into the computational
process, meaning that the machine can explore many possible computation paths
simultaneously, effectively branching out into several different states at each step.
Formal Definition
A Non-Deterministic Turing Machine can be formally defined as a 7-tuple M =
(Q, Σ, Γ, δ, q0, B, F), where:
Q: Finite set of states.
Σ: Input alphabet, which does not include the blank symbol.
Γ: Tape alphabet, containing Σ and the blank symbol B.
δ: Transition function, defined as:
δ : Q × Γ → 2^(Q × Γ × {L, R})
Here, the function returns a set of possible state transitions rather than a single one,
allowing for multiple possible outcomes for any given input configuration.
q0: Initial state where the computation starts.
B: The blank symbol, which indicates empty cells on the tape.
F: Set of accepting (final) states, where the machine halts and accepts the input.
Working of a Non-Deterministic Turing Machine
In a Non-Deterministic Turing Machine, the machine can make several different choices at each
step, based on the current state and the symbol under the tape head. Unlike a deterministic
Turing machine, where each configuration has exactly one possible transition, a non-
deterministic Turing machine may have multiple possible transitions. The machine "branches"
into several different computation paths simultaneously.
1. Multiple Transitions: For any state q and tape symbol X , the transition function δ may yield
multiple possible next states, symbols to write, and head movements. This non-deterministic
behavior means the machine can explore multiple paths in parallel.
2. Acceptance Condition: A string is accepted by a Non-Deterministic Turing Machine if there
exists at least one sequence of transitions that leads to an accepting state in F . If any of the
possible computational paths reaches an accepting state, the input is accepted. Otherwise,
the input is rejected.
3. Parallel Exploration: Conceptually, the NDTM explores all possible computational paths in
parallel. If any path leads to an accepting state, the machine halts and accepts the input. If
all paths fail, the machine rejects the input.
4. Efficiency: While the NDTM itself operates with non-determinism, its significance lies in
theoretical studies. Non-deterministic machines can often solve certain problems more
efficiently than their deterministic counterparts. For example, they can "guess" the correct
path leading to a solution and verify it in polynomial time, a concept central to the study of
complexity classes such as NP (nondeterministic polynomial time).
Relation to Deterministic Turing Machines
Every Non-Deterministic Turing Machine has an equivalent Deterministic Turing Machine that
recognizes the same language. However, simulating a Non-Deterministic Turing Machine with a
Deterministic Turing Machine may involve a significant increase in time complexity.
1. Equivalence: While NDTMs and DTMs have equivalent expressive power (i.e., they can
recognize the same languages), NDTMs can often recognize some languages more
efficiently in terms of time complexity.
2. Simulation by DTMs: A DTM can simulate an NDTM by systematically exploring all possible
computation paths of the NDTM. This is typically done using a breadth-first search, which
explores all branches level by level. However, this simulation may lead to an exponential
increase in time complexity, as the DTM must track and explore every possible
computational path (a schematic sketch of this search appears after this list).
3. Complexity Classes: The concept of non-determinism is central to the definition of
complexity classes such as NP (nondeterministic polynomial time). Problems in NP can be
solved in polynomial time by an NDTM, but it is not known whether they can be solved in
polynomial time by a DTM (this is the famous P vs NP problem).
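
Point 2 above (simulation by breadth-first search) can be sketched generically: the machine-specific details
are hidden behind an initial configuration, a successors function returning every configuration reachable in
one nondeterministic step, and an accepting predicate. All three names, and the step bound, are assumptions of
this illustrative sketch rather than part of any standard definition.

from collections import deque

def ndtm_accepts(initial, successors, accepting, max_steps=100_000):
    """Return True if some computation path reaches an accepting configuration."""
    frontier = deque([initial])
    seen = {initial}
    steps = 0
    while frontier and steps < max_steps:
        config = frontier.popleft()          # explore the computation tree level by level
        if accepting(config):
            return True                      # one accepting branch is enough
        for nxt in successors(config):       # every nondeterministic choice is queued
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
        steps += 1
    return False                             # no explored branch accepted

# Toy use: from 0, nondeterministically add 1 or 3 at each step; can we hit exactly 7?
print(ndtm_accepts(0,
                   lambda n: {n + 1, n + 3} if n < 7 else set(),
                   lambda n: n == 7))        # True
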
Example: Non-Deterministic Turing Machine for Palindromes
Consider a Non-Deterministic Turing Machine designed to recognize palindromes (strings that
read the same forwards and backwards).
Steps:
1. Guess the Middle: The NDTM non-deterministically guesses the midpoint of the string.
2. Check Symmetry: It then checks that the characters on the left side of the midpoint are
identical to the characters on the right side, moving outward from the center.
3. Acceptance: If the guessed midpoint leads to a match for all corresponding characters, the
machine reaches an accepting state. Otherwise, it backtracks and tries a different midpoint.
This example highlights the non-deterministic nature of the machine: it "guesses" a solution and
verifies it.
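
A deterministic program can imitate this guess-and-verify strategy by simply trying every possible midpoint,
which is the backtracking a deterministic machine would perform when exhausting the NDTM's choices. The
Python sketch below is illustrative.

def is_palindrome_by_guessing(w: str) -> bool:
    n = len(w)
    for mid in range(n + 1):                           # step 1: "guess" the middle
        if w[:mid][::-1] == w[mid:]:                   # even split: the two halves mirror each other
            return True                                # step 3: some guess verified, accept
        if mid < n and w[:mid][::-1] == w[mid + 1:]:   # odd split: skip the middle character
            return True
    return False                                       # no guess leads to acceptance

print(is_palindrome_by_guessing("abba"))    # True
print(is_palindrome_by_guessing("abcba"))   # True
print(is_palindrome_by_guessing("abca"))    # False
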
Restricted Turing Machines
Restricted Turing Machines are variations of the standard Turing machine model where certain
limitations or constraints are imposed on their operations. These restrictions can be on the
machine’s resources, such as the tape length, the type of head movements, or the transition
rules. Despite these limitations, restricted Turing machines help in understanding the capabilities
and limits of computation under different constraints.
Types of Restricted Turing Machines
1. Linear Bounded Automaton (LBA):
An LBA is a type of Turing machine where the tape is restricted to a length that is
linearly proportional to the length of the input. In other words, the machine’s tape can
only use a bounded region, which prevents it from expanding indefinitely.
Significance: LBAs are used to recognize context-sensitive languages, which are more
complex than context-free languages but still decidable.
2. Two-Way Finite Automaton:
This is a Turing machine that operates with a finite tape but is allowed to move its head
both left and right on the input tape. The machine cannot write on the tape; it can only
read the input.
Significance: It is similar to a deterministic finite automaton (DFA) but with the ability to
move the head in both directions. This does not add recognition power (two-way finite
automata still recognize exactly the regular languages), but it can allow much more compact
machines.
3. Turing Machines with Semi-Infinite Tapes:
In this model, the tape is infinite in one direction (to the right) but has a fixed left end.
The machine can move freely to the right but cannot go beyond the leftmost cell.
Significance: Semi-infinite tapes are used in theoretical studies to understand the
impact of tape length on computational power.
4. Counter Machines:
A counter machine is a Turing machine with a restricted memory model, typically using a
finite number of counters instead of a tape. Each counter can hold a non-negative
integer value, and operations are limited to incrementing, decrementing, or checking if a
counter is zero.
Significance: Counter machines are equivalent to a specific type of Turing machine and
are useful for studying problems that involve counting or simple arithmetic operations (a
minimal interpreter sketch appears after this list).
5. Turing Machines with Limited Head Movements:
In some models, the Turing machine’s head is restricted in its movement. For example, it
might only be allowed to move in one direction (one-way Turing machine) or be
restricted in how often it can move left or right.
Significance: These restrictions help explore the minimal requirements for a machine to
still be computationally complete.
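
To make point 4 concrete, here is a tiny counter-machine interpreter sketch in Python. The instruction set
(INC, a combined decrement-or-jump-if-zero, HALT) and the program encoding are illustrative simplifications;
counters only ever hold non-negative integers.

def run_counter_machine(program, counters):
    """Execute a list of instructions; return the final counter values."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "INC":                 # ("INC", c): increment counter c
            counters[op[1]] += 1
            pc += 1
        elif op[0] == "JZDEC":             # ("JZDEC", c, target): jump to target if c is zero,
            if counters[op[1]] == 0:       #   otherwise decrement c and fall through
                pc = op[2]
            else:
                counters[op[1]] -= 1
                pc += 1
        elif op[0] == "HALT":
            break
    return counters

# Add counter B into counter A: move one unit per loop iteration until B is empty.
add_program = [
    ("JZDEC", "B", 3),    # 0: if B == 0 jump to HALT, else B -= 1
    ("INC", "A"),         # 1: A += 1
    ("JZDEC", "Z", 0),    # 2: Z stays 0, so this is an unconditional jump back to 0
    ("HALT",),            # 3
]
print(run_counter_machine(add_program, {"A": 2, "B": 3, "Z": 0}))  # {'A': 5, 'B': 0, 'Z': 0}
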
Importance and Applications
Understanding Computational Complexity: Restricted Turing machines are often studied to
understand how limitations on resources or operations affect the complexity and feasibility
of solving problems. For instance, LBAs help explore the boundaries of context-sensitive
languages and their place in the Chomsky hierarchy.
Simplification of Models: By restricting certain aspects of a Turing machine, researchers
can create simpler models that are easier to analyze, helping to gain insights into the nature
of computation.
Theoretical Insights: Studying restricted Turing machines provides valuable insights into
what is essential for computation and what can be simplified or omitted without losing
computational power.
Linear Bounded Automata (LBA)
Linear Bounded Automata (LBA) is a type of Turing machine with a key restriction: its tape is
limited to a length that is linearly proportional to the size of the input. Unlike a standard Turing
machine, which has an infinite tape, an LBA's tape length is bounded by the length of the input
string, making it a more constrained computational model.
Characteristics of LBA
1. Tape Length Restriction:
The tape of an LBA is not infinite; it can only use a portion of the tape that is directly
proportional to the input size. If the input string has n symbols, the tape can use at most
cn cells, where c is a constant (a sketch of this restriction appears after this list).
2. Memory Efficiency:
LBAs are more memory-efficient compared to standard Turing machines because they
do not require an infinite tape. They operate within the limited space of the tape but still
maintain the ability to perform computations similar to a Turing machine.
3. Language Recognition:
LBAs are used to recognize context-sensitive languages. These languages are more
complex than context-free languages (which are recognized by pushdown automata)
but are still decidable. Every language that an LBA can recognize is a context-sensitive
language, and every context-sensitive language can be recognized by some LBA.
4. Deterministic and Non-Deterministic:
LBAs can be deterministic or non-deterministic. In both cases, they recognize the same
class of languages, which are the context-sensitive languages.
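
The tape-length restriction in point 1 can be made concrete with a run loop that refuses to let the head
leave the c·n allotted cells. This is an illustrative simplification of the usual end-marker convention; the
function name, parameters and the "reject on leaving the region" rule are assumptions of the sketch.

def run_lba(delta, start_state, accept_states, w, c=1, max_steps=100_000):
    """Run a machine but confine its head to at most c * len(w) tape cells."""
    n = max(1, len(w))
    bound = c * n                                  # usable cells: positions 0 .. bound-1
    tape = list(w) + ["_"] * (bound - len(w))
    state, head = start_state, 0
    for _ in range(max_steps):
        if state in accept_states:
            return True
        if not 0 <= head < bound:                  # the head tried to leave the region
            return False
        rule = delta.get((state, tape[head]))
        if rule is None:                           # no applicable rule: reject
            return False
        state, tape[head], move = rule
        head += 1 if move == "R" else -1
    return False

# A machine that only walks right eventually hits the boundary and is rejected.
walker = {("q0", "a"): ("q0", "a", "R"), ("q0", "b"): ("q0", "b", "R")}
print(run_lba(walker, "q0", {"qa"}, "abab"))       # False: the head reaches the bound
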
Importance of LBAs
Context-Sensitive Languages: LBAs are crucial for understanding and working with
context-sensitive languages, which are important in fields like compiler design, where the
structure of programming languages often requires context-sensitive rules.
Theoretical Significance: LBAs help bridge the gap between finite automata, which
recognize regular languages, and full Turing machines, which recognize recursively
enumerable languages. They are positioned between these models in terms of
computational power and complexity.
Post Correspondence Problem (PCP)
The Post Correspondence Problem (PCP) is a classic decision problem in theoretical computer
science and formal language theory, introduced by mathematician Emil Post in 1946. It is an
undecidable problem, meaning that there is no algorithm that can solve all instances of this
problem.
Definition of PCP
The Post Correspondence Problem involves two lists of strings, A = [A1, A2, …, An] and B =
[B1, B2, …, Bn], where each Ai and Bi is a string over some alphabet. The task is to determine
whether there exists a sequence of indices i1, i2, …, ik (with k ≥ 1) such that the concatenated
string formed by the corresponding A-strings is identical to the concatenated string formed by
the corresponding B-strings. Formally, the problem asks if there exists a sequence such that:

Ai1 Ai2 … Aik = Bi1 Bi2 … Bik

Example of PCP
Consider the following example:
List A: A1 = "aa", A2 = "bb", A3 = "abb"
List B: B1 = "aab", B2 = "ba", B3 = "b"
We want to find a sequence of indices where the concatenated strings from A and B are equal.
One possible solution is the sequence (1, 2, 1, 3):
A1 A2 A1 A3 = "aa" + "bb" + "aa" + "abb" = "aabbaaabb"
B1 B2 B1 B3 = "aab" + "ba" + "aab" + "b" = "aabbaaabb"
Since the concatenated strings are equal, this sequence solves the problem.
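
A brute-force search makes the definition concrete: try every index sequence up to some bound and compare the
two concatenations. Because PCP is undecidable, such a search can only find a solution when one exists within
the bound; it can never prove, in general, that no solution exists. The function name and bound below are
illustrative, and the lists are the ones from the example above.

from itertools import product

def find_pcp_solution(A, B, max_length=6):
    """Try all index sequences up to max_length; return a matching one, or None."""
    n = len(A)
    for k in range(1, max_length + 1):
        for seq in product(range(n), repeat=k):
            top = "".join(A[i] for i in seq)       # concatenation of A-strings
            bottom = "".join(B[i] for i in seq)    # concatenation of B-strings
            if top == bottom:
                return [i + 1 for i in seq]        # report 1-based indices
    return None

A = ["aa", "bb", "abb"]
B = ["aab", "ba", "b"]
print(find_pcp_solution(A, B))   # [1, 2, 1, 3]
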
Importance and Implications
1. Undecidability:
The Post Correspondence Problem is undecidable, meaning there is no general
algorithm that can solve all possible instances of PCP. This makes it a key example of an
undecidable problem in computer science.
2. Applications:
The PCP is often used to demonstrate undecidability in various contexts, such as
reductions from other problems or in proofs of the undecidability of other problems in
formal language theory and automata.
3. Theoretical Significance:
Understanding the Post Correspondence Problem helps in grasping the limitations of
algorithmic computation. It is a gateway to exploring other undecidable problems and
concepts in computational theory, such as the Halting Problem.
The Halting Problem is the problem of determining whether a computer program will eventually stop or run
forever. Creating a general algorithm that can accurately predict this for all programs is impossible: Alan
Turing's proof showed that there is no way to solve the Halting Problem for all cases.

Halting means that the machine will either accept or halt (reject) a certain input rather than entering an
infinite loop. The problem of deciding whether a specific Turing machine will halt on a given input is known
as the Halting Problem of Turing machines.

The halting problem is one of the most well-known problems proven to be undecidable. In this
article, we will learn about the Halting Problem in the Theory of Computation.

What is the Halting Problem?


The halting problem was proposed by Alan Turing in 1936. It is a fundamental issue in the theory of
computation: the problem of determining whether a computer program will halt or run forever.

Before moving on to the proof, let's first understand some terms.

1. Turing Machine
A Turing machine is a mathematical model of computation: an abstract machine that manipulates symbols on a
tape according to a table of rules and can simulate the logic of any computer algorithm. A Turing machine may
either halt or run forever, depending on the algorithm and the input associated with it.

2. Decision Problems
A decision problem has only two possible outcomes (yes or no) on any input. In computability and
computational complexity theories, a decision problem for a given program can be expressed as a
yes/no question of the input values.

3. Computability theory
An undecidable problem is a sort of computational problem requiring a yes/no answer but where no
computer program can give the proper answer all of the time; that is, any possible algorithm or
program would sometimes give the wrong answer or run forever without providing any answer.

(See Decidability and Undecidability)

Proof by Contradiction
Step 1: Assume we can create a machine called HM(P, I), where HM is the halting machine, P is
the program, and I is the input. After receiving both inputs, the machine HM will output whether or
not the program P terminates on the input I.

Step 2: Now, create an inverted halting machine IM that takes a program P as input and,

Loops forever if HM(P, P) returns YES.

Halts if HM(P, P) returns NO.

Step 3: Now consider what happens when IM is given its own description as input, that is, we run IM(IM).
Here we get a contradiction. If IM(IM) halts, then HM(IM, IM) must have returned YES, but in that case IM
is defined to loop forever. If IM(IM) loops forever, then HM(IM, IM) must have returned NO, but in that
case IM is defined to halt. Both cases contradict our assumption that the machine HM exists.
Hence, the halting problem is undecidable.
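
The contradiction can be restated as a short conceptual sketch in Python. HM is only assumed to exist for the
sake of argument (no correct implementation is possible), and the names HM and IM follow the steps above.

def HM(P, I):
    """Assumed halting decider: True if program P halts on input I, else False."""
    raise NotImplementedError("no correct implementation can exist")

def IM(P):
    """Inverted machine built from HM, exactly as in Step 2 (with I = P)."""
    if HM(P, P):
        while True:        # HM says P halts on P, so loop forever
            pass
    else:
        return             # HM says P loops on P, so halt immediately

# Step 3: feed IM to itself. If IM(IM) halts, HM(IM, IM) was True, so IM(IM)
# should loop forever; if IM(IM) loops forever, HM(IM, IM) was False, so IM(IM)
# should halt. Either way the assumed machine HM contradicts itself.
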


Church's Hypothesis (Church-Turing Thesis)
Church's Hypothesis, also known as the Church-Turing Thesis, is a foundational concept in
theoretical computer science and mathematical logic. It was independently proposed by Alonzo
Church and Alan Turing in the 1930s. The hypothesis posits that any function that can be
computed by an algorithm can be computed by a Turing machine, and thus by any equivalent
computational model.
Key Points of Church's Hypothesis
1. Computability:
The Church-Turing Thesis asserts that the intuitive notion of "computability" (what can
be computed by following a set of rules or instructions) is accurately captured by the
concept of a Turing machine. If a function or problem can be computed by any means, it
can also be computed by a Turing machine.
2. Equivalence of Models:
The hypothesis suggests that all models of computation that have been proposed (such
as Turing machines, lambda calculus, and recursive functions) are equivalent in terms of
what they can compute. They all have the same computational power, meaning they can
solve the same set of problems.
3. Not a Formal Proof:
It’s important to note that Church’s Hypothesis is not a mathematical theorem but rather
an informal assertion based on empirical observation. It cannot be formally proven
because it deals with the abstract notion of what can be computed in principle, rather
than a specific, formalized system.
4. Implications for Computability:
The Church-Turing Thesis forms the basis for understanding the limits of what can be
computed. It implies that if a problem cannot be solved by a Turing machine, it cannot
be solved by any algorithmic process, making it an unsolvable problem.
Examples and Applications
Uncomputable Problems:
Based on the Church-Turing Thesis, certain problems, such as the Halting Problem, are
known to be unsolvable by any algorithm. This highlights the limits of computation.
Foundation for Computer Science:
The Church-Turing Thesis underpins much of modern computer science, particularly in
the fields of computational theory, complexity theory, and algorithm design.
Post Correspondence Problem

The Post Correspondence Problem is a popular undecidable problem that was introduced by Emil Leon Post in
1946. It is simpler than the Halting Problem. In this problem we have N dominoes (tiles). The aim is to
arrange the tiles in such an order that the string made by the numerators is the same as the string made by
the denominators. In simple words, assume we have two lists, both containing N words; the aim is to find a
concatenation of these words in some sequence such that both lists yield the same result. Let's try to
understand this by taking two lists A and B:

A=[aa, bb, abb] and B=[aab, ba, b]

Now, for the sequence 1, 2, 1, 3, the first list yields aabbaaabb and the second list yields the same string
aabbaaabb. So the solution to this PCP instance is 1, 2, 1, 3. Post Correspondence Problems can be
represented in two ways:

1. Dominoes form

2. Table form

Let's consider the following examples.

Example-1: Tile 1 = 1/111, Tile 2 = 10111/10, Tile 3 = 10/0 (each tile written as numerator/denominator).

Explanation –

Step-1: We will start with a tile whose numerator and denominator begin with the same symbol, so we
can start with either tile 1 or tile 2. Let's go with the second tile: the string made by the numerators is
10111, and the string made by the denominators is 10.
Step-2: We need 1s in the denominator to match the 1s in the numerator, so we go with the first tile: the
string made by the numerators is 10111 1, and the string made by the denominators is 10 111.
Step-3: There is an extra 1 in the numerator; to match it we add the first tile to the sequence again: the
string made by the numerators is now 10111 1 1, and the string made by the denominators is 10 111 111.
Step-4: Now there is an extra 1 in the denominator; to match it we add the third tile: the string made by the
numerators is 10111 1 1 10, and the string made by the denominators is 10 111 111 0.

Final Solution - 2 1 1 3
String made by numerators: 101111110
String made by denominators: 101111110

As you can see, the strings are the same.

Example-2:

Explanation –
Step-1: We start from tile 1, as it is our only option: the string made by the numerators is 100, and the
string made by the denominators is 1.
Step-2: We have an extra 00 in the numerator; the only way to balance this is to add tile 3 to the sequence:
the numerator string is 100 1, and the denominator string is 1 00.
Step-3: There is an extra 1 in the numerator; to balance it we can add either tile 1 or tile 2. Let's try
adding tile 1 first: the numerator string is 100 1 100, and the denominator string is 1 00 1.
Step-4: There is an extra 100 in the numerator; to balance it we can add the first tile again: the numerator
string is 100 1 100 100, and the denominator string is 1 00 1 1. The numerator keeps running ahead of the
denominator, and no continuation ever makes the two strings equal.

We can try unlimited combinations like the one above, but none of them will lead to a solution; thus this
instance does not have a solution.

Undecidability of the Post Correspondence Problem: The theorem states that PCP is undecidable. That is,
there is no algorithm that determines whether an arbitrary Post Correspondence system has a solution or not.

Proof – We already know that the acceptance problem for Turing machines is undecidable. If we can reduce
this acceptance problem to PCP, then we prove that PCP is undecidable as well. Given a Turing machine M and
an input string w, we construct an instance of PCP whose dominoes simulate the computation of M on w.

If there is a match, it corresponds to an accepting computation, i.e. the Turing machine M halts in an
accepting state on w. Deciding this is exactly the acceptance problem ATM, and we know that ATM is
undecidable. Therefore the PCP problem is also undecidable. To force the simulation of M, we make 2
modifications to the Turing machine M and one change to our PCP problem.

1. M on input w never attempts to move its tape head beyond the left end of the input tape.
2. If the input is the empty string ε, we use the blank symbol _ instead.
3. The PCP match is required to start with the first domino [u1/v1]. This variant is called the Modified PCP problem.

MPCP = {[D] | D is an instance of PCP whose match starts with the first domino}

Construction Steps –

1. Put [# / #q0w1w2..wn#] into D as the first domino, where D is the instance of MPCP being built. A partial
match is obtained in the first domino because the # on one face is the same # symbol on the other face.
2. The transition function of the Turing machine M can have moves Left (L) and Right (R). For every x, y, z in the
tape alphabet and every q, r in Q where q is not equal to qreject: if transition(q, x) = (r, y, R), put the domino
[qx / yr] into D, and if transition(q, x) = (r, y, L), put the domino [zqx / rzy] into D.
3. For every tape alphabet symbol x, put [x / x] into D. To mark the separation between configurations, put [# / #]
and [# / _#] into D.
4. To consume the remaining tape symbols x once the Turing machine is in the accepting state qa, put [xqa / qa],
[qax / qa] and [qa## / #] into D. These steps conclude the construction of D.

This gives an instance of MPCP, so we still need to convert it to an ordinary PCP instance. The standard
conversion pads the dominoes with an extra separator symbol so that any PCP match is forced to begin with the
first domino, which completes the reduction.
