toc mod 5 notes
MODULE 5
Quantum computers are a new type of computing device that harnesses the principles
of quantum mechanics to perform computations. Unlike classical computers, which
store information as bits (0 or 1), quantum computers use qubits, which can exist
in a superposition of states, representing both 0 and 1 simultaneously. This allows
them to perform certain computations exponentially faster than classical computers.
Quantum computers can solve problems more efficiently, but they cannot
solve problems that are undecidable for Turing machines.
• One of the most renowned undecidable problems is the Halting Problem. This
problem poses the question of whether a given computer program, when executed,
will eventually halt (terminate) or continue running indefinitely. A proof by
contradiction establishes the undecidability of the Halting Problem. This proof
demonstrates that no Turing Machine, a theoretical model encompassing all
possible computations, can solve the Halting Problem for every program and
input combination.
• Another prominent example of an undecidable problem is the Post Correspondence
Problem (PCP). PCP involves two sequences of strings, and the task is to determine
if a specific sequence of indices exists, such that concatenating the corresponding
strings from both sequences results in identical strings. The undecidability of PCP is
established through a reduction from the Halting Problem. This reduction implies
that if PCP were solvable, it could be leveraged to solve the Halting Problem, creating
a logical contradiction, as the Halting Problem is already proven to be undecidable.
The Halting Problem is undecidable, meaning no algorithm exists that can solve
it for all possible programs and inputs. This undecidability is proven using a proof by
contradiction:
1. Assume a Halting Machine (HM) exists that can determine whether any
program P halts on input I. HM(P, I) would output YES if P halts on I and NO if it runs
forever.
2. Construct a new program D that takes a program P as input, runs HM(P, P), and
does the opposite: if HM says P halts on P, then D loops forever; if HM says P runs
forever on P, then D halts.
3. Now run D on itself. If D(D) halts, then HM(D, D) answered NO, so D should have
run forever; if D(D) runs forever, then HM(D, D) answered YES, so D should have
halted. Either way, HM gives the wrong answer for D.
This contradiction arises because we assumed the existence of a Halting Machine (HM).
Therefore, such a machine cannot exist, and the Halting Problem is undecidable.
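The diagonal argument above can be sketched in Python. Here `halts` stands for a hypothetical decider supplied by the caller; the names `make_defeater` and `always_no` are illustrative, not part of any real API. The sketch shows that any particular claimed decider is wrong about the program built to defeat it.

```python
def make_defeater(halts):
    """Given a claimed halting decider halts(program), build a program
    that does the opposite of whatever the decider predicts for it."""
    def defeater():
        if halts(defeater):
            while True:      # decider said "halts", so loop forever
                pass
        return               # decider said "runs forever", so halt
    return defeater

# A (necessarily wrong) decider that claims every program runs forever:
always_no = lambda program: False
d = make_defeater(always_no)

print(always_no(d))  # the decider's claim: False ("runs forever")
d()                  # ...but d() actually returns, so the claim is wrong
```

Whatever fixed answer a candidate decider gives for its own defeater, running the defeater contradicts that answer; this is exactly the contradiction in step 3 above.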
The Halting Problem also serves as a cornerstone for proving the undecidability of other
problems. For example, the undecidability of the Post Correspondence Problem is proven
by showing that if we could solve PCP, we could also solve the Halting Problem.
Because the Halting Problem is semi-decidable, we can list out all the programs that
halt on given inputs, but we cannot always definitively determine whether a program
will run forever.
Problem Definition:
Given two sequences of strings over the same alphabet, 𝐴 = (𝑎₁, 𝑎₂, . . . , 𝑎ₙ) and
𝐵 = (𝑏₁, 𝑏₂, . . . , 𝑏ₙ), the goal is to determine if there exists a sequence of indices
(𝑖₁, 𝑖₂, . . . , 𝑖ₘ) such that the concatenation of the corresponding strings from both
sequences results in identical strings:
𝑎ᵢ₁ 𝑎ᵢ₂ . . . 𝑎ᵢₘ = 𝑏ᵢ₁ 𝑏ᵢ₂ . . . 𝑏ᵢₘ
Example:
• 𝐴 = (𝑎𝑏, 𝑏𝑐)
• 𝐵 = (𝑎, 𝑏)
In this case, no solution exists: every string in 𝐴 is strictly longer than the
corresponding string in 𝐵, so any concatenation from 𝐴 (e.g. ′𝑎𝑏𝑏𝑐′) is always longer
than the matching concatenation from 𝐵 (′𝑎𝑏′), and the two can never be equal.
Undecidability of PCP:
The undecidability of PCP can be proven through a reduction from the Halting Problem.
This means that if we could solve PCP, we could also solve the Halting Problem, which is
known to be undecidable.
• The Halting Problem asks whether a given Turing machine halts on a given input.
• If we could solve PCP, we could construct an instance of PCP that encodes the
behaviour of a Turing machine on a specific input.
• A solution to this PCP instance would correspond to the Turing machine halting
on that input.
Since the Halting Problem is undecidable, solving PCP would imply solving an
undecidable problem, which is impossible. Therefore, PCP is also undecidable.
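A bounded brute-force search makes the semi-decidability of PCP concrete: if a solution exists, enumerating index sequences in increasing length will eventually find it, but no finite bound can ever confirm that no solution exists. The function name, the `max_len` bound, and the second sample instance are illustrative choices, not from the notes.

```python
from itertools import product

def pcp_search(A, B, max_len=6):
    """Try every index sequence up to length max_len; return a 1-based
    solution tuple, or None if none was found within the bound.
    Finding nothing proves nothing: PCP is only semi-decidable."""
    for m in range(1, max_len + 1):
        for seq in product(range(len(A)), repeat=m):
            if "".join(A[i] for i in seq) == "".join(B[i] for i in seq):
                return tuple(i + 1 for i in seq)
    return None

# The instance from the notes has no solution (every A-string is longer):
print(pcp_search(["ab", "bc"], ["a", "b"]))          # None
# A classic solvable instance, e.g. indices (3, 2, 3, 1) give "bbaabbbaa":
print(pcp_search(["a", "ab", "bba"], ["baa", "aa", "bb"]))
```

The search runs forever (in the unbounded version) exactly on unsolvable instances, which is why this enumeration cannot be turned into a decision procedure.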
Turing Machine
A Turing machine is a theoretical model of computation that serves as a foundation for
understanding the capabilities and limitations of computers. It was introduced by Alan
Turing in 1936 and consists of the following components:
• Tape: An infinite strip of cells, each capable of holding a symbol from a finite
alphabet.
• Head: A read/write head that can move left or right along the tape, reading and
writing symbols.
• State: A finite set of states that the machine can be in, representing its current
computational stage.
• Transition Function: A set of rules that dictate how the machine behaves based
on its current state and the symbol read by the head.
The transition function defines the actions of the Turing machine. It specifies, for each
combination of current state and symbol read, the next state to enter, the symbol to
write on the tape, and the direction in which to move the head.
The formal notation we shall use for a Turing machine (TM) is similar to that used for
finite automata or PDA's. We describe a TM by the 7-tuple
𝛭 = (𝑄, Σ, 𝛤, 𝛿, 𝑞0 , 𝛣, 𝐹)
𝑄 : The finite set of states of the finite control.
Σ : The finite set of input symbols.
𝛤 : The complete set of tape symbols; Σ is a subset of 𝛤.
𝛿 : The transition function. The arguments of 𝛿(𝑞, 𝑋) are a state 𝑞 and a tape
symbol 𝑋. The value of 𝛿(𝑞, 𝑋), if it is defined, is a triple (𝑝, 𝑌, 𝐷), where 𝑝 is the
next state, 𝑌 is the tape symbol written in the cell being scanned, and 𝐷 is a
direction, either 𝐿 (left) or 𝑅 (right).
𝑞0 : The start state, a member of 𝑄, in which the finite control is found initially.
𝐵 : The blank symbol. This symbol is in 𝛤 but not in Σ; i.e., it is not an input symbol.
The blank appears initially in all but the finite number of initial cells that hold
input symbols.
𝐹 : The set of final or accepting states, a subset of 𝑄.
2. Storage in the State
• Description:
A TM can remember a bounded amount of information by encoding it directly
into its state, using the finite control as a small memory.
• Technique:
o Augment the state set to include compound states that encode specific
information.
o For example, instead of having a simple state q1, you might use q1X to
encode that the machine is in state q1 and has seen a specific symbol X.
• Applications:
o Simplifies tape usage for some computations by encoding more
information in the states.
o Commonly used in TMs for pattern matching or for storing small amounts
of context.
3. Multiple Track Turing Machine
• Description:
A multi-track TM uses a single tape divided into multiple parallel tracks. Each
track can hold a separate sequence of symbols, but the head reads/writes on all
tracks simultaneously.
• Technique:
o Define the tape alphabet as a Cartesian product of symbols from individual
track alphabets.
o Transitions operate on tuples representing the current symbols on each
track.
• Advantages:
o Efficient for simulating multiple tapes or keeping auxiliary information
(e.g., counters, markers) in parallel.
• Applications:
o Useful for managing multiple data streams or when auxiliary calculations
need to be performed alongside the primary computation.
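The Cartesian-product encoding can be sketched directly: each tape cell holds a tuple of per-track symbols, and a move reads and writes the whole tuple at once. The helper name `mark` and the marker symbol are illustrative.

```python
# Two-track tape: track 0 holds data, track 1 holds an auxiliary marker.
# Each cell is a tuple drawn from the Cartesian product of track alphabets.
tape = [("a", "_"), ("b", "_"), ("a", "_")]

def mark(tape, pos, marker="X"):
    """Mark the cell under the head on track 1 without disturbing track 0:
    read the whole tuple, write the whole tuple."""
    data, _ = tape[pos]
    tape[pos] = (data, marker)
    return tape

print(mark(tape, 1))  # [('a', '_'), ('b', 'X'), ('a', '_')]
```

A real multi-track TM's transition function would likewise be keyed on the full tuple, e.g. on ('b', '_') rather than on 'b' alone.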
4. Subroutines
• Description:
Subroutines allow modular design by breaking down the overall task into smaller,
reusable parts. Each subroutine is essentially a small TM designed to perform a
specific task.
• Technique:
o Define separate state sets for each subroutine, ensuring no overlap
between states of different subroutines.
o Use "call" and "return" states to transition between subroutines.
o Subroutines can handle tasks like shifting, copying, or checking for specific
patterns.
• Applications:
o Simplifies complex machine designs by promoting reusability.
o Common in TMs that implement complex algorithms or simulate higher-
level computations.
Working Principle:
A Turing machine starts in an initial state with an input string written on its tape. It then
repeatedly applies the transition function, reading symbols, writing symbols, moving the
head, and transitioning between states. This process continues until the machine reaches
a halting state, signifying the end of computation.
Turing machines (TMs) are a foundational concept in computer science and have
applications in various domains, even though they are primarily theoretical. Their
significance lies in their ability to model computation and provide insights into the limits
and capabilities of algorithms.
1. Formalizing Computability
• Church-Turing Thesis: This thesis states that any function computable
by an algorithm can also be computed by a Turing machine, making the
TM a universal model of computation.
2. Algorithm Analysis
• Efficiency and Optimization: They are used to study the time and space
required for computations, offering theoretical insights into algorithm
performance.
• Undecidable Problems: TMs are used to prove that certain problems (e.g., the
Halting Problem) cannot be solved algorithmically.
Example: Determining whether a Turing machine halts for a given input.
• Universal Turing Machine (UTM): The concept of the UTM, which can
simulate any other TM, is akin to modern AI systems being capable of general-
purpose problem-solving.
3. Cryptography
• Proofs of Security: TMs are used to prove the infeasibility of breaking certain
cryptographic systems under reasonable assumptions.
4. Compiler Design
• Parsing and Language Translation: TMs are used to model parsers that
analyze and translate programming languages, forming a basis for
understanding compiler construction.
5. Artificial Intelligence
• Turing Test and AI: The idea of machine intelligence and the limits of
computational reasoning draw heavily on the concept of TMs.
While Turing machines are not directly used in practical systems, their theoretical
insights influence the design and analysis of programming languages, compilers,
algorithms, and computer hardware.
While Turing machines are powerful theoretical tools, they have limitations that
distinguish them from practical computers:
• Discrete Time Steps: Turing machines operate sequentially in discrete time
steps, one simple action at a time, whereas modern computers exploit
parallelism and highly optimized hardware.
• Simple Operations: Turing machines only perform basic operations like reading,
writing, and moving the head. Modern computers have much richer instruction
sets.
Despite these limitations, Turing machines remain a crucial concept in computer science,
providing a framework for understanding the fundamental principles of computation and
the limits of what computers can achieve.
Example:
Construct a Turing machine which accepts the language of aba over ∑ = {𝑎, 𝑏}.
Solution:
We will assume that on the input tape the string '𝑎𝑏𝑎' is placed like this:
a b a ∆ ∆ . . .
The tape head will read the sequence up to the ∆ characters (where ∆ is the blank
symbol). If the tape head reads out the string '𝑎𝑏𝑎', then the TM will halt after
reading ∆.
Now we will see how this Turing machine works for 𝑎𝑏𝑎. Initially, the state is q0 and
the head points to the first a:
a b a ∆ (head on the first a, state q0)
The move will be 𝛿(𝑞0, 𝑎) = (𝑞1, 𝐴, 𝑅), which means the machine goes to state 𝑞1,
replaces a by 𝐴, and moves the head to the right:
A b a ∆ (head on b, state q1)
The move will be 𝛿(𝑞1, 𝑏) = (𝑞2, 𝐵, 𝑅), which means the machine goes to state 𝑞2,
replaces b by 𝐵, and moves the head to the right:
A B a ∆ (head on the second a, state q2)
The move will be 𝛿(𝑞2, 𝑎) = (𝑞3, 𝐴, 𝑅), which means the machine goes to state 𝑞3,
replaces a by 𝐴, and moves the head to the right:
A B A ∆ (head on ∆, state q3)
The move 𝛿(𝑞3, ∆) = (𝑞4, ∆, 𝑆) takes the machine to state 𝑞4, which is the HALT state,
and the HALT state is always an accept state for any TM.
States    𝒂             𝒃             ∆
q0        (𝑞1, 𝐴, 𝑅)    –             –
q1        –             (𝑞2, 𝐵, 𝑅)    –
q2        (𝑞3, 𝐴, 𝑅)    –             –
q3        –             –             (𝑞4, ∆, 𝑆)
q4        –             –             –
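The transition table above can be executed directly. Below is a minimal sketch of a single-tape TM simulator (the function name `run_tm` and the step budget are illustrative choices); the dictionary encodes exactly the table for the 𝑎𝑏𝑎 machine.

```python
def run_tm(delta, start, accept, input_str, blank="∆", max_steps=1000):
    """Simulate a deterministic single-tape TM.
    delta[(state, symbol)] = (new_state, write_symbol, move), move in L/R/S."""
    tape = dict(enumerate(input_str))   # sparse tape; unseen cells are blank
    state, pos = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(pos, blank))
        if key not in delta:            # no move defined: reject
            return False
        state, tape[pos], move = delta[key]
        pos += {"L": -1, "R": 1, "S": 0}[move]
    return False                        # step budget exhausted

# The transition table for the language {aba}:
delta = {
    ("q0", "a"): ("q1", "A", "R"),
    ("q1", "b"): ("q2", "B", "R"),
    ("q2", "a"): ("q3", "A", "R"),
    ("q3", "∆"): ("q4", "∆", "S"),
}
print(run_tm(delta, "q0", "q4", "aba"))  # True
print(run_tm(delta, "q0", "q4", "ab"))   # False
```

Any other input falls into an undefined (state, symbol) cell of the table, which corresponds to the "–" entries and causes a reject.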
(Current State, Read Symbol) → (New State, Write Symbol, Head Movement).
• Current State and Read Symbol: These represent the input to the transition
function. The machine examines its current state and the symbol under its head
on the tape.
• New State, Write Symbol, Head Movement: This is the output of the transition
function. It specifies how the machine should change its state, what symbol to
write on the tape, and whether to move the head left or right (or stay in the same
position).
To design a transition function for a specific task, you need to carefully consider how each
combination of state and read symbol should be handled to ultimately achieve the desired
computation.
The tape is the Turing Machine's working memory, and the head is its tool for interacting
with that memory.
Here are some common techniques for manipulating the tape and head:
• Marking Cells: You can use special symbols from your alphabet to mark specific
cells on the tape. This helps the machine remember locations it needs to revisit,
store intermediate results, or mark the beginning and end of data sections.
• Simulating Multiple Tapes: While a basic Turing Machine has one tape, you can
simulate multiple tapes using a single tape. You can partition the tape into sections
and use special symbols to delimit these sections, effectively treating each section
as a separate tape.
State Transitions
The finite set of states in a Turing machine represents its memory and control flow
mechanisms.
• States for Control Flow: States can be used to control the sequence of actions
performed by the machine. You can design states to represent different phases of
an algorithm (initialization, processing, output, etc.) and use transitions to move
between these phases based on the symbols read and the intended logic of the
algorithm.
Imagine you want to design a Turing machine to add two unary numbers. The numbers
are represented as strings of 1s separated by a 0. For instance, 11011 represents 2 + 2.
1. States: You might have states like "Start," "Find First Number," "Find Second
Number," "Add," and "Halt."
2. Tape Manipulation: You'd need to move the head to locate the two numbers,
replace the separating 0 with a 1, and possibly clean up extra symbols.
3. Transitions: Transitions would dictate how the machine changes states based on
the symbol read. For example, if in the "Find First Number" state and the head
reads a 1, it would stay in the same state and move right. If it reads a 0, it would
transition to the "Find Second Number" state.
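The steps above can be sketched as a head walking over a list of tape cells. The state names (`find_separator`, `find_end`) are hypothetical stand-ins for the states listed above; the sketch assumes well-formed input with exactly one separating 0.

```python
def unary_add(tape):
    """Head-walk sketch of the unary addition TM described above:
    merge the two numbers by overwriting the 0, then erase the extra 1."""
    cells = list(tape) + ["∆"]          # append a blank past the input
    pos, state = 0, "find_separator"
    while state != "halt":
        sym = cells[pos]
        if state == "find_separator":
            if sym == "1":
                pos += 1                # skip over the first number
            else:                       # sym == "0": the separator
                cells[pos] = "1"        # merge the two numbers
                state, pos = "find_end", pos + 1
        elif state == "find_end":
            if sym == "1":
                pos += 1                # skip over the second number
            else:                       # reached the blank
                cells[pos - 1] = "∆"    # erase the extra 1
                state = "halt"
    return "".join(cells).strip("∆")

print(unary_add("11011"))  # 1111  (2 + 2 = 4)
```

Overwriting the 0 adds one surplus 1, which is why the final step erases the last 1 before halting.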
Multi-Tape Turing Machine
• Structure: Instead of a single tape, a multi-tape Turing machine has k tapes, where
k is a positive integer. Each tape is infinite in length, just like in the single-tape
model. The machine has a separate head for each tape, allowing it to read and
write on multiple tapes simultaneously.
Consider the task of determining whether a given string is a palindrome (reads the same
backward as forward). A two-tape Turing machine can solve this efficiently:
1. Initialization: The input string is written on the first tape; the second tape is
initially blank.
2. Copying: Using both heads, the machine copies the input string from the first tape
to the second tape in reverse order.
3. Comparison: The heads are moved to the beginning of each tape. The machine
compares the symbols under each head, moving both heads to the right
simultaneously. If all symbols match until the end of the tapes is reached, the
string is a palindrome.
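The two-tape procedure can be sketched with two Python lists standing in for the tapes (the function name is an illustrative choice):

```python
def is_palindrome_two_tape(s):
    """Mimic the two-tape TM: copy the input reversed onto a second
    tape, then compare both tapes symbol by symbol, left to right."""
    tape1 = list(s)
    tape2 = []
    for i in range(len(tape1) - 1, -1, -1):   # copying pass (head 1 moves left)
        tape2.append(tape1[i])
    for a, b in zip(tape1, tape2):            # comparison pass (both heads right)
        if a != b:
            return False
    return True

print(is_palindrome_two_tape("abcba"))  # True
print(is_palindrome_two_tape("abca"))   # False
```

Each pass moves the heads over the input once, which is the source of the efficiency gain over a single-tape machine that must shuttle back and forth.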
Non-Deterministic Turing Machine (NDTM)
An NDTM is defined like a deterministic TM, except that the transition function returns
a set of possible moves:
𝛿(𝑞, 𝑋) ⊆ 𝑄 × 𝛤 × {𝐿, 𝑅}
This means that for a given state and tape symbol, there may be multiple possible
next states, tape symbols, and head movements. As before, 𝑞0 is the start state.
Key Features
1. Non-Determinism:
• At any step, the machine can choose among several possible transitions. It can
"guess" the correct sequence of choices to solve a problem.
• This is often visualized as a tree of computations where each branch represents a
possible sequence of transitions.
2. Acceptance Condition:
• An input is accepted if at least one computation branch leads to the accept state
(𝑞𝑎𝑐𝑐𝑒𝑝𝑡 ).
• If all branches lead to the reject state (𝑞𝑟𝑒𝑗𝑒𝑐𝑡 ), the input is rejected.
3. Parallelism (Theoretical):
• Conceptually, the NDTM can explore all computation branches simultaneously.
This does not correspond to any physical machine but is useful for theoretical
analysis.
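The computation tree described above can be explored with a breadth-first search that accepts as soon as any branch reaches the accept state. The encoding below (a dict of choice lists, blank padding, a depth bound) is an illustrative sketch, not a standard API; the example machine nondeterministically "guesses" where to stop.

```python
from collections import deque

def ndtm_accepts(delta, start, accept, input_str, blank="_", max_depth=30):
    """Breadth-first search over the computation tree of an NDTM.
    delta[(state, symbol)] is a *list* of (new_state, write, move) choices;
    the input is accepted iff at least one branch reaches `accept`."""
    step = {"L": -1, "R": 1, "S": 0}
    pad = (blank,) * max_depth                  # room to move off either end
    initial = (start, max_depth, pad + tuple(input_str) + pad)
    frontier, seen = deque([initial]), {initial}
    for _ in range(max_depth):                  # one BFS level per TM step
        for _ in range(len(frontier)):
            state, pos, tape = frontier.popleft()
            if state == accept:
                return True                     # one accepting branch suffices
            for new_state, write, move in delta.get((state, tape[pos]), []):
                tape2 = tape[:pos] + (write,) + tape[pos + 1:]
                cfg = (new_state, pos + step[move], tape2)
                if cfg not in seen:
                    seen.add(cfg)
                    frontier.append(cfg)
    return any(cfg[0] == accept for cfg in frontier)

# Guess-and-check machine: accept strings containing at least one 'a'.
delta = {
    ("q0", "a"): [("q0", "a", "R"), ("q_accept", "a", "S")],  # guess: stop here?
    ("q0", "b"): [("q0", "b", "R")],
}
print(ndtm_accepts(delta, "q0", "q_accept", "bba"))  # True
print(ndtm_accepts(delta, "q0", "q_accept", "bbb"))  # False
```

The frontier at each level is exactly one horizontal slice of the computation tree, so this search simulates the theoretical parallelism sequentially, at exponential worst-case cost.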
Decidable Problems:
A problem is decidable if there exists a Turing machine that halts on every input,
accepting strings that belong to the language and rejecting strings that do not.
Semi-Decidable Problems:
Semi-decidable problems are characterized by a Turing machine that halts and accepts
strings belonging to the corresponding language. However, for strings not in the
language, the Turing machine might halt and reject or enter an infinite loop. This
ambiguity in halting behavior distinguishes semi-decidable problems from decidable
ones.
Undecidable Problems:
Undecidable problems are those for which no algorithm can be constructed that can
answer the problem correctly in finite time. These problems may be partially decidable,
but they will never be decidable. There will always be a condition that will lead the
Turing Machine into an infinite loop without providing an answer.
Language class                                          TM behaviour
Recursive (Decidable) Language                          TM halts on every input, accepting or rejecting
Recursively Enumerable (Partially Decidable) Language   TM halts and accepts strings in the language, but may not halt otherwise
Undecidable                                             No TM for that language
• Decidable languages: A Turing machine exists that
o Halts and accepts any string belonging to the language.
o Halts and rejects any string that does not belong to the language.
These languages are also called decidable languages. The Turing machine acts
as a decider, always providing a definitive answer (accept or reject) for
any given input.
• Semi-decidable (recursively enumerable) languages: A Turing machine exists that
o Halts and accepts strings in the language.
o May halt and reject or run forever for strings not in the language.
• The Halting Problem, which asks us to predict whether any program will halt or
run forever, is a classic example of an undecidable problem.
Languages beyond Turing Machine capabilities would be those that are not
recursively enumerable. This means there is no Turing Machine that can be
constructed to recognize strings belonging to these languages. These languages are
inherently more complex and lie outside the realm of problems solvable by
computational models like Turing Machines.