P vs NP
Computational complexity theory helps us understand the limits of what can be computed efficiently, and it provides a foundation for designing algorithms and analyzing the efficiency of computational processes.
Class P:
The class P consists of decision problems (i.e., problems with a yes/no answer) that can be
solved by a deterministic Turing machine in polynomial time. A deterministic Turing
machine is a theoretical computational model that processes input one step at a time, and its
behavior is completely determined by the input and the state of the machine at each step.
Formally, a decision problem is in P if there exists an algorithm that can solve it in time
O(n^k) for some constant k, where n is the size of the input. This means that the problem
can be solved efficiently, and the running time grows at a reasonable rate as the input size
increases.
Class NP:
The class NP (nondeterministic polynomial time) consists of decision problems for which a
given solution can be verified in polynomial time by a deterministic Turing machine. In other
words, if someone gives you a proposed solution to an NP problem, you can check whether
this solution is correct in polynomial time.
Formally, a problem is in NP if, for any input instance, there exists a certificate (a proposed
solution) that can be verified in polynomial time. It is important to note that NP does not
require the solution to be found in polynomial time, only that it can be checked in polynomial
time.
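To make the verification idea concrete, here is a minimal Python sketch (illustrative only; `verify_subset_sum` is a hypothetical helper, not something from the notes) of a polynomial-time verifier for the NP problem SUBSET-SUM, where a certificate is simply the proposed subset:

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier for SUBSET-SUM.

    `certificate` is a proposed solution: a list of distinct indices into
    `numbers`. Checking it takes one pass over the certificate, so it runs
    in time polynomial in the input size, even though *finding* a valid
    certificate may require exponential search.
    """
    if len(set(certificate)) != len(certificate):
        return False  # indices must be distinct
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False  # indices must be in range
    return sum(numbers[i] for i in certificate) == target


# Example: is there a subset of [3, 7, 1, 8] summing to 11?
print(verify_subset_sum([3, 7, 1, 8], 11, [0, 3]))  # True  (3 + 8 = 11)
print(verify_subset_sum([3, 7, 1, 8], 11, [1, 2]))  # False (7 + 1 = 8)
```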
The P vs NP problem is one of the most famous and unresolved questions in computer
science. It asks whether P = NP, meaning whether every problem whose solution can be
verified in polynomial time (NP) can also be solved in polynomial time (P).
If P = NP, then problems that we currently view as difficult, such as the traveling
salesman problem, integer factorization, and many others, could be solved efficiently (in
polynomial time). This would have profound implications across a wide range of fields,
including optimization, cryptography, artificial intelligence, and more.
If P ≠ NP, then there are problems for which finding a solution is much harder than
verifying one. This would imply a fundamental limitation on the power of algorithms and
computational models, solidifying the inherent difficulty of certain problems.
The resolution of this question would not only change our understanding of algorithms but
also affect how we design and implement computational systems. Many of the security
protocols used today, such as RSA encryption, rely on the assumption that certain NP
problems (like factoring large integers) cannot be solved efficiently. If P = NP, such security
guarantees would be shattered, leading to potential vulnerabilities in cryptographic systems.
On the other hand, if P ≠ NP, we need to focus on approximate or heuristic solutions for hard
problems, as exact solutions may not be feasible in practice.
Cryptography: Many cryptographic systems, such as RSA, are based on the difficulty of
certain NP problems. For example, RSA encryption relies on the fact that factoring large
numbers is computationally hard. If P = NP, then these problems could be solved
efficiently, and modern cryptography would collapse. The P vs NP problem is therefore
of critical importance in securing sensitive data in today's digital world.
co-NP: The class co-NP is the complement of NP. A problem is in co-NP if its complement
(the negation of the decision problem) is in NP. For example, if an NP problem asks
whether a graph has a Hamiltonian cycle, the corresponding co-NP problem asks
whether a graph does not have a Hamiltonian cycle.
NP-hard: A problem is NP-hard if it is at least as hard as the hardest problems in NP. This
means that if you could solve an NP-hard problem in polynomial time, you could also
solve all NP problems in polynomial time. NP-hard problems are not necessarily in NP, as
they may not be decision problems. For example, the halting problem is NP-hard, but it
is not in NP.
These topics set the stage for the rest of the course, where we will explore these concepts in
more detail, look at specific NP-complete problems, and examine the tools that have been
developed to address them.
One of the most important results in the history of computational complexity theory was the
discovery of NP-completeness, which occurred in the early 1970s. This breakthrough came
in part due to the work of two key figures: Stephen Cook and Richard Karp.
Stephen Cook's Work (1971): The Cook-Levin theorem showed that the Boolean satisfiability problem (SAT) is NP-complete. A problem A is NP-complete if:
1. A is in NP.
2. Every problem in NP can be reduced to A in polynomial time.
The result of the Cook-Levin theorem was a crucial step because it identified the first NP-
complete problem, and from this point onward, researchers could investigate other
problems and determine whether they were NP-complete or not. This led to the
development of a rich theory of NP-completeness and polynomial-time reductions.
Richard Karp’s Work (1972): In 1972, Richard Karp expanded on Cook's work by
identifying 21 NP-complete problems. Karp’s paper, Reducibility Among Combinatorial
Problems, published in 1972, was a landmark in computational complexity theory. In this
paper, Karp demonstrated that many famous problems in optimization, graph theory,
and combinatorics are NP-complete. For example, the traveling salesman problem,
knapsack problem, and graph coloring are all NP-complete problems. Karp's
contribution was significant because it showed that the set of NP-complete problems is
much broader than initially thought, and this set forms a crucial core for the theory of
NP-completeness.
The P vs NP problem, specifically, asks whether these two classes are equal. In other words,
the question asks: Is every problem for which a solution can be verified in polynomial
time (NP) also a problem that can be solved in polynomial time (P)?
At the time, the general belief was that P ≠ NP, based on the apparent difficulty of finding
polynomial-time algorithms for many NP problems, despite the fact that their solutions can
be verified in polynomial time. However, there was no formal proof, and the question of
whether P = NP or P ≠ NP became one of the most important open problems in theoretical
computer science.
Key Milestones and the Progression of the Problem Over the Decades
1970s: The birth of the P vs NP problem is closely tied to the work of Cook, Levin, and
Karp. Cook’s discovery of NP-completeness, followed by Karp’s extensive list of NP-
complete problems, created a surge of interest in understanding the relationship
between P and NP. Early conjectures favored P ≠ NP, but no proof was forthcoming.
1980s: The 1980s saw the notion of NP-hardness come into widespread use; it generalizes the concept of NP-completeness. NP-hard problems are those that are at least as hard as any problem in NP, but they may not themselves belong to NP. Researchers also deepened the study of reductions, which are ways of transforming one
problem into another. Polynomial-time reductions became a central tool for classifying
the complexity of problems.
During this period, cryptography emerged as a field that depended on the assumption
that P ≠ NP. Public-key cryptosystems, such as RSA, relied on the belief that certain NP
problems, such as factoring large integers, were hard to solve efficiently.
1990s - Present: The P vs NP problem continued to be one of the central questions in
computer science. Despite significant advances in computational complexity, no proof of either P = NP or P ≠ NP has been found. Efforts to resolve the
problem have been tied to deep mathematical structures, including computational
geometry, approximation algorithms, and quantum computing.
In 2000, the Clay Mathematics Institute included the P vs NP problem in its list of
Millennium Prize Problems, offering a prize of $1 million for a correct solution. This
brought renewed attention to the problem, but it remains unsolved.
Stephen Cook: Cook’s work on the Cook-Levin theorem and the concept of NP-
completeness fundamentally shaped the field of computational complexity. His research
showed the importance of understanding not just what problems can be solved but also
the inherent difficulty of solving them.
John Nash: While not directly involved in the P vs NP problem, John Nash's work on
game theory had an indirect impact on computational complexity. Nash's ideas about
equilibrium and optimization influenced the study of approximation algorithms for NP-
complete problems. Many problems in game theory are NP-hard, and Nash’s work
provided tools for analyzing solutions to these problems.
Richard Karp and Richard Lipton: Proved the Karp-Lipton theorem, which shows that if NP has polynomial-size circuits then the polynomial hierarchy collapses to its second level; this is a key structural result in the study of NP and circuit complexity.
The Significance of This Open Question in Both Theoretical and Applied Domains
The P vs NP problem is not only a central theoretical question in computer science, but it
also has profound implications for many applied fields:
1. Algorithms: If P = NP, many currently intractable problems could be solved efficiently,
revolutionizing fields like optimization, logistics, and machine learning. On the other
hand, if P ≠ NP, then researchers will focus on developing efficient approximation and
heuristic algorithms for NP-complete problems.
2. Cryptography: The security of widely used cryptosystems such as RSA rests on the assumption that problems like integer factorization cannot be solved efficiently; a proof that P = NP would undermine these guarantees.
3. Artificial Intelligence and Machine Learning: Many problems in AI, such as planning,
reasoning, and optimization, involve NP-complete problems. The resolution of P vs NP
could have a profound effect on the development of AI systems capable of solving
complex problems efficiently.
The open nature of the problem continues to drive research in computational complexity,
encouraging new ideas, tools, and techniques in algorithm design, cryptography, and
beyond.
In computational complexity theory, the classes P and NP are defined based on the time
complexity of decision problems. A decision problem is a problem with a yes/no answer, and
we are interested in determining the resources required (especially time) to solve or verify
the solution to these problems.
Class NP (Nondeterministic Polynomial Time):
A decision problem is in NP if, given a solution (often called a certificate or witness),
there exists a deterministic algorithm that can verify the correctness of the solution in
polynomial time. Importantly, this does not mean that the problem itself can be solved
in polynomial time, only that we can efficiently check if a given solution is correct.
The central question of the P vs NP problem is whether P = NP — i.e., whether every problem
whose solution can be verified in polynomial time can also be solved in polynomial time.
The 21 problems identified by Karp are all NP-complete, and they form a core set of
problems that have been extensively studied in terms of their computational difficulty. If
any of these problems can be solved in polynomial time, it would imply that P = NP.
Reductions and Their Role in NP-Completeness
NP-Hardness:
A problem is NP-hard if it is at least as difficult to solve as the hardest problems in NP.
More formally, a problem is NP-hard if every problem in NP can be reduced to it in
polynomial time. It is important to note that NP-hard problems do not necessarily
belong to NP, because they may not be decision problems. For example, the halting
problem is NP-hard, but it is not in NP, since there is no polynomial-time verifier for its
solutions.
Connection to P vs NP:
The distinction between NP-complete and NP-hard is significant in the context of P
vs NP. NP-complete problems are both in NP and NP-hard, meaning that if any NP-
complete problem is solved in polynomial time, then P = NP. On the other hand, if
any NP-hard problem can be solved in polynomial time, then P = NP as well, because
it would imply that all NP problems could be reduced to it and solved in polynomial
time.
While P and NP are among the most studied complexity classes, there are many other
important complexity classes in computational complexity theory. These classes extend the
understanding of computational difficulty and help classify problems beyond the scope of P
and NP.
PSPACE vs NP:
PSPACE contains NP, because a nondeterministic polynomial-time computation can be simulated deterministically by reusing the same polynomial amount of space across all branches. In fact, it is known that PSPACE = NPSPACE (a consequence of Savitch's theorem), which shows that, unlike what is conjectured for time, nondeterminism adds no power with respect to polynomial space.
EXPTIME vs NP:
EXPTIME is much larger than NP, as the resources required to solve problems in
EXPTIME grow much faster than for NP problems. In general, problems in EXPTIME
are not expected to be solvable in polynomial time, and proving this formally is an
important part of complexity theory.
Other Classes:
Beyond PSPACE and EXPTIME, there are many other complexity classes used to classify
problems based on their time and space complexity, including co-NP, L (Logarithmic
Space), BPP (Bounded-Error Probabilistic Polynomial Time), and #P (Counting
Problems), among others. These classes help to refine our understanding of what is
computationally feasible.
Co-NP: The class co-NP consists of problems whose complements are in NP. For
example, while an NP problem might ask whether a given Boolean formula is
satisfiable, a co-NP problem might ask whether it is unsatisfiable.
#P: The class #P is concerned with counting the number of solutions to a problem,
as opposed to deciding whether a solution exists. For example, counting the
number of satisfying assignments to a Boolean formula is a #P problem.
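The difference between deciding (NP) and counting (#P) is easy to see in code. Below is a naive #SAT counter, added here as an illustration (not from the notes), using a simple list-of-clauses CNF encoding where the literal +i means variable i is true and -i means it is false:

```python
from itertools import product

def count_satisfying_assignments(clauses, num_vars):
    """Brute-force #SAT: count, rather than merely decide, satisfying assignments.

    Counting is a #P problem; this naive counter evaluates all 2^n assignments.
    """
    count = 0
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count


# (x1 OR x2): three of the four assignments satisfy it.
print(count_satisfying_assignments([[1, 2]], num_vars=2))  # 3
```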
This lecture covered the formal definitions of P and NP, the central role of NP-complete
problems in understanding computational complexity, the importance of reductions in
proving NP-completeness, and the relationship between NP-hardness and the P vs NP
question. We also briefly explored other important complexity classes that help to further
refine our understanding of computational difficulty. In subsequent lectures, we will dive
deeper into specific NP-complete problems and explore the complexity theory tools used to
address them.
Polynomial Time:
In computational complexity theory, a problem is said to be solvable in polynomial time
if there exists an algorithm that solves the problem in a number of steps that is bounded
by a polynomial function of the size of the input. The input size is typically represented
by n, and the time complexity of an algorithm is expressed as O(n^k), where k is a
constant and O denotes the Big-O notation that describes an upper bound on the time
complexity.
For example, if an algorithm has a time complexity of O(n^2), it means that as the size of the input increases, the time taken by the algorithm grows quadratically. If it has O(n^3), the time taken grows cubically, and so on.
Example:
An algorithm with a time complexity of O(n^2) may take about 10,000 steps for an input size of n = 100 and 1 million steps for an input size of n = 1000, which is still manageable for modern computational systems. In contrast, an exponential-time algorithm like O(2^n) would take roughly 1024 steps for n = 10, but more than a billion steps for n = 30, making it impractical for larger inputs.
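The arithmetic above can be checked directly. The following snippet (an illustration, not part of the original notes) prints the step counts implied by an O(n^2) bound next to those implied by an O(2^n) bound:

```python
# Compare how a quadratic bound and an exponential bound grow with n.
for n in (10, 30, 100, 1000):
    quadratic = n ** 2      # O(n^2): stays manageable
    exponential = 2 ** n    # O(2^n): explodes very quickly
    print(f"n = {n:>4}:  n^2 = {quadratic:,}   2^n has {len(str(exponential))} digits")
```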
Example: Sorting
The sorting problem (sorting a list of numbers) is solvable in polynomial time with
algorithms such as Merge Sort and QuickSort, which have time complexities of
O(n log n). These algorithms can handle large datasets efficiently, which makes
sorting feasible even for large applications like database management and data
analysis.
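As a concrete instance of an O(n log n) algorithm, here is a short merge sort sketch; this is a standard textbook implementation added for illustration, not code from the notes:

```python
def merge_sort(items):
    """Classic O(n log n) merge sort: split, recursively sort, then merge."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```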
Optimization:
In optimization problems, polynomial-time algorithms allow practitioners to find
optimal or near-optimal solutions efficiently. Problems like linear programming
(solvable in polynomial time) are central in fields such as economics, operations
research, and logistics.
Cryptography:
Most modern cryptographic protocols, like RSA encryption, depend on the
assumption that certain problems (e.g., integer factorization) are hard to solve in
polynomial time. The security of these systems relies on the fact that no known
polynomial-time algorithms exist for certain problems.
Graph algorithms: Algorithms like Dijkstra's shortest path algorithm (which finds
the shortest path between nodes in a graph) have polynomial-time complexity and
are widely used in navigation systems, networking, and operations research.
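For reference, a compact sketch of Dijkstra's algorithm with a binary heap (a standard implementation; the dictionary-of-adjacency-lists graph format is an assumption made for this example):

```python
import heapq

def dijkstra(graph, source):
    """Shortest paths from `source` with non-negative edge weights.

    `graph` maps each node to a list of (neighbor, weight) pairs.
    Runs in O((V + E) log V) time, i.e. polynomial in the graph size.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist


g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```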
Tractable Problems:
Problems are classified as tractable if there exists a polynomial-time algorithm for
solving them. Tractable problems can be solved in reasonable time for practical
purposes, even when the input size becomes large. These are the problems that fall
within the class P.
Sorting an array, multiplying two numbers, and searching for a value in a sorted
list are all examples of problems that are tractable because they can be solved
in polynomial time.
Intractable Problems:
Intractable problems are those for which no polynomial-time algorithm is known, and
they are typically considered computationally hard. Many problems in NP that are not
known to be solvable in polynomial time (i.e., NP-complete problems) are considered
intractable. These problems require time that grows exponentially or factorially with the
size of the input, making them impractical to solve for large inputs.
The Traveling Salesman Problem (TSP) and the Boolean satisfiability problem (SAT) are classic NP-complete problems: no polynomial-time algorithm is known for them, so they remain impractical for large inputs and are intractable unless P = NP.
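To see why exhaustive search is infeasible for such problems, consider this brute-force TSP solver (illustrative only): it examines all (n-1)! tours, which is already astronomical for a few dozen cities.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by trying every tour: O(n!) time, so only tiny inputs.

    `dist[i][j]` is the distance from city i to city j.
    """
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):  # fix city 0 as the starting point
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour


d = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]
print(tsp_brute_force(d))  # (21, (0, 2, 3, 1, 0))
```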
This result would solidify the notion that certain problems require fundamentally
different approaches for efficient solutions, such as approximation algorithms,
heuristics, or probabilistic methods.
Approximation Algorithms:
For many NP-complete problems, we may focus on approximation algorithms that
can find near-optimal solutions in polynomial time, even though they cannot
guarantee an optimal solution. This approach is commonly used in fields like
operations research and machine learning.
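One concrete example (added here as an illustration, not taken from the notes) is the classic 2-approximation for minimum vertex cover, an NP-hard optimization problem:

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for minimum vertex cover.

    Repeatedly pick an uncovered edge and add *both* endpoints. The result
    is at most twice the size of an optimal cover, and the algorithm runs
    in polynomial time even though exact minimum vertex cover is NP-hard.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover


print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))  # {1, 2, 3, 4}; optimal is {2, 3}
```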
Heuristic Methods:
In cases where exact solutions are too costly to compute, practitioners may rely on
heuristic algorithms that do not guarantee an optimal solution but are fast and can
provide good-enough solutions for practical purposes. For example, in AI, local
search or genetic algorithms are often used to solve intractable problems like the
traveling salesman problem.
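A minimal sketch of one such heuristic, the nearest-neighbor rule for TSP (illustrative; in practice a local-search improvement such as 2-opt would typically be applied on top of it):

```python
def tsp_nearest_neighbor(dist, start=0):
    """Nearest-neighbor heuristic for TSP: fast, but no optimality guarantee.

    From the current city, always travel to the closest unvisited city.
    Runs in O(n^2) time instead of the O(n!) of exhaustive search.
    """
    n = len(dist)
    tour, visited = [start], {start}
    while len(visited) < n:
        current = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[current][c])
        tour.append(nxt)
        visited.add(nxt)
    tour.append(start)  # return to the starting city
    cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
    return cost, tour


d = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(tsp_nearest_neighbor(d))  # (33, [0, 1, 3, 2, 0]): fast, but the optimum is 21
```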
Knapsack Problem:
In the 0/1 knapsack problem, a set of items, each with a weight and value, must be
packed into a knapsack with a given weight capacity to maximize the total value. This is
another NP-complete problem that arises in resource allocation and optimization tasks.
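A standard dynamic-programming sketch for the 0/1 knapsack (added for illustration): its O(n·W) running time is only pseudo-polynomial, because the capacity W can be exponential in the number of bits needed to write it down, so the example does not contradict NP-completeness.

```python
def knapsack_01(values, weights, capacity):
    """0/1 knapsack via dynamic programming over capacities.

    Runs in O(n * capacity) time: pseudo-polynomial, since `capacity` may be
    exponential in the length of its binary encoding.
    """
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        for cap in range(capacity, weight - 1, -1):  # go downward so each item is used at most once
            best[cap] = max(best[cap], best[cap - weight] + value)
    return best[capacity]


print(knapsack_01(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```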
Graph Coloring Problem:
In the graph coloring problem, colors must be assigned to the vertices of a graph so that no two adjacent vertices share a color, using as few colors as possible. This problem is NP-complete, and its applications include scheduling, register allocation in compilers, and frequency assignment in communication networks.
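Because exact coloring is NP-complete, practice usually falls back on heuristics; the following greedy-coloring sketch (an illustration, not from the notes) runs in polynomial time but may use more colors than the optimum:

```python
def greedy_coloring(adjacency):
    """Greedy coloring: give each vertex the smallest color unused by its neighbors.

    Polynomial time, but it may use more colors than the chromatic number,
    which is NP-hard to compute exactly.
    """
    colors = {}
    for vertex in adjacency:
        taken = {colors[nbr] for nbr in adjacency[vertex] if nbr in colors}
        color = 0
        while color in taken:
            color += 1
        colors[vertex] = color
    return colors


graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(greedy_coloring(graph))  # {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```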
This lecture covered the definition of polynomial time and its importance in distinguishing
tractable problems from intractable ones. It discussed the practical implications of
problems being solvable in polynomial time and the consequences if P ≠ NP. We also
explored several NP-complete problems, including SAT, the Traveling Salesman Problem, and
others, which form the core of complexity theory and algorithm design.
Formally, in an NDTM, for a given state and input symbol, there can be multiple
possible transitions to different states, possibly with different tape symbols and
different head movements. The NDTM "chooses" a branch at each step, and we
consider the machine to accept an input if at least one of these non-deterministic
paths leads to an accepting state.
Example:
A non-deterministic machine might, given an input string, try all possible
configurations of truth assignments for a Boolean formula simultaneously. If at least
one of the configurations satisfies the formula, the machine accepts the input.
Definition of Non-Determinism:
Non-determinism refers to the ability of a computational model (like an NDTM) to make
multiple possible choices at each step of its computation. In the context of decision
problems, non-determinism allows a machine to simultaneously explore many possible
solutions and accept an input if any path leads to an accepting state.
Verification in NP:
The central idea of NP is that a solution to a problem can be verified efficiently (in
polynomial time), even if finding the solution may not be easy. A non-deterministic
Turing machine can guess a solution (non-deterministically), and then verify it in
polynomial time. This is why the class NP is often described as the set of problems
for which a proposed solution can be verified in polynomial time.
Example:
In the Boolean satisfiability problem (SAT), a non-deterministic machine could
"guess" an assignment of truth values to variables and then verify whether this
assignment satisfies the formula in polynomial time.
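A deterministic machine can simulate this nondeterministic "guess" only by enumerating all branches. The sketch below (illustrative Python, assuming a simple list-of-clauses CNF encoding where +i means variable i is true and -i means it is false) makes the gap explicit: verifying one assignment is cheap, but there are 2^n assignments to try.

```python
from itertools import product

def brute_force_sat(clauses, num_vars):
    """Deterministic simulation of the nondeterministic guess: try every assignment.

    Verifying a single assignment takes polynomial time, but in the worst
    case all 2^n assignments must be examined.
    """
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # a satisfying assignment acts as the certificate
    return None  # formula is unsatisfiable


# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], num_vars=3))
```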
Implications:
The ability of non-determinism to explore many possibilities at once is powerful.
However, we do not have a physical machine that can actually perform non-
deterministic computation in parallel across all branches. Instead, the concept of
non-determinism is primarily used in theoretical computer science to describe an
idealized, abstract model of computation.
Deterministic Computation:
In a deterministic computation (such as that of a deterministic Turing machine or a
classical computer), at each step, the machine is in a single state with a single transition
to the next state based on the current input symbol. There is no ambiguity in how the
machine proceeds. The computation is fully determined by the current configuration and
input.
Example:
A deterministic algorithm for sorting an array will follow a fixed series of steps to
arrange the elements in order. For a given input, the algorithm will always produce
the same output and follow the same series of operations.
Non-Deterministic Computation:
In a non-deterministic computation, multiple possible transitions exist from any given
state. Instead of a single path, there are potentially many paths that can be explored
simultaneously. The machine is thought to "choose" the best path (or paths) that lead to
a solution, which is why non-deterministic machines can be thought of as efficiently
solving problems by exploring all possible solutions at once.
Example:
Consider the problem of finding a Hamiltonian path in a graph. A non-deterministic
machine could non-deterministically guess the path and then verify its correctness
in polynomial time by checking if the path visits all vertices exactly once. If any such
path exists, the machine accepts the input; otherwise, it rejects it.
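The verification step in this example is exactly the kind of polynomial-time check NP requires. A minimal verifier sketch (illustrative; the adjacency-set graph representation is an assumption made for the example):

```python
def verify_hamiltonian_path(adjacency, path):
    """Polynomial-time verifier for a Hamiltonian-path certificate.

    `adjacency` maps each vertex to the set of its neighbors; `path` is the
    proposed certificate. The check takes a linear number of set lookups,
    even though no polynomial-time algorithm is known for *finding* the path.
    """
    if len(path) != len(adjacency) or set(path) != set(adjacency):
        return False  # must visit every vertex exactly once
    return all(path[i + 1] in adjacency[path[i]] for i in range(len(path) - 1))


graph = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(verify_hamiltonian_path(graph, [1, 2, 3, 4]))  # True
print(verify_hamiltonian_path(graph, [1, 3, 2, 4]))  # False (1-3 is not an edge)
```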
If P = NP, this would mean that non-deterministic algorithms, which can solve
problems in polynomial time by exploring many possibilities in parallel, can be
simulated by deterministic algorithms in polynomial time.
If P ≠ NP, it means that there are problems for which non-deterministic algorithms
can find solutions in polynomial time, but no deterministic algorithm can do the
same.
NP-Completeness and Non-Determinism:
A problem is NP-complete if it is in NP and is as hard as any other problem in NP,
meaning that every other problem in NP can be reduced to it in polynomial time. The
fact that NP-complete problems can be solved by non-deterministic Turing machines in
polynomial time (if P = NP) highlights the central role of non-determinism in the P vs NP
question.
It is widely conjectured that the polynomial hierarchy has more levels and that problems higher up in the hierarchy require more computational resources than those in NP.
Conclusion
Usage of Oracle Machines: Oracle Turing machines are used to investigate the
behavior of classes like P and NP under hypothetical conditions. They allow us to
explore how the inclusion of an oracle changes the relative power of different
complexity classes. If a machine can solve a problem with the help of an oracle, we
analyze how this affects the complexity of the problem.
For example, a machine equipped with a SAT oracle is then able to solve problems that would otherwise require exponential time by leveraging this instantaneous access to SAT solutions.
The Baker-Gill-Solovay Result (1975): One of the most famous results regarding the P vs
NP problem is the Baker-Gill-Solovay theorem, which shows that relativization does not
offer a complete solution to the question of whether P = NP . The theorem
demonstrates that relativization can fail to reveal the relationship between P and NP
because there exist oracles relative to which P = NP, and others relative to which P ≠ NP. In other words, oracles can be constructed under which deterministic and non-deterministic polynomial time coincide, and others under which they differ, so no definitive conclusion about the unrelativized question can be drawn from oracle results alone.
The theorem was one of the first to show that oracle results are not sufficient to
settle the P vs NP question. Although an oracle may suggest a particular
relationship between P and NP in some cases, it doesn't necessarily apply
universally.
This result highlighted the need for new techniques that do not rely solely on
relativization. It suggested that the true resolution of P vs NP might lie outside
the realm of relativizing arguments, prompting the development of more
sophisticated methods in complexity theory.
Relativizing arguments alone, therefore, cannot provide a general resolution to the P vs NP question.
This realization led researchers to seek new techniques that might be able to break
through the limitations of relativization.
1. Diagonalization: Diagonalization is the classical technique behind the time and space hierarchy theorems, but because diagonalization arguments relativize, the Baker-Gill-Solovay result implies they cannot by themselves separate P from NP.
2. Natural Proofs:
Natural proofs is a framework introduced by Razborov and Rudich in the 1990s that formalizes a certain kind of proof technique. A proof is called "natural" if it satisfies two properties:
1. Constructivity: the property used to distinguish "hard" functions from "easy" ones can itself be checked efficiently.
2. Largeness: the property holds for a large fraction of all Boolean functions.
The natural proofs barrier shows that certain approaches to proving that P ≠ NP are inherently difficult because they rely on methods that, in some sense, cannot separate P from NP.
This result essentially shows that typical techniques that are "natural" in the sense of
being broad and applicable to a wide class of problems are unlikely to resolve the P
vs NP question. As a result, researchers are motivated to find techniques outside of
this framework.
3. Structural Complexity: Structural complexity studies the relationships among complexity classes themselves, such as inclusions, collapses, and hierarchies, in the hope that the internal structure of these classes will reveal separations that problem-specific techniques have not.
Conclusion
This lecture explored several advanced techniques for tackling the P vs NP problem. We
examined relativization and the concept of oracle machines, which help us understand how
the power of a computational model can change with the inclusion of an oracle. The Baker-
Gill-Solovay result highlighted the limitations of relativizing methods in resolving the P vs NP
question, leading to the development of other techniques. We also discussed the importance
of diagonalization, natural proofs, and structural complexity as alternative approaches to
studying the problem. These methods have provided deeper insights into the nature of
computational complexity but have not yet provided a definitive answer to the P vs NP
question. Future progress will likely involve novel techniques that go beyond the traditional
approaches discussed in this lecture.
Over the decades, there have been many attempts to resolve the P vs NP problem, but none
have succeeded in providing a definitive proof. The main approaches to tackling the problem
have varied, ranging from algebraic and combinatorial methods to circuit complexity, and
from diagonalization techniques to relativization arguments.
1. Attempted Proofs Using Circuit Complexity: One of the major approaches to resolving
P = NP has been through the study of circuit complexity. The goal is to understand
whether NP-complete problems can be solved by circuits with polynomial size. Circuit
complexity concerns itself with how a problem can be computed using Boolean circuits, an alternative model of computation to Turing machines, measured by the number of gates (size) and the number of layers (depth).
2. Attempts Using Algebraic Methods: Some researchers have tried to approach the
problem from an algebraic perspective, focusing on whether the algebraic structure of
certain problems (such as polynomial equations or systems of linear constraints) can
provide a way to prove a separation between P and NP. However, while algebraic
methods have proven useful in some areas of complexity theory (such as proving lower
bounds for specific classes of problems), they have not yet been successful in resolving
the P vs NP question.
3. Attempts Using Diagonalization: Diagonalization arguments relativize, so by the Baker-Gill-Solovay result they cannot distinguish between deterministic and non-deterministic polynomial time in a way that settles the P vs NP question.
However, despite extensive work, no such lower bounds on proof sizes have been
established, and it remains unclear whether such a result is achievable.
Proof Systems and Length of Proofs: In proof complexity, the goal is to study the
length and structure of proofs required to solve problems in NP. Specifically, the field is
concerned with whether problems in NP can have short, easily checkable proofs (as non-
deterministic machines suggest), and if so, whether those proofs can be found in
polynomial time (as in P).
Complexity of Proofs: One major insight from proof complexity is the realization that
proofs for NP-complete problems could be exponentially large in certain proof systems.
This suggests that even though a non-deterministic machine can guess a solution in
polynomial time, finding the proof of that solution might require exponential resources.
Proving Lower Bounds: One significant development related to proof complexity was the advent of natural proofs, a framework for analyzing the techniques used to prove lower bounds. The discovery of barriers to natural proofs led to a better understanding of the inherent difficulty of proving lower bounds, and has influenced the direction of research on P vs NP.
Natural Proofs Framework: The natural proofs framework, introduced by Razborov and Rudich in the mid-1990s, formalizes a broad class of combinatorial arguments for proving circuit lower bounds. As defined earlier, a proof is considered "natural" if it is constructive (the distinguishing property can be checked efficiently) and large (the property holds for a large fraction of all Boolean functions).
The Razborov-Rudich Barrier: In their result, Razborov and Rudich showed that natural proofs cannot be used to separate P from NP. Specifically, they demonstrated that if there existed a natural proof that P ≠ NP, it would contradict the widely believed existence of strong pseudorandom generators (equivalently, of exponentially hard one-way functions).
Barriers from Natural Proofs: The Razborov-Rudich result showed that natural proof
methods cannot be used to resolve the P vs NP question, effectively ruling out one
important class of potential approaches. This result highlights the complexity of the
problem and the need for new, more powerful techniques.
Lack of Progress in Circuit Complexity: Despite the development of lower bounds for
specific types of circuits (like constant-depth circuits), proving super-polynomial lower
bounds for general circuits remains elusive. Until a breakthrough is made in this area,
circuit complexity approaches are unlikely to provide a definitive answer to the P vs NP
question.
Although the P vs NP problem remains unsolved, there are several promising research
directions that may eventually lead to new insights or breakthroughs:
1. Quantum Computing and P vs NP: Quantum computing has introduced new paradigms
of computation, and some researchers have explored the possibility of solving NP
problems more efficiently using quantum algorithms. However, even though quantum
computers offer significant speed-ups for certain problems, there is no evidence to
suggest that quantum computing can resolve the P vs NP question directly.
2. Fine-Grained Complexity: The field of fine-grained complexity studies the precise time
complexities of specific problems, particularly NP-complete problems. Researchers are
investigating whether small improvements in the algorithms for NP problems can lead
to insights about P vs NP. For example, if an NP-complete problem can be solved in
significantly faster than exponential time for certain cases, this might have implications
for the broader P vs NP question.
3. Advanced Circuit Lower Bounds: Progress in proving lower bounds for circuits that
solve NP-complete problems is a key ongoing area of research. Advances in this area
could lead to breakthroughs in understanding the separation between P and NP.
4. New Proof Systems: Researchers are exploring alternative proof systems and logical
frameworks to understand the complexity of NP problems more deeply. Innovations in
proof systems may lead to new insights that are not constrained by the barriers of
natural proofs.
Conclusion
In this lecture, we explored the search for a proof of P = NP and the various attempts to
resolve the problem. We discussed the challenges posed by proof complexity, the natural
proofs barrier, and the limitations of techniques like relativization and circuit complexity.
Despite these challenges, ongoing research in areas like quantum computing, fine-grained
complexity, and advanced circuit lower bounds continues to push the boundaries of
complexity theory, offering hope for future breakthroughs. The P vs NP problem remains
one of the central questions in computer science, and solving it will likely require new
methods that transcend the current barriers.
Cryptography relies heavily on the assumption that certain problems are computationally
difficult to solve. In particular, the security of many cryptographic protocols depends on the
belief that solving NP-complete problems (or related hard problems) requires exponential
time, and therefore, is infeasible in practice. The P vs NP question directly challenges this
assumption, as a resolution in favor of P = NP would imply that there are efficient
(polynomial-time) algorithms for solving NP-complete problems.
1. RSA and Public-Key Cryptography: The security of RSA encryption, one of the most
widely used public-key cryptosystems, is based on the assumption that integer
factorization is difficult. Integer factorization is in NP (indeed in NP ∩ co-NP) but is neither known to be solvable in polynomial time nor believed to be NP-complete, and no polynomial-time classical algorithm for it is known. The security of RSA would be jeopardized if it were
proven that integer factorization could be done in polynomial time, which would follow
from P = NP .
2. Discrete Logarithm Problem (DLP): In systems like Diffie-Hellman key exchange and
Elliptic Curve Cryptography (ECC), security relies on the hardness of the discrete
logarithm problem. The discrete logarithm problem, which involves finding the
exponent k such that g^k = h (where g and h are elements of a finite group), is considered computationally difficult. If P = NP, an efficient solution for the discrete
logarithm problem could undermine the security of these cryptographic systems.
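To make the asymmetry concrete, here is a toy brute-force discrete-logarithm search (illustrative only, with a tiny modulus): checking a candidate exponent is a single modular step, but finding the exponent by search scans the whole group, and the group size grows exponentially with the bit length of the modulus.

```python
def discrete_log_brute_force(g, h, p):
    """Solve g^k = h (mod p) by exhaustive search over exponents.

    The loop runs over the whole group, so its cost grows exponentially in
    the bit length of p; real cryptographic moduli make this hopeless.
    """
    value = 1
    for k in range(p):
        if value == h:
            return k
        value = (value * g) % p
    return None


# Tiny toy group: find k with 3^k = 13 (mod 17).
print(discrete_log_brute_force(3, 13, 17))  # 4, since 3**4 = 81 = 13 (mod 17)
```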
The overwhelming majority of cryptographic systems today are based on the assumption
that P ≠ NP, meaning that NP-complete problems are not solvable in polynomial time.
These systems include:
1. Public-Key Cryptography (RSA, ECC, DLP): Systems like RSA and ECC rely on the fact that
problems like integer factorization and discrete logarithms are computationally difficult.
The security of these systems assumes that there is no polynomial-time algorithm for
solving these problems, which would be invalidated if P = NP .
2. Symmetric-Key Cryptography (AES, SHA-256): While symmetric-key cryptographic
systems rely on different assumptions (e.g., brute-force attacks being infeasible), their
security often depends on the intractability of certain search problems, such as finding collisions in hash functions (problems that are hard in practice, though not known to be NP-complete). If P = NP, attacks on these systems might become feasible.
The computational hardness assumptions on which cryptographic systems are based are
critical to their security. These assumptions typically fall into two categories:
The security of cryptography is thus directly tied to the intractability of certain problems. If
P = NP , many widely used cryptographic systems would no longer be secure, and
alternative approaches would need to be found.
Conclusion
In this lecture, we explored the significant impact that the P vs NP problem has on the field
of cryptography. Cryptographic systems, both public-key and symmetric-key, are heavily
reliant on the assumption that certain problems are computationally difficult and cannot be
solved in polynomial time. If P = NP , it would imply that many of these problems could be
solved efficiently, undermining the security of existing cryptographic protocols. This would
have profound consequences for the security of digital communication, financial
transactions, and privacy. Current cryptographic systems are based on the assumption that
P ≠ NP, and resolving the P vs NP problem will have far-reaching consequences for the
future of cryptography and computational security.
While the P vs NP problem is one of the most famous and fundamental questions in
complexity theory, it is not the only open problem. Several other problems are still
unresolved, and they have deep implications for our understanding of computational
complexity.
1. The Polynomial Hierarchy (PH): The polynomial hierarchy is a hierarchy of complexity classes that generalizes NP and co-NP into a multi-level structure. The hierarchy consists of
alternating classes of problems that can be solved by machines with access to oracles
that alternate between NP and co-NP.
The levels of the hierarchy are denoted Σ^P_k and Π^P_k, where k indicates the level of the hierarchy. The first level Σ^P_1 is NP, and Π^P_1 is co-NP.
Open Problem: One of the major open problems in this area is whether the polynomial hierarchy collapses at some level, which would imply that the hierarchy is much smaller than expected. For example, if P = NP, the entire hierarchy collapses to P; more generally, a collapse at level k would mean that alternation beyond that level adds no additional power.
PSPACE is the class of problems that can be solved using polynomial space. Unlike time
complexity, which limits the amount of time an algorithm can use, space complexity focuses
on the amount of memory or space required by the algorithm to solve a problem.
1. Definition of PSPACE: A problem is in PSPACE if there exists a Turing machine that solves
it using a polynomial amount of space (but possibly exponential time). The class PSPACE
includes many problems that are considered harder than those in P, yet solvable with
relatively efficient memory usage.
PSPACE vs NP: PSPACE is also believed to be strictly larger than NP. However, if PSPACE were equal to NP, the entire polynomial hierarchy would collapse into NP (since PH ⊆ PSPACE), which would be a dramatic result and suggest that the structure of complexity classes is much simpler than currently believed.
Quantum computing has revolutionized our thinking about complexity theory. Quantum
computers utilize quantum mechanics to solve certain problems more efficiently than
classical computers. While quantum computing is unlikely to provide a definitive solution to
P vs NP in the classical sense, it introduces an entirely new perspective on computational
complexity.
Quantum P vs NP: One of the major questions in quantum complexity theory is the
relationship between the quantum classes and the classical classes. It is still
unknown how quantum polynomial time (BQP) relates to NP: BQP is neither known to contain NP nor known to be contained in it. The intersection of quantum computing and classical
complexity classes could potentially lead to new insights into P vs NP.
On the other hand, it is believed that quantum computers offer exponential speed-
ups for certain problems (like factoring large numbers, which is the basis for RSA
encryption), but it is not clear how this affects the classical P vs NP problem.
Conclusion
In this lecture, we explored several important areas of complexity theory that extend
beyond the P vs NP problem. These areas include the polynomial hierarchy, the Karp-Lipton
theorem, the class PSPACE, and the emerging field of quantum complexity. While the P vs
NP question remains a central focus, these other questions and classes are critical for
understanding the broader landscape of computational complexity. Furthermore, we
discussed the limitations of classical computation and the potential boundaries of
theoretical computer science, which are constantly being reshaped by new discoveries in
quantum and alternative computation. The ongoing research in these areas will continue to
push the boundaries of what is computationally possible and challenge our understanding
of the very nature of computation.
The P vs NP problem remains one of the most central and unsolved problems in theoretical
computer science. Despite being formulated in 1971 by Stephen Cook and independently by
Leonid Levin, and despite major efforts over the decades, there is still no resolution.
Quantum Computing: Although quantum computers have not yet solved the P vs
NP problem, they have introduced new ways of thinking about computational
complexity and have been proven to offer polynomial-time solutions for specific
problems like integer factorization and discrete logarithms (e.g., Shor’s algorithm).
Relativization and Natural Proofs: Some significant results, such as the Baker-Gill-
Solovay oracle separation and the Razborov-Rudich result on natural proofs, have
shown that certain techniques (like relativization and natural proofs) are insufficient
to resolve P vs NP. This means that the standard techniques used in classical
complexity theory may not be enough to provide a solution.
1. Can P vs NP Be Resolved?
2. The Role of New Techniques: New techniques and approaches may still provide a way
forward. One promising direction is the use of algebraic methods or proof complexity,
which might allow for more insights into the nature of NP-completeness. Additionally, as
quantum computing progresses, new quantum models of complexity may shed light on
the classical P vs NP question, though current evidence suggests that the two are
distinct.
3. Is a Proof Impossible? While many believe that the problem can eventually be resolved,
some have proposed that it might be impossible to prove the P vs NP question within
the standard axiomatic framework of mathematics, much as the continuum hypothesis was shown to be independent of the standard axioms of set theory. It remains an open question whether the P vs NP problem is inherently
beyond the reach of formal proof, or whether it is merely waiting for the right
breakthrough.
The outcome of the P vs NP question would have profound implications for both theoretical
and applied computer science.
1. If P = NP:
Impact on Computational Problems: If P = NP , it would imply that every problem
for which a solution can be verified in polynomial time can also be solved in
polynomial time. This would provide efficient algorithms for solving NP-complete
problems, revolutionizing fields like optimization, cryptography, machine learning,
and operations research.
2. If P ≠ NP:
Validation of Complexity Classes: If P ≠ NP, it would affirm the widely held belief
that some problems are fundamentally harder to solve than others, preserving the
distinction between tractable and intractable problems. This would validate the
framework of NP-completeness and provide a deeper understanding of
computational limits.
The P vs NP question has inspired a variety of important open problems in complexity theory
and computer science. These problems often involve approximability, the hardness of
approximation, and finer distinctions within the landscape of computational complexity.
1. Approximability:
A key open problem is understanding the approximation ratio for various NP-
complete problems and whether efficient approximations exist for problems that
are provably hard to solve exactly.
2. Hardness of Approximation:
The study of hardness of approximation involves proving that certain problems are
not only hard to solve exactly, but also hard to approximate within certain bounds.
The Unique Games Conjecture and its implications for hardness of approximation
have inspired much recent research.
Open questions in this area include determining tight bounds for approximating NP-
complete problems and understanding the relationship between exact and
approximate solutions in computational complexity.
3. Fine-Grained Complexity: As discussed earlier, fine-grained complexity asks for the precise time complexities achievable for specific problems and whether even modest improvements over brute-force algorithms are possible; such results would sharpen our picture of the boundary around P.
The P vs NP problem will continue to play a critical role in the future of theoretical computer
science. Even though a resolution may be far off, the tools and techniques developed in the
attempt to solve the problem have already had a significant impact on the field.
3. Furthering Understanding of Complexity Classes: The ongoing research into
complexity theory, inspired by the P vs NP question, will shape the boundaries of what
is computationally feasible. The field of quantum computing and its intersection with
complexity theory will further expand the boundaries of theoretical computer science,
providing new avenues for research.
Conclusion
In this final lecture, we explored the current state of the P vs NP problem, its theoretical
implications, and the open research questions inspired by it. While the problem remains
unsolved, its study has advanced our understanding of computational complexity in
profound ways, influencing both theory and practice. Whether or not a resolution is achieved
in the near future, the P vs NP question will continue to guide and shape the development of
computer science for years to come.