
Theory of Computation Lecture Notes

Prof. Yuh-Dauh Lyuu
Dept. of Computer Science & Information Engineering and Department of Finance
National Taiwan University

© 2004 Prof. Yuh-Dauh Lyuu, National Taiwan University

Page 1

Class Information
• Papadimitriou. Computational Complexity. 2nd printing. Addison-Wesley. 1995.
  - The best book on the market for graduate students.
• We more or less follow the topics of the book.
  - More advanced materials may be added.
• Check www.csie.ntu.edu.tw/~lyuu/complexity/2003 for last year's lecture notes.
• You may want to review discrete mathematics.


Page 2

Class Information (concluded)


• More information and future lecture notes (in PDF format) can be found at www.csie.ntu.edu.tw/~lyuu/complexity.html
• Please ask many questions in class.
  - The best way for me to remember you in a large class.^a
• Teaching assistants will be announced later.

^a "[A] science concentrator [...] said that in his eighth semester of [Harvard] college, there was not a single science professor who could identify him by name." (New York Times, September 3, 2003.)


Page 3

Grading
• No roll calls.
• No homework.
  - Try some of the exercises at the end of each chapter.
• Two to three examinations.
  - You must show up for the examinations, in person.
  - If you cannot make it to an examination, please email me beforehand (unless there is a legitimate reason).
  - Missing the final examination will earn a fail grade.


Page 4

A Brief History (Biased towards Complexity)


• 1930–1931: Gödel's (1906–1978) completeness and incompleteness theorems and recursive functions.
• 1935–1936: Kleene (1909–1994), Turing (1912–1954), Church (1903–1995), and Post (1897–1954) on computability.
• 1936: Turing defined Turing machines and oracle Turing machines.
• 1938: Shannon (1916–2001) used Boolean algebra for the design and analysis of switching circuits; circuit complexity was also born.
  - Shannon's master's thesis was possibly the most important, and also the most famous, master's thesis of the century.


Page 5

A Brief History (continued)


• 1947: Dantzig invented the simplex algorithm for linear programming.
• 1947: Paul Erdős (1913–1996) popularized the probabilistic method. (Also Shannon (1948).)
• 1949: Shannon established information theory.
• 1949: Shannon's study of cryptography was published.
• 1956: Ford and Fulkerson's network flows.
• 1959: Rabin and Scott's notion of nondeterminism.


Page 6

A Brief History (continued)


• 1964–1966: Solomonoff, Kolmogorov, and Chaitin formalized Kolmogorov complexity (program size and randomness).
• 1965: Hartmanis and Stearns started complexity theory and hierarchy theorems (see also Rabin (1960)).
• 1965: Edmonds identified NP and P (the actual names were coined by Karp in 1972).
• 1971: Cook invented the idea of NP-completeness.
• 1972: Karp established the importance of NP-completeness.
• 1972–1973: Karp, Meyer, and Stockmeyer defined the polynomial hierarchy.


Page 7

A Brief History (continued)


• 1973: Karp studied PSPACE-completeness.
• 1973: Meyer and Stockmeyer studied exponential time and space.
• 1973: Baker, Gill, and Solovay studied NP = P relative to oracles.
• 1975: Ladner studied P-completeness.
• 1976–1977: Rabin, Solovay, Strassen, and Miller proposed probabilistic algorithms (for primality testing).
• 1976–1978: Diffie, Hellman, and Merkle invented public-key cryptography.


Page 8

A Brief History (continued)


• 1977: Gill formalized randomized complexity classes.
• 1978: Rivest, Shamir, and Adleman invented RSA.
• 1978: Fortune and Wyllie defined the PRAM model.
• 1979: Garey and Johnson published their book on computational complexity.
• 1979: Valiant defined #P.
• 1979: Pippenger defined NC.
• 1979: Khachiyan proved that linear programming is in polynomial time.
• 1979: Yao founded communication complexity.


Page 9

A Brief History (continued)


• 1980: Lamport, Shostak, and Pease defined the Byzantine agreement problem in distributed computing.
• 1981: Shamir proposed cryptographically strong pseudorandom numbers.
• 1982: Goldwasser and Micali proposed probabilistic encryption.
• 1982: Yao founded secure multiparty computation.
• 1982: Goldschlager, Shaw, and Staples proved that the maximum flow problem is P-complete.
• 1982–1984: Yao, Blum, and Micali founded pseudorandom number generation on complexity theory.


Page 10

A Brief History (continued)


• 1983: Ajtai, Komlós, and Szemerédi constructed an O(log n)-depth, O(n log n)-size sorting network.
• 1984: Valiant founded computational learning theory.
• 1984–1985: Furst, Saxe, Sipser, and Yao proved exponential bounds for parity circuits of constant depth.
• 1985: Razborov proved exponential lower bounds for monotone circuits.
• 1985: Goldwasser, Micali, and Rackoff invented zero-knowledge proofs.
• 1985: Sleator and Tarjan invented on-line algorithms.


Page 11

A Brief History (continued)


• 1986: Goldreich, Micali, and Wigderson proved that every problem in NP has a zero-knowledge proof under certain complexity assumptions.
• 1987: Adleman and Huang proved that primality testing can be solved in randomized polynomial time.
• 1987–1988: Szelepcsényi and Immerman proved that NL equals coNL.
• 1989: Blum and Kannan proposed program checking.
• 1990: Shamir proved IP = PSPACE.
• 1990: Du and Hwang settled the Gilbert–Pollak conjecture on Steiner tree problems.


Page 12

A Brief History (concluded)


• 1992: Arora, Lund, Motwani, Sudan, and Szegedy proved the PCP theorem.
• 1993: Bernstein, Vazirani, and Yao established quantum complexity theory.
• 1994: Shor presented a quantum polynomial-time algorithm for factoring.
• 1996: Ajtai on the shortest lattice vector problem.
• 2002: Agrawal, Kayal, and Saxena discovered a polynomial-time algorithm for primality testing.


Page 13

Problems and Algorithms


Page 14

"I have never done anything useful."
Godfrey Harold Hardy (1877–1947), A Mathematician's Apology (1940)


Page 15

What This Course Is All About


• Computability: What can be computed? What is computation anyway?
• There are well-defined problems that cannot be computed.
  - In fact, most problems cannot be computed.


Page 16

What This Course Is All About (concluded)


• Complexity: What is a computable problem's inherent complexity?
  - Some computable problems require at least exponential time and/or space; they are intractable.
  - Some practical problems require superpolynomial resources unless certain conjectures are disproved.
• Other resource limits besides time and space?
  - Program size, circuit size (growth), number of random bits, etc.


Page 17

Tractability and Intractability

• Polynomial in terms of the input size n defines tractability.
  - n, n log n, n^2, n^90.
  - Time, space, circuit size, number of random bits, etc.
• It results in a fruitful and practical theory of complexity.
  - Few practical, tractable problems require a large degree.
• Exponential-time or superpolynomial-time algorithms are usually impractical.
  - n^{log n}, 2^{√n}, 2^n, n! ≈ √(2πn) (n/e)^n.


Page 18

Growth of Factorials
n     n!                   n     n!
1     1                    9     362,880
2     2                    10    3,628,800
3     6                    11    39,916,800
4     24                   12    479,001,600
5     120                  13    6,227,020,800
6     720                  14    87,178,291,200
7     5,040                15    1,307,674,368,000
8     40,320               16    20,922,789,888,000


Page 19

Most Important Results: a Sampler


• An operational definition of computability.
• Decision problems in logic are undecidable.
• Decision problems on program behavior are usually undecidable.
• Complexity classes and the existence of intractable problems.
• Complete problems for a complexity class.
• Randomization and cryptographic applications.
• Approximability.


Page 20

Turing Machines


Page 21

What Is Computation?
• That which can be coded in an algorithm.
• An algorithm is a detailed step-by-step method for solving a problem.
  - The Euclidean algorithm for the greatest common divisor is an algorithm (a sketch follows this list).
  - "Let s be the least upper bound of compact set A" is not an algorithm.
  - "Let s be a smallest element of a finite-sized array" can be solved by an algorithm.
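As a concrete instance of the Euclidean algorithm mentioned above, here is a minimal sketch in Python; the function name gcd and the sample numbers are ours, not from the notes.

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```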


Page 22

Turing Machines^a

A Turing machine (TM) is a quadruple M = (K, Σ, δ, s).
• K is a finite set of states.
• s ∈ K is the initial state.
• Σ is a finite set of symbols (disjoint from K).
  - Σ includes ⊔ (blank) and ▷ (first symbol).
• δ : K × Σ → (K ∪ {h, "yes", "no"}) × Σ × {←, →, −} is a transition function.
  - ← (left), → (right), and − (stay) signify cursor movements.

^a Turing (1936).
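One concrete way to write such a quadruple down is sketched below in Python; the toy machine (which merely scans right over a binary input and accepts) is our own example, not one from the notes.

```python
# A toy TM that scans right over its binary input and then answers "yes".
K = {"s0"}                              # finite set of states
SIGMA = {">", "_", "0", "1"}            # ">" marks the left end, "_" is the blank
s = "s0"                                # initial state

# delta: (state, symbol) -> (next state or "h"/"yes"/"no", symbol written, move)
delta = {
    ("s0", ">"): ("s0", ">", "R"),      # never overwrite ">", always move right off it
    ("s0", "0"): ("s0", "0", "R"),
    ("s0", "1"): ("s0", "1", "R"),
    ("s0", "_"): ("yes", "_", "-"),     # first blank past the input: accept
}
```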


Page 23

A TM Schema

[Figure: a TM schema; the tape holds 1000110000111001110001110 and a cursor scans one symbol.]


Page 24

Physical Interpretations
• The tape: computer memory and registers.
• δ: the program.
• K: instruction numbers.
• s: main() in C.
• Σ: the alphabet, much like the ASCII code.


Page 25

More about δ

• The program has the halting state (h), the accepting state ("yes"), and the rejecting state ("no").
• Given the current state q ∈ K and current symbol σ ∈ Σ, δ(q, σ) = (p, ρ, D).
  - It specifies the next state p, the symbol ρ to be written over σ, and the direction D the cursor will move afterwards.
• We require δ(q, ▷) = (p, ▷, →) so that the cursor never falls off the left end of the string.


Page 26

The Operations of TMs


• Initially the state is s.
• The string on the tape is initialized to a ▷, followed by a finite-length string x ∈ (Σ − {⊔})*.
  - x is the input of the TM.
  - The input must not contain ⊔'s (why?)!
• The cursor is pointing to the first symbol, always a ▷.
• The TM takes each step according to δ.
• The cursor may overwrite ⊔'s during the computation to make the string longer.


Page 27

Program Count
• A program has a finite size.
• Recall that δ : K × Σ → (K ∪ {h, "yes", "no"}) × Σ × {←, →, −}.
• So |K| × |Σ| lines suffice to specify a program, one line per pair from K × Σ.
• Given K and Σ, there are ((|K| + 3) × |Σ| × 3)^{|K| × |Σ|} possible δ's (see next page; a quick numeric check also follows below).
  - This is a constant, albeit a large one.
  - Different δ's may define the same behavior.
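A quick sanity check of this count, sketched in Python; the particular sizes |K| = 2 and |Σ| = 4 are arbitrary choices of ours.

```python
# Count the transition functions delta : K x Sigma -> (K u {h,yes,no}) x Sigma x {left,right,stay}.
K_size, Sigma_size = 2, 4                            # arbitrary small example
choices_per_entry = (K_size + 3) * Sigma_size * 3    # target states x written symbols x moves
entries = K_size * Sigma_size                        # one entry per (state, symbol) pair
print(choices_per_entry ** entries)                  # 60**8 = 167961600000000
```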


Page 28

[Figure: the (|K| + 3) × |Σ| × 3 possibilities for each entry δ(q, σ).]


Page 29

The Halting of a TM
A TM M may halt in three cases.
• "yes": M accepts its input x, and M(x) = "yes".
• "no": M rejects its input x, and M(x) = "no".
• h: M(x) = y, where the string consists of a ▷, followed by a finite string y, whose last symbol is not ⊔, followed by a string of ⊔'s.
  - y is the output of the computation.
  - y may be empty, denoted by ε.
• If M never halts on x, then write M(x) = ↗.


Page 30

Why TMs?
• Because of the simplicity of the TM, the model has advantages when it comes to complexity issues.
• One can develop a complexity theory based on C++ or Java, say.
  - But the added complexity does not yield additional fundamental insights.
• We will describe TMs in pseudocode.


Page 31

The Concept of Conguration


• A configuration is a complete description of the current state of the computation.
• The specification of a configuration is sufficient for the computation to continue as if it had not been stopped.
  - What does your PC save before it sleeps? Enough for it to resume work later.
  - Similar to the concept of a Markov process in stochastic processes or dynamical systems.


Page 32

Congurations (concluded)
A configuration is a triple (q, w, u):
• q ∈ K.
• w is the string to the left of the cursor (inclusive).
• u is the string to the right of the cursor.
• Note that (w, u) describes both the string and the cursor position.


Page 33

[Figure: the tape contents 1000110000111001110001110 with the cursor on the tenth symbol (the last symbol of w).]

• w = 1000110000.
• u = 111001110001110.
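The same example written out as a triple, in a small Python sketch of ours; it only illustrates how (w, u) pins down both the tape contents and the cursor position.

```python
# Configuration (q, w, u): the cursor scans the last symbol of w.
q, w, u = "q", "1000110000", "111001110001110"   # the state name "q" is arbitrary
scanned = w[-1]        # symbol under the cursor: '0'
position = len(w)      # cursor position (1-based): 10
tape = w + u           # full string: 1000110000111001110001110
print(q, scanned, position, tape)
```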


Page 34

Yielding
Fix a TM M.
• Configuration (q, w, u) yields configuration (q', w', u') in one step,
  (q, w, u) →_M (q', w', u'),
  if a step of M from configuration (q, w, u) results in configuration (q', w', u').
• (q, w, u) →_M^k (q', w', u'): configuration (q, w, u) yields configuration (q', w', u') in k ∈ ℕ steps.
• (q, w, u) →_M^* (q', w', u'): configuration (q, w, u) yields configuration (q', w', u').
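To make the yields relation concrete, here is a small sketch in Python of a single step on a configuration (q, w, u), plus a loop that iterates it until a halting state; the dictionary encoding of δ and the toy machine (which appends a 1 to its input) are our own illustrative choices, not taken from the notes.

```python
BLANK, FIRST = "_", ">"

def step(delta, q, w, u):
    """One step: configuration (q, w, u) yields the next configuration.
    w ends with the scanned symbol; u is everything to the right."""
    p, rho, d = delta[(q, w[-1])]
    w = w[:-1] + rho                      # overwrite the scanned symbol
    if d == "R":                          # move right, extending the tape with a blank if needed
        w, u = w + (u[:1] or BLANK), u[1:]
    elif d == "L":                        # move left
        w, u = w[:-1], w[-1] + u
    return p, w, u                        # d == "-" keeps the cursor in place

def run(delta, s, x):
    """Start from configuration (s, '>', x) and step until h, "yes", or "no"."""
    q, w, u = s, FIRST, x
    while q not in ("h", "yes", "no"):
        q, w, u = step(delta, q, w, u)
    return q, (w + u).rstrip(BLANK)

# Toy machine: scan right to the first blank, write 1 there, and halt.
delta = {
    ("s0", FIRST): ("s0", FIRST, "R"),
    ("s0", "0"): ("s0", "0", "R"),
    ("s0", "1"): ("s0", "1", "R"),
    ("s0", BLANK): ("h", "1", "-"),
}
print(run(delta, "s0", "0110"))           # ('h', '>01101')
```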


Page 35

Example: How to Insert a Symbol


• We want to compute f(x) = ax.
• The TM moves the last symbol of x to the right by one position.
• It then moves the next-to-last symbol to the right, and so on.
• The TM finally writes a in the first position.
• The total number of steps is O(n), where n is the length of x.
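The same right-shift idea, emulated in Python on a list that plays the role of the tape; the emulation and its rough step counter are ours (a real TM would carry each symbol in its state while shifting).

```python
def insert_front(tape, a):
    """Shift every input symbol one cell to the right, then write `a` first."""
    steps = 0
    tape = tape + ["_"]                  # one fresh blank cell at the right end
    for i in range(len(tape) - 2, -1, -1):
        tape[i + 1] = tape[i]            # move symbol i one position to the right
        steps += 1
    tape[0] = a                          # finally write `a` in the first position
    steps += 1
    return tape, steps

print(insert_front(list("xyz"), "a"))    # (['a', 'x', 'y', 'z'], 4)
```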


Page 36

Palindromes
• A string is a palindrome if it reads the same forwards and backwards (e.g., 001100).
• A TM program can be written to recognize palindromes:
  - It matches the first character with the last character.
  - It then matches the second character with the next-to-last character, etc. (see next page).
  - It answers "yes" for palindromes and "no" for nonpalindromes.
• This program takes O(n^2) steps (a step-count sketch follows below).
• Can we do better?
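A sketch in Python of the zig-zag strategy just described, counting cursor moves to show the quadratic behavior; the emulation and its counter are our own (the real machine blanks out symbols it has already matched).

```python
def palindrome_steps(x):
    """Repeatedly compare the two ends, mark them off, and count cursor moves."""
    tape, moves = list(x), 0
    left, right = 0, len(tape) - 1
    while left < right:
        if tape[left] != tape[right]:
            return "no", moves
        tape[left] = tape[right] = "_"   # mark the matched pair off
        moves += 2 * (right - left)      # walk to the far end and back
        left, right = left + 1, right - 1
    return "yes", moves

print(palindrome_steps("001100"))        # ('yes', 18): about n^2/2 moves for n = 6
```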


Page 37

[Figure: matching the first and last symbols of 100011000000100111, then the second and the next-to-last, and so on.]


Page 38

A Matching Lower Bound for palindrome


Theorem 1 (Hennie (1965)) palindrome on single-string TMs takes Ω(n^2) steps in the worst case.


Page 39

The Proof: Setup


[Figure: the input to P(x, y) consists of x, a middle section of m symbols, and y^r, e.g., 100011000000100111. A cut is placed in the middle section; each crossing of the cut communicates at most log_2 |K| bits, and the machine finally answers "yes" or "no".]


Page 40

The Proof: Communications


• Our input is more restricted; hence any lower bound holds for the original problem.
• Each communication between the two halves across the cut is a state from K, hence of size O(1).
• C(x, x): the sequence of communications for palindrome problem P(x, x) across the cut.
  - It is a sequence of states from K.


Page 41

The Proof: Communications (concluded)


• C(x, x) ≠ C(y, y) when x ≠ y.
  - Suppose otherwise: C(x, x) = C(y, y).
  - Then C(y, y) = C(x, y) by the cut-and-paste argument (see next page).
  - Hence P(x, y) has the same answer as P(y, y)!
  - But P(y, y) is a palindrome while P(x, y) with x ≠ y is not, a contradiction.
• So C(x, x) is distinct for each x.


Page 42

[Figure: the cut-and-paste argument, combining the computation on P(x, x) to the left of the cut with the computation on P(y, y) to the right of the cut to obtain a computation on P(x, y).]


Page 43

The Proof: Amount of Communications


• Assume |x| = |y| = m = n/3.
• |C(x, x)| is the number of times the cut is crossed.
• We first seek a lower bound on the total number of communications:
  ∑_{x∈{0,1}^m} |C(x, x)|.
• Define
  ℓ ≡ (m + 1) log_{|K|} 2 - log_{|K|} m - 1 + log_{|K|} (|K| - 1).


Page 44

The Proof: Amount of Communications (continued)


• There are |K|^i distinct C(x, x)'s with |C(x, x)| = i. Hence there are at most
  ∑_{i=0}^{ℓ} |K|^i = (|K|^{ℓ+1} - 1) / (|K| - 1) ≤ |K|^{ℓ+1} / (|K| - 1) = 2^{m+1} / m
  distinct C(x, x)'s with |C(x, x)| ≤ ℓ.
• The rest must have |C(x, x)| > ℓ.
• Because C(x, x) is distinct for each x (p. 42), there are at least 2^m - 2^{m+1}/m of them with |C(x, x)| > ℓ.
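A quick numeric check of this count, sketched in Python; the values |K| = 5 and m = 20 are arbitrary choices of ours.

```python
import math

K, m = 5, 20                                          # arbitrary small example
ell = (m + 1) * math.log(2, K) - math.log(m, K) - 1 + math.log(K - 1, K)
short_seqs = sum(K**i for i in range(int(ell) + 1))   # sequences of length <= ell
bound = 2 ** (m + 1) / m                              # the slide's bound 2^(m+1)/m
print(round(ell, 2), short_seqs, bound, short_seqs <= bound)   # ... True
```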


Page 45

The Proof: Amount of Communications (concluded)


• Thus
  ∑_{x∈{0,1}^m} |C(x, x)| ≥ ∑_{x∈{0,1}^m, |C(x,x)| > ℓ} |C(x, x)| > (2^m - 2^{m+1}/m) ℓ.
• As ℓ = Θ(m), the total number of communications is
  ∑_{x∈{0,1}^m} |C(x, x)| = Ω(m 2^m).     (1)


Page 46

The Proof (continued)


• We now lower-bound the worst-case number of communication points in the middle section.

[Figure: the input x, the m-symbol middle section with a cut at position i, and x^r; the machine answers "yes" or "no".]


Page 47

The Proof (continued)


• C_i(x, x) denotes the sequence of communications for P(x, x) given the cut at position i.
• Then ∑_{i=1}^{m} |C_i(x, x)| is the number of steps spent in the middle section for P(x, x).
• Let
  T(n) = max_{x∈{0,1}^m} ∑_{i=1}^{m} |C_i(x, x)|.
  - T(n) is the worst-case running time spent in the middle section when dealing with any P(x, x) with |x| = m.
  - Note that T(n) ≥ ∑_{i=1}^{m} |C_i(x, x)| for any x ∈ {0,1}^m.


Page 48

The Proof (continued)


• Now,
  2^m T(n) = ∑_{x∈{0,1}^m} T(n)
           ≥ ∑_{x∈{0,1}^m} ∑_{i=1}^{m} |C_i(x, x)|
           = ∑_{i=1}^{m} ∑_{x∈{0,1}^m} |C_i(x, x)|.


Page 49

The Proof (concluded)


• By the pigeonhole principle,^a there exists an i, 0 ≤ i ≤ m, such that
  ∑_{x∈{0,1}^m} |C_i(x, x)| ≤ 2^m T(n) / m.
• Eq. (1) on p. 46 says that
  ∑_{x∈{0,1}^m} |C_i(x, x)| = Ω(m 2^m).
• Hence
  T(n) = Ω(m^2) = Ω(n^2).

^a Dirichlet (1805–1859).


Page 50
