Many computational processes make use of randomness to run efficiently.
At the same time, computational ideas underlie our basic concepts of
randomness.
We shall study various aspects of this interrelationship. For
example,
- Kolmogorov randomness (a.k.a. incompressibility) with a time bound
- "Probabilistic construction" of computationally useful
structures (expander graphs, etc.)
- Computationally strong pseudo-random number generators
- Proving hardness of approximation problems, without all of the
hairy algebra.
Some sample questions/issues:
- Suppose you have an algorithm that uses r
random bits and errs with probability (say) 1/4. Re-running
the algorithm independently uses more random bits and
decreases the error probability. How many random bits must
you use to get an error probability of 1/16? Of (1/4)^k in
general? (A naive amplification sketch appears after this list.)
- How does the Kolmogorov complexity of a satisfiable Boolean
formula relate to the Kolmogorov complexity of its satisfying
assignments? For example,
  - If a formula F is satisfied only by the all-true assignment,
    can F be maximally incompressible? (And what is the maximal
    complexity of a well-formed formula, anyway?)
  - Can one easily test compressible formulas for satisfiability?
  (A small compression experiment also appears after this list.)
- Intuitively, a chaotic dynamical system is both complex and
(quasi-)random. Can notions from computational complexity theory
illuminate chaos? Can notions from chaos theory illuminate
computational complexity?
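On the first question, the baseline is naive repetition with fresh random
bits. Below is a minimal Python sketch; the one-sided-error "test" is a
hypothetical stand-in for a real algorithm, not anything specific from the
course. Running k independent trials and answering YES if any trial does
drives the error on YES instances down to (1/4)^k, while multiplying the
random-bit cost by k.

```python
import random

def noisy_test(x_is_yes: bool, rng: random.Random) -> bool:
    """Toy one-sided-error algorithm: on a YES instance it wrongly
    answers NO with probability 1/4; on a NO instance it never errs."""
    if x_is_yes:
        return rng.random() >= 0.25
    return False

def amplified(x_is_yes: bool, k: int, rng: random.Random) -> bool:
    """k independent re-runs, answering YES if any run says YES.
    Error on YES instances falls to (1/4)^k; random bits grow k-fold."""
    return any(noisy_test(x_is_yes, rng) for _ in range(k))

rng = random.Random(0)
trials = 100_000
for k in (1, 2, 3):
    errors = sum(not amplified(True, k, rng) for _ in range(trials))
    print(f"k={k}: empirical error {errors / trials:.4f}, theory {0.25 ** k:.4f}")
```

The point of the question is that this k-fold blowup in random bits is not
optimal: techniques such as pairwise-independent sampling and random walks
on expander graphs achieve comparable error reduction while reusing most of
the original random bits.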
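On the Kolmogorov-complexity question: K(x) is uncomputable, but the
compressed length of a string always upper-bounds its complexity (up to the
fixed overhead of the decompressor), which is enough for a crude experiment.
In the Python sketch below, the formula encodings and the zlib proxy are
illustrative choices of mine; the regular formula x1 & x2 & ... & xn is
satisfied only by the all-true assignment yet compresses extremely well,
while a random-looking 3-CNF of similar length does not.

```python
import random
import zlib

def compressed_len(s: str) -> int:
    """Crude upper-bound proxy for Kolmogorov complexity:
    length of the zlib-compressed encoding of the string."""
    return len(zlib.compress(s.encode("utf-8"), 9))

n = 200

# Highly regular formula, satisfied only by the all-true assignment.
regular = " & ".join(f"x{i}" for i in range(1, n + 1))

# Random-looking 3-CNF over the same variables.
rng = random.Random(0)
clauses = []
for _ in range(n):
    lits = [("~" if rng.random() < 0.5 else "") + f"x{rng.randint(1, n)}"
            for _ in range(3)]
    clauses.append("(" + " | ".join(lits) + ")")
random_cnf = " & ".join(clauses)

print(f"regular: {len(regular):5d} chars -> {compressed_len(regular):4d} compressed")
print(f"random:  {len(random_cnf):5d} chars -> {compressed_len(random_cnf):4d} compressed")
```

Of course this only bounds complexity from above; whether a formula whose
unique satisfying assignment is trivially describable can itself be
maximally incompressible is exactly the kind of question the course takes up.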
Two not-too-technical descriptions of some of the course content:
(If either link fails or declines to serve the paper, and you are not
on a UW computer, sign on to one and try again. If either fails from
a UW computer, please let me know.)