Traditionally the analysis of algorithms measures the complexity of a
problem or algorithm in terms of the worst-case behavior over all
inputs of a given size. However, in certain cases an improved
algorithm can be obtained by considering a finer partition of the
input space. For instance, the classical algorithms to compute the convex
hull of n points in the plane have worst-case complexity O(n log n),
but an output-sensitive algorithm is known with complexity O(n log h),
where h is the number of vertices on the convex hull. Since h is never
greater than n, the latter algorithm is never worse, and often better,
than one which only optimizes for the worst case.
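The O(n log h) bound is achieved, for example, by the Kirkpatrick-Seidel and Chan algorithms. A simpler illustration of output-sensitive running time is the gift-wrapping (Jarvis march) algorithm, which runs in O(nh) time and is therefore already faster than O(n log n) whenever the hull is small. A minimal sketch, assuming at least three input points and no three collinear points:

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def gift_wrapping(points):
    """Jarvis march: O(n*h) time, where h is the number of hull vertices.
    Assumes at least 3 points and no three collinear points."""
    start = min(points)      # leftmost point (lowest y on ties)
    hull, p = [], start
    while True:
        hull.append(p)
        # candidate q: any point other than p
        q = points[0] if points[0] != p else points[1]
        for r in points:
            # if r lies to the right of the ray p -> q, r is a better candidate
            if r != p and cross(p, q, r) < 0:
                q = r
        p = q
        if p == start:       # wrapped all the way around
            return hull
```

On five points of which four form a square and one lies inside, the algorithm performs h = 4 wrapping steps regardless of how many interior points are added.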
Outline:
This finer analysis, over a partition of the input space, has been
independently rediscovered in many areas. We will cover examples from
each of these areas:
In Computational Geometry, where the complexity of many algorithms is
expressed as a function of the size of their output, which in many cases
is a better indicator of the difficulty of the instance.
In Parameterized Complexity, where certain intractable problems become
tractable if a certain parameter of the problem is constrained, while the
input size is allowed to vary.
In Algorithm Design and Analysis, where problems such as sorting have been
studied under a measure of difficulty on the input, yielding improved
algorithms for every instance as opposed to the worst case alone.
In Databases, where instance-optimal algorithms were introduced in a
celebrated paper by Fagin et al. In this field it is important to resolve
each query as fast as possible, and analyzing the complexity of the
algorithms with a finer measure of difficulty can lead to improved
algorithms.
In Online Analysis, where the performance of the off-line optimum plays
the role of a per-instance measure of the input difficulty, and the online
algorithm is expected to match it, usually up to a constant factor.
In Smoothed Analysis, where the complexity of an algorithm is measured by
the expected cost over instances within a small distance of the input.
In Artificial Intelligence, where adaptive measures have been used to
study the performance of clustering algorithms.
Work required:
A new paper is presented in each class by an instructor or a student.
For the course each student must
present one paper in class,
write a project proposal,
write and present a literature survey,
write and present a 10-page article summarizing the results of their project,
and complete the assignments.
In addition, for each class, each student must write a short feedback
report on the other students' presentations.