Parameterized Complexity

[This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Parameterized Complexity".]

In computer science, parameterized complexity is a branch of computational complexity theory that measures the complexity of problems with respect to multiple parameters of the input, rather than the input size alone. The theory of parameterized complexity was developed in the 1990s by Rod Downey and Michael Fellows; their 1999 monograph [1] presents an introduction to the field.

The theory of parameterized complexity is motivated, among other things, by the observation that many hard problems, while (most likely) requiring exponential runtime when complexity is measured in terms of the input size alone, are solvable in time that is polynomial in the input size and exponential in a (small) parameter k. Hence, if k is fixed at a small value, such problems can still be considered ‘tractable’ despite their traditional classification as ‘intractable’.

The existence of efficient, exact, and deterministic algorithms for NP-complete, or otherwise NP-hard, problems is considered unlikely if input parameters are not fixed: all known algorithms for these problems require time that is exponential in the total size of the input. However, some problems can be solved by algorithms that are exponential only in a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable (fpt-)algorithm, because the problem can be solved efficiently for small values of the fixed parameter.

Problems in which some parameter k is fixed are called parameterized problems. A parameterized problem that allows for such an fpt-algorithm is said to be a fixed-parameter tractable problem and belongs to the class FPT, and the early name of the theory of parameterized complexity was fixed-parameter tractability.

Many problems have the following form: given an object x and a nonnegative integer k, does x have some property that depends on k? For instance, for the vertex cover problem, the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is interesting to see whether we can find an algorithm which is exponential only in k, and not in the input size.

In this way, parameterized complexity can be seen as two-dimensional complexity theory. This concept is formalized as follows:

A parameterized problem is a language $L \subseteq \Sigma^* \times \mathbb{N}$, where $\Sigma$ is a finite alphabet. The second component is called the parameter of the problem.

A parameterized problem L is fixed-parameter tractable if the question “$(x, k) \in L?$” can be decided in running time $f(k) \cdot |x|^{O(1)}$, where f is an arbitrary function depending only on k. The corresponding complexity class is called FPT.
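For example, a running time of $O(2^k \cdot n)$ is of this form (take $f(k) = 2^k$) and hence fixed-parameter tractable, whereas a running time of $O(n^k)$ is not: there the exponent of the input size grows with the parameter and cannot be separated out into a factor $f(k)$.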

For example, there is an algorithm which solves the vertex cover problem in $O(kn + 1.274^{k})$ time, where n is the number of vertices and k is the size of the vertex cover. This proves that vertex cover is fixed-parameter tractable with respect to this parameter.
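To make the idea concrete, the following is a minimal sketch, in Python, of the classic bounded search tree approach to vertex cover: any cover must contain at least one endpoint of every edge, so one can branch on the two endpoints of an arbitrary uncovered edge. This simple variant runs in $O(2^k \cdot m)$ time on a graph with m edges, which is weaker than the bound cited above; the function name and graph representation are chosen for illustration only.

    from typing import FrozenSet, Tuple

    Edge = Tuple[int, int]

    def has_vertex_cover(edges: FrozenSet[Edge], k: int) -> bool:
        """Decide whether the graph given by `edges` has a vertex cover of
        size at most k. (Illustrative sketch; names are hypothetical.)"""
        # Pick any edge that still needs covering; if none remain, the
        # vertices chosen so far already form a cover.
        edge = next(iter(edges), None)
        if edge is None:
            return True
        if k == 0:
            return False  # an uncovered edge remains, but the budget is spent
        u, v = edge
        # Any cover must contain u or v, so branch on both choices,
        # deleting the edges that the chosen endpoint covers.
        return any(
            has_vertex_cover(frozenset(e for e in edges if w not in e), k - 1)
            for w in (u, v)
        )

    # A 4-cycle has a vertex cover of size 2 (e.g. {0, 2}) but none of size 1.
    square = frozenset({(0, 1), (1, 2), (2, 3), (3, 0)})
    assert has_vertex_cover(square, 2)
    assert not has_vertex_cover(square, 1)

Since the exponential factor $2^k$ depends only on the parameter, the algorithm runs in time linear in the number of edges for every fixed k, which is exactly what fixed-parameter tractability demands.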

Two more recent textbooks on the subject are [2] and [3].

Bibliography
1. Downey, Rod; Fellows, Michael (1999). Parameterized Complexity. Springer. ISBN 0-387-94883-X.
2. Flum, J.; Grohe, M. (2006). Parameterized Complexity Theory. Springer. ISBN 978-3-540-29952-3.
3. Niedermeier, Rolf (2006). Invitation to Fixed-Parameter Algorithms. Oxford University Press. ISBN 0-19-856607-7.
4. Downey, R.; Fellows, M.; Langston, M., guest eds. (2008). The Computer Journal, Volume 51, Numbers 1 and 3. Special double issue on parameterized complexity, with 15 survey articles, a book review, and a foreword by the guest editors.