[This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Parameterized Complexity".]

In computer science, **parameterized complexity** is a measure of complexity of problems with multiple input parameters. The theory of parameterized complexity was developed in the 1990s by Rod Downey and Michael Fellows. Their 1999 monograph [1] presents an introduction to the field.

The theory of parameterized complexity is motivated, among other things, by the observation that there exist several hard problems that (most likely) require exponential runtime when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential in a (small) parameter k. Hence, if k is fixed at a small value, such problems can still be considered ‘tractable’ despite their traditional classification as ‘intractable’.

The existence of efficient, exact, and deterministic algorithms for NP-complete, or otherwise NP-hard, problems is considered unlikely if input parameters are not fixed; all known algorithms for these problems require time that is exponential in the total size of the input. However, some problems can be solved by algorithms that are exponential only in a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable (fpt-)algorithm, because the problem can be solved efficiently for small values of the fixed parameter.

Problems in which some parameter k is fixed are called parameterized problems. A parameterized problem that allows for such an fpt-algorithm is said to be a **fixed-parameter tractable** problem and belongs to the class *FPT*, and the early name of the theory of parameterized complexity was **fixed-parameter tractability**.

Many problems have the following form: given an object *x* and a nonnegative integer *k*, does *x* have some property that depends on *k*? For instance, for the vertex cover problem, the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is interesting to see whether we can find an algorithm which is exponential *only* in *k*, and not in the input size.

In this way, parameterized complexity can be seen as *two-dimensional* complexity theory. This concept is formalized as follows:

A **parameterized problem** is a language $L \subseteq \Sigma^* \times \N$, where $\Sigma$ is a finite alphabet. The second component is called the **parameter** of the problem.

A parameterized problem $L$ is **fixed-parameter tractable** if the question “$(x, k) \in L$?” can be decided in running time $f(k) \cdot |x|^{O(1)}$, where $f$ is an arbitrary function depending only on $k$. The corresponding complexity class is called *FPT*.
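To see what the definition permits, it may help to compare a few concrete running times (a standard illustration, not drawn from the source):

```latex
\[
  \underbrace{2^k \cdot n, \qquad k! + n^3, \qquad 2^{2^k} \cdot n^2}_{\text{fixed-parameter tractable: of the form } f(k)\cdot |x|^{O(1)}}
  \qquad\text{versus}\qquad
  \underbrace{n^k}_{\text{not of this form}}
\]
```

In the first three running times the dependence on $k$ is confined to the factor $f(k)$ and the exponent of $n$ is a constant, whereas in $n^k$ the parameter appears in the exponent of the input size, so the polynomial degree grows with $k$.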

For example, there is an algorithm which solves the vertex cover problem in $O(kn + 1.274^{k})$ time, where *n* is the number of vertices and *k* is the size of the vertex cover. This proves that vertex cover is fixed-parameter tractable with respect to this parameter.
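The simplest fpt-algorithms for vertex cover use a bounded search tree: every edge must have at least one endpoint in the cover, so one can branch on the two endpoints of an uncovered edge, decreasing the budget $k$ in each branch. The sketch below (a hypothetical minimal implementation, not the cited $O(kn + 1.274^k)$ algorithm) runs in $O(2^k \cdot m)$ time on a graph with $m$ edges, which is enough to witness fixed-parameter tractability:

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph given by `edges` (a list of vertex pairs)
    has a vertex cover of size at most k, by bounded-depth branching."""
    if not edges:
        return True   # no edges left to cover; the empty set suffices
    if k == 0:
        return False  # edges remain but the budget is exhausted
    u, v = edges[0]
    # Any vertex cover must contain u or v; try both choices.
    without_u = [(a, b) for (a, b) in edges if a != u and b != u]
    without_v = [(a, b) for (a, b) in edges if a != v and b != v]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)
```

For example, a triangle has a vertex cover of size 2 but not of size 1, so `has_vertex_cover([(1, 2), (2, 3), (1, 3)], 2)` returns `True` while `has_vertex_cover([(1, 2), (2, 3), (1, 3)], 1)` returns `False`. The recursion tree has at most $2^k$ leaves, so the running time is of the required form $f(k) \cdot |x|^{O(1)}$.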

[2] and [3] are two recent textbooks on the subject.

[1] Downey, Rod; Fellows, Michael (1999). *Parameterized Complexity*. Springer. ISBN 0-387-94883-X.