Abstractly speaking, what have we done in the previous section? After applying a number of rules in polynomial time to an instance of Vertex Cover, we arrived at a reduced instance whose size can solely be expressed in terms of the parameter k. Since this can easily be done in O(n) time, we have found a data reduction for Vertex Cover with guarantees concerning its running time as well as its effectiveness. These properties are formalized in the concepts of a problem kernel and the corresponding kernelization.

Definition 1.2. Let L be a parameterized problem, that is, L consists of input pairs (I, k), where I is the problem instance and k is the parameter. A reduction to a problem kernel (or kernelization) means to replace an instance (I, k) in polynomial time by a reduced instance (I′, k′), called the problem kernel, such that
(1) k′ ≤ k,
(2) the size of I′ is smaller than g(k) for some function g depending only on k, and
(3) (I, k) has a solution if and only if (I′, k′) has one.

While this definition does not formally require that it is possible to reconstruct a solution for the original instance from a solution for the problem kernel, all kernelizations we are aware of easily allow for this. The methodological approach of kernelization, including various techniques of data reduction, is best learned from the concrete examples that we discuss in Section 1.3; there, we will also discuss kernelizations for Vertex Cover that even yield a kernel with a number of vertices linear in k.

To conclude this section, we state some useful general observations and remarks concerning Definition 1.2 and its connections to fixed-parameter tractability. Most notably, there is a close connection between fixed-parameter tractable problems and those problems that have a problem kernel: they are exactly the same.

Theorem 1.3 (Cai et al.). Every fixed-parameter tractable problem is kernelizable and vice versa.

Unfortunately, the practical use of this theorem is limited: the running time of a fixed-parameter algorithm obtained directly from a kernelization is usually not practical; and, in the other direction, the theorem does not constructively provide us with a data reduction scheme for a fixed-parameter tractable problem. Hence, the main use of Theorem 1.3 is to establish the fixed-parameter tractability of a problem or its amenability to kernelization, or to show that we need not search any further (e.g., if a problem is known to be fixed-parameter intractable, we do not need to look for a kernelization).

Rule VC3 explicitly needed the value of the parameter k. We call this a parameter-dependent rule, as opposed to the parameter-independent rules VC1 and VC2, which are oblivious to k. Of course, one typically does not know the actual value of k in advance and then has to get around this by iteratively trying different values of k. While, in practice, one would naturally prefer to avoid this extra outer loop, assuming explicit knowledge of the parameter clearly adds some leverage to finding data reduction rules and is hence frequently encountered in kernelizations. (In general, the constraint k < n is easily established. As Dehne et al. point out in their studies of Cluster Editing, it depends on the concrete problem which search strategy for the "optimum" value of k is most efficient to employ in practice.)
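As a concrete illustration of Definition 1.2 and of the rules discussed above, here is a minimal Python sketch of a Buss-style kernelization for Vertex Cover. Since the rules themselves are not restated in this excerpt, it assumes that VC1 deletes isolated vertices, VC2 takes the neighbor of a degree-1 vertex into the cover, and VC3 takes any vertex whose degree exceeds the remaining budget into the cover; all names are illustrative.

def kernelize_vertex_cover(adj, k):
    """Reduce (adj, k) to a problem kernel for Vertex Cover.

    adj maps each vertex to the set of its neighbors; k is the parameter.
    Returns (reduced_adj, reduced_k, partial_cover), or None if (adj, k)
    can already be recognized as a no-instance.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    partial_cover = set()

    def take(v):
        # Put v into the cover and delete it from the graph.
        partial_cover.add(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]

    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:                 # already deleted in this pass
                continue
            budget = k - len(partial_cover)
            if not adj[v]:                   # VC1 (assumed): isolated vertex
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:           # VC2 (assumed): degree-1 vertex
                take(next(iter(adj[v])))
                changed = True
            elif len(adj[v]) > budget:       # VC3 (assumed): high-degree vertex
                take(v)
                changed = True

    reduced_k = k - len(partial_cover)
    if reduced_k < 0:
        return None
    # Buss-type size bound: a yes-instance now has at most reduced_k^2 edges,
    # since every remaining vertex has degree at most reduced_k.
    num_edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    if num_edges > reduced_k * reduced_k:
        return None
    return adj, reduced_k, partial_cover

This matches the shape required by Definition 1.2: the reduction runs in polynomial time, the reduced parameter does not exceed k, and the size of the resulting kernel is bounded by a function of k alone.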
1.2.2. Depth-Bounded Search Trees

After preprocessing the given input data of a problem by a kernelization and cutting away its "easy parts," we are left with the "really hard" problem kernel to be solved. A standard way to explore the huge search space of a computationally hard problem is to perform a systematic exhaustive search. This can be organized in a tree-like fashion, which is the main subject of this section. Certainly, search trees are not a new idea and have been used extensively in the design of exact algorithms (e.g., see Refs. 37–41). The main contribution of fixed-parameter theory to search tree approaches is the consideration of search trees whose depth is bounded by the parameter, usually leading to search trees that are much smaller than those of naive brute-force searches. Additionally, the speed of search tree exploration can (provably) be improved by exploiting kernelizations.

An extremely simple search tree approach for solving Vertex Cover is to just take one vertex and branch into two cases: either this vertex is in the vertex cover or not. This leads to a search tree of size O(2^n). As we outline
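The following is a minimal sketch of a search tree whose depth is bounded by the parameter, in contrast to the naive O(2^n) tree above. It assumes the standard edge-branching strategy for Vertex Cover, which this excerpt does not spell out: pick any remaining edge {u, v}; at least one endpoint must be in the cover, so branch on taking u or taking v and decrease k in either branch. The names are again illustrative.

def vc_search_tree(adj, k):
    """Return True if the graph given by adj has a vertex cover of size at most k."""
    # Pick any remaining edge; if none is left, nothing needs to be covered.
    edge = next(((u, v) for u, nbrs in adj.items() for v in nbrs), None)
    if edge is None:
        return True
    if k == 0:
        return False                      # an edge remains but the budget is spent
    u, v = edge
    # Branch 1: u is in the cover.  Branch 2: v is in the cover.
    return (vc_search_tree(remove_vertex(adj, u), k - 1)
            or vc_search_tree(remove_vertex(adj, v), k - 1))

def remove_vertex(adj, v):
    """Return a copy of adj with vertex v and its incident edges deleted."""
    return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

Each branch decreases k by one and the recursion stops once k reaches zero, so the tree has depth at most k and size O(2^k), which is the kind of parameter-bounded search tree described above.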

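Since the value of k is typically not known in advance, a driver can iterate over candidate values as described above and run the kernelization before each search, illustrating how kernelization and the depth-bounded search tree can be combined. This is only a hypothetical way of putting the two sketches above together; in particular, minimum_vertex_cover_size is not a function from the chapter.

def minimum_vertex_cover_size(adj):
    """Try k = 0, 1, 2, ... until a vertex cover of size k is found."""
    for k in range(len(adj) + 1):           # some k <= n always works
        kernel = kernelize_vertex_cover(adj, k)
        if kernel is None:
            continue                        # no cover of size k exists
        reduced_adj, reduced_k, partial_cover = kernel
        if vc_search_tree(reduced_adj, reduced_k):
            return k

# Example: a path on four vertices has a minimum vertex cover of size 2.
example = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(minimum_vertex_cover_size(example))   # prints 2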