The decomposition method is currently one of the major methods for solving the convex quadratic optimization problems associated with Support Vector Machines (SVM-optimization). A key issue in this approach is the policy for working set selection. We would like to find policies that realize (as well as possible) three goals simultaneously: “(fast) convergence to an optimal solution”, “efficient procedures for working set selection”, and “a high degree of generality” (including typical variants of SVM-optimization as special cases). In this paper, we study a general policy for working set selection that was proposed in [Nikolas List, Hans Ulrich Simon, A general convergence theorem for the decomposition method, in: Proceedings of the 17th Annual Conference on Computational Learning Theory, 2004, pp. 363–377] and further analyzed in [Nikolas List, Hans Ulrich Simon, General polynomial time decomposition algorithms, in: Proceedings of the 18th Annual Conference on Computational Learning Theory, 2005, pp. 308–322]. This policy is known to approach feasible solutions of minimum cost efficiently for any convex quadratic optimization problem. Here, we investigate its computational complexity when it is applied to SVM-optimization. It turns out that, for working sets of variable size, the general policy poses an NP-hard working set selection problem. However, a slight variation of it (which shares the convergence properties of the original policy) leads to a selection problem that can be solved in polynomial time. For working sets of fixed size 2, the situation is even better: in this case, the general policy coincides with the “rate certifying pair approach” introduced by Hush and Scovel. We show that maximum rate certifying pairs can be found in linear time, which leads to a quite efficient decomposition method with a polynomial convergence rate for SVM-optimization.
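To illustrate the setting, the following is a minimal sketch of a decomposition method with working sets of fixed size 2 for the standard C-SVM dual, min_a (1/2) a^T Q a - e^T a subject to y^T a = 0 and 0 <= a <= C, with Q_ij = y_i y_j K(x_i, x_j). The working-set selection shown here is the simple first-order “maximal violating pair” heuristic, chosen only for concreteness; it is not the rate certifying pair policy analyzed in this paper, and the function name and tolerance parameter are illustrative assumptions.

```python
import numpy as np

def decompose_svm(K, y, C, eps=1e-3, max_iter=10000):
    """Sketch of a size-2 decomposition loop for the C-SVM dual.
    K: kernel (Gram) matrix, y: labels in {-1, +1}, C: box bound."""
    n = len(y)
    Q = (y[:, None] * y[None, :]) * K        # Q_ij = y_i y_j K_ij
    a = np.zeros(n)                          # feasible start: a = 0
    g = -np.ones(n)                          # gradient Q a - e at a = 0

    for _ in range(max_iter):
        # Working set selection (illustrative heuristic): pick the pair
        # that most violates the first-order optimality conditions.
        up  = ((y == 1) & (a < C)) | ((y == -1) & (a > 0))
        low = ((y == 1) & (a > 0)) | ((y == -1) & (a < C))
        viol = -y * g
        i = np.where(up)[0][np.argmax(viol[up])]
        j = np.where(low)[0][np.argmin(viol[low])]
        if viol[i] - viol[j] <= eps:         # approximate optimality reached
            break

        # Solve the two-variable subproblem on the working set {i, j}
        # along the feasible direction d = y_i e_i - y_j e_j.
        curv = K[i, i] + K[j, j] - 2.0 * K[i, j]
        deriv = y[i] * g[i] - y[j] * g[j]
        t = -deriv / max(curv, 1e-12)        # unconstrained step length

        # Clip the step so both updated variables stay in [0, C].
        lo_i, hi_i = (-a[i], C - a[i]) if y[i] == 1 else (a[i] - C, a[i])
        lo_j, hi_j = (a[j] - C, a[j]) if y[j] == 1 else (-a[j], C - a[j])
        t = np.clip(t, max(lo_i, lo_j), min(hi_i, hi_j))

        # Update the two dual variables and the full gradient.
        a[i] += y[i] * t
        a[j] -= y[j] * t
        g += t * (y[i] * Q[:, i] - y[j] * Q[:, j])
    return a
```

The outer loop structure (select a small working set, solve the induced subproblem exactly, repeat) is common to all decomposition methods; the policies studied in the paper differ only in how the working set is chosen in each iteration.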