Abstract

One can parametrize the set of instances of an optimization problem and ask which approximation ratio is achievable in polynomial time with respect to this parametrization. When the approximation ratio grows with the parameter but is independent of the size of the instances, we speak of stable approximation algorithms. An interesting point is that stable approximation algorithms exist for problems like the TSP, which is not approximable within any polynomial approximation ratio in polynomial time (assuming P ≠ NP). The investigation of the stability of approximation in this way overcomes the difficulties of measuring complexity and approximation ratio in the worst-case manner, because it may succeed in partitioning the set of all input instances of a hard problem into infinitely many classes with respect to the hardness of the particular inputs. We believe that approaches like this will become the core of algorithmics, because they provide a deeper insight into the hardness of specific problems, and in many applications we are not interested in the worst-case hardness of a problem but in the hardness of the forthcoming problem instances.

Keywords: Approximation Algorithm, Problem Instance, Complete Graph, Triangle Inequality, Approximation Ratio
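To make the idea concrete, the following is a minimal sketch (not from the paper) of the classic MST-doubling heuristic for metric TSP: build a minimum spanning tree and shortcut a preorder walk of it into a tour. Under the triangle inequality the tour costs at most twice the MST, hence at most twice the optimum; if the triangle inequality is relaxed by a parameter β ≥ 1 (d(u,w) ≤ β·(d(u,v) + d(v,w))), the shortcutting step degrades and the guarantee grows with β while staying independent of the instance size, which is exactly the stability phenomenon described above. The function name and the Euclidean test instance are illustrative choices.

```python
import math

def mst_doubling_tour(points):
    """MST-doubling 2-approximation sketch for metric TSP on points
    in the plane: Prim's algorithm builds the MST, then a preorder
    walk of the tree, shortcut past repeated vertices, gives a tour
    of cost <= 2 * cost(MST) <= 2 * OPT under the triangle inequality."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Prim's algorithm: grow the tree from vertex 0, tracking for each
    # outside vertex its cheapest connection (cost, tree vertex).
    parent = {0: None}
    best = {v: (dist(0, v), 0) for v in range(1, n)}
    while best:
        v = min(best, key=lambda u: best[u][0])
        _, parent[v] = best.pop(v)
        for u in best:
            if dist(v, u) < best[u][0]:
                best[u] = (dist(v, u), v)
    # Preorder (depth-first) walk of the MST; visiting each vertex only
    # on its first appearance is the shortcutting step.
    children = {v: [] for v in range(n)}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour
```

On the unit square, the sketch returns a Hamiltonian tour of cost 4, which here happens to be optimal; in general only the factor-2 bound is guaranteed, and a β-relaxed triangle inequality weakens that bound as a function of β alone.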
