Abstract

In search-based software engineering, one actively studied problem is the selection of optimal software products from a feature model under multiple (usually more than three) optimization objectives simultaneously. This can be formulated as a many-objective optimization problem. The primary goal of solving this problem is to find diverse and high-quality valid products as quickly as possible. Previous studies have shown that combining search-based techniques with satisfiability (SAT) solvers is promising for achieving this goal, but it remained open how different solvers affect the performance of a search algorithm, and whether the ways of randomizing solutions in the solvers make a difference. Moreover, the necessity of mixing different types of SAT solving techniques requires further investigation. In this paper, we address these open research questions through a series of empirical studies on 21 feature models, most of which are reverse-engineered from industrial software product lines. We examine four conflict-driven clause learning (CDCL) solvers, two stochastic local search (SLS) solvers, and two different ways of randomizing solutions. The experimental results suggest that performance is indeed affected by the choice of SAT solver and by the way solutions are randomized in the solvers. This study serves as a practical guideline for choosing and tuning SAT solvers for the many-objective optimal software product selection problem.
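As a rough illustration of the SAT-based product derivation the abstract refers to, the sketch below is a minimal example (not the paper's implementation) that encodes a toy feature model as CNF and draws a randomized valid configuration from a CDCL solver. It assumes the PySAT library; the feature model, the choice of Glucose as solver, and the assumption-seeding randomization strategy are all hypothetical placeholders.

```python
# Minimal sketch (assumed setup, not the paper's tooling): derive one
# randomized valid product from a feature model encoded as CNF, using PySAT.
import random
from pysat.solvers import Glucose3  # a CDCL solver; the choice is arbitrary here

# Toy feature model (hypothetical): variables 1..4 are features.
# Feature 1 is the mandatory root; 2 requires 1; 3 and 4 are alternatives under 1.
clauses = [
    [1],              # root feature must be selected
    [-2, 1],          # feature 2 implies the root
    [-3, 1], [-4, 1], # features 3 and 4 imply the root
    [-3, -4],         # 3 and 4 are mutually exclusive
]

def random_valid_product(clauses, num_vars, rng=random):
    """Return one valid configuration, randomized by seeding the solver with
    random phase assumptions and dropping guesses until it becomes satisfiable."""
    with Glucose3(bootstrap_with=clauses) as solver:
        # Randomly guess a polarity (selected / deselected) for every feature.
        assumptions = [v if rng.random() < 0.5 else -v
                       for v in range(1, num_vars + 1)]
        rng.shuffle(assumptions)
        # Drop conflicting guesses until the remaining assumptions are consistent.
        while not solver.solve(assumptions=assumptions):
            assumptions.pop()
        return solver.get_model()

print(random_valid_product(clauses, num_vars=4))
```

In a search-based setting, a routine like this would supply valid (and varied) seed products to the many-objective search; how exactly the solutions are randomized inside the solver is one of the factors the paper studies.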
