In practice, the decision-maker (DM) may only be interested in a particular part of the Pareto-optimal front (PF). For this reason, many preference-based multi-objective evolutionary algorithms (MOEAs) have been proposed to find a solution set that approximates the region of interest (ROI). Most existing preference-based methods focus on selecting solutions in the ROI, while enhancing convergence in preference-based MOEAs has been neglected. Most decomposition-based approaches employ the achievement scalarizing function (ASF) as their scalarizing function. Although the ASF can tackle problems with arbitrary PF geometries, it provides weaker search ability than the weighted sum function (WSF). To strengthen the selection pressure toward the PF, this paper proposes a new scalarizing function, LSF, which is a linear combination of the WSF and the ASF. A simple adaptive penalty scheme is then employed in LSF to balance search ability and robustness. To focus the search on the ROI, we develop a reference point adjustment method that dynamically adjusts the position of the reference point according to its distance from the approximated target point. We apply these two innovations to the MOEA/D framework and propose a new preference-based MOEA, named RAMOEAD. Experimental results show that RAMOEAD is highly competitive compared with five state-of-the-art preference-based MOEAs. Finally, the proposed algorithm is extended to two reference points for solving problems in which the ROIs are defined by reservation and aspiration points.
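The abstract does not give the exact form of LSF, but the idea of linearly combining the two scalarizing functions can be sketched as follows. This is a hypothetical illustration: the ASF and WSF formulas below are the standard ones from the decomposition literature, and the mixing weight `theta` stands in for whatever value the paper's adaptive penalty scheme would produce.

```python
import numpy as np

def asf(f, w, z):
    """Standard achievement scalarizing function: worst weighted
    deviation of objective vector f from reference point z."""
    return np.max((f - z) / w)

def wsf(f, w):
    """Standard weighted sum function."""
    return np.dot(w, f)

def lsf(f, w, z, theta):
    """Sketch of a linear combination of ASF and WSF.
    theta is a placeholder for the adaptively penalized mixing
    weight; the paper's actual combination may differ."""
    return theta * asf(f, w, z) + (1.0 - theta) * wsf(f, w)

# Example: two objectives, equal weights, ideal point at the origin.
f = np.array([2.0, 3.0])
w = np.array([1.0, 1.0])
z = np.array([0.0, 0.0])
print(lsf(f, w, z, theta=0.5))
```

With `theta = 1` the function reduces to the pure ASF (robust to PF geometry), and with `theta = 0` to the pure WSF (stronger convergence pressure), so an adaptive `theta` can trade the two off during the run.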