Abstract

The ultimate goal of multiobjective optimization is to help a decision maker (DM) identify solution(s) of interest (SOI) that achieve satisfactory tradeoffs among multiple conflicting criteria. This can be realized by leveraging the DM's preference information in evolutionary multiobjective optimization (EMO). However, no consensus has been reached on whether incorporating preferences into EMO (either a priori or interactively) is more effective than a posteriori decision making after a complete run of an EMO algorithm. Bearing this consideration in mind, this article: 1) provides a pragmatic overview of the existing developments of preference-based EMO (PBEMO) and 2) conducts a series of experiments investigating the effectiveness of preference incorporation in EMO for approximating various SOI. In particular, the DM's preference information is elicited as a reference point, which represents her/his aspiration levels for the different objectives. The experimental results demonstrate that preference incorporation in EMO does not always lead to a desirable approximation of SOI if the DM's preference information is not properly utilized, or if the DM elicits invalid preference information, which is not uncommon when dealing with a black-box system. To a certain extent, this issue can be remedied through interactive preference elicitation. Last but not least, we find that a PBEMO algorithm can be generalized to approximate the whole Pareto front (PF) given an appropriate setup of the preference information.
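For readers unfamiliar with reference-point-based preference articulation, the sketch below illustrates one common way a reference point can steer the search: an achievement scalarizing function (ASF) scores candidate solutions by their weighted deviation from the DM's aspiration levels, so that candidates closer to the reference point are ranked higher. This is a generic textbook construction, not the specific selection mechanism of any PBEMO algorithm studied in the article; the function name, weights, and toy data are illustrative assumptions.

```python
import numpy as np

def achievement_scalarizing_function(F, z_ref, weights=None, rho=1e-6):
    """Score candidate solutions by an augmented ASF with respect to a
    DM-supplied reference point (minimization assumed; lower is better).

    F       : (n, m) array of objective vectors for n candidates
    z_ref   : (m,) reference point encoding the DM's aspiration levels
    weights : (m,) positive weights normalizing the objective scales
    rho     : small augmentation term discouraging weakly Pareto-optimal picks
    """
    F = np.asarray(F, dtype=float)
    z_ref = np.asarray(z_ref, dtype=float)
    if weights is None:
        weights = np.ones_like(z_ref)
    diff = weights * (F - z_ref)              # weighted deviation from aspirations
    return diff.max(axis=1) + rho * diff.sum(axis=1)

# Toy usage: three candidates on a bi-objective front; the DM aspires to (0.2, 0.5).
F = np.array([[0.1, 0.9], [0.3, 0.4], [0.8, 0.1]])
scores = achievement_scalarizing_function(F, z_ref=[0.2, 0.5])
print(F[np.argsort(scores)])  # candidates ordered by fit to the DM's preference
```

In a PBEMO setting, a score of this kind is typically combined with Pareto dominance (e.g., as a secondary ranking criterion) rather than used alone, so that convergence pressure toward the front is preserved while the population is biased toward the region of interest.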
