How to evaluate Pareto front approximations generated by multi- and many-objective optimizers is a critical issue in multiobjective optimization. Two types of comprehensive quality indicators currently exist: volume-based and distance-based indicators. Distance-based indicators, such as the inverted generational distance (IGD), are usually computed by summing the distance from each reference point to its nearest solution. Their high computational efficiency makes them prevalent in many-objective optimization. However, existing distance-based indicators usually neglect the distribution of the solution set, leaving them unable to distinguish well between different solution sets; this weakness may become even more severe in high-dimensional spaces. To address this issue, a kernel-based indicator (KBI) is proposed as a comprehensive indicator. Unlike other distance-based indicators, KBI adopts a kernel-based maximum mean discrepancy (MMD) to directly measure the difference between two sets, i.e., the solution set and the reference set, by embedding them in a reproducing kernel Hilbert space (RKHS); this difference characterizes the convergence, spread, and uniformity of the solution set. As a result, KBI reflects not only the distance between the solution set and the reference set but also the distribution of the solution set itself. In addition, to maintain the desirable weak Pareto compliance property of KBI, a nondominated set reconstruction approach is proposed to shift the original solution set. A detailed theoretical and experimental analysis of KBI is provided in this article, and its properties are also analyzed via the optimal <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\mu $ </tex-math></inline-formula>-distribution.
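The kernel-based MMD that underlies KBI can be sketched as follows. This is a generic (biased) squared-MMD estimate with a Gaussian kernel between a solution set and a reference set; the specific kernel, bandwidth, and normalization used by KBI are not given in the abstract, so the choices below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of X and Y,
    # then a Gaussian (RBF) kernel; sigma is an assumed bandwidth.
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(S, R, sigma=1.0):
    """Biased squared MMD between solution set S and reference set R.

    Small values mean the empirical distribution of S is close to that
    of R in the RKHS induced by the Gaussian kernel, so this measure is
    sensitive to convergence, spread, and uniformity simultaneously.
    """
    Kss = gaussian_kernel(S, S, sigma)  # solution-solution similarities
    Krr = gaussian_kernel(R, R, sigma)  # reference-reference similarities
    Ksr = gaussian_kernel(S, R, sigma)  # cross similarities
    return Kss.mean() + Krr.mean() - 2.0 * Ksr.mean()
```

Note that, unlike IGD's nearest-neighbor sum, the `Kss` term depends on the pairwise similarities *within* the solution set, which is how an MMD-style indicator can penalize a poorly distributed (e.g., clustered) solution set even when every reference point has a nearby solution.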