Abstract

Pareto optimization combines independent objectives by computing the Pareto front of its search space, defined as the set of all candidates whose score vectors are not dominated by the score vector of any other candidate. This gives, in a precise sense, better information than an artificial amalgamation of different scores into a single objective, but is more costly to compute. We define a general Pareto product operator *Par on scoring schemes. Independent of a particular algorithm, we prove that for two scoring schemes A and B used in dynamic programming, the scoring scheme A *Par B correctly performs Pareto optimization over the same search space. We show that a Pareto-eager implementation of dynamic programming can achieve the same asymptotics as a single-objective optimization that computes the same number of results. For a concrete application in RNA structure prediction, we show that the empirical size of the Pareto front remains within reasonable bounds. Without artificial amalgamation of objectives, and with no heuristics involved, Pareto optimization is faster than computing the same number of answers separately for each objective.
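For illustration only, the following is a minimal sketch (not the paper's algorithm or notation) of the dominance relation underlying the Pareto front: it filters a set of two-dimensional score pairs down to the non-dominated ones, assuming both objectives are maximized. The function name pareto_front and the sample candidates are hypothetical.

```python
def pareto_front(scores):
    """Return the non-dominated subset of (a, b) score pairs.

    A pair dominates another if it is at least as good in both
    objectives and strictly better in at least one (maximization).
    """
    # Sort by the first objective descending, then the second descending,
    # so a later pair can only be dominated through its second component.
    ordered = sorted(set(scores), key=lambda s: (-s[0], -s[1]))
    front = []
    best_b = float("-inf")
    for a, b in ordered:
        if b > best_b:          # not dominated by any pair seen so far
            front.append((a, b))
            best_b = b
    return front

if __name__ == "__main__":
    # Hypothetical candidate scores, e.g. (energy-like score, probability-like score).
    candidates = [(3, 1), (2, 2), (1, 3), (2, 1), (3, 1)]
    print(pareto_front(candidates))   # [(3, 1), (2, 2), (1, 3)]
```

In the paper's setting, such a dominance filter is applied to intermediate score lists during dynamic programming (the Pareto-eager strategy), rather than once over the full set of final candidates.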
