Abstract

Multiple Criteria Decision Aiding (MCDA) offers a diversity of approaches designed to provide the decision maker (DM) with a recommendation concerning a set of alternatives (items, actions) evaluated from multiple points of view, called criteria. This paper aims to draw the attention of the Machine Learning (ML) community to recent advances in a representative MCDA methodology, called Robust Ordinal Regression (ROR). ROR learns by examples in order to rank a set of alternatives, thus addressing a problem similar to that considered in Preference Learning within ML (PL-ML). However, ROR implements the interactive preference construction paradigm, which should be perceived as mutual learning between the model and the DM. The paper clarifies the specific interpretation of the concept of preference learning adopted in ROR and MCDA, comparing it to the usual concept of preference learning considered within ML. This comparison concerns the structure of the considered problem, the types of admitted preference information, the character of the employed preference models, the ways of exploiting them, and the techniques used to arrive at a final ranking.

Highlights

  • In ranking problems one aims at ordering a finite set of alternatives from the best to the worst, using a relative comparison approach.

  • Universality of the preference model refers to the non-specificity of the form of the value function: the less specific the form, the greater the chance that the model learns in a sequence of iterations. The additive value function with monotone marginal value functions considered within Robust Ordinal Regression (ROR) constitutes a very general preference model, far more universal than a model admitting only linear marginal value functions (its standard form is recalled after this list).

  • We have reviewed a non-statistical methodology of preference learning designed for multiple criteria ranking.
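
For concreteness, the additive model referred to in the second highlight can be written in its standard ROR formulation as follows, where g_i(a) is the performance of alternative a on criterion i, and \alpha_i and \beta_i denote the worst and best performances on that criterion:

    U(a) = \sum_{i=1}^{n} u_i(g_i(a)), \quad u_i(\alpha_i) = 0, \quad \sum_{i=1}^{n} u_i(\beta_i) = 1,

with each marginal value function u_i non-decreasing. Rather than fitting a single such function, ROR works with the whole set of functions of this form that are compatible with the DM's judgments.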

Summary

Introduction

In ranking problems, one aims at ordering a finite set of alternatives (items, actions) from the best to the worst, using a relative comparison approach. In ROR, the DM provides judgments concerning selected alternatives in the form of pairwise comparisons or rank-related requirements, expressed either holistically or with respect to particular criteria. These judgments constitute the input data for the ordinal regression, which finds the whole set of value functions able to reconstruct the preference information given by the DM. We compare different aspects of ranking problems and preference learning as considered in ROR and PL-ML. This comparison is continued throughout the paper with respect to the input preference information, the exploitation of the preferences, and the evaluation of the provided recommendation.
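
To make the ordinal-regression step concrete, below is a minimal sketch, not the authors' implementation, of how one may check by linear programming whether at least one additive value function with monotone marginal values is compatible with the DM's pairwise comparisons. It assumes Python with NumPy and SciPy; the alternatives, performances, and preference pairs are illustrative, and the marginal values are discretized over the observed performance levels.

```python
# Minimal sketch of the ordinal-regression step in ROR (illustrative, not the
# authors' code): given pairwise comparisons "a preferred to b" supplied by the
# DM, check via an LP whether at least one additive value function with
# monotone marginal values reproduces them, by maximizing the margin epsilon.
import numpy as np
from scipy.optimize import linprog

# Illustrative performances of 4 alternatives on 2 gain-type criteria.
perf = {
    "a1": [7.0, 3.0],
    "a2": [5.0, 5.0],
    "a3": [4.0, 6.0],
    "a4": [2.0, 8.0],
}
preferences = [("a1", "a2"), ("a3", "a4")]  # DM: a1 > a2 and a3 > a4
n_crit = 2

# Characteristic points: distinct performance levels observed per criterion.
levels = [sorted({p[i] for p in perf.values()}) for i in range(n_crit)]

# Variables: u_i(x) for every criterion i and level x, plus epsilon (last).
index, k = {}, 0
for i in range(n_crit):
    for x in levels[i]:
        index[(i, x)] = k
        k += 1
n_vars = k + 1  # the last variable is epsilon

def U_row(alt):
    """Coefficient row selecting the marginal values that sum to U(alt)."""
    row = np.zeros(n_vars)
    for i in range(n_crit):
        row[index[(i, perf[alt][i])]] = 1.0
    return row

eps = np.zeros(n_vars)
eps[-1] = 1.0

A_ub, b_ub = [], []
# Preference constraints: U(a) - U(b) >= eps, i.e. -U(a) + U(b) + eps <= 0.
for a, b in preferences:
    A_ub.append(-U_row(a) + U_row(b) + eps)
    b_ub.append(0.0)
# Monotonicity: u_i(x) <= u_i(x') for consecutive levels x < x'.
for i in range(n_crit):
    for lo, hi in zip(levels[i], levels[i][1:]):
        row = np.zeros(n_vars)
        row[index[(i, lo)]], row[index[(i, hi)]] = 1.0, -1.0
        A_ub.append(row)
        b_ub.append(0.0)

A_eq, b_eq = [], []
# Normalization: u_i(worst) = 0 for each i, and the best levels sum to 1.
for i in range(n_crit):
    row = np.zeros(n_vars)
    row[index[(i, levels[i][0])]] = 1.0
    A_eq.append(row)
    b_eq.append(0.0)
row = np.zeros(n_vars)
for i in range(n_crit):
    row[index[(i, levels[i][-1])]] = 1.0
A_eq.append(row)
b_eq.append(1.0)

# Maximize epsilon (linprog minimizes, hence the negated coefficient).
c = np.zeros(n_vars)
c[-1] = -1.0
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, 1)] * (n_vars - 1) + [(0, None)])
print("compatible value function exists:", res.status == 0 and res.x[-1] > 1e-9)
```

In full ROR, linear programs of this kind serve as building blocks: analogous formulations are solved to verify the necessary and possible preference relations and to perform the extreme ranking analysis listed in the outline below.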

Problem formulation
Data description
Input data
Performance measure
Ranking results
Interaction
Features of preference learning in robust ordinal regression
Preference information
Pairwise comparisons
Intensities of preference
Rank-related requirements
Hierarchy of criteria
Interaction between criteria
Margin of the misranking error
Recommendation
Necessary and possible preference relations
Extreme ranking analysis
Representative value function
Credibility of preference information and recommendation
Dealing with the inconsistency in ROR
Illustrative case study
Second iteration
Computational cost
Conclusions