Abstract

In this paper, we consider two parametric dominance-based rough set approaches (DRSA) proposed in the literature: variable precision DRSA (VP-DRSA) and variable consistency DRSA (VC-DRSA). They were introduced to cope with classification data encountered in practice, for which the original definition of lower approximations is too restrictive. Both extensions allow an augmentation of lower approximations, controlled parametrically in different ways. We give statistical interpretations of VP-DRSA and VC-DRSA from the perspective of empirical risk minimization, as is typical in machine learning. Given families of classifiers and loss functions, we consider classification problems that relate VP-DRSA and VC-DRSA directly to ordinal classification. We then characterize the parametrically augmented lower approximations of both approaches as optimal solutions of associated empirical risk minimization problems. As a consequence, a connection between parametric DRSA and statistical learning is established. Moreover, the new characterizations of the augmented lower approximations allow us to exhibit differences and similarities between VP-DRSA and VC-DRSA.
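The contrast between the two parametric augmentations mentioned above can be made concrete. The sketch below, a simplified illustration rather than the paper's formalism, assumes gain-type criteria and the commonly cited definitions from the DRSA literature: for the lower approximation of the upward union Cl_t^≥, VC-DRSA admits an object x only if x itself belongs to Cl_t^≥ and a sufficient fraction of its dominating set does, while VP-DRSA drops the membership requirement and relies on the precision ratio alone. All function names and toy data are illustrative.

```python
from fractions import Fraction

def dominates(y, x):
    """y dominates x: y is at least as good on every (gain-type) criterion."""
    return all(yv >= xv for yv, xv in zip(y, x))

def precision(U, labels, i, t):
    """Fraction of objects dominating U[i] that belong to Cl_t^>=."""
    dom = [j for j, y in enumerate(U) if dominates(y, U[i])]
    good = sum(1 for j in dom if labels[j] >= t)
    return Fraction(good, len(dom))  # dom always contains i itself

def vc_lower(U, labels, t, l):
    """VC-DRSA lower approximation of Cl_t^>= at consistency level l:
    only objects already in Cl_t^>= may enter."""
    return {i for i in range(len(U))
            if labels[i] >= t and precision(U, labels, i, t) >= l}

def vp_lower(U, labels, t, beta):
    """VP-DRSA lower approximation of Cl_t^>= at precision beta:
    membership of the object in Cl_t^>= is NOT required."""
    return {i for i in range(len(U))
            if precision(U, labels, i, t) >= beta}

# Toy data: four objects evaluated on two gain criteria, classes 1 < 2.
U = [(1, 1), (2, 2), (3, 3), (2, 1)]
labels = [1, 1, 2, 2]
print(vc_lower(U, labels, t=2, l=0.5))     # → {2, 3}
print(vp_lower(U, labels, t=2, beta=0.5))  # → {0, 1, 2, 3}
```

With the same threshold (0.5), VP-DRSA also admits objects 0 and 1, which are not in Cl_2^≥ but whose dominating sets are mostly consistent with it; this is one of the structural differences between the two approaches that the abstract alludes to.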
