Abstract

Yager and Petry recently proposed a quality-based methodology for combining data provided by multiple probabilistic sources to improve the quality of information delivered to decision-makers. This paper is a companion to that work: it adapts the methodology to possibilistic sources. Possibility theory is particularly well suited to coping with incomplete information from data-poor sources. The methodology and algorithms of the probabilistic approach are adapted to the possibilistic case, and the two approaches are then compared by means of a numerical example and four experimental benchmark datasets, one of which (the Iris dataset) is data-poorer than the other three (the Diabetes, Glass, and Liver-disorders datasets). As in the probabilistic case, a vector representation is introduced for a possibility distribution, and Gini's formulation of entropy is used. However, Gini's entropy must be applied differently than in the probabilistic case, which affects the selection of subsets. A fusion scheme is designed to select the best-quality subsets according to two information-quality factors: quantity of information and source credibility. Results from the comparison of the two approaches on the four benchmarks confirm the superiority of the possibilistic approach in the presence of information scarcity or incompleteness.
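For orientation, here is a minimal sketch (not the paper's actual algorithm) of Gini's formulation of entropy, 1 − Σ vᵢ², applied to a probability vector and to a normalized possibility vector. Because a possibility distribution is constrained by max πᵢ = 1 rather than Σ pᵢ = 1, the unmodified formula can yield negative values, which hints at why the possibilistic case requires using Gini's entropy differently:

```python
def gini_entropy(v):
    """Gini's formulation of entropy: 1 minus the sum of squared components."""
    return 1.0 - sum(x * x for x in v)

# Probabilistic source: components sum to 1.
p = [0.5, 0.3, 0.2]

# Possibilistic source: normalized so the largest component is 1;
# components need not sum to 1.
pi = [1.0, 0.6, 0.2]

print(gini_entropy(p))   # positive, as usual for a probability vector
print(gini_entropy(pi))  # negative: the probabilistic formula does not carry over
```

The vector names and values here are purely illustrative; the paper's actual adaptation of Gini's entropy to possibility distributions is what resolves this mismatch.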
