Abstract

Many big data applications give rise to distributional data, in which objects or individuals are naturally represented as K-tuples of bags of feature values, with the values in each bag sampled from a feature- and object-specific distribution. We formulate and solve the problem of learning classifiers from distributional data. We consider three classes of methods for learning distributional classifiers: (i) those that rely on aggregation to encode distributional data into tuples of attribute values, i.e., instances that can be handled by traditional supervised machine learning algorithms; (ii) those based on generative models of distributional data; and (iii) the discriminative counterparts of the generative models in (ii). We compare the performance of the different algorithms on real-world as well as synthetic distributional data sets. The results of our experiments demonstrate that classifiers that take advantage of the information available in the distributional instance representation match or outperform those that fail to fully exploit such information.
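To make the aggregation-based encoding in (i) concrete, below is a minimal sketch, assuming mean and standard deviation as the per-bag summary statistics; the function names (aggregate_instance, make_instance), the choice of statistics, the Gaussian toy data, and the use of logistic regression are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aggregate_instance(bags, stats=(np.mean, np.std)):
    """Encode a distributional instance (a K-tuple of bags of feature
    values) as a flat attribute-value vector by applying summary
    statistics to each bag. (Assumed statistics: mean and std.)"""
    return np.array([s(bag) for bag in bags for s in stats])

# Hypothetical toy data: each instance is a 2-tuple of bags whose values
# are drawn from feature- and object-specific Gaussian distributions.
rng = np.random.default_rng(0)

def make_instance(mu0, mu1, n=30):
    return (rng.normal(mu0, 1.0, n), rng.normal(mu1, 1.0, n))

X_dist = [make_instance(0, 0) for _ in range(50)] + \
         [make_instance(1, 2) for _ in range(50)]
y = np.array([0] * 50 + [1] * 50)

# Aggregation reduces each distributional instance to a fixed-length
# vector that any traditional supervised learner can consume.
X = np.vstack([aggregate_instance(bags) for bags in X_dist])
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Note that such an encoding keeps only the chosen summary statistics and discards the rest of each bag's distributional shape, which is precisely the information the generative and discriminative methods in (ii) and (iii) are able to exploit.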
