Abstract

The idea of computable aggregation operators was introduced as a generalization of aggregation operators, allowing the mathematical function usually considered for aggregation to be replaced by a program that performs the aggregation process. There are different reasons justifying this extension. One of them is the interest in exploring computational properties not directly related to the aggregation itself but to its implementation (complexity, recursion, parallelisation, etc.). Another reason, the one motivating the present paper, is the need to define a framework in which the quite common process of first sampling (over a large data set) and then aggregating the sample can be analysed as a formal aggregation process. This process does not match the idea of an aggregation function, due to its non-deterministic nature, but can easily be adapted to that of a (non-deterministic) computable aggregation. The idea of non-deterministic aggregation requires extending the concept of monotonicity (a key property of aggregation operators) to this new framework. The present paper explores this kind of non-deterministic aggregation process, first from an empirical point of view and then in terms of populations, adapting the idea of monotonicity to both and finally defining a common framework for their analysis.
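The sample-then-aggregate process described above can be sketched as a short program. This is a minimal illustration, not taken from the paper: the inner aggregation (here, the arithmetic mean), the sample size `k`, and the optional `seed` parameter are all assumptions chosen for the example. The point is that the aggregation is defined by a program whose output on the same input may vary between runs, which is exactly what an aggregation *function* cannot capture.

```python
import random
import statistics

def sample_then_aggregate(data, k, seed=None):
    """Non-deterministic computable aggregation (illustrative sketch):
    draw a random sample of size k from the data, then aggregate the
    sample with a classical aggregation function (here, the mean)."""
    rng = random.Random(seed)       # seed=None gives a fresh, non-deterministic run
    sample = rng.sample(data, k)    # the non-deterministic step
    return statistics.mean(sample)  # the deterministic aggregation step

data = list(range(1, 101))
# Two unseeded runs may return different values on the same input:
# the aggregation is a program, not a function of the data alone.
a = sample_then_aggregate(data, 10)
b = sample_then_aggregate(data, 10)
```

Fixing the seed recovers deterministic behaviour, which is one simple way such a process can still be compared against the monotonicity requirements discussed in the paper.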
