Abstract

In this article, we propose a process-based definition of big data, as opposed to size- and technology-based definitions. We argue that big data should be perceived as the continuous, unstructured, and unprocessed dynamics of primitives, rather than as points (snapshots) or summaries (aggregates) of an underlying phenomenon. Given this, we show that big data can be generated by agent-based models but not by equation-based models. Although statistical and machine-learning tools can be used to analyse big data, they do not constitute a big-data-generation mechanism. Furthermore, agent-based models can aid in evaluating the quality (interpreted as information-aggregation efficiency) of big data. On this basis, we argue that agent-based modelling can serve as a possible foundation for big data. We substantiate this interpretation with pioneering studies of swarm intelligence from the 1980s and several prototypical agent-based models developed around the 2000s.
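To make the contrast concrete, the following is a minimal sketch, not taken from the paper: the model, parameters, and event fields are illustrative assumptions. It shows how an equation-based model yields only aggregate snapshots of a trajectory, whereas even a trivial agent-based model emits a stream of unaggregated agent-level primitives.

    # Illustrative sketch only; the random-walk agents and event fields
    # below are assumptions for exposition, not the authors' model.
    import random

    def equation_based(steps: int, x0: float = 0.0, drift: float = 0.1):
        """Equation-based model: returns only aggregate snapshots
        x_t = x_0 + drift * t of the underlying phenomenon."""
        return [x0 + drift * t for t in range(steps)]

    def agent_based(n_agents: int, steps: int):
        """Agent-based model: yields a stream of agent-level primitives
        (who moved where, and when) rather than summary statistics."""
        positions = [0.0] * n_agents
        for t in range(steps):
            for i in range(n_agents):
                move = random.choice((-1.0, 1.0))  # each agent acts locally
                positions[i] += move
                # every micro-event is recorded, not just an aggregate
                yield {"t": t, "agent": i, "move": move, "pos": positions[i]}

    if __name__ == "__main__":
        print(equation_based(3))           # 3 aggregate points
        print(list(agent_based(2, 2)))     # 4 agent-level events

In the paper's terms, each emitted dictionary is a "primitive", and the unprocessed stream of such events is what constitutes big data, whereas the equation-based output is already an aggregate summary.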
