Abstract

Computer scientists, insofar as they aim at modeling knowledge, face the so-called “curse of dimensionality”, which is simply the common name for the “combinatorial explosion” pitfall. This situation is directly linked to the type of knowledge model they pursue, one fitted to traditional computation, namely formal and hierarchical, based on sub-categorization: better modeling quality is supposed to follow from ever greater accuracy, indefinitely, leading to costly combinatorial explosions. What if we consider situated and/or embodied knowledge theories, where each piece of knowledge can make different sense depending on its local variations, including its configuration in time and space? Is that even worse for computer scientists? Most current computer science researchers answer that it is, because the traditional “knowledge modeling” paradigm is not questioned enough. In this paper, we propose a way beyond that difficulty, using Big Data and Data-Driven Intelligent Predictive Algorithms to support creativity in “knowledge collection making”, which aims at electing meaningful spatiotemporal configurations of knowledge. Thanks to this innovation, accuracy is no longer the only parameter available for improving knowledge; its relative disposition can be adjusted as well.
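To make the “combinatorial explosion” claim concrete, the following minimal sketch (not part of the original abstract; the function name and parameters are purely illustrative) shows how a hierarchical model that sub-categorizes every knowledge dimension multiplies the number of cells to be populated, growing exponentially with the number of dimensions.

```python
# Illustrative sketch (an assumption, not the paper's method): counting the
# distinct leaf categories produced by hierarchical sub-categorization.
# If each of `dimensions` features is split into `bins` sub-categories,
# the number of category cells is bins ** dimensions.

def category_cells(dimensions: int, bins: int) -> int:
    """Number of leaf categories when every dimension is split into `bins` parts."""
    return bins ** dimensions

if __name__ == "__main__":
    for d in (2, 5, 10, 20):
        # Even with only 4 sub-categories per dimension, the count quickly
        # exceeds any realistic number of observations available to fill them.
        print(f"{d:>2} dimensions, 4 bins each -> {category_cells(d, 4):,} cells")
```

Under these assumptions, refining accuracy by adding dimensions or finer sub-categories inflates the category space far faster than data can cover it, which is the pitfall the abstract refers to.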

