Abstract

The core of big data is still intelligence. Facing the challenge of big data, AI needs a deep and unified theory, in particular a deep and unified cognition mathematics. Three branches of cognition mathematics emerged in 1982; one of them is factor space theory, initiated by the first author. A factor is the initiator of a fact, the quality-root of things, and a generalization of the gene. A factor space is a coordinate space whose dimensions are named by factors; it generalizes Cartesian coordinates for describing things and thinking. This paper introduces how cognition functions can be emulated by factor space and how clear and pertinent the emulation is. Four simple and fast algorithms are presented. Based on factor space, the cognition packet is built as the basic unit of factor databases. Unlike existing data processing, factor databases are built by cultivation, whose goal is to cultivate a sample S of the background relation R so that it emulates R. As time passes, the background sample S becomes more mature and stable; once S equals R, the cognition packet holds the whole correct knowledge. Maintaining such a powerful function for big data, factor databases can employ the background base to drastically compress data without information loss. For existing data processing, frightened by the multiple challenges of big data, factor space theory offers a sedative: the tide of big data will be tamed in factor databases. Cultivation is easy to carry out because the sample of the background relation does not involve privacy. The bottlenecks caused by big data can be overcome by factor space theory, which is the best framework for cognition mathematics.
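The cultivation process can be pictured with a minimal sketch. The Python code below is an illustrative assumption, not a construction from the paper: it treats the background sample S as a set of observed factor-value configurations, grows it by cultivation, and answers membership queries once S has matured toward the background relation R. All identifiers (BackgroundSample, cultivate, is_known) are hypothetical.

```python
"""Illustrative sketch (assumed, not from the paper): cultivating a
background sample S toward the background relation R."""

from typing import Dict, FrozenSet, Tuple

# A configuration assigns one state to each factor,
# e.g. {"color": "red", "shape": "round"}.
Config = FrozenSet[Tuple[str, str]]


class BackgroundSample:
    """Grows a sample S of the background relation R by accumulating
    observed configurations; duplicates add nothing, which reflects the
    idea of compressing data against the background base without loss."""

    def __init__(self) -> None:
        self.sample: set = set()

    def cultivate(self, observation: Dict[str, str]) -> None:
        """Add one observed configuration of factor values to S."""
        self.sample.add(frozenset(observation.items()))

    def is_known(self, observation: Dict[str, str]) -> bool:
        """Once S has matured toward R, membership in S answers whether
        a configuration is admitted by the background relation."""
        return frozenset(observation.items()) in self.sample


if __name__ == "__main__":
    s = BackgroundSample()
    # Repeated observations do not grow S: only new configurations matter.
    for obs in [{"color": "red", "shape": "round"},
                {"color": "red", "shape": "round"},
                {"color": "green", "shape": "square"}]:
        s.cultivate(obs)
    print(len(s.sample))                                   # 2
    print(s.is_known({"color": "red", "shape": "round"}))  # True
```

In this reading, the sample stabilizes over time because new observations increasingly repeat configurations already in S; how the paper's four algorithms actually cultivate and query S is not specified in the abstract.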
