Abstract

This is a theoretical paper on conscious learning for thoughts and creativity through general-purpose, autonomous imitation of demonstrations. This conscious learning is end-to-end (3D-to-2D-to-3D) and free from annotations of 2D images and 2D motor images (e.g., a bounding box to be attended to). The conscious learning algorithm directly adopts that of the Developmental Networks, which have been published extensively with rich experimental results. Humans and animals apparently perform this type of fully automated learning daily, but it has been unclear whether a robot can do the same. Recently, [1], [2] presented a theory of conscious learning rooted in emergent universal Turing machines. It appears to be the first algorithmic-level theory of holistic consciousness, in contrast to the many papers in the literature on piecemeal consciousness. However, [1], [2] established conscious learning only in the motor-imposed training mode, namely 3D-to-2D learning taught by 2D motor impositions, free from 2D annotations. This paper fills that challenging gap so that conscious learning becomes 3D-to-2D-to-3D (end-to-end), without motor impositions and without computing “inverse kinematics”. This is a major departure from the traditional AI practice of handcrafting symbolic labels, which tend to be brittle (e.g., for driverless cars), and then “spoon-feeding” pre-collected “big data”. Autonomous imitation drastically reduces teaching complexity compared with pre-collected “big data”, especially because no annotations of training data are needed. Furthermore, conscious learning allows creativity beyond what is taught. This work is directly relevant to consumer electronics because it requires large-scale, on-the-fly brainoid chips in future wearable robots and devices for consumers.
