Abstract

The most fundamental oversight in Artificial Intelligence (AI) is probably the avoidance of conscious learning. The most widespread misconduct in AI seems to be Post-Selection. By avoiding conscious learning, we program humanoid robots that have no intent to learn new skills, such as standing up, walking, jumping, speaking, and thinking. Because programmed-in skills are all brittle, we must instead imitate natural intelligence, from insects to humans, all of which learn through consciousness. Worldwide, AI researchers and the public have been misled by media hype about false AI performance, from Deep Learning to ChatGPT, rooted in Post-Selection. The Post-Selection protocol behind such hype is fatally flawed by alleged misconduct: Misconduct 1, cheating in the absence of a test, and Misconduct 2, hiding bad-looking data. In other words, the reported errors are only data-fitting errors, not testing errors. This paper discusses how Conscious Learning, enabled by Developmental Network 3 (DN-3), learns from the network's own intents without Post-Selection misconduct. Among the many new concepts presented here, the paper establishes a new theorem: the expected error of the luckiest system in a future test is the same as that of any less lucky system, namely the average. In contrast, DN-3 develops a single network that is optimal in the sense of maximum likelihood (ML) and better than the luckiest system on a validation set. The ML optimality transfers the performance on a validation set in the prior lifetime to a test set in the future lifetime. Many other AI techniques, e.g., symbolic, connectionist, and evolutionary, also use Post-Selection, which lacks such a transfer.
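To make the theorem's intuition concrete, below is a minimal Monte Carlo sketch, not the paper's formal proof. It assumes K candidate networks whose validation and test errors fluctuate independently around one common expected error; the constants K, TRUE_ERR, and NOISE are illustrative choices, not values from the paper. Under that assumption, the network that looks luckiest on the validation set has the same expected error on a disjoint future test as the average candidate.

```python
# Monte Carlo illustration (under simplified, assumed conditions): if k
# candidate networks differ only by random "luck" (e.g., random initial
# weights), and the test set is disjoint from the validation set, then
# Post-Selecting the luckiest network on validation does not lower the
# expected test error -- it matches the average over all candidates.

import random

random.seed(0)

K = 20            # number of trained candidate networks (assumed)
TRIALS = 100_000  # Monte Carlo repetitions
TRUE_ERR = 0.30   # common expected error of every candidate (assumed)
NOISE = 0.05      # std. dev. of finite-sample fluctuation (assumed)

selected_test_err = 0.0   # test error of the validation-luckiest network
average_test_err = 0.0    # mean test error over all candidates

for _ in range(TRIALS):
    # Validation and test errors fluctuate independently around TRUE_ERR,
    # because the two sets share no samples.
    val = [random.gauss(TRUE_ERR, NOISE) for _ in range(K)]
    test = [random.gauss(TRUE_ERR, NOISE) for _ in range(K)]

    luckiest = min(range(K), key=lambda i: val[i])  # the Post-Selection step
    selected_test_err += test[luckiest]
    average_test_err += sum(test) / K

print(f"expected test error of the luckiest network: {selected_test_err / TRIALS:.4f}")
print(f"average expected test error of all K:        {average_test_err / TRIALS:.4f}")
# Both values come out near 0.30: validation "luck" does not transfer to the future test.
```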
