Abstract

In the quest for sci-fi-like artificial intelligence, the two halves of research are beginning to learn to live with each other. The article considers topics such as the role of deep learning and the mispriming of a deep neural network: feeding it irrelevant information as part of a question or problem statement, which the network then either regurgitates in its answer or connects to other information that has no relevance to the core request. Both mispriming and arithmetic provide clues as to why language models can perform so well and so badly at the same time. The article also looks at automated optimisation: getting computers to 'optimise' their own way to an artificial general intelligence by trying out different structures, benchmarking them, and using that data to learn new structures.
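The mispriming setup described above can be sketched as a cloze-style probe: an irrelevant cue is prepended to an otherwise normal fill-in-the-blank question, and a model that echoes the cue in its answer is judged to have been misprimed. The helper name and the example strings below are illustrative assumptions, not details from the article.

```python
from typing import Optional


def cloze_probe(query: str, misprime: Optional[str] = None) -> str:
    """Build a fill-in-the-blank prompt for a masked language model,
    optionally prefixed with an irrelevant cue (the 'misprime')."""
    prompt = f"{query} [MASK]."
    if misprime is not None:
        # Prepend irrelevant information; a robust model should ignore it,
        # but a misprimed one tends to copy it into the blank.
        prompt = f"{misprime}? {prompt}"
    return prompt


clean = cloze_probe("Birds can")                      # "Birds can [MASK]."
primed = cloze_probe("Birds can", misprime="Talk")    # "Talk? Birds can [MASK]."
```

Comparing the model's fill-in for the clean and primed prompts measures how much the irrelevant cue distorts the answer.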
