Abstract

More than half a century has passed since Chomsky’s theory of language acquisition, Green and colleagues’ first natural language processor, Baseball, and the creation of the Brown Corpus. Throughout the early decades, many believed that once computers became powerful enough, the development of A.I. systems that could understand and interact with humans using our natural languages would quickly follow. Since then, Moore’s Law has largely held; computer storage and performance have kept pace with our imaginations. And yet, 60 years later, even with these dramatic advances in computer technology, we still face major challenges in using computers to understand human language. The authors suggest that these same exponential increases in computational power have led current efforts to rely too heavily on techniques designed to exploit raw computational power, diverting effort from advancing and applying the theoretical study of language to the task. In support of this view, the authors provide empirical evidence exposing the limitations of techniques, such as n-gram extraction, used to pre-process language. In addition, the authors compare three leading natural-language-processing question-answering systems to human performance and find that human subjects far outperformed every question-answering system tested. The authors conclude by advocating for efforts to discover new approaches that use computational power to support linguistic and cognitive approaches to natural language understanding, as opposed to current techniques founded on patterns of word frequency.
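For readers unfamiliar with the pre-processing technique the abstract singles out, the following is a minimal illustrative sketch of word-level n-gram extraction and frequency counting in Python. It is not the authors' experimental code; the whitespace tokenization, lowercasing, and choice of n = 2 are assumptions made purely for illustration of what "patterns of word frequency" refers to.

```python
from collections import Counter


def extract_ngrams(text, n=2):
    """Return a frequency count of word-level n-grams.

    Tokenization here is naive whitespace splitting; real pipelines
    typically normalize punctuation and apply a proper tokenizer first.
    """
    tokens = text.lower().split()
    # Slide an n-token window across the sequence and count each tuple.
    ngrams = zip(*(tokens[i:] for i in range(n)))
    return Counter(ngrams)


# Example: bigram frequencies over a toy sentence.
counts = extract_ngrams("the cat sat on the mat because the cat was tired")
print(counts.most_common(3))  # e.g. [(('the', 'cat'), 2), ...]
```

A sketch like this captures only surface co-occurrence statistics, which is precisely the limitation the abstract argues against relying on for language understanding.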
