Abstract

Question-answering systems like Watson beat humans when it comes to processing speed and memory. But what happens if we compensate for this? What are the fundamental differences in power between human and artificial agents in question answering? We explore this issue by defining new computational models for both kinds of agents and comparing their computational efficiency in interactive sessions.

Concretely, human agents are modeled by cognitive automata, augmented with a form of background intelligence that allows the automata to query a given Turing machine and carry the answers from one interaction to the next. Artificial question-answering agents, on the other hand, are modeled by QA-machines: Turing machines that can access a predefined, potentially infinite knowledge base (‘advice’) and have a bounded amount of learning space at their disposal.

We show that cognitive automata and QA-machines have exactly the same potential for realizing question-answering sessions, provided the resource bounds in one model suffice to match the abilities of the other. In particular, polynomially bounded cognitive automata with background intelligence (i.e. human agents) prove to be equivalent to polynomially bounded QA-machines with logarithmic learning space. This result generalizes Pippenger's theorem on the computational power of switching circuits (the case without background intelligence) to a foundational result for question answering in cognitive science. The framework thereby makes precise in what sense any advantage of QA-machines over human agents is a matter of resources rather than a fundamental difference in power.
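To make the two models concrete, the following is a minimal toy sketch of the kind of interactive session described above. The class names, the callable standing in for "background intelligence", and the dictionary standing in for the advice are illustrative assumptions of this sketch, not the paper's formal constructions.

```python
# Illustrative sketch only: a cognitive automaton may consult a background
# Turing machine (here just a callable) between interactions, while a
# QA-machine consults fixed advice and retains only a bounded amount of
# learned state. All names below are hypothetical, not from the paper.

from typing import Callable, List


class CognitiveAutomaton:
    """Finite-state agent that can query background intelligence and carry
    the answer from one interaction to the next."""

    def __init__(self, background: Callable[[str], str]):
        self.background = background
        self.state = ""  # finite control state carried between interactions

    def answer(self, question: str) -> str:
        hint = self.background(question)      # query the background machine
        reply = f"{hint} (prior state: '{self.state}')"
        self.state = hint[:8]                 # only a bounded trace is kept
        return reply


class QAMachine:
    """Machine with read access to a fixed knowledge base ('advice') and a
    bounded learning space used across interactions."""

    def __init__(self, advice: dict, learning_space: int):
        self.advice = advice                  # predefined, possibly huge
        self.learning_space = learning_space  # bound on retained symbols
        self.learned = ""                     # bounded learning tape

    def answer(self, question: str) -> str:
        fact = self.advice.get(question, "unknown")
        reply = f"{fact} (learned so far: '{self.learned}')"
        self.learned = (self.learned + fact)[: self.learning_space]
        return reply


def session(agent, questions: List[str]) -> List[str]:
    """Run an interactive question-answering session."""
    return [agent.answer(q) for q in questions]


if __name__ == "__main__":
    qs = ["capital of France?", "2+2?"]
    human = CognitiveAutomaton(background=lambda q: f"oracle says: {q[::-1]}")
    machine = QAMachine(advice={"capital of France?": "Paris", "2+2?": "4"},
                        learning_space=16)
    print(session(human, qs))
    print(session(machine, qs))
```

The equivalence result concerns what such sessions can realize when the automaton's resources are polynomially bounded and the QA-machine's learning space is logarithmically bounded; the sketch only illustrates the shape of the interaction, not the complexity bounds themselves.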
