Abstract

Traditional architectures have fundamental epistemological problems. Perception is inherently resource-limited, so controlling perception involves all the same AI-complete problems of reasoning about time and resources as the full-scale planning problem. Allowing a planner to transparently assume that the information it needs will automatically be present and up-to-date in the model thus presupposes a solution to a problem at least as difficult as planning itself. Although one can imagine many possible solutions to this problem, such as allowing the planner to recurse on its own epistemological problems, there have been no convincing attempts at this. In this paper, I compare behaviour-based and traditional systems in terms of their representational power and the strengths of their implicit epistemological theories. I argue that both have serious limitations and that those limitations are not addressed simply by joining the two into a hybrid. I discuss my work on using vision to support real-time activity and give an example of an interesting intermediate point between reactive and classical architectures that preserves the simplicity and parallelism of behaviour-based systems while supporting ‘symbolic’ representations.

Traditionally, AI theories have assumed, either implicitly or explicitly, an architecture in which modules of the mind (perception, reasoning, motor control, etc.) are linked by way of some centralized, database-like structure, often referred to as a world model. Recently, a number of alternative architectures have been proposed which, to greater or lesser degrees, claim to do away with world models or with representations entirely.

Many of the criticisms of traditional architectures revolve around speed and time-scale. Planning, so the story goes, is slow but flexible, while feedback loops are fast but stupid. A common approach, both in this special issue and in the literature in general, is to adopt a hybrid which fuses a slow planner running on a long time-scale with a set of fast feedback loops running on a short time-scale.

The problem with this argument is that planning is not slow; it is combinatorially explosive. Running an O(2^n) algorithm on a time-scale ten times slower is the same as running it on a computer ten times faster: it simply lets one increase n by about three, since log2(10) ≈ 3.3. If time-scale were the true problem, faster CPUs would make tiered architectures obsolete in a few years. I believe the true issues are not speed, in the sense of time-scale, but combinatorics and epistemology. The former has been extensively discussed, so I will focus on epistemology.

Clearly, if an agent architecture is to be successful, it must take into account the capacities and limitations of perception. In this paper I discuss the influence of perceptual architecture on agent architecture, argue that the recent wave of tiered architectures does not adequately address these problems, and discuss my work on using vision to support real-time activity.
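
To make the tiered-architecture story concrete, here is a minimal sketch of the kind of hybrid described above: a slow planner on a long time-scale feeding goals to a fast feedback loop on a short time-scale. This is an illustrative assumption of mine, not code from the paper; the names, rates, and toy proportional controller are all hypothetical.

    import threading
    import time

    # Hypothetical two-tier "hybrid" agent: a slow deliberative tier rewrites
    # a shared goal while a fast reactive tier tracks whatever goal is current.
    goal = {"setpoint": 0.0}
    lock = threading.Lock()
    done = threading.Event()

    def slow_planner():
        """Long time-scale tier: deliberates rarely, rewriting the goal."""
        for step in range(3):
            with lock:
                goal["setpoint"] = float(step + 1)  # stand-in for a replanned goal
            time.sleep(1.0)                         # stands in for slow deliberation
        done.set()

    def fast_feedback_loop():
        """Short time-scale tier: a fast-but-stupid proportional controller."""
        state = 0.0
        while not done.is_set():
            with lock:
                target = goal["setpoint"]
            state += 0.2 * (target - state)  # react at once to the current goal
            time.sleep(0.01)                 # runs ~100x faster than the planner
        print(f"final state: {state:.2f}")   # converges towards the last goal, 3.0

    threading.Thread(target=slow_planner, daemon=True).start()
    fast_feedback_loop()

The abstract's objection is that moving the planner to a slower tier does nothing about the combinatorial explosion inside it, as the arithmetic below shows.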

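A quick check of the O(2^n) arithmetic above (again a sketch, not from the paper): a machine k times faster, or a time-scale k times longer, extends the feasible problem size of an O(2^n) algorithm by only log2(k).

    import math

    # For an O(2^n) algorithm, a k-times speedup buys log2(k) extra units of n,
    # since 2^(n + log2(k)) = k * 2^n.
    for k in (10, 100, 1000):
        print(f"{k}x faster -> n grows by {math.log2(k):.2f}")
    # 10x   -> 3.32  (the "increase n by about three" in the text)
    # 100x  -> 6.64
    # 1000x -> 9.97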