Abstract

Gibson (1979) argued that “control lies not in the brain, but in the animal-environment system.” To make good on this claim, we must show how adaptive behavior emerges from the interactions of an agent with a structured environment, guided by occurrent information. Here we attempt to model the behavioral dynamics of human walking and show how locomotor paths emerge “online” from simple laws for steering and obstacle avoidance. Our approach is inspired by Schöner, Dose, and Engels’s (1995) control system for mobile robots. By behavioral dynamics, we mean a description of the time evolution of observed behavior. Assume that goal-directed behavior can be described by a few behavioral variables, which define a state space for the system. Goals correspond to attractors in state space to which trajectories converge, whereas states to be avoided correspond to repellors from which trajectories diverge. The problem is to formalize a system of differential equations, or dynamical system, whose solutions capture the observed behavior. We take the current heading direction φ and turning rate φ̇ as behavioral variables, assuming travel at a constant speed v (see Figure 1). From the agent’s current (x, z) position, a goal lies in the direction ψg at a distance dg; an obstacle may also lie in direction ψo at a distance do. The simplest description of steering toward a goal is for the agent to bring the heading error between the current heading and the goal direction to zero (φ – ψg = 0), which defines an attractor in state space.
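
The steering law described above can be illustrated with a small simulation. The following is a minimal sketch, not the authors' fitted model: the goal direction ψg acts as an attractor of the heading φ, and the obstacle direction ψo as a repellor whose influence decays with angular error and distance, under a damped second-order law on the turning rate φ̇. The parameter names (b, k_g, k_o, c1, c2), their values, the specific decay terms, and the helper functions are illustrative assumptions, not values from the paper.

import numpy as np

def angle_to(x, z, tx, tz):
    """Direction from the agent at (x, z) to a target at (tx, tz),
    measured from the z-axis (the 'straight ahead' direction)."""
    return np.arctan2(tx - x, tz - z)

def wrap(a):
    """Wrap an angle into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def simulate(goal, obstacle, x=0.0, z=0.0, phi=0.0, dphi=0.0,
             v=1.0, dt=0.01, steps=2000,
             b=3.0, k_g=8.0, k_o=60.0, c1=1.5, c2=4.0):
    """Integrate a damped-spring steering law: the goal direction pulls the
    heading toward it in proportion to the heading error (an attractor),
    while the obstacle direction pushes the heading away, with a push that
    falls off with angular error and with distance (a repellor)."""
    path = []
    for _ in range(steps):
        psi_g = angle_to(x, z, *goal)
        psi_o = angle_to(x, z, *obstacle)
        d_o = np.hypot(obstacle[0] - x, obstacle[1] - z)

        e_g = wrap(phi - psi_g)          # heading error to the goal
        e_o = wrap(phi - psi_o)          # heading error to the obstacle

        # Angular acceleration: damping, attraction to the goal,
        # repulsion from the obstacle (decaying with |e_o| and d_o).
        ddphi = (-b * dphi
                 - k_g * e_g
                 + k_o * e_o * np.exp(-c1 * abs(e_o)) * np.exp(-c2 * d_o))

        dphi += ddphi * dt
        phi += dphi * dt
        x += v * np.sin(phi) * dt        # constant travel speed v
        z += v * np.cos(phi) * dt
        path.append((x, z))
    return np.array(path)

# Example: steer to a goal at (0, 10) around an obstacle at (0.2, 5).
trajectory = simulate(goal=(0.0, 10.0), obstacle=(0.2, 5.0))

Because the goal term vanishes only when φ = ψg, trajectories of this system converge on the goal direction; the obstacle term grows as the heading approaches ψo, so that direction repels trajectories, which is the attractor/repellor structure described in the abstract.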
