The Selective Attention for Action Model (SAAM)

Christoph Böhme 1* and Dietmar Heinke 1

1 University of Birmingham, School of Psychology, United Kingdom

Classically, visual attention is assumed to be influenced by the visual properties of objects, e.g. as assessed in visual search tasks. However, recent experimental evidence suggests that visual attention is also guided by the action-related properties of objects ("affordances", Gibson, 1966, 1979): the handle of a cup, for example, affords grasping, and attention is therefore drawn towards the handle (see Pellegrino, Rafal, & Tipper, 2005, for an example).

In a first step towards modelling this interaction between attention and action, we implemented the Selective Attention for Action model (SAAM). The design of SAAM is based on the Selective Attention for Identification model (SAIM; Heinke & Humphreys, 2003). In particular, SAAM also follows a soft-constraint satisfaction approach within a connectionist framework. However, SAAM's selection process is guided by the locations within objects that are suitable for grasping them, whereas SAIM selects objects based on their visual properties.

SAAM's selection mechanism is realised through two sets of constraints. The first set (anatomical constraints) takes into account the anatomy of the hand, e.g. the maximal possible distances between fingers. The second set (geometrical constraints) determines suitable contact points on objects using simple edge detectors.

First, we demonstrate that SAAM successfully mimics human behaviour by comparing simulated contact points with experimental data. Second, we show that SAAM simulates affordance-guided attentional behaviour, as it generates contact points for only one object in two-object images.

Our model shows that stable grasps can be derived directly from visual input without object recognition and without constructing three-dimensional internal representations of objects; nor is a complex analysis of torques and forces required. The similar mechanisms employed in SAIM and SAAM make it feasible to combine the two into a unified model of visual selection for action and identification.
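To make the soft-constraint satisfaction idea concrete, the following is a minimal, hypothetical sketch, not the authors' implementation: the weights, the maximum hand span, the edge detector, and the brute-force minimisation are all assumptions for illustration. It combines the two kinds of constraints described above, a geometrical term that rewards contact points lying on the object's edges and an anatomical term that penalises thumb-finger distances beyond a maximum hand span.

```python
import numpy as np

# Illustrative constants (assumptions, not values from the paper).
MAX_SPAN = 6.0   # anatomical constraint: maximum thumb-finger distance (pixels)
W_EDGE = 1.0     # weight of the geometrical (edge) constraint
W_HAND = 1.0     # weight of the anatomical (hand) constraint

def edge_map(img):
    """Simple edge detector: absolute finite differences in x and y."""
    gx = np.abs(np.diff(img.astype(float), axis=1, prepend=0))
    gy = np.abs(np.diff(img.astype(float), axis=0, prepend=0))
    return gx + gy

def energy(p, q, edges):
    """Soft-constraint energy for a thumb/finger pair (p, q):
    lower on strong edges, penalised once the span exceeds MAX_SPAN."""
    edge_term = -W_EDGE * (edges[p] + edges[q])
    span = np.linalg.norm(np.subtract(p, q))
    hand_term = W_HAND * max(0.0, span - MAX_SPAN) ** 2
    return edge_term + hand_term

def select_grasp(img):
    """Pick the contact-point pair with minimal energy.
    A brute-force search stands in for the network's minimisation."""
    edges = edge_map(img)
    cands = list(zip(*np.nonzero(edges)))
    best = min(((energy(p, q, edges), p, q)
                for i, p in enumerate(cands)
                for q in cands[i + 1:]),
               key=lambda t: t[0])
    return best[1], best[2]

if __name__ == "__main__":
    img = np.zeros((12, 12), dtype=int)
    img[4:8, 3:9] = 1                  # a simple rectangular "object"
    thumb, finger = select_grasp(img)
    print("thumb at", thumb, "finger at", finger)
```

In the full connectionist model, the selection would emerge from the relaxation dynamics of the network rather than from an explicit search; the brute-force loop above merely stands in for that minimisation to show how the two constraint sets trade off against each other.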