Abstract

Visual spatial relations are the foundation for encoding information in graphs, diagrams, and maps. While successfully using these displays requires that we extract, remember, and integrate these relations, there is little existing work measuring how many we can store. Some related types of visual information seem to be robustly encoded, such as the 'shape' of the layout of a simple display (Chun & Jiang, 1998; Jiang & Wagner, 2004), or the absolute spatial locations of a set of objects (Hollingworth, 2006). However, these types of information do not explicitly encode the relative locations between objects with different identities. Here we tested memory capacity for the relative spatial locations between pairs of briefly presented objects, and found that it was strikingly limited. Participants viewed a sequence of three vertically presented image pairs, each appearing for 600 ms at one randomly chosen corner of the screen. Participants were immediately tested on their memory for either object identity or relative spatial location. In the object identity task, participants decided which of two images they had previously seen. In the spatial memory task, they viewed one image and identified its relative location (up or down) within the previously studied pair. Participants were not informed beforehand which task they would perform. While accuracy for identity memory was high (M=92%, SD=9%), accuracy for relative spatial location was significantly lower (M=81%, SD=9%). Memory for relations was low despite displaying only 3 pairs before each test phase, despite the continuous possibility of being tested on the spatial relation task, and despite using top/bottom relations, which are typically easier to extract than left/right relations (Logan, 1995). A capacity estimate would place memory for relations at 1-2 items. Contrary to our intuitions, memory for relative spatial locations was much more impoverished than memory for object identities.

Meeting abstract presented at VSS 2013
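
For concreteness, the 1-2 estimate is consistent with a standard guessing-corrected capacity calculation for a two-alternative forced-choice test; the abstract does not state how the estimate was derived, so the formula below is an assumption. If k of the N = 3 studied relations are stored and unstored relations are guessed correctly with probability 1/2, then accuracy p = k/N + (1 − k/N)(1/2), which rearranges to

    k = N(2p − 1) = 3 × (2 × 0.81 − 1) ≈ 1.9

placing the guessing-corrected capacity for relative spatial location between 1 and 2 relations, in line with the reported estimate.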
