Abstract

There has been considerable attention on so-called 'adversarial images' that fool machine learning models. Manipulating an image or including an unexpected object in a scene can severely disrupt object labeling and parsing by state-of-the-art models such as convolutional neural networks (CNNs). In contrast, human observers may not even notice these changes, highlighting a significant gap between the robustness of human vision and that of CNNs. One well-studied class of novel objects is the 'Greebles,' introduced by Gauthier and Tarr (1997; Gauthier et al., 1998). A large number of 'families,' 'genders,' and individuals can be created by systematically varying the arrangements and shapes of Greeble components. We report eye-tracking data (SMI iRed 250) recorded from participants as they were taught to recognize Greebles following the procedure described by Gauthier, Tarr, and colleagues. Greeble expertise was assessed using a naming task and a verification task. Once an individual became a Greeble 'expert,' we assessed their tolerance for rotations of the Greebles around the y-axis. The patterns of eye movements during learning and testing are compared with the output of CNNs trained to recognize Greebles.
