Face to face with an expert: Exploring joint visual attention during forensic face and feature comparison learning in expert-novice pairs


Similar Papers
  • Research Article
  • Cited by 17
  • 10.1016/j.infbeh.2019.101368
Social touch alters newborn monkey behavior
  • Sep 12, 2019
  • Infant Behavior and Development
  • Elizabeth A Simpson + 7 more

  • Research Article
  • Cited by 7
  • 10.1016/j.infbeh.2024.101934
Look at Grandma! Joint visual attention over video chat during the COVID-19 pandemic
  • Mar 12, 2024
  • Infant Behavior and Development
  • Lauren J Myers + 10 more

  • Book Chapter
  • Cited by 2
  • 10.4324/9781410605979-29
Learning of Joint Visual Attention by Reinforcement Learning
  • Jul 1, 2001
  • Goh Matsuda + 1 more

In this paper, we propose a neural network model of the learning of joint visual attention, which plays an important role in infant development, and we discuss previous experimental-psychology studies of joint visual attention in light of simulation results from the model. We devised an imaginary experiment on which to base the model. A mother and an infant sit face to face with a table between them. Objects familiar to the infant are placed on the table, and remote-controlled toys are placed outside the infant's view. The infant is rewarded with the sight of something interesting only when it follows the mother's gaze after making eye contact. We built a model of this experiment with a reinforcement learning algorithm and simulated the experiment on a computer. The simulations showed that the infant could learn a series of joint-visual-attention-like actions by receiving rewards from the environment, despite initially having little knowledge of it. This result suggests that infants can acquire joint visual attention without comprehending the nature of joint attention, i.e., “I’m looking at the same thing that others are looking at.”

Introduction

Modeling the development of infant intelligence is one strategy for understanding human intelligence. We focus on development in infancy from an engineering viewpoint. Neonates have little knowledge of their environment; nevertheless, they acquire new knowledge and behavior suited to that environment step by step across their developmental stages. Although the whole brain system of adults is very complicated, we believe that a model of intelligence can be created relatively easily by tracing those developmental steps one by one. In this study, we focus on joint visual attention as one such developmental process. In an engineering sense, joint attention can be defined as the sharing of attention with others, and joint visual attention as looking at what others are looking at. Although this definition may invite objections, we adopt it in this paper. The detailed study of joint visual attention began with Scaife and Bruner’s work (1975). They observed that children in early and middle infancy follow an adult’s gaze, and argued that this behavior is an important factor in early development. However, it is not yet clear how joint visual attention is acquired; both nature and nurture accounts are currently defended (Baron-Cohen, 1995; Butterworth & Jarrett, 1991; Corkum & Moore, 1995). In this paper, we propose an engineering model in which joint visual attention is learned by conditioning on signals from the environment, examine the behavior by computer simulation, and, based on the results, discuss the fundamental components such learning requires.

Behavior Acquisition by Reinforcement Learning

Imaginary Experiment. We contrived the following imaginary experiment, based on the behavioral experiment of Corkum and Moore (1995). A mother and an infant sit face to face with a table between them. Objects familiar to the infant are placed on the table. In the early stage of learning, the infant directs its attention randomly among the objects, including the mother’s face; the mother, however, always gazes at the infant’s eyes. Toys are set outside the infant’s view, and an observer can operate them by remote control (Figure 1). When the infant looks at the mother’s face and they make eye contact, the mother moves her eyes to gaze at one of the toys. When the infant then follows the direction of her gaze, the observer activates the toy, arousing pleasure in the infant. In other words, the infant can obtain the reward of seeing something interesting only when it performs the two consecutive actions of looking at the mother’s face and following her gaze.

Temporal Difference Learning. In this study, we used the temporal difference (TD) method (Sutton & Barto, 1998) to learn joint visual attention. TD learning is an algorithm that learns the value function V(s_t) of each state s_t from a reward r delivered later by the environment, via the standard TD(0) update V(s_t) ← V(s_t) + α[r_{t+1} + γV(s_{t+1}) − V(s_t)], where 0 < γ < 1 is the discount factor. The agent learns a behavior strategy that increases the value function.
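The paper's implementation is not reproduced here, but the learning loop it describes is easy to sketch. The toy script below uses a Q-learning form of TD so that the behavior strategy falls directly out of the learned table (the paper itself learns a state-value function V(s_t)); the state and action names, the reward value, and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
import random

# Toy model of the imaginary experiment: the infant earns a reward only
# by chaining "look at mother" (eye contact) and then "follow her gaze".
STATES = ["looking_at_object", "eye_contact", "toy_activated"]
ACTIONS = ["look_at_object", "look_at_mother", "follow_gaze"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Environment dynamics of the mother/infant/toy setup (simplified)."""
    if state == "looking_at_object" and action == "look_at_mother":
        return "eye_contact", 0.0          # mother now gazes toward a toy
    if state == "eye_contact" and action == "follow_gaze":
        return "toy_activated", 1.0        # observer activates the toy
    return "looking_at_object", 0.0        # attention wanders; no reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

for episode in range(500):
    state = "looking_at_object"
    for _ in range(100):                   # cap episode length
        if random.random() < EPSILON:      # explore
            action = random.choice(ACTIONS)
        else:                              # exploit current value estimates
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        # TD update: move the estimate toward the bootstrapped target.
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
        if state == "toy_activated":
            break

print(max(ACTIONS, key=lambda a: Q[("looking_at_object", a)]))  # look_at_mother
print(max(ACTIONS, key=lambda a: Q[("eye_contact", a)]))        # follow_gaze
```

After training, the greedy policy chains the two actions the paper highlights, even though nothing in the table represents the idea “I’m looking at what she is looking at.”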

  • Research Article
  • Cited by 9
  • 10.1016/j.bbr.2014.08.032
Effects of lurasidone on ketamine-induced joint visual attention dysfunction as a possible disease model of autism spectrum disorders in common marmosets
  • Aug 25, 2014
  • Behavioural Brain Research
  • Tomokazu Nakako + 10 more

  • Research Article
  • Cited by 11
  • 10.1080/07370008.2022.2157418
Using Mobile Dual Eye-Tracking to Capture Cycles of Collaboration and Cooperation in Co-located Dyads
  • Dec 22, 2022
  • Cognition and Instruction
  • Bertrand Schneider + 1 more

The goal of this paper is to bring new insights to the study of social learning processes by designing measures of collaboration using high-frequency sensor data. More specifically, we are interested in understanding the interplay between moments of collaboration and cooperation, which is an understudied area of research. We collected a multimodal dataset during a collaborative learning activity typical of makerspaces: learning how to program a robot. Pairs of participants were introduced to computational thinking concepts using a block-based environment. Mobile eye-trackers, physiological wristbands, and motion sensors captured their behavior and social interactions. In this paper, we analyze the eye-tracking data to capture participants’ tendency to synchronize their visual attention. This paper provides three contributions: (1) we use an emerging methodology (mobile dual eye-tracking) to capture joint visual attention in a co-located setting and replicate findings that show how levels of joint visual attention are positively correlated with collaboration quality; (2) we qualitatively analyzed the co-occurrence of verbal activity and joint visual attention in low and high performing groups to better understand moments of collaboration and cooperation; (3) inspired by the qualitative observations and theories of collaborative learning, we designed a new quantitative measure that captures cycles of collaborative and cooperative work. Compared to simple measures of joint visual attention, we found it to increase correlation coefficients with learning and collaboration scores. We discuss those results and describe how advances in analyzing sensor data can contribute to theories of collaboration. We conclude with implications for capturing students’ interactions in co-located spaces using Multimodal Learning Analytics (MMLA).
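As a concrete illustration of the baseline joint-visual-attention measure that the paper's cycle-based measure improves on, the sketch below computes the proportion of samples on which both members of a dyad fixate the same area of interest (AOI) within a short time lag. The ±2 s window follows common practice in the dual eye-tracking literature; the 30 Hz sampling rate, AOI coding, and random example streams are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def jva_proportion(gaze_a, gaze_b, hz=30, max_lag_s=2.0):
    """Fraction of samples where A's AOI is matched by B within +/- max_lag_s.

    gaze_a, gaze_b: equal-length integer arrays of AOI ids; -1 = no fixation.
    """
    max_lag = int(max_lag_s * hz)
    joint = np.zeros(len(gaze_a), dtype=bool)
    for t, aoi in enumerate(gaze_a):
        if aoi < 0:
            continue  # samples without a fixation never count as joint
        lo, hi = max(0, t - max_lag), min(len(gaze_b), t + max_lag + 1)
        joint[t] = np.any(gaze_b[lo:hi] == aoi)
    return joint.mean()

# Example with two random 10-second streams over three AOIs; real fixation
# streams are strongly autocorrelated, so this only exercises the function.
rng = np.random.default_rng(0)
a = rng.integers(-1, 3, size=300)
b = rng.integers(-1, 3, size=300)
print(f"JVA proportion: {jva_proportion(a, b):.2f}")
```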

  • Conference Article
  • 10.1109/isce.2011.5973809
Joint visual attention and rendering complexity based sample rate estimation in selective rendering
  • Jun 1, 2011
  • Lu Dong + 3 more

In this work, a sample rate estimator is proposed for selective graphics rendering so that the samples are allocated such that the best perceived image quality can be achieved under a given sample budget. In the proposed estimator, the sample rate of a pixel is decided by not only the visual attention (VA) level of the region to which the pixel belongs but also the required rendering complexity (RC) level of the pixel. The VA and RC values are determined based on the phase-spectrum of the Fourier transform (PFT). Compared with existing sample rate estimators for selective rendering that only consider the VA information, the proposed estimator helps to produce synthesized images with higher perceived quality using the same number of samples.
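The visual-attention half of such an estimator is compact enough to sketch. The snippet below derives a saliency map from the phase spectrum of the Fourier transform (PFT) of a grayscale image, then allocates a fixed per-frame sample budget in proportion to combined VA and RC weights. The product combination rule, smoothing constant, and stand-in RC map are illustrative assumptions; the paper's exact weighting is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pft_saliency(img):
    """Visual-attention map from the phase spectrum of the Fourier transform."""
    f = np.fft.fft2(img)
    phase_only = np.exp(1j * np.angle(f))         # keep phase, discard magnitude
    sal = np.abs(np.fft.ifft2(phase_only)) ** 2   # reconstruct and square
    sal = gaussian_filter(sal, sigma=3)           # smooth the raw map
    return sal / sal.max()

def sample_rates(va, rc, budget):
    """Allocate `budget` samples across pixels in proportion to VA * RC."""
    weight = va * rc
    return budget * weight / weight.sum()

img = np.random.rand(64, 64)   # stand-in for a coarse preview render
rc = np.random.rand(64, 64)    # stand-in rendering-complexity map
rates = sample_rates(pft_saliency(img), rc, budget=4 * 64 * 64)
print(rates.sum())             # ~16384.0: allocation meets the sample budget
```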

  • Research Article
  • Cited by 1
  • 10.2466/pms.2001.92.3.755
Establishing joint visual attention and pointing in autistic children with no functional language.
  • Jun 1, 2001
  • Perceptual and Motor Skills
  • Jun-Ichi Yamamoto + 2 more

Joint visual attention is defined as looking where someone else is looking. The purpose of this study was to examine the conditions for establishing joint visual attention in autistic children who have no functional speech. An experimenter, sitting facing the child, looked at one of six pictures near the child. Analysis showed that joint visual attention to stimuli behind the child and therefore outside of the visual field occurred at a higher rate when the visual angle between the stimuli was about 60 degrees. Spontaneous pointing at the target object increased with training which included feedback and physical guidance. These results are discussed in terms of the effects of environmental variables and perceptual mechanisms on the emergence of joint visual attention in autistic children. The possibility of using an adult's social cues and expanding the child's visual field as a remedial procedure is also addressed.

  • Research Article
  • Cited by 58
  • 10.1111/j.1468-5584.2004.00253.x
Gaze‐following and joint visual attention in nonhuman animals
  • Sep 1, 2004
  • Japanese Psychological Research
  • Shoji Itakura

In this paper, studies of gaze‐following and joint visual attention in nonhuman animals are reviewed from the theoretical perspective of Emery (2000). There are many studies of gaze‐following and joint visual attention in nonhuman primates, concerning not only adult individuals but also the development of these abilities. Studies to date suggest that monkeys and apes are able to follow the gaze of others, but that only apes understand the seeing‐knowing relationship with regard to conspecifics in competitive situations. There have also been recent reports that domestic animals that interact with humans, such as dogs and horses, can follow human gaze; these animals are considered to have acquired the ability over their long history of selective breeding by humans. However, we need to clarify social gaze parameters in various species to improve our knowledge of how the processing of others’ gaze, attention, and mental states evolved.

  • Research Article
  • Cited by 169
  • 10.1037/0012-1649.36.4.511
Effects of gesture and target on 12- and 18-month-olds' joint visual attention to objects in front of or behind them.
  • Jan 1, 2000
  • Developmental Psychology
  • Gedeon O Deák + 2 more

Factors affecting joint visual attention in 12- and 18-month-olds were investigated. In Experiment 1 infants responded to 1 of 3 parental gestures: looking, looking and pointing, or looking, pointing, and verbalizing. Target objects were either identical to or distinctive from distractor objects. Targets were in front of or behind the infant to test G. E. Butterworth's (1991b) hypothesis that 12-month-olds do not follow gaze to objects behind them. Pointing elicited more episodes of joint visual attention than looking alone, and distinctive targets elicited more episodes than identical targets. Although infants most reliably followed gestures to targets in front of them, even 12-month-olds followed gestures to targets behind them. In Experiment 2 parents were rotated so that the magnitude of their head turns to fixate front and back targets was equivalent. Infants looked more at front than at back targets, but there was also an effect of the magnitude of the head turn. Infants' relative neglect of back targets is thus partly due to the "size" of the adult's gesture.

  • Book Chapter
  • Cited by 21
  • 10.4324/9780203772010-22
Joint Visual Attention, Manual Pointing, and Preverbal Communication in Human Infancy
  • Dec 7, 2018
  • George Butterworth + 1 more

In the world of the prelinguistic infant an adult's direction of gaze exerts a powerful effect in redirecting the infant's visual attention. The adult's behaviour serves to signal potentially interesting objects and events to the baby. Our experiments suggest that three cognitive mechanisms are implicated in the comprehension of the adult's head and eye movements between 6 and 18 months. These we call the ecological, the geometric, and the representational mechanisms of looking where someone else is looking. This chapter explores the relationship in development between the signalling function of joint visual attention and the infant's comprehension and production of manual pointing.

  • Research Article
  • Cited by 318
  • 10.1177/016502548000300303
Towards a Mechanism of Joint Visual Attention in Human Infancy
  • Sep 1, 1980
  • International Journal of Behavioral Development
  • George Butterworth + 1 more

Three experiments are reported that aim to distinguish between mechanisms that might serve joint visual attention between human infants and adults. Between 6 and 18 months of age, the infant adjusts his or her line of gaze contingent on a change in the adult's focus of attention, but behaves as if the adult is referring to loci within the infant's own visual space. Thus, if the adult looks behind the infant, the infant scans the space in front of itself. Various explanations of this phenomenon and of the capacity for joint visual attention are discussed.

  • Book Chapter
  • 10.1007/978-3-319-59773-7_14
Robust Joint Visual Attention for HRI Using a Laser Pointer for Perspective Alignment and Deictic Referring
  • Jan 1, 2017
  • Darío Maravall + 2 more

In Human Robot Interaction (HRI), guaranteeing joint attention, also known as shared attention, is a basic prerequisite for proper coordination of the agents involved. A particular and important case is joint visual attention, also referred to as perspective-taking alignment, in which the human agent and the robot must align their visual perspectives to look at the same scene or object of mutual interest. In this paper we present experimental work on aligning the visual perspectives of a humanoid-like robot and a human agent by means of a laser pointer, used as a deictic or pointing device by both agents. We validate the proposed method in a scenario based on the "I spy" game. After a brief discussion of joint visual attention, we introduce the humanoid-like robot built specifically for our experiments and then discuss the results obtained in that scenario. We emphasize that in this scenario the human agents and the robot use only a limited set of words to facilitate coordination; these verbal exchanges are based on a common language (a lexicon plus grammar rules) shared by humans and robots.
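As a rough illustration of the perception step such a system needs, the sketch below finds a red laser dot in a camera frame and converts its offset from the image center into pan/tilt angles for a robot head. The color thresholds, field-of-view constant, and synthetic test frame are hypothetical; the paper does not describe its detection pipeline at this level of detail.

```python
import cv2
import numpy as np

H_FOV_DEG = 60.0  # assumed horizontal field of view of the robot camera

def find_laser_dot(frame_bgr):
    """Return the (x, y) centroid of strongly red pixels, or None."""
    b, g, r = cv2.split(frame_bgr)
    mask = (r > 200) & (r.astype(int) - g > 80) & (r.astype(int) - b > 80)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def gaze_offsets_deg(dot, frame_shape):
    """Convert the dot's pixel offset from image center to pan/tilt angles."""
    h, w = frame_shape[:2]
    deg_per_px = H_FOV_DEG / w              # simple pinhole approximation
    pan = (dot[0] - w / 2) * deg_per_px
    tilt = (dot[1] - h / 2) * deg_per_px
    return pan, tilt                        # targets for the head controller

frame = np.zeros((480, 640, 3), np.uint8)            # synthetic camera frame
cv2.circle(frame, (400, 200), 3, (40, 40, 255), -1)  # fake red laser dot
dot = find_laser_dot(frame)
if dot is not None:
    print(gaze_offsets_deg(dot, frame.shape))        # ~(7.5, -3.75) degrees
```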

  • Research Article
  • Cited by 420
  • 10.1037/0012-1649.34.1.28
The origins of joint visual attention in infants.
  • Jan 1, 1998
  • Developmental Psychology
  • Valerie Corkum + 1 more

Two experiments examined the origins of joint visual attention with a training procedure. In Experiment 1, infants aged 6-11 months were tested for a gaze-following (joint visual attention) response under feedback and no-feedback conditions. In Experiment 2, infants aged 8-9 months received feedback either for following the experimenter's gaze (natural group) or for looking to the opposite side (unnatural group). Results of the 2 experiments indicate that (a) joint visual attention does not reliably appear prior to 10 months of age, (b) from about 8 months of age, a gaze-following response can be learned, and (c) simple learning is not sufficient as the mechanism through which joint attention cues acquire their signal value.

  • Research Article
  • 10.5926/jjep1953.35.3_271
Joint visual attention and responses to pointing in infants
  • Jan 1, 1987
  • The Japanese Journal of Educational Psychology
  • Masato Yamamoto

The purpose of this study was to clarify the developmental course of joint visual attention and responses to pointing in infants from 3 to 8 months old. In Experiment I no target object was present; in Experiment II a target object was placed in the direction of the experimenter's gaze or pointing. Experiment I showed that (1) joint visual attention was present at 3 months, and (2) looking in the direction of pointing was rarely observed at any age. Experiment II gave different results: (1) looking in the direction of pointing was present at 3 months, while (2) looking at the pointing finger itself was not. These results indicate that joint visual attention emerges from 3 months, and that responses to pointing are closely tied to the presence of a target object.

  • Research Article
  • Cited by 16
  • 10.1068/p260333
Preschoolers' perception of other people's looking: photographs and drawings.
  • Mar 1, 1997
  • Perception
  • James R Anderson + 1 more

Children aged 3-4 years were tested for their ability to decide which of two photographs or drawings of a face depicted the act of fixating on a target object; in each control photograph or drawing the same face and object were present without fixation. Performance was above chance on both stimulus types, but low enough to call into question conclusions from previous research. The same children were also tested on their ability to discriminate between photographs/drawings depicting two faces fixating the same object (joint visual attention) and the same two faces fixating different objects. While discrimination of joint visual attention depicted in drawings was as good as discrimination of fixation in the single-face tasks, the ability to reliably choose between a photograph of two people attending to a common object and a control photograph was significantly poorer. The results suggest that, while young infants and children may be highly sensitive to face-on gaze, even well into the fourth year of life children are unable consistently to interpret (1) direction of non-self-directed gaze in static faces and (2) joint visual attention by others.
