Design and Validation of a Virtual Reality Mental Rotation Test

Mental rotation, a common measure of spatial ability, has traditionally been assessed through paper-based instruments like the Mental Rotation Test (MRT) or the Purdue Spatial Visualization Test: Rotations (PSVT:R). The fact that these instruments present 3D shapes in a 2D format devoid of natural cues like shading and perspective likely limits their ability to accurately assess the fundamental skill of mentally rotating 3D shapes. In this paper, we describe the Virtual Reality Mental Rotation Assessment (VRMRA), a virtual reality-based mental rotation assessment derived from the Revised PSVT:R and MRT. The VRMRA reimagines traditional mental rotation assessments in a room-scale virtual environment and uses hand-tracking and elements of gamification in an attempt to create an intuitive, engaging experience for test-takers. To validate the instrument, we compared response patterns in the VRMRA with patterns observed on the MRT and Revised PSVT:R. For the PSVT:R-type questions, items requiring a rotation around two axes were significantly harder than items requiring rotations around a single axis in the VRMRA, which is not the case in the Revised PSVT:R. For the MRT-type questions in the VRMRA, a moderate negative correlation was found between the degree of rotation in the X direction and item difficulty. While the problem of occlusion was reduced, features of the shapes and distractors accounted for 50.6% of the variance in item difficulty. Results suggest that the VRMRA is likely a more accurate tool for assessing mental rotation ability compared with traditional instruments that present the stimuli through 2D media. Our findings also point to potential problems with the fundamental designs of the Revised PSVT:R and MRT question formats.
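To make this kind of item analysis concrete, the sketch below computes classical item difficulty (proportion of incorrect responses) and correlates it with each item's rotation angle about the X axis. The response matrix and rotation angles are synthetic placeholders for illustration only, not the VRMRA data or the authors' analysis code.

```python
# Hypothetical item analysis: correlate per-item difficulty with the
# magnitude of rotation about the X axis. All data below are invented.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants, n_items = 60, 20

# 0/1 response matrix (1 = correct) and each item's X rotation in degrees.
responses = rng.integers(0, 2, size=(n_participants, n_items))
x_rotation_deg = rng.choice([0, 90, 180, 270], size=n_items)

# Classical item difficulty: proportion of incorrect responses per item.
difficulty = 1.0 - responses.mean(axis=0)

r, p = pearsonr(x_rotation_deg, difficulty)
print(f"Correlation between X rotation and item difficulty: r={r:.2f}, p={p:.3f}")
```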

The Haptic Intensity Order Illusion is Caused by Amplitude Changes

When two brief vibrotactile stimulations are sequentially applied to observers’ lower back, there is systematic mislocalization of the stimulation: if the second stimulation is of higher intensity than the first one, observers tend to respond that the second stimulation was above the first one, and vice versa when a weak stimulation follows a strong one. This haptic mislocalization effect has been called the intensity order illusion. In the original demonstration of the illusion, frequency and amplitude of the stimulation were inextricably linked, so that changes in amplitude also resulted in changes in frequency. It is therefore unknown whether the illusion is caused by changes in frequency, amplitude, or both. To test this, we performed a multifactorial experiment using L5 actuators that allow independent manipulation of frequency and amplitude. This approach enabled us to investigate the effects of stimulus amplitude, frequency, and location and to assess any potential interactions between these factors. We report four main findings: 1) we were able to replicate the intensity order illusion with the L5 tactors, 2) the illusion mainly occurred in the upward direction, that is, when a strong stimulation following a weaker one occurred above or in the same location as the first stimulation, 3) the illusion did not occur when similar stimulation patterns were applied in the horizontal direction, and 4) the illusion was solely due to changes in amplitude, while changes in frequency (100 Hz vs. 200 Hz) had no effect.
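The key manipulation is that amplitude and frequency are set independently. The sketch below illustrates that idea with simple sine bursts; the sampling rate, durations, and amplitudes are assumptions for illustration, not the actual drive signals of the L5 tactors.

```python
# Minimal sketch of vibrotactile pulses whose amplitude and frequency are
# varied independently. Parameters are illustrative placeholders.
import numpy as np

def vibrotactile_pulse(freq_hz, amplitude, duration_s=0.1, fs=8000):
    """Sine burst with independently chosen frequency and amplitude."""
    t = np.arange(0, duration_s, 1.0 / fs)
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# Weak 100 Hz pulse followed by a strong 100 Hz pulse: intensity changes
# without any change in frequency, the case isolated in the experiment.
first = vibrotactile_pulse(freq_hz=100, amplitude=0.3)
second = vibrotactile_pulse(freq_hz=100, amplitude=1.0)
```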

The Influence of the Other-Race Effect on Susceptibility to Face Morphing Attacks

Facial morphs created between two identities resemble both of the faces used to create the morph. Consequently, humans and machines are prone to mistake morphs made from two identities for either of the faces used to create the morph. This vulnerability has been exploited in “morph attacks” in security scenarios. Here, we asked whether the “other-race effect” (ORE), the human advantage for identifying own-race over other-race faces, exacerbates morph attack susceptibility for humans. We also asked whether face-identification performance in a deep convolutional neural network (DCNN) is affected by the race of morphed faces. Caucasian (CA) and East-Asian (EA) participants performed a face-identity matching task on pairs of CA and EA face images in two conditions. In the morph condition, different-identity pairs consisted of an image of identity “A” and a 50/50 morph between images of identity “A” and “B”. In the baseline condition, morphs of different identities never appeared. As expected, morphs were mistakenly identified more often than original face images. Of primary interest, morph identification was substantially worse for cross-race faces than for own-race faces. Similar to humans, the DCNN performed more accurately for original face images than for morphed image pairs. Notably, the deep network proved substantially more accurate than humans in both cases. The results point to the possibility that DCNNs might be useful for improving face identification accuracy when morphed faces are presented. They also indicate the significance of the race of a face in morph attack susceptibility in applied settings.
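For illustration, a 50/50 morph can be approximated by blending two aligned face images in equal proportions. The sketch below shows only the blend step, with hypothetical file names; real morphing pipelines typically also warp facial landmarks before blending.

```python
# Illustrative 50/50 morph by pixel averaging of two aligned face images.
# File names are hypothetical; landmark warping is omitted for brevity.
import numpy as np
from PIL import Image

face_a = np.asarray(Image.open("identity_A.png"), dtype=np.float32)
face_b = np.asarray(Image.open("identity_B.png"), dtype=np.float32)

morph = 0.5 * face_a + 0.5 * face_b  # equal contribution from both identities
Image.fromarray(morph.astype(np.uint8)).save("morph_AB.png")
```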

The effect of interocular contrast differences on the appearance of augmented reality imagery

Augmented reality (AR) devices seek to create compelling visual experiences that merge virtual imagery with the natural world. These devices often rely on wearable near-eye display systems that can optically overlay digital images separately to the user's left and right eyes. Ideally, the two eyes should be shown images with minimal radiometric differences (e.g., the same overall luminance, contrast, and color in both eyes), but achieving this binocular equality can be challenging in wearable systems with stringent demands on weight and size. Basic vision research has shown that a spectrum of potentially detrimental perceptual effects can be elicited by imagery with radiometric differences between the eyes, but it is not clear whether and how these findings apply to the experience of modern AR devices. In this work, we first develop a testing paradigm for assessing multiple aspects of visual appearance at once, and characterize five key perceptual factors when participants view stimuli with interocular contrast differences. In a second experiment, we simulate optical see-through AR imagery using conventional desktop LCD monitors and use the same paradigm to evaluate the multifaceted perceptual implications when the AR display luminance differs between the two eyes. We also include simulations of monocular AR systems (i.e., systems in which only one eye sees the displayed image). Our results suggest that interocular contrast differences can drive several potentially detrimental perceptual effects in binocular AR systems, such as binocular luster, rivalry, and spurious depth differences. In addition, monocular AR displays tend to have more artifacts than binocular displays with a large contrast difference between the two eyes. A better understanding of the range and likelihood of these perceptual phenomena can help inform design choices that support a high-quality user experience in AR.
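As a rough illustration of how an interocular contrast difference can be simulated on a conventional display, the sketch below scales one eye's image about its mean luminance and reports each eye's Michelson contrast; the image content and scaling factor are placeholders, not the stimuli used in the study.

```python
# Sketch of simulating an interocular contrast difference: the right eye's
# image is reduced in contrast about its mean luminance while the left eye's
# image is left untouched. Image and scale factor are illustrative.
import numpy as np

def scale_contrast(img, factor):
    """Scale contrast about the image's mean luminance."""
    mean = img.mean()
    return mean + factor * (img - mean)

left_eye = np.random.rand(256, 256)        # stand-in for the AR overlay image
right_eye = scale_contrast(left_eye, 0.5)  # 50% contrast in the other eye

def michelson(im):
    return (im.max() - im.min()) / (im.max() + im.min())

print(michelson(left_eye), michelson(right_eye))
```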

Calibrated passability perception in virtual reality transfers to augmented reality

As applications for virtual reality (VR) and augmented reality (AR) technology increase, it will be important to understand how users perceive their action capabilities in virtual environments. Feedback about actions may help to calibrate perception for action opportunities (affordances) so that action judgments in VR and AR mirror actors’ real abilities. Previous work indicates that walking through a virtual doorway while wielding an object can calibrate the perception of one’s passability through feedback from collisions. In the current study, we aimed to replicate this calibration through feedback using a different paradigm in VR while also testing whether this calibration transfers to AR. Participants held a pole at 45 degrees and made passability judgments in AR (pretest phase). Then, they made passability judgments in VR and received feedback on those judgments by walking through a virtual doorway while holding the pole (calibration phase). Participants then returned to AR to make posttest passability judgments. Results indicate that feedback calibrated participants’ judgments in VR. Moreover, this calibration transferred to the AR environment. In other words, after experiencing feedback in VR, passability judgments in VR and in AR became closer to an actor’s actual ability, which could make training applications in these technologies more effective.
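As a toy illustration of what calibration means here, the sketch below compares a judged critical doorway width against the width actually required to pass while holding the pole, before and after feedback. All numbers are invented for illustration and are not the study's data.

```python
# Hypothetical pretest/posttest comparison of passability judgments.
# Values are invented; positive error means the judged critical width
# exceeds the width the actor actually needs.
actual_required_width_cm = 120.0

pretest_judged_width_cm = 135.0   # AR judgment before VR feedback
posttest_judged_width_cm = 124.0  # AR judgment after VR feedback

def calibration_error(judged, actual):
    """Difference between judged critical width and actual requirement."""
    return judged - actual

print(calibration_error(pretest_judged_width_cm, actual_required_width_cm))   # 15.0
print(calibration_error(posttest_judged_width_cm, actual_required_width_cm))  # 4.0
```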

On Human-like Biases in Convolutional Neural Networks for the Perception of Slant from Texture

Depth estimation is fundamental to 3D perception, and humans are known to have biased estimates of depth. This study investigates whether convolutional neural networks (CNNs) can be biased when predicting the sign of curvature and depth of textured surfaces under different viewing conditions (field of view) and surface parameters (slant and texture irregularity). This hypothesis is drawn from the idea that texture gradients described by local neighborhoods, a cue identified in the human vision literature, are also representable within convolutional neural networks. To this end, we trained both unsupervised and supervised CNN models on renderings of slanted surfaces with random polka dot patterns and analyzed their internal latent representations. The results show that the unsupervised models have prediction biases similar to those of humans across all experiments, while the supervised CNN models do not exhibit such biases. The latent spaces of the unsupervised models can be linearly separated into axes representing field of view and optical slant. For supervised models, this ability varies substantially with model architecture and the kind of supervision (continuous slant vs. sign of slant). Although this study says nothing about any shared mechanism, these findings suggest that unsupervised CNN models can produce predictions similar to those of the human visual system. Code: github.com/brownvc/Slant-CNN-Biases
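As an illustration of the linear-separability analysis mentioned above, the sketch below fits a linear probe that predicts optical slant from latent vectors. The latent codes and labels are synthetic placeholders, not representations taken from the linked repository.

```python
# Sketch of a linear probe on a model's latent space: if a factor such as
# optical slant is linearly decodable, a simple linear regression on the
# latent codes will explain most of its variance. Data here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 32))              # placeholder latent codes
slant_deg = latents @ rng.normal(size=32) + rng.normal(scale=0.1, size=500)

probe = LinearRegression().fit(latents, slant_deg)
print(f"Linear probe R^2 for optical slant: {probe.score(latents, slant_deg):.2f}")
```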

Effect of Subthreshold Electrotactile Stimulation on the Perception of Electrovibration

Electrovibration is used in touch-enabled devices to render different textures. Tactile sub-modal stimuli can enhance texture perception when presented along with electrovibration stimuli, and the perception of texture depends on the electrovibration threshold. In the current study, we conducted a psychophysical experiment with 13 participants to investigate the effect of introducing a subthreshold electrotactile stimulus (SES) on the perception of electrovibration. When tactile sub-modal stimuli interact, one stimulus can be masked in the presence of another; this study explored whether such tactile masking of electrovibration by an electrotactile stimulus occurs. The results indicate that the electrovibration threshold was reduced by 12.46% and 6.75% when the electrotactile stimulus was at 90% and 80% of its perception threshold, respectively. This effect was tested over a wide range of frequencies, from 20 Hz to 320 Hz, in the tuning curve, and the variation of the percentage reduction with frequency is reported. Another experiment measured the perception of the combined stimuli on a Likert scale; the results showed that perception was more inclined towards the electrovibration at 80% of SES and was indifferent at 90% of SES. The reduction in the electrovibration threshold reveals that tactile masking by the electrotactile stimulus was not prevalent under subthreshold conditions. This study provides significant insights for developing texture rendering algorithms based on tactile sub-modal stimuli.
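The reported threshold changes follow simple percentage-reduction arithmetic, sketched below. The baseline threshold value is hypothetical; only the 12.46% and 6.75% figures come from the abstract.

```python
# Percentage reduction of the electrovibration threshold when a subthreshold
# electrotactile stimulus (SES) is present. Baseline value is hypothetical.
def percent_reduction(baseline, with_ses):
    """Relative drop in threshold, expressed as a percentage of baseline."""
    return 100.0 * (baseline - with_ses) / baseline

baseline_threshold = 40.0                        # hypothetical, arbitrary units
with_ses_90 = baseline_threshold * (1 - 0.1246)  # threshold with SES at 90%
with_ses_80 = baseline_threshold * (1 - 0.0675)  # threshold with SES at 80%

print(percent_reduction(baseline_threshold, with_ses_90))  # ~12.46
print(percent_reduction(baseline_threshold, with_ses_80))  # ~6.75
```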
