Abstract

This paper introduces a cognitive psychological experiment conducted to analyze how traditional film editing methods perform in virtual reality (VR) and how cognitive event segmentation theory applies to it. Thirty volunteers were recruited and asked to watch a series of short VR videos designed along three dimensions: time, action (characters), and space. Electroencephalogram (EEG) signals were recorded while they watched. Subjective results show that each of the editing methods used increased cognitive load and reduced immersion. Furthermore, event segmentation theory also offers instructive guidance for VR editing, with differences concentrated mainly in the frontal, parietal, and central regions. On this basis, visual evoked potential (VEP) analysis was performed, and the standardized low-resolution brain electromagnetic tomography (sLORETA) method was used to localize the sources. The VEP analysis suggests that a cut usually elicits a late event-related potential component, and that the VEP sources lie mainly in the frontal and parietal lobes. The insights derived from this work can guide VR content creation, allowing VR image editing to express greater richness and unique beauty.
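For readers who want to see what the analysis pipeline described above (epoching EEG around edit onsets, averaging to a VEP, and sLORETA source localization) could look like in practice, the following is a minimal sketch using MNE-Python. The paper does not specify the software used, and the file names, trigger code, filter band, and epoch window below are illustrative assumptions rather than values reported in the study.

```python
# Minimal sketch of a VEP + sLORETA pipeline, assuming MNE-Python.
# File names, the event code for cut onsets, and epoch limits are hypothetical.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Load continuous EEG recorded while a participant watched the VR clips (assumed file).
raw = mne.io.read_raw_fif("subject01_vr_raw.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)              # band-pass typical for ERP/VEP analysis
raw.set_eeg_reference("average", projection=True)

# Epoch the data around each cut onset (event id 1 is an assumed trigger code).
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"cut": 1},
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Average epochs to obtain the visual evoked potential (VEP).
evoked = epochs.average()

# Estimate noise covariance from the pre-stimulus interval, build an inverse operator
# from a precomputed forward model (assumed file), and localize sources with sLORETA.
noise_cov = mne.compute_covariance(epochs, tmax=0.0)
fwd = mne.read_forward_solution("subject01-fwd.fif")
inv = make_inverse_operator(evoked.info, fwd, noise_cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="sLORETA")

# stc holds source-space current estimates; inspecting peaks in frontal and parietal
# labels would correspond to the regions discussed in the abstract.
print(stc)
```

In this kind of pipeline, the late event-related components mentioned in the abstract would appear in the averaged `evoked` waveform, and the regional claims would be examined on the `stc` source estimate.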

Highlights

  • With the rapid development of virtual reality technology, the integration of VR technology and film has gradually become an important breakthrough in traditional screen cinema [1], and VR films have shone at major film festivals, such as Venice, Sundance, and Golden Shaker

  • It could be speculated that character changes and violations of the 180° rule in editing could impose a greater load on the audience

  • Both subjective and objective data confirm that editing disrupts the continuity of viewing, but the impact varies across editing methods, with the frontal and occipital lobes being more sensitive to changes in characters and in perspective


Summary

Introduction

With the rapid development of virtual reality technology, the integration of VR technology and film has gradually become an important breakthrough for traditional screen cinema [1], and VR films have shone at major film festivals, such as Venice, Sundance, and Golden Shaker. Immersive and interactive VR films might present viewers with the most extreme visual impact and sensory experience to date, enabling them to actively watch multi-threaded films while breaking numerous traditional rules of film shooting and editing. Filmmakers have developed a series of film editing rules for better transitions between scenes, collectively known as “continuity editing” [2,3]. Although the visual content may change dramatically across different editing methods, viewers can effortlessly perceive the discontinuous flow of information as a series of coherent events [4]; for example, the 180-degree rule [5] smooths the changes between scenes, and its violation can cause confusion and discontent among the audience. In VR, the direction of the camera is controlled by the audience, so the editing techniques of traditional film, such as camera orientation and zoom, are no longer applicable, and attention guidance becomes the editing method of VR films [6].


