A preliminary comparison of bellringer performance across three visual modalities for the assessment of anatomy knowledge.
Anatomy education is an inherently visual field, particularly in bellringer (BR) testing, which requires learners to identify anatomical structures on donated human specimens. While physical specimens have long remained the standard for BR testing, three-dimensional (3D) viewing on virtual reality platforms and two-dimensional (2D) images of specimens on paper have become common alternatives because of their ease and feasibility of use. Despite this widespread use, there is a paucity of literature comparing the assessment validity of these modalities against physical specimens as the historic standard. Thus, this study assessed BR testing performance and question validity (using point-biserial evaluation) across all three modalities. In total, 140 undergraduate students, enrolled in an Introductory Anatomy and Physiology course at the time of testing, completed a BR examination with specimens presented in three visual formats: physical specimens, printed 2D images, and 3D reconstructions in virtual reality. Across the three modalities, no notable differences were found in question difficulty, point-biserial values, reported cybersickness, visuospatial ability, or modality preference. Additionally, modality preference and student opinion did not significantly affect test scores, suggesting that these student attributes were unrelated to BR performance. The examinations had high reliability as measured by KR-20 values, supporting the applicability of our results to undergraduate anatomy BR testing. This study provides preliminary evidence supporting the utility and validity of both 2D images and 3D virtual reality as alternative modalities for BR testing in the undergraduate anatomy education setting.
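The two item statistics named above, point-biserial discrimination and KR-20 reliability, have closed-form definitions that can be computed directly from a binary item-response matrix. A minimal sketch in plain Python (the data here are illustrative, not from the study):

```python
from math import sqrt

def point_biserial(item_scores, total_scores):
    """Point-biserial: correlation between a 0/1 item and examinees' total scores."""
    n = len(item_scores)
    p = sum(item_scores) / n                      # proportion answering correctly
    q = 1 - p
    mean1 = sum(t for i, t in zip(item_scores, total_scores) if i == 1) / (p * n)
    mean0 = sum(t for i, t in zip(item_scores, total_scores) if i == 0) / (q * n)
    mean_t = sum(total_scores) / n
    sd_t = sqrt(sum((t - mean_t) ** 2 for t in total_scores) / n)  # population SD
    return (mean1 - mean0) / sd_t * sqrt(p * q)

def kr20(responses):
    """KR-20 reliability for a list of examinees' 0/1 response vectors."""
    n, k = len(responses), len(responses[0])
    totals = [sum(r) for r in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    pq = 0.0
    for j in range(k):
        p = sum(r[j] for r in responses) / n      # item difficulty
        pq += p * (1 - p)
    return k / (k - 1) * (1 - pq / var_t)

# Example: 5 examinees x 4 items (invented for illustration).
responses = [[1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0]]
totals = [sum(r) for r in responses]
rpb = point_biserial([r[0] for r in responses], totals)   # ~ 0.71
rel = kr20(responses)                                     # = 0.80
```

Items with low or negative point-biserial values are the usual flags for question-validity review of the kind described above.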
- Research Article
- 10.1096/fasebj.2020.34.s1.02470
- Apr 1, 2020
- The FASEB Journal
The application of three‐dimensional (3D) visualization technology has generated interest due to its potential to augment or even replace cadaver use in anatomical education. By adopting stereoscopic 3D digital technologies, educational institutions may be able to mitigate issues related to the resource intensiveness of cadaver use, such as prosection availability, body donation, and the infrastructure to maintain physical specimens. We previously developed a smartphone application, titled VRBR, which uses an inexpensive Google Cardboard headset to display 2D or stereoscopic 3D images of cadaveric and plastinated specimens and plastic models for learning and testing anatomical knowledge. The purpose of this study was to compare the effectiveness and validity of stereoscopic 3D images, 2D images and cadaveric specimens in testing anatomical knowledge with objective structured practical examinations (OSPEs) for undergraduate students with prior anatomy education. As cadaveric specimens and stereoscopic 3D images are both stereoscopic, we hypothesized that participants would perform similarly between those modalities, and more poorly with the 2D images. Students who had previously completed an undergraduate anatomy and physiology course (N = 60) were randomized to one of three testing groups (A, B or C). Each testing group was administered OSPEs in three distinct modalities: VRBR 2D images, VRBR stereoscopic 3D images or cadaveric specimens. In total, each participant answered 45 anatomy‐based questions (15 per modality, test order randomized across groups) and completed a questionnaire assessing cybersickness and user satisfaction. Participants completed a stereo fly test and a mental rotation test to assess their stereoacuity and visuospatial ability, respectively. These are potential covariates of the testing outcome when assessing spatially complex objects of varying depth. This study was approved by the Hamilton Integrated Research Ethics Board.
In regard to cybersickness, a small fraction of participants reported nausea (7.7%), vertigo (7.7%), and dizziness (15.4%), while responses for headache (23.1%) and fatigue (23.1%) were moderate, and general discomfort (61.5%) and eyestrain (76.9%) were more prevalent. Preliminary data (N = 13), assessed via two‐way ANOVA, suggested that there were no statistically significant effects of test version (F(1, 22) = 0.005, p = 0.945) or stereoscopy (F(1, 22) = 0.009, p = 0.728). If this trend persists through completion of the study, it may suggest that stereoscopy, as provided by the headset, does not improve the effectiveness of digital tools for testing anatomy knowledge recall. Complete data collection and analysis in February 2020 will provide a fuller picture of the validity of stereoscopy in testing anatomy knowledge recall. With ongoing analysis, the results from this study may elucidate the effect of stereopsis in testing anatomical knowledge and guide the future development of undergraduate anatomy education curricula. Support or Funding Information: Funded by the Education Program in Anatomy.
- Research Article
- 10.1093/neuros/nyz310_404
- Aug 20, 2019
- Neurosurgery
INTRODUCTION Cavernous malformations in deep-seated eloquent cortex pose significant management challenges. Surgical resection, when indicated, carries a high risk of postoperative neurologic deficit. Obtaining safe surgical access is highly dependent upon location and the patient's clinical status. Three-dimensional (3D) virtual reality platforms, neuronavigation, and advanced neuroimaging can be used as adjuncts to determine and perform the safest surgical approach. METHODS Five patients who underwent surgical resection of deep-seated cavernomas in eloquent cortex were included. 3D images were created from Diffusion Tensor MR Imaging (DTI) using 360° virtual reality planning software (Surgical Theater SRP, Version 7.4.0, Cleveland, Ohio). The Surgical Theater system was integrated with the neuronavigation system (Brainlab AG Version 3.0.5, Feldkirchen, Germany) to allow for real-time evaluation of the 3D tractography. The 360° VR model was used to assess the location of the cavernoma and relevant anatomy, map multiple trajectories, and compare the advantages and disadvantages of each. SRP and Brainlab projections were used to determine the limits of the cavernoma, guide the placement of the tubular retractor, and confirm the approach vector intraoperatively. RESULTS Locations of the cavernomas included left caudate head, right superior colliculus, right posteroinferior thalamus, right insula, and left medulla. All patients underwent craniotomy and microsurgical resection. Two patients who presented with a neurologic deficit partially improved after prolonged rehabilitation. Three patients who were neurologically intact on presentation remained so post-op. Gross-total resection was obtained in all patients. CONCLUSION Virtual segmentation allows for better understanding of complex anatomic relationships between the lesion and surrounding critical structures.
Advanced planning familiarizes surgeons with the patient's anatomy in the context of the surgical field, increasing spatial and positional orientation. 3D virtual reality planning platforms, in conjunction with neuronavigation and DTI, can assist in choosing the optimal surgical corridor to achieve gross-total resection of deep-seated cavernomas while minimizing neurologic risk.
- Research Article
- 10.1093/ejcts/ezad014
- Dec 2, 2022
- European Journal of Cardio-Thoracic Surgery
When surgical resection is indicated for a congenital lung abnormality (CLA), lobectomy is often preferred over segmentectomy, mostly because the latter is associated with more residual disease. Presumably, this occurs in children because sublobar surgery often does not adhere to anatomical borders (wedge resection instead of segmentectomy), thus increasing the risk of residual disease. This study investigated the feasibility of identifying eligible cases for anatomical segmentectomy by combining virtual reality (VR) and artificial intelligence (AI). Semi-automated segmentation of bronchovascular structures and lesions was visualized with VR and AI technology. Two specialists independently evaluated, via a questionnaire, the informative value of regular computed tomography versus three-dimensional (3D) VR images. Five asymptomatic, non-operated cases were selected. Bronchovascular segmentation, volume calculation and image visualization in the VR environment were successful in all cases. Based on the computed tomography images, assignment of the CLA lesion to specific lung segments matched between the consulted specialists in only 1 of the 5 cases. Based on the 3D VR images, however, the localization matched in 3 of the 5 cases. Had the patients been operated on, adding the 3D VR tool to the preoperative workup would have changed the surgical strategy (i.e. lobectomy versus segmentectomy) in 4 cases. This study demonstrated the technical feasibility of hybridized AI-VR visualization of segment-level lung anatomy in patients with CLA. Further exploration of the value of 3D VR in identifying eligible cases for anatomical segmentectomy is therefore warranted.
- Research Article
- 10.1152/advan.00084.2021
- Feb 24, 2022
- Advances in Physiology Education
There is a widely variable breadth of coverage of skeletal muscle content across both undergraduate human anatomy and undergraduate anatomy and physiology (A&P) courses. In response to the need for a more global understanding of the content taught in undergraduate anatomy courses, we developed an online survey (administered through Qualtrics) in which both human anatomy and A&P faculty could report skeletal muscle coverage in their courses. The survey also collected comparative institutional demographic data such as the type of institution (community college vs. 4-year), course format, and geographic location of the undergraduate institution. Skeletal muscles surveyed included those listed and described in a typical undergraduate human anatomy text (McKinley MP, O'Loughlin VD, Pennefather O. Human Anatomy (5th ed.), 2017, p. 960). The data indicated some interesting instructional trends regarding muscular system coverage. First, both the "identification" and "action" of specific muscles are taught at a higher frequency than either "attachments" or "innervation." Innervation of specific skeletal muscles is the least-taught concept. In each body region, certain muscles were taught with higher frequency than others. This research shows a global trend toward teaching identification of specific skeletal muscles within each body region, often accompanied by teaching the actions of those muscles. These general instructional trends may increase our understanding of the anatomical and physiological education our undergraduate students are receiving and will lead to further critical conversations about content development and curriculum.
- Research Article
- 10.1027/0269-8803/a000278
- Mar 18, 2021
- Journal of Psychophysiology
Abstract. Virtual reality (VR), which can represent real-life events and situations, is being increasingly applied to many fields, such as education, entertainment, and medical rehabilitation. Correspondingly, the neural information processing of VR has attracted attention. However, the underlying neural mechanisms of VR environments have not yet been fully revealed. The purpose of this study was to examine possible differences in brain activities and networks between the less immersive 2D and the fully immersive 3D VR environments. 3D VR videos and the same scenes in 2D were presented to the participants while the scalp electroencephalogram (EEG) was recorded. Power spectral density (PSD) and the functional connectivity of these EEG signals were analyzed. The results showed that, relative to 2D VR watching, 3D VR videos significantly enhanced the PSD of θ rhythm (4–7 Hz) in the frontal lobe; decreased the PSD of α rhythm (8–13 Hz) in the parietal and occipital lobes; and increased the PSD of β rhythm (14–30 Hz) in the frontal, parietal, temporal, and occipital lobes. Furthermore, 3D versus 2D VR-induced alterations in the patterns of brain networks were similar to the patterns of PSD. Specifically, for the θ rhythm, 3D VR significantly enhanced frontal and temporal brain functional connectivity; for the α rhythm, 3D VR increased parietal and occipital networks; for the β rhythm, 3D VR remarkably increased frontal, occipital, frontal-temporal and frontal-occipital brain functional connectivity, relative to 2D VR. These significant differences between 3D and 2D VR video-watching suggest that the neural information processing of cortical activities and networks is correlated with the degree of immersion.
The present results, taken together with previous research, suggest that some vision-related information processes, such as visual attention, visual perception, and visual immersion, are more robust in 3D VR environments.
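The θ/α/β comparisons above amount to integrating a power spectral density over fixed frequency bands. A toy illustration with a naive one-sided periodogram on a synthetic signal (real EEG pipelines would use Welch's method with windowing; nothing here comes from the study's data):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of one-sided periodogram power in [f_lo, f_hi] Hz via a naive DFT.

    Minimal illustration only: no windowing or averaging, O(n^2) runtime.
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):          # skip DC; positive frequencies only
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            power += 2 * (re ** 2 + im ** 2) / n ** 2   # one-sided scaling
    return power

# Synthetic 1-second signal: a 10 Hz (alpha-band) sine sampled at 128 Hz.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 13)   # ~ 0.5, the mean-square power of a unit sine
beta = band_power(sig, fs, 14, 30)   # ~ 0, no beta-band content in this signal
```

Band-power values like these, computed per electrode and condition, are the quantities the abstract compares between 2D and 3D viewing.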
- Research Article
- 10.1016/j.jss.2023.07.001
- Aug 2, 2023
- Journal of Surgical Research
Virtual Reality for Preoperative Planning in Complex Surgical Oncology: A Single-Center Experience
- Research Article
- 10.4293/jsls.2021.00011
- Jan 1, 2021
- JSLS : Journal of the Society of Laparoscopic & Robotic Surgeons
Background and Objectives: Minimally invasive surgery for renal masses is complex and relies on two-dimensional (2D) computed tomography (CT) and magnetic resonance imaging (MRI) scans for surgical planning. We sought to determine if three-dimensional (3D) virtual reality (VR) models generated from imaging of patients undergoing robotic partial nephrectomy influenced presurgical planning approaches when compared to routine planning. Methods: The initial 15 patients underwent robotic-assisted laparoscopic partial nephrectomy performed by one urologic surgeon. All patients pre-operatively underwent a CT and/or MRI scan. A pre-operative surgical plan was then recorded. 3D VR models were generated from these scans and reviewed. A second surgical plan was developed based on the 3D VR images. A comparison was made between the two studies prior to surgical intervention. All final surgical plans were implemented based on the 3D VR imaging studies. Results: Six surgical approaches were changed based on the 3D VR images. Two surgical approaches were changed from a transperitoneal to a retroperitoneal approach and two from a retroperitoneal to a transperitoneal approach. Two patients had distinctive renal vasculature related to the renal cancers which was not appreciated on routine scans but was well delineated by the VR imaging studies. As a result, the surgical approach for these two patients was altered to accommodate the new findings. Conclusion: Operative planning is paramount when performing robotic partial nephrectomy, and developing a 3D surgical approach from 2D imaging can be difficult. Three-dimensional VR models afford the surgeon a 3D view prior to and during surgery and can help ensure selection of the appropriate surgical approach.
- Research Article
- 10.1096/fasebj.2019.33.1_supplement.444.38
- Apr 1, 2019
- The FASEB Journal
Introduction: With the increasing number of online resources for anatomical education available to students, understanding why students use one resource over another is crucial for resource design. The benefit of using stereoscopic images in anatomical education has recently been demonstrated (Remmele et al., 2018; Cui et al., 2017). However, few resources utilize stereopsis in depicting anatomical dissections. Using images from the Stereoscopic Atlas of Human Anatomy (D.L. Bassett & W.B. Gruber, Stanford University, 1962), we developed a smartphone application that uses the Google Cardboard platform to visualize stereoscopic images in an inexpensive and accessible manner. The app was implemented in a second-year undergraduate anatomy and physiology course with 975 students. The app was designed to incorporate virtual pins in the 3D space that enabled it to be used as a self-evaluated Objective Structured Practical Exam (OSPE) called the Virtual Reality Bell Ringer (VRBR). Questions and answers accompanying the images were provided on the online course management system. This allowed use of the app to be correlated with course evaluation performance. The purpose of this study was to track use of the app by students, gather feedback to develop better online resources, and correlate app use with final scores. Methods: 67 OSPE practice questions using stereo pairs from the Bassett collection were selected, 30 of which pertained to the first semester: 5 questions for each of 6 bi-weekly labs. Questions related to each set of images were available throughout the semester and students had unlimited attempts to successfully answer the questions. Upon submitting their response, students would see their answer, the correct answer, and the rationale behind the answer. Student participation and the correlation between their VRBR score and multiple choice (MCQ) midterm exam were evaluated. Results: By mid-semester, 89 of 975 students had attempted the VRBR practice questions.
Of those who participated in the VRBR quiz, 93% achieved above-average scores on their MCQ midterm exam. Correlation of midterm scores with successful VRBR question responses for the lowest, middle, and highest third percentile-scoring students resulted in Pearson correlation coefficients (r) of 0.39, 0.5, and 0.78, respectively. This indicates a good correlation between success on the VRBR practice questions and success on different types of course evaluations. Although 91% of students did not use the app to prepare for the midterm MCQ exam, a qualitative mid-semester survey of app use revealed that 51% planned to complete the VRBR questions prior to the final OSPE exam. Only 13% of students reported feeling that the app would not prepare them for the final exam. Conclusion: Despite fewer than 10% of students using the app at mid-semester, a large proportion planned to use the app to prepare for the final exam. There was a correlation between success on the midterm MCQ and VRBR quizzes; however, this relationship is probably correlational rather than causal. Evaluation of VRBR app use on the outcome of the final OSPE exam will occur at the end of the first and second semesters. This abstract is from the Experimental Biology 2019 Meeting. There is no full text article associated with this abstract published in The FASEB Journal.
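The reported coefficients (r = 0.39, 0.5, 0.78) are standard Pearson product-moment correlations; for reference, a minimal implementation (the score pairs below are hypothetical, not the course data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical VRBR practice-quiz scores paired with midterm MCQ scores.
vrbr = [3, 4, 4, 5, 2, 5]
midterm = [60, 72, 70, 85, 55, 90]
r = pearson_r(vrbr, midterm)   # strongly positive for these made-up scores
```

Computing r separately within percentile bands, as done above, shows whether the quiz-exam relationship strengthens for higher-scoring students.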
- Research Article
- 10.3390/brainsci15010075
- Jan 16, 2025
- Brain sciences
Virtual reality (VR) has become a transformative technology with applications in gaming, education, healthcare, and psychotherapy. The subjective experiences in VR vary based on the virtual environment's characteristics, and electroencephalography (EEG) is instrumental in assessing these differences. By analyzing EEG signals, researchers can explore the neural mechanisms underlying cognitive and emotional responses to VR stimuli. However, distinguishing EEG signals recorded in two-dimensional (2D) versus three-dimensional (3D) VR environments remains underexplored. Current research primarily utilizes power spectral density (PSD) features to differentiate between 2D and 3D VR conditions, but the potential of other feature parameters for enhanced discrimination is unclear. Additionally, the use of machine learning techniques to classify EEG signals from 2D and 3D VR using alternative features has not been thoroughly investigated, highlighting the need for further research to identify robust EEG features and effective classification methods. This study recorded EEG signals from participants exposed to 2D and 3D VR video stimuli to investigate the neural differences between these conditions. Key features extracted from the EEG data included PSD and common spatial patterns (CSPs), which capture frequency-domain and spatial-domain information, respectively. To evaluate classification performance, several classical machine learning algorithms were employed: support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), naive Bayes, decision tree, AdaBoost, and a voting classifier. The study systematically compared the classification performance of PSD and CSP features across these algorithms, providing a comprehensive analysis of their effectiveness in distinguishing EEG signals in response to 2D and 3D VR stimuli. The study demonstrated that machine learning algorithms can effectively classify EEG signals recorded while watching 2D and 3D VR videos.
CSP features outperformed PSD in classification accuracy, indicating their superior ability to capture EEG signal differences between the VR conditions. Among the machine learning algorithms, the RF classifier achieved the highest accuracy at 95.02%, followed by KNN with 93.16% and SVM with 91.39%. The combination of CSP features with RF, KNN, and SVM consistently showed superior performance compared to other feature-algorithm combinations, underscoring the effectiveness of CSP and these algorithms in distinguishing EEG responses to different VR experiences. This study demonstrates that EEG signals recorded while watching 2D and 3D VR videos can be effectively classified using machine learning algorithms with extracted feature parameters. The findings highlight the superiority of CSP features over PSD in distinguishing EEG signals under different VR conditions, emphasizing CSP's value in VR-induced EEG analysis. These results expand the application of feature-based machine learning methods in EEG studies and provide a foundation for future research into the cortical activity underlying VR experiences, supporting the broader use of machine learning in EEG-based analyses.
- Conference Article
- 10.1109/ismar-adjunct51615.2020.00036
- Nov 1, 2020
Virtual training environments (VTEs) using immersive technology have been able to successfully provide training for technical skills. Combined with recent advances in virtual social agent technologies and in affective computing, VTEs can now also support the training of social skills. Research examining the effects of different immersive technologies on user experience (UX) can provide important insights about their impact on users' engagement with the technology and their sense of presence and co-presence. However, current studies do not address whether emotions displayed by virtual agents provide the same level of UX across different virtual reality (VR) platforms. In this study, we considered a virtual classroom simulator built for a desktop computer and adapted for an immersive VR platform (CAVE). Users interact with virtual animated disruptive students able to display facial expressions, to help them practice their classroom behavior management skills. We assessed the effects of the VR platforms and of the display of facial expressions on presence, co-presence, engagement, and believability. Results indicate that users were engaged, found the virtual students believable, and felt presence and co-presence on both VR platforms. We also observed an interaction effect of facial expressions and VR platform on co-presence (p = .018).
- Research Article
- 10.3109/17453674.2011.623566
- Nov 25, 2011
- Acta Orthopaedica
Background and purpose: Non-anatomic bone tunnel placement is the most common cause of a failed ACL reconstruction. Accurate and reproducible methods to visualize and document bone tunnel placement are therefore important. We evaluated the reliability of standard radiographs, CT scans, and a 3-dimensional (3D) virtual reality (VR) approach in visualizing and measuring ACL reconstruction bone tunnel placement. Methods: 50 consecutive patients who underwent single-bundle ACL reconstructions were evaluated postoperatively by standard radiographs, CT scans, and 3D VR images. Tibial and femoral tunnel positions were measured by 2 observers using the traditional methods of Amis, Aglietti, Hoser, Stäubli, and the method of Benereau for the VR approach. Results: The tunnel was visualized in 50–82% of the standard radiographs and in 100% of the CT scans and 3D VR images. Using the intraclass correlation coefficient (ICC), the inter- and intraobserver agreement was between 0.39 and 0.83 for the standard femoral and tibial radiographs. CT scans showed an ICC range of 0.49–0.76 for the inter- and intraobserver agreement. The agreement in 3D VR was almost perfect, with an ICC of 0.83 for the femur and 0.95 for the tibia. Interpretation: CT scans and 3D VR images are more reliable in assessing postoperative bone tunnel placement following ACL reconstruction than standard radiographs.
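ICC values like those above come from a two-way ANOVA decomposition of the ratings table. A sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement — one common form; the abstract does not specify which variant was used, and the data below are invented):

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is a list of rows (targets), each a list of k raters' scores.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # between targets
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # between raters
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy tables: 3 tunnel positions measured by 2 observers (illustrative values).
perfect = [[10, 10], [20, 20], [30, 30]]   # identical ratings -> ICC = 1.0
offset  = [[10, 12], [20, 22], [30, 32]]   # constant rater bias -> ICC < 1.0
```

Note that a constant between-rater offset lowers ICC(2,1), because absolute agreement penalizes systematic bias, not just inconsistency.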
- Abstract
- 10.1016/j.ijrobp.2021.07.623
- Oct 22, 2021
- International Journal of Radiation Oncology*Biology*Physics
3D Virtual Reality Volumetric Imaging Review in Cancer Patients’ Understanding and Education of their Disease and Treatment
- Research Article
- 10.1007/s10278-024-01048-3
- Jun 3, 2024
- Journal of imaging informatics in medicine
The aim of this study was to validate a novel medical virtual reality (VR) platform used for medical image segmentation and contouring in radiation oncology and 3D anatomical modeling and simulation for planning medical interventions, including surgery. The first step of the validation was to verify quantitatively and qualitatively that the VR platform can produce substantially equivalent 3D anatomical models, image contours, and measurements to those generated with existing commercial platforms. To achieve this, a total of eight image sets and 18 structures were segmented using both VR and reference commercial platforms. The image sets were chosen to cover a broad range of scanner manufacturers, modalities, and voxel dimensions. The second step consisted of evaluating whether the VR platform could provide efficiency improvements for target delineation in radiation oncology planning. To assess this, the image sets for five pediatric patients with resected standard-risk medulloblastoma were used to contour target volumes in support of treatment planning of craniospinal irradiation, requiring complete inclusion of the entire cerebral-spinal volume. Structures generated in the VR and the commercial platforms were found to have a high degree of similarity, with Dice similarity coefficients ranging from 0.963 to 0.985 for high-resolution images and 0.920 to 0.990 for lower resolution images. Volume, cross-sectional area, and length measurements were also found to be in agreement with reference values derived from a commercial system, with length measurements having a maximum difference of 0.22 mm, angle measurements having a maximum difference of 0.04°, and cross-sectional area measurements having a maximum difference of 0.16 mm². The VR platform was also found to yield significant efficiency improvements, reducing the time required to delineate complex cranial and spinal target volumes by an average of 50%, or 29 min.
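The Dice similarity coefficient used for the contour comparison above is simply 2|A∩B| / (|A| + |B|) over the two voxel sets; a minimal sketch (the voxel sets are invented for illustration):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel (or pixel) sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0   # convention: two empty contours agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical voxel indices for one structure contoured on two platforms.
vr_contour  = {(x, y, 0) for x in range(10) for y in range(10)}      # 100 voxels
ref_contour = {(x, y, 0) for x in range(1, 10) for y in range(10)}   # 90 voxels
dsc = dice(vr_contour, ref_contour)   # 2*90 / (100+90), about 0.947
```

A coefficient of 1.0 means identical contours; values above roughly 0.95, as reported for the high-resolution images, indicate near-complete overlap.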
- Research Article
- 10.1002/mdc3.13961
- Jan 4, 2024
- Movement Disorders Clinical Practice
Background: Motor symptoms in functional motor disorders (FMDs) refer to involuntary, but learned, altered movement patterns associated with aberrant self-focus, sense of agency, and beliefs/expectations. These conditions commonly lead to impaired postural control, raising the likelihood of falls and disability. By utilizing visual and cognitive tasks to manipulate attentional focus, virtual reality (VR) integrated with posturography is a promising tool for exploring postural control disorders. Objectives: To investigate whether postural control can be adapted by manipulating attentional focus in a 3D immersive VR environment. Methods: We compared postural parameters in 17 patients with FMDs and 19 age-matched healthy controls over a single session under four increasingly complex and attention-demanding conditions: a simple fixation task (1) in the real room and (2) in a 3D VR room-like condition; and a complex fixation task in a 3D VR city-like condition (3) avoiding distractors and (4) counting them. The dual-task effect (DTE) measured the relative change in performance induced by the different attention-demanding conditions on postural parameters. Results: Patients showed reduced sway area and mediolateral center-of-pressure displacement velocity DTE compared to controls (all, P < 0.049), but only under condition 4. They also showed a significant reduction in the sway area DTE under condition 4 compared to condition 3 (P = 0.025). Conclusions: This study provides novel preliminary evidence for the value of a 3D immersive VR environment combined with different attention-demanding conditions in adapting postural control in patients with FMDs. As supported by quantitative and objective posturographic measures, our findings may inform interventions to explore FMD pathophysiology.
- Research Article
- 10.1093/ehjdh/ztaa011
- Nov 1, 2020
- European Heart Journal. Digital Health
Aims: Increased complexity in cardiac surgery over the last decades necessitates more precise preoperative planning to minimize operating time, to limit the risk of complications during surgery, and to aim for the best possible patient outcome. Novel, more realistic, and more immersive techniques, such as three-dimensional (3D) virtual reality (VR), could potentially contribute to the preoperative planning phase. This study reports our initial experience with the implementation of immersive VR technology as a complementary, research-based imaging tool for preoperative planning in cardiothoracic surgery. In addition, the essentials of setting up and implementing a VR platform are described. Methods: Six patients who underwent cardiac surgery at the Erasmus Medical Center, Rotterdam, The Netherlands, between March 2020 and August 2020 were included, based on request by the surgeon and availability of computed tomography images. After 3D VR rendering and 3D segmentation of specific structures, the reconstruction was analysed via a head-mounted display. All participating surgeons (n = 5) filled out a questionnaire to evaluate the use of VR as a preoperative planning tool for surgery. Conclusion: Our study demonstrates that immersive 3D VR visualization of anatomy might be beneficial as a supplementary preoperative planning tool for cardiothoracic surgery, and further research on this topic may be considered to implement this innovative tool in daily clinical practice. Lay summary: Over the past decades, surgery on the heart and vessels has become more and more complex, necessitating more precise and accurate preoperative planning. Nowadays, operative planning is performed on flat, two-dimensional computer screens, requiring a lot of spatial and three-dimensional (3D) thinking from the surgeon.
Since immersive 3D virtual reality (VR) is an upcoming imaging technique with promising results in other fields of surgery, we aimed in this study to explore the additional value of this technique in heart surgery. Our surgeons planned six different heart operations by visualizing computed tomography scans with a dedicated VR headset, enabling them to visualize the patient's anatomy in an immersive, 3D environment. The outcomes of this preliminary study are positive, offering a much more realistic simulation for the surgeon. As such, VR could potentially be beneficial as a preoperative planning tool for complex heart surgery.