Abstract

Interactive systems are increasingly used in medical applications with the widespread availability of various imaging modalities. Gesture-based interfaces can be beneficial for interacting with these kinds of systems in a variety of settings, as they can be easier to learn and can eliminate several shortcomings of traditional tactile systems, especially for surgical applications. We conducted two user studies that explore different gesture-based interfaces for interaction with volume visualizations. The first experiment focused on rotation tasks, where the performance of the gesture-based interface (using the Microsoft Kinect) was compared to using the mouse. The second experiment studied localization of internal structures, comparing slice-based visualizations controlled via gestures and the mouse, in addition to a 3D Magic Lens visualization. The results of the user studies showed that the gesture-based interface outperformed the traditional mouse in both time and accuracy in the orientation-matching task. The traditional mouse was the superior interface for the second experiment in terms of accuracy. However, the gesture-based Magic Lens interface was found to have the fastest target-localization time. We discuss these findings and their further implications for the use of gesture-based interfaces in medical volume visualization, as well as the possible underlying psychological mechanisms that may explain why these methods can outperform traditional interaction methods.
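The paper does not reproduce its implementation here, so the following is only a minimal, hypothetical sketch of how a two-handed Kinect gesture could be mapped to an incremental volume rotation for an orientation-matching task. The function name, the "steering wheel" style mapping, and the assumption that left/right hand positions are already available from the skeletal tracker are illustrative assumptions, not the authors' actual interface.

```python
import numpy as np

def hand_axis_rotation(prev_left, prev_right, cur_left, cur_right):
    """Incremental 3x3 rotation aligning the previous left->right hand vector
    with the current one (two-handed "steering wheel" style rotation).

    Hand positions are 3D points (e.g. in meters, as a skeletal tracker such
    as the Kinect might report them); acquiring them is outside this sketch.
    """
    v_prev = np.asarray(prev_right, float) - np.asarray(prev_left, float)
    v_cur  = np.asarray(cur_right, float)  - np.asarray(cur_left, float)
    v_prev /= np.linalg.norm(v_prev)
    v_cur  /= np.linalg.norm(v_cur)

    axis = np.cross(v_prev, v_cur)
    s = np.linalg.norm(axis)          # sin(angle) between the two hand vectors
    c = float(np.dot(v_prev, v_cur))  # cos(angle)
    if s < 1e-6:                      # hands barely moved (or degenerate case)
        return np.eye(3)
    axis /= s
    angle = np.arctan2(s, c)

    # Rodrigues' formula: R = I + sin(a) * K + (1 - cos(a)) * K^2
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Example: accumulate the volume's orientation frame by frame as the hands move.
orientation = np.eye(3)
orientation = hand_axis_rotation((0, 0, 1), (0.4, 0, 1),
                                 (0, 0, 1), (0.4, 0.1, 1)) @ orientation
```

In a real interface this per-frame rotation would typically be smoothed and scaled (a gain factor) before being applied to the rendered volume, but those details vary by implementation.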

Highlights

  • The quality of health care depends crucially on the ease and success with which physicians are able to construct accurate mental representations from one or more registered 3D imaging displays on traditional computer monitors [1, 2]

  • The informal feedback from the users was very positive, with many users spontaneously remarking, without being prompted by the experiment administrators, that the Kinect interfaces were interesting and fun to use

  • The K2HR was mostly preferred by the users, with 11 out of 15 (69%) indicating they thought K2HR was easier to use than TMR

Introduction

The quality of health care depends crucially on the ease and success with which physicians are able to construct accurate mental representations from one or more registered 3D imaging displays on traditional computer monitors [1, 2]. Perceiving the depth relationships among three-dimensional objects on a 2D screen presents a major challenge in the field of medical visualization. Among the issues of such visualizations are volume occlusion and the ambiguity (or absence) of depth cues. To help alleviate these problems, various manipulation tools have been created and tested. The use of real-world objects, referred to as "props", virtual objects guided by the user [6], and controllable animations [7] has been proven effective for perceiving the anatomy and the depth relationships between objects. Even though these interaction methods have been shown to be accurate and helpful in volume visualizations, they present challenges when introduced into surgical systems used in the operating room (OR).
