Abstract

Introduction/Background: Laparoscopic surgery, when performed by a well-trained surgeon, is a remarkably effective procedure that minimizes the complications associated with large incisions, operative blood loss, and post-operative pain. However, the procedure is more challenging than conventional surgery because of restricted vision, hand-eye coordination problems, limited working space, and lack of tactile sensation. These issues make laparoscopic surgery a very difficult technique for medical students and residents to master. To minimize the potential risks inherent in laparoscopic procedures, effective and innovative training and guidance methods are needed. The focus of this abstract is a technology that assists in computer-guided minimally invasive surgical training. Sophisticated training methods using guided force, or navigation assistance, have been studied in areas such as physical therapy for stroke victims and the teaching of Japanese calligraphy, and have been found to be beneficial, especially in conjunction with visual feedback.1-3 Navigational guidance is also offered in neurosurgery and orthopedic surgery, for example through the SpineAssist system by Mazor.4

Methods: We are motivated by the belief that automation-based, guided laparoscopic skills training will lead to improved performance and, ultimately, better surgical outcomes. We have developed a computer-aided surgical trainer (CAST) for laparoscopic skill training.5-7 The CAST prototype is a unique mechatronic (mechanical and electronic) device that provides an unlimited range of training exercises using real surgical instruments, together with precise performance data collection and analysis tools. The device is well suited to fill the gap between very expensive virtual reality trainers and relatively crude, low-cost box trainers. Computer-based optimal training-path guidance and vision- and sensor-based performance data acquisition are capabilities that set CAST apart from currently available systems.

Results: Haptic feedback is provided by connecting a small robotic manipulator to the surgical instruments; the manipulator exerts force and torque on the instruments. The CAST software has two components: the haptic guidance system and the visual guidance system. The haptic guidance controller is implemented with a PID method that provides haptic feedback by applying force when the trainee deviates from the optimal trajectory.8 The controller uses the actual and reference positions of the instrument tip, where the actual position is measured by encoders and the reference position is supplied by the optimal path generator module. The actual position is also used by the visual guidance system, which applies augmented reality (AR) techniques to the training scenario.9 We map real-world coordinates to the image using camera calibration based on a linear least squares technique.10 Combined with the haptic guidance algorithm, this enables us to provide force and visual guidance to the trainees simultaneously. The guidance includes displaying the estimated optimal path and on-screen instructions that show the trainees what to do and what not to do. Guided training will be validated through a pilot experimental study in which the expertise level of computer-guided trainees will be compared with that of instructor-guided trainees. Illustrative sketches of the guidance controller and the calibration step are given below.
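As an illustration of the haptic guidance principle, the following minimal Python sketch computes a corrective force from the deviation between the reference tip position (from the optimal path generator) and the actual tip position (from the encoders). The class name, gains, and sampling time are hypothetical and are not taken from the CAST implementation.

import numpy as np

# Minimal PID guidance sketch: the corrective force grows with the deviation
# of the instrument tip from the optimal trajectory. Gains are illustrative.
class PIDGuidance:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=0.001):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = np.zeros(3)
        self.prev_error = np.zeros(3)

    def corrective_force(self, reference, actual):
        """Return a 3D force that pushes the tip back toward the reference path."""
        error = np.asarray(reference, dtype=float) - np.asarray(actual, dtype=float)
        self.integral += error * self.dt                   # accumulated deviation
        derivative = (error - self.prev_error) / self.dt   # rate of change of deviation
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example call for one control cycle (positions in metres, hypothetical values):
controller = PIDGuidance()
force = controller.corrective_force(reference=[0.10, 0.05, 0.02],
                                    actual=[0.11, 0.04, 0.02])

In the trainer, a force of this kind would be sent to the manipulator attached to the instrument, so that little or no force is felt while the trainee follows the optimal path and a corrective force is applied as the deviation grows.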
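The mapping from real-world to image coordinates can likewise be sketched with the standard direct linear transformation formulation of linear least squares camera calibration; the function names and point correspondences below are hypothetical, and the sketch is not the CAST code.

import numpy as np

def calibrate_projection(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P with [u, v, 1]^T ~ P [X, Y, Z, 1]^T.

    Requires at least six non-coplanar 3D-2D point correspondences.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Homogeneous least squares: the right singular vector with the smallest
    # singular value minimizes ||A p|| subject to ||p|| = 1.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, world_pt):
    """Project a 3D point into the image, e.g. to overlay the optimal path."""
    u, v, w = P @ np.append(np.asarray(world_pt, dtype=float), 1.0)
    return u / w, v / w

With the estimated projection matrix, points sampled along the computed optimal path can be projected onto the camera image to draw the AR overlay shown to the trainee.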
The system will also be tested in an experimental study comparing hand-eye coordination and depth perception results obtained with the augmented reality trainer versus other systems.

Conclusion: The potential impact of our proposed work is improved surgical performance and, ultimately, better surgical outcomes. The optimal surgical movement planner and its attendant computer-based guidance will reinforce proper execution of specific tasks.
