Abstract

The Kinect camera has been a great success in gaming, but, more importantly, it has opened up new fields for researchers. The depth data streamed from the Kinect in real time requires new haptic algorithms. By developing a novel haptic rendering algorithm, we have shown the feasibility of using the Kinect in telerobotic applications such as robotic surgery. Providing the surgeon with a sense of touch could improve telerobotic surgery. One way to obtain data for such haptic rendering is from sensors in the robot or on its end-effectors (i.e., the surgical tools), but the latter raises issues of calibration as well as sterilization. An alternative approach is to use depth-camera information for haptic rendering. RGB-D cameras like the Kinect offer one way to do this, provided the device can be suitably miniaturized. Developing this algorithm into a practical tool for robotic surgeons requires a number of improvements and extensions. One concern is the presence of noise and shadows in the depth data, especially since shadows are cast by surgical tools positioned between the Kinect and the tissue being acted upon by those tools. A second challenge is to enhance the resolution of the depth data, since surgical operations take place at a relatively small spatial scale. A third concern is the need to register the virtual 3-D environment with the real environment. Finally, there is a need to capture the haptic properties of tissues so that they can be rendered realistically with the appropriate stiffness.
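
The abstract does not detail the rendering algorithm itself; the following Python sketch only illustrates the general idea of haptic rendering from depth-camera data. It back-projects a depth image to a 3-D point cloud with a pinhole model and returns a simple spring force proportional to how far a tool tip has penetrated past the nearest surface point. The camera intrinsics, the stiffness constant, and all function names here are illustrative assumptions, not the authors' method.

# Illustrative sketch (not the paper's algorithm): haptic force from a depth image.
# Intrinsics and stiffness are placeholder values for a Kinect-like camera.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics
K_STIFFNESS = 400.0                            # N/m, placeholder tissue stiffness

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Back-project an HxW depth image (meters) to an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0                              # zero depth = shadow / no return
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

def haptic_force(tool_tip: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Spring force pushing the tool tip back toward the nearest surface point."""
    diffs = points - tool_tip
    nearest = points[np.argmin(np.einsum('ij,ij->i', diffs, diffs))]
    # Penetration test: tool tip deeper (larger z) than the surface it touches.
    penetration = tool_tip[2] - nearest[2]
    if penetration <= 0:
        return np.zeros(3)                     # free space: no force rendered
    direction = nearest - tool_tip
    direction /= np.linalg.norm(direction) + 1e-9
    return K_STIFFNESS * penetration * direction

# Example: flat surface 0.5 m from the camera, tool tip pushed 5 mm into it.
depth = np.full((480, 640), 0.5)
tip = np.array([0.0, 0.0, 0.505])
print(haptic_force(tip, depth_to_points(depth)))  # ~2 N back toward the camera

Note that zero-depth pixels are simply discarded here as shadow or no-return regions; these are exactly the areas, created by tools between the camera and the tissue, where the noise and shadow concerns raised above would demand more sophisticated handling.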
