Abstract

Deformable objects and surfaces are ubiquitous in daily human life, from the garments of fashion to soft tissues within the body. Through this routine interaction with soft materials, humans become adept at manipulating deformable objects while avoiding irreversible damage. The dexterity and care involved are largely facilitated by a combination of the haptic sense of touch and visual observation of object deformation [1]. While such manipulation seems trivially intuitive, it becomes significantly more difficult when both 3D depth perception and haptic sensing are unavailable. This deprived state closely resembles the conditions encountered in many robot-assisted minimally invasive surgeries, where unintentional tissue damage can occur due to the lack of force feedback and fine 3D visibility [2]. One approach to mitigating these issues combines real-time dynamic 3D reconstruction with vision-based force estimation for haptic feedback. Toward that end, this work continues a series of studies on multi-camera 3D reconstruction of dynamic surgical cavities. Previous work introduced a novel approach to camera grouping and pair sequencing [3]. This paper builds on that work by introducing a method for non-rigid, sparse point cloud registration and subsequent point classification. In particular, to enable deformation and force analyses, surfaces are locally classified into three categories: static, shifting, and deforming. The topics addressed here remain open challenges and active research directions [4], and this work provides a step toward real-time 3D reconstruction and force feedback in robot-assisted surgery.
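To make the three-way classification concrete, the following is a minimal, hypothetical sketch of how points matched by a non-rigid registration might be labeled as static, shifting, or deforming. The function name, the displacement and rigidity thresholds, and the use of a local rigid (Kabsch) fit to separate coherent shifts from true deformation are all illustrative assumptions, not the method actually proposed in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def classify_points(src, dst, k=10, move_tol=0.5, rigid_tol=0.3):
    """Label each registered point 'static', 'shifting', or 'deforming'.

    src, dst : (N, 3) arrays of corresponding point positions before and
               after non-rigid registration (units assumed: millimetres).
    move_tol : displacement below which a point is considered static.
    rigid_tol: mean residual of a local rigid fit above which motion is
               treated as deformation rather than a coherent shift.
    All thresholds here are illustrative assumptions.
    """
    disp = np.linalg.norm(dst - src, axis=1)
    labels = np.full(len(src), "static", dtype=object)
    tree = cKDTree(src)
    for i in np.flatnonzero(disp >= move_tol):
        # Fit a rigid rotation (Kabsch) to the point's local neighbourhood;
        # a small residual means the patch moved as a rigid whole.
        _, idx = tree.query(src[i], k=k)
        p = src[idx] - src[idx].mean(axis=0)
        q = dst[idx] - dst[idx].mean(axis=0)
        u, _, vt = np.linalg.svd(p.T @ q)
        d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T      # rotation aligning p to q
        residual = np.linalg.norm(q - p @ r.T, axis=1).mean()
        labels[i] = "shifting" if residual < rigid_tol else "deforming"
    return labels
```

In a pipeline like the one the abstract describes, per-point labels of this kind would feed the downstream deformation and force analyses; only points marked "deforming" would need to be passed to a force-estimation model.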
