Abstract

Recent studies in surgical robotics have focused on automating common surgical subtasks such as grasping and manipulation using deep reinforcement learning (DRL). In this work, we consider surgical endoscopic camera control for object tracking, e.g., using the endoscopic camera manipulator (ECM) of the da Vinci Research Kit (dVRK; Intuitive Inc., Sunnyvale, CA, USA), as a typical surgical robot learning task. A DRL policy for controlling the robot's joint-space movements is first trained in a simulation environment and then continues learning in the real world. To speed up training and avoid significant failures (in this case, losing view of the object), human interventions are incorporated into the training process, and standard DRL is combined with generative adversarial imitation learning (GAIL) to encourage the policy to imitate human behavior. Experiments show that an average reward of 159.8 is achieved within 1000 steps, compared with only 121.8 without human interventions, and the view of the moving object is lost only twice across 3 training trials. These results show that human interventions can improve learning speed and significantly reduce failures during training.
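
For readers unfamiliar with how a GAIL term can be folded into a DRL objective, the sketch below illustrates one common way to do it: human interventions are logged as expert (state, action) pairs, a discriminator is trained to tell them apart from the policy's own transitions, and its output is converted into an auxiliary reward that is added to the environment's tracking reward. This is a minimal, generic sketch, not the paper's implementation; the network sizes, the mixing weight `lam`, and the names `Discriminator`, `imitation_reward`, and `combined_reward` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class Discriminator(nn.Module):
    """Scores (state, action) pairs: trained to output 1 for policy samples
    and 0 for human-intervention (expert) samples."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


def imitation_reward(disc: Discriminator, state, action, eps: float = 1e-8):
    """GAIL-style surrogate reward: the policy is rewarded when the
    discriminator mistakes its (state, action) pairs for human ones."""
    with torch.no_grad():
        d = torch.sigmoid(disc(state, action))
    return -torch.log(d + eps)


def combined_reward(env_reward, disc, state, action, lam: float = 0.5):
    """Blend the task (tracking) reward with the imitation reward so the
    policy both keeps the target in view and mimics human corrections."""
    return env_reward + lam * imitation_reward(disc, state, action)
```

In practice the discriminator would be updated alongside the policy, using the recorded human interventions as the expert batch and recent policy rollouts as the negative batch.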
