Abstract

In surgical navigation, pre-operative organ models are presented to surgeons during the intervention to help them efficiently locate their target. In the case of soft tissue, these models must be deformed and adapted to the current situation using intra-operative sensor data. A promising way to achieve this is with real-time-capable biomechanical models. We train a fully convolutional neural network to estimate the displacement field of all points inside an organ, given only the displacement of part of the organ's surface. The network is trained entirely on synthetic data generated from random organ-like meshes, which allows us to use far more data than would otherwise be available. The input and output data are discretized onto a regular grid, letting us fully exploit convolutional operators and train and infer in a highly parallelized manner. The system is evaluated on in-silico liver models, phantom liver data, and human in-vivo breathing data. We test performance with varying material parameters, organ shapes, and amounts of visible surface. Even though the network is trained only on synthetic data, it adapts well to these varied cases and gives a good estimate of the internal organ displacement. Inference runs at over 50 frames per second. We present a novel method for training a data-driven, real-time-capable deformation model. Its accuracy is comparable to that of other registration methods; it adapts well to previously unseen organs and does not need to be re-trained for each patient. The high inference speed makes the method useful for many applications, such as surgical navigation and real-time simulation.
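
The abstract only specifies that a fully convolutional network maps partially observed surface displacements, discretized onto a regular grid, to a dense displacement field. The PyTorch sketch below illustrates one plausible setup; it is not the authors' architecture. The 64³ grid resolution, the four input channels (three displacement components plus a visibility mask marking voxels with observed motion), the layer count, and the channel width are all illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's): a 3D fully
# convolutional network mapping a voxel grid of partially observed surface
# displacements to a dense displacement field for the whole organ.
import torch
import torch.nn as nn

class DisplacementFCN(nn.Module):
    def __init__(self, width: int = 32):
        super().__init__()
        # Input: 3 displacement components + 1 visibility mask per voxel.
        # Output: 3 displacement components for every voxel in the grid.
        self.net = nn.Sequential(
            nn.Conv3d(4, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(width, 3, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Inference on an (assumed) 64^3 grid: voxels on the visible part of the
# surface hold the measured displacement; all others are zero and are
# distinguished by the mask channel.
grid = 64
surface_disp = torch.zeros(1, 3, grid, grid, grid)  # observed displacements
mask = torch.zeros(1, 1, grid, grid, grid)          # 1 where displacement is known

model = DisplacementFCN()
with torch.no_grad():
    dense_field = model(torch.cat([surface_disp, mask], dim=1))
print(dense_field.shape)  # torch.Size([1, 3, 64, 64, 64])
```

Because every layer is convolutional, one forward pass over the whole grid is embarrassingly parallel on a GPU, which is consistent with the reported inference rate of over 50 frames per second.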
