Abstract

Introduction: The goal of facial transplantation is to restore form and function to patients with devastating or disfiguring injuries or defects of the face. To date, 20 face transplants have been performed around the world. Aesthetic and functional outcomes are difficult to optimize in these three-dimensionally complex procedures. Clearly defining and understanding the complex tissue deficits and defects that accompany devastating facial injuries, such as electrical burns, blast wounds, and accidental trauma, is critical both for technical success and for objectively analyzing the return of function after face transplantation. The current state of the art in face transplantation imaging includes multimodality imaging (3D CT, MRI, angiography, and plain radiography) for surgical planning and 3D plaster or plastic modeling with stereolithography. Conventional imaging modalities, however, reside on separate software platforms that do not support real-time user interaction and modification. Our goal was to develop a novel technique for integrating multiple sophisticated modalities, ranging from MRI to 3D CT to tractography, into a single 3D representation that enables imaging of skeletal, soft tissue, and neurovascular structures and facilitates planning and assessing outcomes of human face transplantation.

Methods: The craniofacial skeleton is represented as a polygonal model generated by thresholding and "stacking" DICOM images from CT scans so that it may serve as a framework. A skin model is generated by the same approach; alternatively, surface scanning with photo-realistic texturing may be employed to generate a skin mesh. Muscles can be extracted from the same DICOM dataset as the bone data (if CT fails to capture muscle detail sufficiently, MRI datasets may be utilized): key slices are manually segmented by outlining the structures of interest in 2D, and 3D meshes are built by lofting between the 2D planes. If data quality permits, blood vessels relevant to the model can be extracted by thresholding DICOM images from a CT angiogram; if the dataset is not of sufficient quality for this approach, key slices can be imported into a 3D package and models manually segmented as above. Nerves are modeled as non-uniform rational B-splines (NURBS) based on tractography data.

Results: Intuitive, multiplanar, volume-rendered data of 3D relational anatomy (skin, muscle, vessel, nerve, and bone) from once-disparate and unwieldy CT, surface scan, CT angiography, MRI, and tractography data have been integrated to produce detailed 3D anatomical polygonal meshes compatible with real-time end-user manipulation and modification.

Conclusions: For the first time, we have devised a technique that fuses distinct donor and recipient imaging data derived from multiple conventional and cutting-edge imaging modalities into a single 3D representation compatible with seamless user interaction and visualization. Such 3D modeling offers critical insight into surface topography and the relational anatomy between different facial tissues in donors and recipients. Procedural planning may be enhanced by allowing virtual preoperative interaction with skeletal, soft tissue, and neurovascular anatomy. Combining information from multiple imaging modalities could also optimize patient selection and sequential monitoring of functional outcomes after face transplantation.
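The surface-extraction step described in the Methods (thresholding and "stacking" CT DICOM slices into a polygonal bone or skin mesh) can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it assumes a CT series readable with pydicom, uses scikit-image's marching cubes for isosurfacing, and the file path and Hounsfield-unit threshold are placeholder values.

    # Minimal sketch: build a polygonal bone mesh by thresholding a stacked CT
    # DICOM series, then extracting an isosurface with marching cubes.
    # The file path, spacing handling, and HU threshold are illustrative.
    import glob
    import numpy as np
    import pydicom
    from skimage.measure import marching_cubes

    # Load and sort the CT series into a single 3D volume ("stacking" the slices).
    slices = [pydicom.dcmread(f) for f in glob.glob("ct_series/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)

    # Convert stored values to Hounsfield units using the DICOM rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    volume = volume * slope + intercept

    # Voxel spacing (z, y, x) so the mesh is expressed in millimetres.
    dz = abs(float(slices[1].ImagePositionPatient[2]) -
             float(slices[0].ImagePositionPatient[2]))
    dy, dx = (float(v) for v in slices[0].PixelSpacing)

    # Threshold at ~300 HU (illustrative value for cortical bone) and extract a
    # triangle mesh; a skin mesh could use a lower threshold on the same volume.
    verts, faces, normals, _ = marching_cubes(volume, level=300.0,
                                              spacing=(dz, dy, dx))
    print(f"bone mesh: {len(verts)} vertices, {len(faces)} triangles")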

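The manual segmentation and lofting step used for muscles can be illustrated in a similar spirit. The sketch below assumes that two contours traced on adjacent slices have already been resampled to the same number of points with a consistent winding direction; it bridges them into a triangle strip, which is the basic operation repeated slice by slice to build a 3D mesh. The contour coordinates are hypothetical.

    # Minimal sketch of the "lofting" step: two manually traced contours of a
    # muscle on adjacent slices are bridged with triangles. Assumes both
    # contours have the same point count and winding; coordinates are made up.
    import numpy as np

    def loft(contour_a, contour_b):
        """Bridge two closed 3D contours (each N x 3) with a triangle strip."""
        n = len(contour_a)
        verts = np.vstack([contour_a, contour_b])
        faces = []
        for i in range(n):
            j = (i + 1) % n                  # wrap around the closed contour
            faces.append([i, j, n + i])      # lower triangle of the quad
            faces.append([j, n + j, n + i])  # upper triangle of the quad
        return verts, np.array(faces)

    # Two hypothetical traced contours, one per slice (z = 0 mm and z = 3 mm).
    theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    slice0 = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta), np.zeros(16)])
    slice1 = np.column_stack([12 * np.cos(theta), 11 * np.sin(theta), np.full(16, 3.0)])

    verts, faces = loft(slice0, slice1)
    print(verts.shape, faces.shape)  # (32, 3) vertices, (32, 3) triangle indices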
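For the nerve models, the abstract describes NURBS curves derived from tractography data. As an illustrative stand-in (the original work presumably builds the NURBS curves in a 3D modelling package), a cubic B-spline can be fitted through points sampled along a tractography streamline using SciPy; the coordinates below are hypothetical.

    # Minimal sketch: represent a nerve path as a smooth parametric curve fitted
    # through points from a tractography streamline. A cubic B-spline is used
    # here purely as a stand-in for the NURBS curves described in the abstract.
    import numpy as np
    from scipy.interpolate import splprep, splev

    # Hypothetical streamline points (x, y, z) in scanner millimetres.
    points = np.array([
        [12.0, 40.0, -5.0],
        [14.5, 38.2, -2.1],
        [17.0, 35.9,  1.4],
        [20.2, 33.0,  4.8],
        [24.0, 30.5,  8.0],
    ])

    # Fit a cubic B-spline through the points and resample it densely so the
    # curve can be exported as a nerve centreline into the 3D scene.
    tck, _ = splprep([points[:, 0], points[:, 1], points[:, 2]], s=0.0, k=3)
    u = np.linspace(0.0, 1.0, 100)
    nerve_curve = np.column_stack(splev(u, tck))
    print(nerve_curve.shape)  # (100, 3) resampled nerve centreline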