Abstract
One application of medical ultrasound imaging is visualizing and characterizing human tongue shape and motion in real time to study healthy or impaired speech production. Because ultrasound images are low-contrast and noisy, users need knowledge of tongue anatomy and ultrasound data interpretation to recognize tongue positions and gestures. Moreover, quantitative analysis of tongue motion requires the tongue contour to be extracted, tracked, and visualized, rather than the whole tongue region. Manual tongue contour extraction is a cumbersome, subjective, and error-prone task, and it is not feasible for real-time applications in which the tongue contour moves rapidly with nuanced gestures. This paper presents two new deep neural networks (named BowNet models) that combine the global prediction ability of encoder-decoder fully convolutional neural networks with the full-resolution feature extraction of dilated convolutions. Qualitative and quantitative studies on datasets from two ultrasound machines demonstrate the strong performance of the proposed deep learning models in terms of speed and robustness. Experimental results also reveal a significant improvement in the accuracy of prediction maps, owing to the proposed networks' better exploration and exploitation of image features.
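The abstract's motivation for dilated convolutions is that they enlarge the network's spatial context without downsampling, so full-resolution prediction maps are preserved. As a general illustration (not the BowNet architecture itself, whose details are not given here), the following sketch computes the receptive field of a stack of stride-1 dilated convolutions; the function name and the specific dilation schedule are illustrative assumptions.

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 dilated convolutions.

    rf = 1 + sum over layers of (kernel_size - 1) * dilation
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Four plain 3x3 conv layers (dilation 1) grow the field linearly:
print(receptive_field(3, [1, 1, 1, 1]))  # 9

# Exponentially increasing dilations grow it much faster,
# while the feature map keeps its full spatial resolution:
print(receptive_field(3, [1, 2, 4, 8]))  # 31
```

This is why dilated convolutions pair well with encoder-decoder networks: the encoder's pooling supplies global context at the cost of resolution, while dilation supplies wide context at full resolution.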