Abstract

Early diagnosis of neurodevelopmental impairments in preterm infants is currently based on the visual analysis of newborns' motion patterns by trained operators. To help automate this time-consuming and qualitative procedure, we propose a sustainable deep-learning algorithm for accurate limb-pose estimation from depth images. The algorithm consists of a convolutional neural network (TwinEDA) relying on architectural blocks that require limited computation while ensuring high prediction performance. To verify its low computational cost and assess its suitability for edge computing, TwinEDA was additionally deployed on a cost-effective single-board computer. The network was validated on a dataset of 27,000 depth video frames collected during actual clinical practice from 27 preterm infants. Compared to its main state-of-the-art competitor, TwinEDA is twice as fast at predicting a single depth frame and four times lighter in memory footprint, while performing similarly in terms of Dice similarity coefficient (0.88). This result suggests that the pursuit of efficiency need not come at the expense of performance. This work is among the first to propose an automatic and sustainable limb-position estimation approach for preterm infants, and represents a significant step towards the development of broadly accessible clinical monitoring applications.
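The abstract reports segmentation quality as a Dice similarity coefficient of 0.88. As an illustration only (the paper's exact evaluation protocol is not given in the abstract), a minimal NumPy sketch of the standard Dice computation, DSC = 2|A ∩ B| / (|A| + |B|), on binary limb masks could look like:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2*|A & B| / (|A| + |B|). `eps` guards against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: predicted vs. reference limb mask on a 4x4 grid
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
ref = np.array([[1, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])
print(round(dice_coefficient(pred, ref), 3))  # 2*3 / (4+3) ≈ 0.857
```

The function name and epsilon smoothing are assumptions for this sketch; per-limb masks in the actual pipeline would be extracted from the network's prediction maps before scoring.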
