Abstract
Despite advances in patient safety, a significant number of errors still occur in Operating Suites (OS). Improving medical decision-making, and the resulting quality of care, requires monitoring and understanding the workflow and interactions of medical activities. Although some existing strategies employ various sensor devices, they do not focus on generating complete workflow information: they merely combine different data sources into a final output that lacks at least one element of the workflow. To tackle this challenge, this paper presents ▪, a distributed architecture model for sensor data acquisition and processing. ▪'s main contribution lies in its multi-sensor data fusion algorithms, which extract a computational representation of activities in surgical procedures. In addition, ▪ is flexible enough to accommodate different deployment configurations, combining depth cameras, Ultra-Wideband (UWB) positioning systems, and deep learning-based human pose estimation (HPE) mechanisms. The workflow monitoring mechanism was deployed in an actual hybrid OS for an extensive evaluation of the proposal. Our experiments demonstrate that the architecture can capture the information required to monitor the surgical workflow. In particular, the proposed HPE methodology accurately detects the poses of medical staff members, with a maximum error of 5 cm.
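The abstract does not specify how the multi-sensor fusion works; the sketch below is one plausible illustration, not the paper's algorithm. It assumes hypothetical inputs, per-frame UWB tag positions and HPE skeleton root joints already expressed in a shared room coordinate frame, and shows a common fusion pattern: nearest-neighbour association of the two detection sets followed by an inverse-variance weighted average of matched positions.

```python
# Illustrative sketch only -- NOT the paper's method. Assumes UWB and HPE
# detections share a room-level coordinate frame (metres) and that each
# source has a known positional standard deviation.

from dataclasses import dataclass
import math


@dataclass
class Detection:
    source: str   # "uwb" or "hpe"
    x: float      # metres, room frame
    y: float
    std: float    # assumed per-source positional std deviation (m)


def fuse(uwb: list[Detection], hpe: list[Detection],
         gate: float = 0.5) -> list[tuple[float, float]]:
    """Match each UWB tag to the closest HPE skeleton root within
    `gate` metres; return precision-weighted fused (x, y) positions."""
    fused = []
    unmatched = list(hpe)
    for tag in uwb:
        best, best_d = None, gate
        for skel in unmatched:
            d = math.hypot(tag.x - skel.x, tag.y - skel.y)
            if d < best_d:
                best, best_d = skel, d
        if best is None:
            fused.append((tag.x, tag.y))  # UWB-only detection
            continue
        unmatched.remove(best)
        # Inverse-variance weighting: the lower-noise source dominates.
        w_t, w_s = 1 / tag.std ** 2, 1 / best.std ** 2
        fused.append(((w_t * tag.x + w_s * best.x) / (w_t + w_s),
                      (w_t * tag.y + w_s * best.y) / (w_t + w_s)))
    fused.extend((s.x, s.y) for s in unmatched)  # HPE-only detections
    return fused


if __name__ == "__main__":
    uwb = [Detection("uwb", 1.02, 2.10, 0.10)]
    hpe = [Detection("hpe", 0.98, 2.05, 0.05)]
    print(fuse(uwb, hpe))  # fused estimate sits closer to the lower-std HPE
```

Under this inverse-variance scheme, a source reported with a 5 cm standard deviation, such as the HPE error bound stated in the abstract, would contribute four times the weight of a 10 cm UWB fix; the actual weighting, gating, and joint-level fusion used in the paper may differ.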