Abstract
Purpose
Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcome challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data. The proposed methods, however, are designed for controlled conditions in terms of visibility and position of the patient, which limits their practical applicability. In this work, we aim to decouple accurate weight estimation from such specific conditions by predicting the weight of covered patients from voxelized point cloud data.
Methods
We propose a novel deep learning framework comprising two 3D CNN modules that solve the given task in two separate steps. First, we train a 3D U-Net to virtually uncover the patient, i.e. to predict the patient’s volumetric surface without a cover. Second, the patient’s weight is predicted from this 3D volume by means of a 3D CNN architecture optimized for weight regression.
Results
We evaluate our approach on a lying pose dataset (SLP) under two different cover conditions. The proposed framework considerably improves on the baseline model by up to 16% and reduces the gap between the accuracy of weight estimates for covered and uncovered patients by up to 52%.
Conclusion
We present a novel pipeline to estimate the weight of patients who are covered by a blanket. Our approach relaxes the specific conditions that previous contactless methods required for accurate weight estimates and thus constitutes an important step towards fully automatic weight estimation in clinical practice.
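To give a concrete picture of the two-stage design described under Methods, the following is a minimal sketch, assuming PyTorch, a 64×64×64 binary occupancy grid as input, and illustrative channel widths that are not taken from the paper: a small 3D U-Net predicts the uncovered volume, and a separate 3D CNN regresses a scalar weight from it.

```python
# Minimal sketch of the two-stage pipeline from the abstract.
# Assumptions (not from the paper): PyTorch, a 64^3 occupancy grid input,
# and illustrative layer widths; the actual architectures differ.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3D convolutions with ReLU, used on both U-Net paths."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class UncoverUNet3D(nn.Module):
    """Stage 1: predict the uncovered volumetric surface from the covered one."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv3d(16, 1, kernel_size=1)  # occupancy logits

    def forward(self, x):
        e1 = self.enc1(x)                   # (B, 16, 64, 64, 64)
        e2 = self.enc2(self.pool(e1))       # (B, 32, 32, 32, 32)
        b = self.bottleneck(self.pool(e2))  # (B, 64, 16, 16, 16)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)                 # uncovered volume (logits)


class WeightRegressor3D(nn.Module):
    """Stage 2: regress a scalar body weight from the uncovered volume."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))  # scalar weight estimate


if __name__ == "__main__":
    covered = torch.rand(2, 1, 64, 64, 64)               # voxelized covered patients
    uncovered = torch.sigmoid(UncoverUNet3D()(covered))   # stage 1: virtual uncovering
    weight = WeightRegressor3D()(uncovered)               # stage 2: weight regression
    print(weight.shape)                                   # torch.Size([2, 1])
```

The split into two modules mirrors the abstract's description of solving the task in two separate steps, with the uncovering network trained before the regressor consumes its output.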
Highlights
Medical treatments often require the precise knowledge of a patient’s body weight, e.g. for patient-adapted drug dosing
We aim to bring vision-based weight assessment closer to this level and address point cloud-based weight estimation of patients who are covered by a blanket
To further investigate the effect of occlusions by a blanket on weight estimation performance, we learn weight estimation of both covered and uncovered patients with a PointNet architecture [31] and the basis point sets (BPS)-based fully connected network from [4], which operate on raw point cloud data instead of a voxel-based representation
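As a rough illustration of the BPS baseline mentioned in the last highlight, the sketch below encodes a point cloud against a fixed set of basis points and feeds the resulting fixed-size vector into a small fully connected regressor. The basis size (512), sampling range, and layer widths are assumptions for illustration, not the configuration used in [4].

```python
# Hedged sketch of a basis point set (BPS) encoding followed by a small
# fully connected weight regressor. All sizes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
BASIS = rng.uniform(-1.0, 1.0, size=(512, 3)).astype(np.float32)  # fixed basis points


def bps_encode(points: np.ndarray) -> np.ndarray:
    """Encode an (N, 3) point cloud as the distance from each basis point
    to its nearest neighbour in the cloud, yielding a fixed-size vector."""
    # (512, N) pairwise distances, then the minimum along the cloud axis
    dists = np.linalg.norm(BASIS[:, None, :] - points[None, :, :], axis=-1)
    return dists.min(axis=1)


class BPSWeightMLP(nn.Module):
    """Fully connected regressor on top of the 512-dim BPS encoding."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1),  # scalar weight estimate
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    cloud = rng.uniform(-1.0, 1.0, size=(2048, 3)).astype(np.float32)  # toy point cloud
    feats = torch.from_numpy(bps_encode(cloud)).unsqueeze(0)           # (1, 512)
    print(BPSWeightMLP()(feats).shape)                                  # torch.Size([1, 1])
```

Unlike the voxel-based pipeline, this encoding operates directly on the raw point cloud, which is what makes it a useful point of comparison for studying the effect of blanket occlusions.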
Summary
Medical treatments often require precise knowledge of a patient’s body weight, e.g. for patient-adapted drug dosing, yet a straightforward assessment of the patient’s weight is often impossible. Pfitzner et al. [28] and our prior work [4] already demonstrated that machine learning-based methods are capable of deriving precise weight estimates of lying patients on the basis of point cloud data. Point clouds carry rich geometric information while preserving the patient’s data privacy [34] and are thus well suited to the given problem. While these methods predict weight estimates with promising accuracy, they share a critical drawback: they are designed for and evaluated under highly controlled conditions. Patients are expected to be uncovered and in supine position in [4] and even need to take a specific pose in [28].
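Both the point cloud-based prior methods discussed above and the proposed voxel-based pipeline start from the same depth-sensor data; a voxelization step such as the one sketched below, which is a hypothetical illustration rather than the paper's preprocessing, turns a raw point cloud into the binary occupancy grid that a 3D CNN consumes. Grid resolution and normalization are assumed values.

```python
# Hedged sketch: convert a raw point cloud into a binary occupancy grid.
# Grid size and bounds are illustrative assumptions, not values from the paper.
import numpy as np


def voxelize(points: np.ndarray, grid_size: int = 64) -> np.ndarray:
    """Map an (N, 3) point cloud to a (grid_size,)*3 binary occupancy grid."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # Normalize points into [0, 1] per axis, guarding against degenerate extents.
    scale = np.where(maxs > mins, maxs - mins, 1.0)
    idx = ((points - mins) / scale * (grid_size - 1)).astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0  # mark occupied voxels
    return grid


if __name__ == "__main__":
    cloud = np.random.rand(5000, 3) * [2.0, 0.6, 0.4]  # toy lying-patient extent in metres
    print(voxelize(cloud).sum())  # number of occupied voxels
```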