Abstract

Wearable devices with embedded sensors can provide personalized healthcare and wellness benefits through digital phenotyping and adaptive interventions. However, the collection, storage, and transmission of biometric data from these devices (including processed features, not just raw signals) pose significant privacy concerns. This quantitative, data-driven study examines the privacy risks associated with wearable-based digital phenotyping, with a focus on user reidentification (ReID), i.e., recovering participants' identities from deidentified digital phenotyping datasets. We propose a machine-learning-based computational pipeline that evaluates and quantifies model outcomes under various configurations, such as modality inclusion, window length, and feature type and format, to investigate the factors influencing ReID risk and the associated predictive trade-offs. Applied to features extracted from three wearable sensors, the pipeline achieves up to 68.43% ReID accuracy on a sample of N=45 socially anxious participants using only descriptive features of 10-second observations. Additionally, we explore the trade-offs between privacy risks and predictive benefits by adjusting various settings (e.g., how extracted features are processed). Our findings highlight the importance of privacy in digital phenotyping and suggest directions for future work.
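
The paper's pipeline is not reproduced here, but a minimal sketch may help make the ReID evaluation concrete: descriptive features are computed over fixed-length sensor windows, and a classifier is scored on how well it recovers subject IDs from those features. The window length, the summary statistics, and the random-forest classifier below are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a ReID risk evaluation. The feature set, window
# length, and classifier are illustrative choices, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def descriptive_features(window: np.ndarray) -> np.ndarray:
    """Per-channel summary statistics for one (samples x channels) window."""
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        window.min(axis=0),
        window.max(axis=0),
    ])

def windows(signal: np.ndarray, length: int):
    """Split a (samples x channels) signal into non-overlapping windows."""
    for start in range(0, len(signal) - length + 1, length):
        yield signal[start:start + length]

def reid_accuracy(signals, subject_ids, window_len):
    """Estimate how well subject IDs can be recovered from window features.

    signals:     list of (samples x channels) arrays, one per recording
    subject_ids: the (supposedly deidentified) subject ID per recording
    Higher cross-validated accuracy implies higher ReID risk.
    """
    X, y = [], []
    for sig, subject in zip(signals, subject_ids):
        for w in windows(sig, window_len):
            X.append(descriptive_features(w))
            y.append(subject)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, np.array(X), np.array(y), cv=5).mean()
```

Under this framing, the configurations studied in the paper (which modalities to include, how long a window to use, how features are processed) correspond to varying the inputs of `reid_accuracy` and observing how the recovered-identity accuracy changes.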
