Background: Obstructive sleep apnea is a sleep disorder linked to many health complications and can be lethal in its severe form. Overnight polysomnography is the gold standard for diagnosing apnea, but it is expensive, time-consuming, and requires manual analysis by a sleep expert. An artificial intelligence (AI)-embedded wearable device, as a portable and less intrusive monitoring system, is a highly desirable alternative to polysomnography. However, AI models often require substantial storage capacity and computational power for edge inference, which makes them challenging to deploy on hardware with memory and power constraints. Methods: This study demonstrates the implementation of depthwise separable convolution (DSC) as a resource-efficient alternative to spatial convolution (SC) for real-time detection of apneic activity. Single-lead electrocardiogram (ECG) and oxygen saturation (SpO2) signals were acquired from the PhysioNet databank. For each type of convolution, three models were developed: one using ECG, one using SpO2, and one fusing both. Results: For both types of convolution, the fusion models outperformed the models built on individual signals across all performance metrics. Although the SC-based fusion model performed best, the DSC-based fusion model was 9.4, 1.85, and 11.3 times more energy efficient than the SC-based ECG, SpO2, and fusion models, respectively. Furthermore, the accuracy, precision, and specificity yielded by the DSC-based fusion model were comparable to those of the SC-based individual models (~95%, ~94%, and ~94%, respectively). Conclusions: DSC is commonly used in mobile vision tasks, but its potential in clinical applications for 1-D signals remains unexplored. While SC-based models outperform DSC-based models in accuracy, the DSC-based model offers a more energy-efficient solution with acceptable performance, making it well suited for AI-embedded apnea detection systems.
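The efficiency gain of DSC over SC comes from splitting one dense convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mixer. A minimal sketch of the parameter-count arithmetic for a 1-D layer, with illustrative channel and kernel sizes not taken from the paper:

```python
def sc1d_params(c_in: int, c_out: int, k: int) -> int:
    """Standard (spatial) 1-D convolution: one k-wide filter
    per (input channel, output channel) pair, plus biases."""
    return c_in * c_out * k + c_out

def dsc1d_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise separable 1-D convolution."""
    depthwise = c_in * k + c_in          # one k-wide filter per input channel
    pointwise = c_in * c_out * 1 + c_out  # 1x1 conv mixing channels
    return depthwise + pointwise

# Hypothetical layer sizes for a 1-D ECG feature extractor (illustrative only).
c_in, c_out, k = 64, 128, 9
sc = sc1d_params(c_in, c_out, k)    # 73856 parameters
dsc = dsc1d_params(c_in, c_out, k)  # 8960 parameters
print(f"SC: {sc}, DSC: {dsc}, reduction: {sc / dsc:.1f}x")
```

For these sizes the DSC layer uses roughly an eighth of the parameters of the SC layer; the same factoring also reduces multiply-accumulate operations, which is the source of the energy savings the abstract reports.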