Abstract

Glucose forecasting serves as a backbone for several healthcare applications, including real-time insulin dosing in people with diabetes and physical activity optimization. This paper presents a study on the use of machine learning (ML) and deep learning (DL) methods for predicting glucose variability (GV) in individuals using open-source automated insulin delivery (AID) systems. A three-stage experimental framework is employed to systematically implement and evaluate ML/DL methods on a large-scale diabetes dataset collected from individuals with open-source AID systems. The first stage involves data collection; the second, data preparation and exploratory analysis; and the third, developing, fine-tuning, and evaluating the ML/DL models. The performance and resource costs of the models are evaluated alongside relative and proportional errors for 17 GV metrics. Evaluation of the fine-tuned ML/DL models shows considerable accuracy in glucose forecasting and variability analysis up to 48 h in advance. The average mean absolute error (MAE) ranges from 2.50 mg/dL for long short-term memory (LSTM) models to 4.94 mg/dL for autoregressive integrated moving average (ARIMA) models, and the root-mean-square error (RMSE) ranges from 3.7 mg/dL for LSTM to 7.67 mg/dL for ARIMA. Model execution time is proportional to the amount of training data, with LSTM models having the lowest execution time but the highest memory consumption compared to the other models. This work incorporates appropriate programming frameworks, concurrency-enhancing tools, and resource and storage cost estimators to encourage the sustainable use of ML/DL in real-world AID systems.
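
To make the reported error metrics concrete, the following is a minimal sketch (not taken from the paper) of how MAE and RMSE could be computed for a 48 h glucose forecast at 5-minute sampling; the arrays `cgm_actual` and `cgm_forecast` are hypothetical placeholders for held-out sensor readings and model predictions (e.g., from an LSTM or ARIMA forecaster).

```python
import numpy as np

# Hypothetical example: 48 h of CGM data at 5-minute intervals (576 samples), in mg/dL.
# In practice, cgm_actual would be held-out sensor glucose values and cgm_forecast
# the corresponding model predictions.
rng = np.random.default_rng(0)
cgm_actual = 120 + 30 * np.sin(np.linspace(0, 6 * np.pi, 576))       # synthetic glucose trace
cgm_forecast = cgm_actual + rng.normal(0, 4, size=cgm_actual.shape)  # forecast with noise

mae = np.mean(np.abs(cgm_forecast - cgm_actual))           # mean absolute error
rmse = np.sqrt(np.mean((cgm_forecast - cgm_actual) ** 2))  # root-mean-square error

print(f"MAE:  {mae:.2f} mg/dL")
print(f"RMSE: {rmse:.2f} mg/dL")
```

In this setup, MAE weights all deviations equally while RMSE penalizes large excursions more heavily, which is why the two metrics are reported together for glucose forecasting.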
