Abstract

This paper proposes new machine-learning models to improve the calculation of mealtime insulin boluses (MIB) in type 1 diabetes (T1D) therapy using continuous glucose monitoring (CGM) data. MIB is still often computed with the standard formula (SF), which does not account for the glucose rate-of-change (ΔG) and can therefore cause critical hypo-/hyperglycemic episodes. Four candidate models for MIB calculation, based on multiple linear regression (MLR) and the least absolute shrinkage and selection operator (LASSO), are developed. The proposed models are assessed in silico, using the UVa/Padova T1D simulator, in different mealtime scenarios, and compared to the SF and to three ΔG-accounting variants proposed in the literature. An assessment on real data, retrospectively analyzing 218 glycemic traces, is also performed. All four tested models outperform the existing techniques, with LASSO regression on an extended feature set including quadratic terms (LASSO Q) producing the best results. In silico, LASSO Q reduces the error in estimating the optimal bolus to 0.86 U (vs. 1.45 U for the SF and 1.36–1.44 U for the literature methods) and lowers hypoglycemia incidence from 44.41% (SF) and 44.60–45.01% (literature methods) to 35.93%. These results are confirmed by the retrospective application to real data. In conclusion, new models that improve MIB calculation by accounting for CGM-measured ΔG and other easy-to-measure features can be developed within a machine-learning framework. In particular, the proposed LASSO Q model ensures better glycemic control than the SF and the other literature methods, and MIB dosing with LASSO Q can potentially reduce the risk of adverse events in T1D therapy.
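To make the LASSO Q idea concrete, the sketch below shows how a LASSO regressor over a quadratically expanded feature set could be built with scikit-learn. This is a minimal illustration, not the authors' implementation: the feature names (carbohydrate content, mealtime glucose, CGM rate-of-change ΔG, carbohydrate ratio, correction factor) are assumptions inspired by the easy-to-measure quantities the abstract mentions, and the numeric values, target boluses, and regularization strength are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Hypothetical feature matrix: one row per meal event, with
# easy-to-measure features such as carbohydrate content (CHO, g),
# mealtime glucose (G, mg/dL), CGM rate-of-change (dG, mg/dL/min),
# carbohydrate ratio (CR, g/U) and correction factor (CF, mg/dL/U).
X = np.array([
    [60.0, 150.0,  1.2, 10.0, 40.0],
    [45.0, 110.0, -0.8, 12.0, 45.0],
    [80.0, 180.0,  2.0,  9.0, 35.0],
    [30.0, 130.0,  0.1, 11.0, 50.0],
])
# Target: the "optimal" bolus (U) for each meal, which in the paper is
# identified in silico via the UVa/Padova simulator (values here are
# made up for illustration).
y = np.array([7.5, 3.2, 11.0, 2.4])

# LASSO Q sketch: quadratic expansion of the feature set followed by
# L1-regularized linear regression, whose sparsity-inducing penalty
# shrinks uninformative (e.g., higher-order) terms toward zero.
lasso_q = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    StandardScaler(),
    Lasso(alpha=0.1),
)
lasso_q.fit(X, y)

# Predict the mealtime bolus for a new meal event.
new_meal = np.array([[55.0, 160.0, 1.5, 10.0, 40.0]])
print(f"Suggested bolus: {lasso_q.predict(new_meal)[0]:.2f} U")
```

For contrast, the SF this family of models aims to improve is commonly stated as CHO/CR + (G − Gt)/CF − IOB (carbohydrate term, correction toward target glucose Gt, minus insulin on board); the ΔG feature is exactly what that formula omits.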
