When Andrew Jay Drexler was born, patients with diabetes who wished to monitor their glucose control had to check their urine by using a flame and Benedict's solution and “hope for blue.”1 Being able to dip a stick into urine and estimate the amount of glucose in it already represented marked progress. By the time of his untimely death, several million patients worldwide were routinely monitoring their glucose levels with personal continuous glucose monitoring (CGM) devices. There is nothing more pleasing to the heart of a diabetologist these days than walking into an exam room and being asked by a patient whether it is okay to change their prandial insulin dose (or meal composition) on their own because they saw a postprandial glucose level above their target. How did this remarkable progress happen, and what is the role of CGM in the management of our patients with diabetes today? Early on, as clinicians attempted to better control patients' glucose levels (initially to relieve symptoms of uncontrolled hyperglycemia, later to limit the dreaded long-term diabetic complications), it became apparent that the traditional way of titrating insulin doses based on the degree of glycosuria was suboptimal. First, for a urine dipstick to register the presence of glucose, the blood glucose concentration needed to exceed the renal threshold for glucose excretion, typically around 180 mg/dL (10 mmol/L)—and that was before we knew that this threshold is increased in patients with diabetes, perhaps to as high as 240 mg/dL (13.3 mmol/L)—thus missing significant time spent in hyperglycemia. Second, the amount of glucose in the urine depended on how long that urine had been sitting in the patient's bladder, guaranteeing that any reaction to hyperglycemia came too late. Last, and significantly, urine testing entirely lacked the ability to detect, much less predict, episodes of hypoglycemia, the most feared aspect of insulin therapy. Fast forward several decades.
Advancements in technology in the 1960s brought the ability to detect glucose in small blood samples and the advent of capillary glucose test strips giving semiquantitative blood glucose measurements, followed in the 1970s by the introduction of blood glucose meters, first with analog and then with digital readouts. Self-monitoring of blood glucose with ever-improving consumer versions of these meters became routine and part of the standards of diabetes management, not only for those on insulin therapy but also for those on other therapeutic regimens.2, 3 Even though this manner of glucose monitoring represented a major advancement over urine testing, gaps remained. Results, obtained by painful finger sticks, were valid only for the moment of sampling. Missed were all the glycemic excursions occurring whenever glucose was not checked, whether overnight, after meals, at bedtime, or at other points critical for designing optimal and safe intensive insulin regimens. In addition, these static results lacked the ability to reveal trends; that is, a momentary glucose concentration does not inform the patient or clinician whether glucose is rising, declining, or stable. Enter the idea of CGM to overcome these gaps. Before a practical implementation could be created, however, a question needed to be answered: would it be safe to keep an indwelling catheter in a vascular space, or was there an alternate way to obtain glucose samples continuously without the risk of systemic infection?
The relationship between blood glucose levels and diabetic (at least microvascular) complications had been cemented by the publications of the DCCT (Diabetes Control and Complications Trial; for type 1 diabetes) and the UKPDS (UK Prospective Diabetes Study; for type 2 diabetes) data in the 1990s.4, 5 The question remained, however, whether glucose obtained from the subcutaneous space would generate data that tracked blood/plasma concentrations.6 Once that relationship was established (yes, it exists, but a physiologic delay is present, since it takes some time for glucose to move from the intravascular to the interstitial space; this delay is exaggerated after meals), it was time to construct sensors, transmitters, and receivers usable in clinical practice. This all might sound easy, but it took an enormous amount of technological know-how, time, and capital to generate a user-friendly technology. The first generation of CGM sensors, transmitters, and receivers was used in a professional setting, as a diagnostic tool. These devices typically collected data for only up to 72 hours and were owned by medical practices.7 Still, valuable retrospective data were generated which, in turn, could be used to adjust patients' insulin regimens. In the meantime, questions arose as to the usefulness of CGM data for the practical management of diabetes. And, as is customary, the issue was raised of who would pay for the additional expense of obtaining CGM results, especially if patients still had to bear the burden of finger sticks (both the discomfort and the cost).
Consumer versions of CGM devices were developed for personal use by several companies (Medtronic, Dexcom, Abbott FreeStyle), allowing longer periods (7-14 days) of monitoring with a single sensor.8 Further, studies demonstrated that new generations of sensors produced data deemed accurate enough by regulators to be used for adjusting insulin doses.9 Some companies then went on to precalibrate their sensors during the manufacturing process, (at least theoretically) eliminating the need for finger sticks. Implantable sensors (Eversense) were introduced into clinical practice; these had to be placed subcutaneously by an endocrinologist but could remain in place for 90 to 180 days.10 These developments have led to our current situation, in which the integration of CGM devices with insulin pumps allows automation of insulin delivery enabled by a controller. Several such systems (MiniMed 670G, t:slim X2 Control-IQ) are already in wide use, with more awaiting regulatory approval or finishing their pivotal trials before entering this burgeoning market.11 All of this is, understandably, exciting news for both patients and clinicians. After decades of promises, the dream of automated insulin delivery without the fear of hypoglycemia is entering the realm of reality. However, as expected, there are skeptics asking critical and logical questions: Are there actually tangible benefits to this technology? If so, which ones? Should they be measured by numbers, such as achievement of target A1c or average glucose levels or the amount of time spent in hypoglycemia, or by “patient-reported outcomes,” such as quality of sleep, productivity, satisfaction with diabetes management, or happiness with personal and family life? And, if there are objective and measurable benefits, what are their economic impacts? How would one measure them? In actual added costs? In “quality-adjusted” costs?
And who will make these decisions? Health economists, patients, clinicians, or payers? If the costs are deemed favorable, who would make the best candidates for this technology? There are clearly dozens of additional, similarly critical questions being asked by experts that still await answers. What is the current consensus? By now, some clear benefits of CGM have emerged. First, having alerts and alarms on receivers (either dedicated devices or smartphones) and being able to share CGM data with others has made patients and clinicians more comfortable pursuing more aggressive overnight glycemic control. Once patients, significant others, or parents see that the much-feared nocturnal hypoglycemia either does not happen or can be detected as it approaches, they are more willing to adjust basal insulin doses. Integrated systems, of course, then minimize or entirely eliminate nocturnal hypoglycemia automatically. Second, CGM devices (either “real-time” or “intermittently scanned”) that show glycemic trends have allowed properly educated patients to head off hypoglycemic or hyperglycemic excursions before professional attention is needed. Quantitation of the personal stress, telephone calls, ambulance runs, emergency room visits, lost productivity, and so on thus avoided still needs to take place. In the long run, however, for this technology to be widely adopted and paid for, more tangible benefits need to be demonstrated to the ultimate payers. Critically, an understanding had to be reached as to what actually constitutes a target for good glucose control as determined by CGM; fasting and 1- or 2-hour postprandial plasma glucose and A1c targets have long been well accepted by the community, but CGM required its own. First steps toward that goal have been taken. What are these? A consensus meeting of practitioners, researchers, and patients convened in February 2019 and issued recommendations endorsed by leading professional and voluntary associations.
These recommendations were formally presented at the 2019 annual scientific meeting of the American Diabetes Association and published in August 2019.12 Briefly, it was recommended that, for most patients, the critical component of desirable glycemic targets should be time in target range (defined as 70-180 mg/dL or 3.9-10 mmol/L) exceeding 70% of readings collected over at least 10 to 14 days, while minimizing time below range (less than 4%, of which <1% should be below the threshold for clinically significant hypoglycemia, 54 mg/dL or 3 mmol/L) and time above range (less than 25%, of which <5% should be very hyperglycemic, above 250 mg/dL or 13.9 mmol/L). Separate recommendations were made for frail and pregnant patients. While that was a necessary step, do these recommendations bear a relevant relationship to diabetic complications or other long-term clinical outcomes? Importantly, there is a relationship between the accepted measure of A1c and time in range. Further, the FDA (Food and Drug Administration) has accepted a CGM-derived “glucose management indicator” as a measure of average glycemic exposure derived from adequate CGM data.13 Baby steps have been taken to assess the critical association between time in range and diabetic complications. Beck and colleagues reexamined the time in range determined from capillary blood glucose measurements during the DCCT and demonstrated a significant association with retinopathy and microalbuminuria in patients with type 1 diabetes.14 Next, a relationship between diabetic retinopathy and time in range, obtained from 3-day CGM data collection, was documented in a study of patients with type 2 diabetes.15 Such analyses will be critical for the future acceptance of this technology as a relevant determinant of glucose management. Currently, most experts agree that CGM represents the standard of care for individuals on intensive insulin regimens.
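For readers who work with raw CGM exports, the consensus percentages above are straightforward to compute once the glucose values are in hand. The following is a minimal sketch, not any device vendor's API; the function name and the example data are hypothetical, and readings are assumed to be evenly spaced values already extracted in mg/dL.

```python
# Sketch of the 2019 consensus CGM metrics: time in range 70-180 mg/dL
# should exceed 70%, time below range should be under 4% (with <1% below
# 54 mg/dL), and time above range under 25% (with <5% above 250 mg/dL).

def cgm_metrics(readings_mg_dl):
    """Return percent of readings in, below, and above the target range."""
    n = len(readings_mg_dl)
    if n == 0:
        raise ValueError("no CGM readings supplied")

    def pct(count):
        return 100.0 * count / n

    return {
        "time_in_range":    pct(sum(70 <= g <= 180 for g in readings_mg_dl)),
        "time_below_range": pct(sum(g < 70 for g in readings_mg_dl)),
        "time_below_54":    pct(sum(g < 54 for g in readings_mg_dl)),
        "time_above_range": pct(sum(g > 180 for g in readings_mg_dl)),
        "time_above_250":   pct(sum(g > 250 for g in readings_mg_dl)),
    }

# Hypothetical day of 288 five-minute readings: mostly in range,
# one brief low and one moderate high excursion.
example = [110] * 250 + [60] * 10 + [200] * 28
metrics = cgm_metrics(example)
print(metrics)
```

With evenly spaced readings, percent of readings is a reasonable proxy for percent of time; real exports with gaps would need the sampling interval taken into account.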
One could conclude that the ability of CGM devices to predict and detect hypoglycemia should also extend their incorporation into the care of persons on any therapy that includes hypoglycemia-inducing agents (ie, any insulin or sulfonylureas). Whether future research demonstrates tangible benefits for patients with diabetes on any other therapeutic regimen (including lifestyle modification), or even for people with prediabetes, remains to be seen. One can only wish that Andy were still with us to be a part of this amazing journey to improve the lives of all of those with diabetes! None declared. Receiving research funding from Medtronic.