The level of continuous glucose monitoring (CGM) accuracy needed for insulin dosing using sensor values (i.e., the level of accuracy permitting non-adjunct CGM use) is a topic of ongoing debate. Assessing this level in clinical experiments is virtually impossible because the magnitude of CGM errors cannot be manipulated and related prospectively to clinical outcomes. A combination of archival data (parallel CGM, insulin pump, self-monitoring of blood glucose [SMBG] records, and meal records for 56 pump users with type 1 diabetes) and in silico experiments was used to "replay" real-life treatment scenarios and relate sensor error to glycemic outcomes. Nominal blood glucose (BG) traces were extracted using a mathematical model, yielding 2,082 BG segments, each initiated by an insulin bolus and confirmed by SMBG. These segments were replayed at seven sensor accuracy levels (mean absolute relative differences [MARDs] of 3-22%) across six scenarios: insulin dosing using sensor values, threshold alarms, and predictive alarms, each with and without consideration of CGM trend arrows. In all six scenarios, the occurrence of hypoglycemia (frequency of BG levels ≤50 mg/dL and ≤39 mg/dL) increased with sensor error, displaying an abrupt slope change at MARD = 10%. Similarly, hyperglycemia (frequency of BG levels ≥250 mg/dL and ≥400 mg/dL) increased with sensor error and displayed an abrupt slope change at MARD = 10%. When added to insulin dosing decisions, information from CGM trend arrows, threshold alarms, and predictive alarms improved average glycemia by 1.86, 8.17, and 8.88 mg/dL, respectively. Using CGM for insulin dosing decisions is feasible below a certain level of sensor error, estimated in silico at MARD = 10%. In our experiments, further accuracy improvement did not contribute substantively to better glycemic outcomes.
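Because the analysis is framed around MARD and the frequency of BG excursions beyond fixed thresholds, the sketch below illustrates how these two quantities could be computed for a paired sensor/reference trace. It is a minimal illustration on synthetic data with hypothetical function names (mard, excursion_frequencies); it is not the study's replay procedure or metabolic model.

```python
import numpy as np

def mard(cgm, ref):
    """Mean absolute relative difference (%) between paired CGM and reference BG values."""
    cgm = np.asarray(cgm, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return 100.0 * np.mean(np.abs(cgm - ref) / ref)

def excursion_frequencies(bg, low=50.0, high=250.0):
    """Fraction of BG readings at or below `low` and at or above `high` (mg/dL)."""
    bg = np.asarray(bg, dtype=float)
    return float(np.mean(bg <= low)), float(np.mean(bg >= high))

# Illustrative use with synthetic data (not study data):
rng = np.random.default_rng(0)
true_bg = rng.uniform(60.0, 300.0, size=1000)              # hypothetical reference BG trace
sensor_bg = true_bg * (1.0 + rng.normal(0.0, 0.10, 1000))  # ~10% multiplicative sensor error

print(f"MARD = {mard(sensor_bg, true_bg):.1f}%")
print("hypo/hyper fractions:", excursion_frequencies(true_bg))
```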