We compared the quality control efficiency of artificial intelligence-based patient-based real-time quality control (AI-PBRTQC) with that of traditional PBRTQC to create favorable conditions for the broader application of PBRTQC in clinical laboratories. In the present study, five months of patient results for total thyroxine (TT4), anti-Müllerian hormone (AMH), alanine aminotransferase (ALT), total cholesterol (TC), urea, and albumin (ALB) were divided into two groups: an AI-PBRTQC group and a traditional PBRTQC group. In the traditional PBRTQC group, truncation ranges were estimated by the Box-Cox transformation method, whereas in the AI-PBRTQC group the PBRTQC software platform selected the truncation ranges intelligently. We developed validation models incorporating different weighting factors, denoted λ. Error detection rate, false positive rate, false negative rate, average number of patient samples until error detection, and area under the curve were used to identify the optimal PBRTQC model. By analyzing quality risk cases, this study provides evidence of the effectiveness of AI-PBRTQC in identifying quality risks. The optimal parameter settings for PBRTQC were TT4 (78-186), λ = 0.03; AMH (0.02-2.96), λ = 0.02; ALT (10-25), λ = 0.02; TC (2.84-5.87), λ = 0.02; urea (3.5-6.6), λ = 0.02; ALB (43-52), λ = 0.05. The AI-PBRTQC group identified quality risks more efficiently than traditional PBRTQC and could do so even with a small number of samples. AI-PBRTQC can be used to detect quality risks for both biochemistry and immunoassay analytes, including risks arising from reagent calibration, reagent onboard time, and reagent brand changes.
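The weighting factor λ suggests an exponentially weighted moving average (EWMA) model, a common choice for PBRTQC. The sketch below illustrates the general scheme under stated assumptions: patient results outside the truncation range are excluded, remaining results update an EWMA, and an alarm is raised when the EWMA crosses a control limit. The truncation range and λ for TT4 come from the abstract; the target mean, control limits, and the `pbrtqc_ewma` function itself are hypothetical illustrations, not the paper's validated implementation.

```python
# Minimal PBRTQC sketch (assumed EWMA form): truncate patient results,
# then track an exponentially weighted moving average against fixed limits.

def pbrtqc_ewma(results, lower, upper, lam, target, lcl, ucl):
    """Return indices of patient results at which the EWMA breaches a limit."""
    ewma = target                      # initialize at the assumed stable mean
    alarms = []
    for i, x in enumerate(results):
        if not (lower <= x <= upper):
            continue                   # truncation: exclude out-of-range results
        ewma = lam * x + (1 - lam) * ewma
        if ewma < lcl or ewma > ucl:
            alarms.append(i)
    return alarms

# Illustrative run using the TT4 truncation range (78-186) and lambda = 0.03
# from the abstract; the target (127) and limits (119, 135) are hypothetical.
stable = [120, 130, 125, 128, 132, 127]
shifted = [160, 165, 170, 168, 172, 175, 178, 180]
print(pbrtqc_ewma(stable + shifted, 78, 186, 0.03, 127, 119, 135))  # → [12, 13]
```

Note how the small λ makes the EWMA respond slowly: the simulated upward shift starting at index 6 is not flagged until index 12, which is the trade-off that the average number of patient samples until error detection metric quantifies.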