Abstract

Introduction/Objective

The Direct Antiglobulin Test (DAT) is a useful screening test to determine whether a patient’s red blood cells have been sensitized to immunoglobulin or complement. Like most screening tests, the DAT trades specificity for sensitivity in order to assess quickly for hemolysis and its dangerous sequelae. Because of the DAT’s high sensitivity, improper ordering of the test can confuse the clinical picture at best and result in misdiagnosis at worst. Pre-tests for ruling out disease, such as the Pulmonary Embolism Rule-out Criteria (“PERC rule”) for limiting the use of D-dimers, have proven effective in curbing test ordering. A similar pre-test could prove useful for DATs. We use “in silico testing” (testing via computer simulation) to predict the likelihood of any DAT being positive, given common patient attributes and laboratory values.

Methods

A three-layer deep-learning artificial neural network (ANN) was created using Python and the machine learning framework Keras. The ANN was compiled to maximize specificity while retaining 100% sensitivity. Input variables to the model were patient sex and age, along with the most recent laboratory values for hemoglobin, hematocrit, white blood cell count, platelet count, total bilirubin, direct bilirubin, and haptoglobin, where available. The output of the ANN was a binary variable: “ruled out” versus “further testing necessary.” The ANN was trained on all positive (n=30) and negative (n=63) DATs performed from November 2019 through March 2020, for a total of 93 patients. A 10-fold cross-validation of the entire dataset was used to measure performance.

Results

The ANN achieved 83% specificity while retaining 100% sensitivity. A ROC curve of the model shows performance well above the no-discrimination line.
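As an illustration of the evaluation scheme described in Methods, a 10-fold split of the 93-patient dataset can be sketched in plain Python. This is a generic sketch of k-fold partitioning, not the study’s actual fold assignment; the seed and the shuffling scheme are illustrative assumptions.

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle indices 0..n-1 and deal them into k nearly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

# 93 patients, 10 folds: each fold serves once as the held-out test set.
folds = k_fold_indices(93, k=10)
for test_idx in folds:
    train_idx = [j for fold in folds if fold is not test_idx for j in fold]
    # fit the ANN on train_idx, then measure sensitivity and
    # specificity on the held-out test_idx
```

Averaging the held-out sensitivity and specificity across the ten folds yields the cross-validated performance estimate reported in Results.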
Conclusion

“In silico testing” can accurately screen for the likelihood of a positive result on a labor- and time-intensive test, such as the DAT, before the actual test is performed. This has the potential to reduce unnecessary testing if validated in clinical practice. Analogous to their clinical counterpart, the PERC rule, computer models that maximize specificity while retaining 100% sensitivity could achieve more effective test utilization and more informative results.
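The rule-out objective described above (maximize specificity subject to 100% sensitivity) amounts to choosing a decision threshold no higher than the lowest score the model assigns to any true-positive case, so that no positive DAT is ever “ruled out.” A minimal sketch of that threshold selection, using made-up model scores rather than the study’s data:

```python
def rule_out_threshold(scores, labels):
    """Largest threshold t such that every positive case scores >= t,
    guaranteeing 100% sensitivity; cases scoring below t are ruled out."""
    positive_scores = [s for s, y in zip(scores, labels) if y == 1]
    return min(positive_scores)

def specificity(scores, labels, t):
    """Fraction of negative cases correctly ruled out at threshold t."""
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    return sum(1 for s in negatives if s < t) / len(negatives)

# Toy example (hypothetical scores, not the study's data):
scores = [0.9, 0.7, 0.65, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,    1,   0,   0,   0,   0]
t = rule_out_threshold(scores, labels)  # lowest positive score
spec = specificity(scores, labels, t)   # specificity achieved at t
```

Lowering the threshold any further would rule out fewer negatives (reducing specificity) for no gain; raising it would miss a true positive, which the design forbids.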
