The aim of the present study was to train a natural language processing (NLP) model to recognize key text elements in research abstracts related to hand surgery, improving the efficiency of systematic review screening. A sample of 1600 abstracts from a systematic review of distal radial fracture treatment outcomes was annotated to train the NLP model. To assess time-saving potential, 200 abstracts were processed by the trained models in two experiments in which reviewers consulted the NLP predictions when including or excluding articles. The NLP model achieved an overall accuracy of 0.91 in recognizing key text elements and excelled at identifying study interventions. Use of the NLP predictions reduced mean screening time by 31% without compromising accuracy. Precision varied, improving in the second experiment, indicating context-dependent performance. These findings suggest that NLP models can streamline abstract screening in systematic reviews by effectively identifying original research and extracting relevant text elements.
Level of evidence: IV.