Abstract

We evaluate the performance of multiple text classification methods for automating the screening of article abstracts for relevance to a topic of interest. The aim is to develop a system that can first be trained on a set of manually screened article abstracts and then used to identify additional articles on the same topic. Here the focus is on articles related to the topic "artificial intelligence in nursing". Eight text classification methods are tested, along with two simple ensemble systems. The results indicate that it is feasible to use text classification technology to support the manual screening of article abstracts when conducting a literature review. The best results are achieved by an ensemble system, which reaches an F1-score of 0.41, with a sensitivity of 0.54 and a specificity of 0.96. Directions for future work are discussed.
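The three reported metrics are all derived from the confusion matrix of a binary screening decision (relevant vs. not relevant). The sketch below shows how they relate; the counts used are hypothetical, chosen only to illustrate the formulas, and are not the counts from this study.

```python
# F1-score, sensitivity, and specificity for a binary abstract-screening task.
# All counts (tp, fp, tn, fn) below are hypothetical illustration values.

def screening_metrics(tp: int, fp: int, tn: int, fn: int):
    """Return (f1, sensitivity, specificity) from confusion-matrix counts."""
    precision = tp / (tp + fp)            # kept abstracts that were relevant
    sensitivity = tp / (tp + fn)          # recall: relevant abstracts kept
    specificity = tn / (tn + fp)          # irrelevant abstracts excluded
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return f1, sensitivity, specificity

# Hypothetical screening outcome: 50 relevant and 1000 irrelevant abstracts.
f1, sens, spec = screening_metrics(tp=27, fp=60, tn=940, fn=23)
print(f"F1={f1:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```

In screening tasks the relevant class is usually rare, so sensitivity (not missing relevant articles) and specificity (workload saved by excluding irrelevant ones) are typically more informative than accuracy, which is why the abstract reports them alongside F1.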
