Abstract

Short introduction: To develop evidence-based person-centred care practice, informed by synthesised international knowledge, relevant citations must be comprehensively identified through systematic database searches. However, systematic searches on ‘thick’ concepts with diffuse conceptual boundaries, such as person-centred care, generate too many citations to screen and assess manually.
Background and purpose: The implementation of person-centred care is hindered by the fact that research results are not easily accessible. Database searches for original studies generate at least 90,000 unique citations. In addition to this large number of publications, there is also diversity in the terminology and conceptualisations used. When decision-makers, practitioners and researchers are unable to review all existing knowledge, there is an obvious risk of confusion in implementation.
As part of an ongoing project, which aims to map the available international literature on centredness in healthcare, our specific purpose with this presentation is to share lessons learned from literature screening supported by text-mining functions. The work was performed by a team of researchers specialised in interprofessional person-centred care; the larger project team also includes patient partners, students and project assistants. Our experience is of importance not only for researchers, but also for policy-makers, decision-makers and healthcare professionals.
Method: The use of text-mining functions to semi-automate citation screening is one way to tackle the great number of research citations available today. Database searches for literature on person-centred care retrieved 94,236 unique citations. A random sample of 5,455 records was screened manually and independently by two reviewers against inclusion and exclusion criteria. Results from that screening were used to build two project-tailored text-mining classifier models: one built manually by a language technologist based on word frequencies, and one machine-learning classifier constructed in the software EPPI-Reviewer using the scikit-learn library. For each model, the 1,000 highest-ranked records were retrieved and screened manually.
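To illustrate the general approach, the following is a minimal sketch, not the project's actual code: it assumes a TF-IDF representation and logistic regression, whereas EPPI-Reviewer's internal classifier may differ in detail; all names, features and parameters are illustrative. A classifier is trained on the manually screened sample and then used to rank the unscreened citations by predicted probability of inclusion.

```python
# Minimal sketch of classifier-assisted screening (illustrative assumptions,
# not the project's actual implementation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
import numpy as np

# Training data: titles/abstracts of the manually screened random sample,
# labelled from the dual independent screening (1 = include, 0 = exclude).
screened_texts = ["Effects of person-centred care on ...", "..."]
screened_labels = [1, 0]

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    # class_weight="balanced" compensates for the low inclusion rate (~3.7%).
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
pipeline.fit(screened_texts, screened_labels)

# Rank the remaining unscreened citations by predicted inclusion probability
# and retrieve the 1,000 highest-ranked records for manual screening.
unscreened_texts = ["Patient participation in discharge planning ...", "..."]
scores = pipeline.predict_proba(unscreened_texts)[:, 1]
top_1000 = np.argsort(scores)[::-1][:1000]
```

The manually built word-frequency model can similarly be approximated, under the assumption of a curated term list with hand-assigned weights, by a simple frequency-based score:

```python
# Illustrative stand-in for the manually built word-frequency classifier;
# the terms and weights below are assumptions, not the project's actual list.
CURATED_TERMS = {"person-centred": 3.0, "patient-centered": 3.0,
                 "centredness": 2.0, "shared decision": 1.0}

def keyword_score(text: str) -> float:
    """Score a citation by weighted frequencies of curated terms."""
    text = text.lower()
    return sum(w * text.count(term) for term, w in CURATED_TERMS.items())

ranked = sorted(unscreened_texts, key=keyword_score, reverse=True)[:1000]
```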
Result: Manual screening of the initial random sample of 5,455 citations resulted in 3.7% of the sample being included in the mapping study. When screening the 1,000 highest-ranked citations from the manually built classifier model, 23.5% of the sample was included. The EPPI-Reviewer classifier model resulted in 83.4% being included, applying the same inclusion criteria.
Discussion: For our purposes, the classifier model built in EPPI-Reviewer showed promise in identifying relevant citations earlier in the process than the manually built classifier. Both models performed substantially better than random manual screening. Using classifier software is warranted to facilitate the screening and sifting of citations in large databases for knowledge fields that are hard to delimit conceptually, such as person-centred care research.
