Abstract

Background: Research performance assessments have proliferated, but research indicators for use amongst clinicians in poorly resourced countries have been ill-defined. The aim of the present paper was to determine a set of indicators, as identified by clinician participants from the Pacific Islands and a panel of research experts, for use in the performance assessment of clinicians.

Methods: Two focus group discussions, one for nurses and one for doctors, were used to obtain the views of 28 Pacific Island clinicians of the BRRACAP Study on what the research indicators should be. A modified Delphi survey was used to obtain consensus amongst 19 research experts with Pacific Island research experience on what the indicators should be, and then to rank these in order of importance. A survey of the participants obtained data on the research tasks/actions performed 20 months after the initial research workshop. The resulting tool, comprising 21 indicators, was used to assess the performance of 18 Pacific participants.

Results: The Pacific Island clinicians determined that research was important and that performance should be measured. They identified research indicators that could be used in their settings and ranked their importance using a points system. The panel of experts rated implementation of research findings, collaborations and actual change in practice as most important, with bibliometric measurements ranked low on the scale. Although only 64% of the 28 BRRACAP Study participants returned the questionnaire, 39% of those who did performed more than half of the 21 indicators used. Of the 18 Pacific clinicians assessed, 7 (39%) performed 10 or more tasks.

Conclusions: A research performance assessment tool was developed using process and output indicators identified by Pacific clinicians and a panel of research experts.
The tool, which placed emphasis on process and output indicators that were not bibliometric based, proved useful in assessing the performance of Pacific clinicians working in a low-resource setting.

Electronic supplementary material: The online version of this article (doi:10.1186/s12961-016-0077-4) contains supplementary material, which is available to authorized users.

Highlights

  • Research performance assessments have proliferated, but research indicators for use amongst clinicians in poorly resourced countries have been ill-defined

  • There is a dearth of suggestions as to which tools should be used to measure the research productivity of part-time staff [10], such as clinicians, who are expected to perform clinical research mostly in their own time [11]. This is especially so in low- and middle-income countries (LMICs), where there is a growing drive by funders to measure the effectiveness of capacity-building initiatives [12, 13] and by academic institutions to measure individual research productivity [14]

  • While there is an abundance of studies using bibliometric indices to assess the health research performance of countries [15] or disciplines [16] within health systems in LMICs, only one paper has attempted to measure individual research productivity in these settings [13], and its measures were for those with a doctorate degree from international universities


Introduction

Research performance assessments have proliferated, but research indicators for use amongst clinicians in poorly resourced countries have been ill-defined. While there is an abundance of studies using bibliometric indices to assess the health research performance of countries [15] or disciplines [16] within health systems in LMICs, only one paper has attempted to measure individual research productivity in these settings [13], and its measures were for those with a doctorate degree from international universities. The general lack of well-functioning health systems, research skills, access to the literature, research funding and protected time for research [13, 14, 17, 18, 19], together with the importance of cultural context, means that the yardsticks used in well-developed countries cannot be applied in LMICs. Wootton et al. [10] developed a "generalizable method for measuring individual research productivity", but the measures were confined to research output and, of the 12 papers reviewed, only one was from an LMIC. The input of those to be measured, both in identifying contextually relevant markers of research productivity and in designing the assessment tool, is needed for buy-in and sustainability.

