Abstract

Qualitative researchers in information management research often need to evaluate inter-coder reliability (ICR) to test the trustworthiness of their content analysis. A suitable method of evaluating ICR enables researchers to rigorously assess the degree of agreement among two or more independent qualitative coders. This allows researchers to identify mistakes in the content analysis before the codes are used in developing and testing a theory or a measurement model, and to avoid the associated costs in time, effort, and money. Different methods have been proposed, but little guidance is available on which approach to evaluating ICR should be used. In this paper, we review and compare leading ICR methods that are suitable for qualitative information management research. We propose an approach for selecting and using an ICR method, supported by an illustrative example. The five steps in our proposed approach are: selecting an ICR method based on its characteristics and the requirements of the project; developing a coding scheme; selecting and training independent coders; calculating the ICR coefficient and resolving discrepancies; and reporting the ICR evaluation process and its results.
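To illustrate the fourth step, the sketch below computes Cohen's kappa, one widely used ICR coefficient, for two independent coders. This is a minimal illustration only: the abstract does not specify which coefficient the paper recommends, and the function and sample codes here are hypothetical, not drawn from the paper's illustrative example.

```python
# Minimal sketch: Cohen's kappa for two coders (one possible ICR coefficient).
# The sample labels below are hypothetical, not taken from the paper.
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Return Cohen's kappa for two equal-length lists of category labels."""
    assert len(coder_a) == len(coder_b), "coders must rate the same units"
    n = len(coder_a)
    # Observed agreement: proportion of units both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, estimated from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() & freq_b.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical coding of ten interview excerpts into three categories.
coder_1 = ["trust", "risk", "trust", "cost", "risk", "trust", "cost", "risk", "trust", "cost"]
coder_2 = ["trust", "risk", "risk",  "cost", "risk", "trust", "cost", "trust", "trust", "cost"]
print(f"kappa = {cohen_kappa(coder_1, coder_2):.2f}")  # ~0.70
```

Unlike simple percent agreement, kappa discounts the agreement that two coders would reach by chance given their marginal label frequencies, which is why it is often preferred when the coding categories are unevenly distributed.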

