Abstract

Text answers to open-ended questions are typically coded manually into one of several codes. Usually, a random subset of text answers is double-coded to assess intercoder reliability, but most of the data remain single-coded. Any disagreement between the two coders points to an error by one of them. When the budget allows double coding additional text answers, we propose employing statistical learning models to predict which single-coded answers carry a high risk of a coding error. Specifically, we train a model on the double-coded random subset and predict the probability that the single-coded codes are correct. Text answers with the highest risk are then double-coded for verification. In experiments with three data sets, this method identified, on average, two to three times as many coding errors in the additional text answers as random selection. We conclude that this method is preferable when the budget permits additional double coding; when intercoder disagreements are frequent, the benefit can be substantial.
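As a rough illustration of the proposed workflow, the sketch below trains a risk model on the double-coded subset, using intercoder disagreement as a proxy label for a coding error, and then ranks the single-coded answers by predicted error risk. The abstract does not specify a model or feature representation, so TF-IDF features with logistic regression are assumed here as stand-ins; all data and variable names are hypothetical.

```python
# Minimal sketch of error-targeted double coding, assuming scikit-learn.
# TF-IDF + logistic regression are placeholders for whatever statistical
# learning model is actually used; the toy data below are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Double-coded random subset: two coders assign a code to each answer.
answers_double = ["likes the product", "too expensive", "no opinion"]
codes_a = [1, 2, 3]  # codes assigned by coder A
codes_b = [1, 3, 3]  # codes assigned by coder B

# Training label: 1 if the coders disagreed (a likely coding error), else 0.
disagreed = np.array([a != b for a, b in zip(codes_a, codes_b)], dtype=int)

# Fit the risk model on the double-coded subset.
risk_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
risk_model.fit(answers_double, disagreed)

# Score single-coded answers by predicted disagreement probability and
# send the highest-risk ones for verification by a second coder.
answers_single = ["cheap but flimsy", "great service"]
risk = risk_model.predict_proba(answers_single)[:, 1]
budget = 1  # number of extra double codings the budget allows
to_verify = np.argsort(risk)[::-1][:budget]
print([answers_single[i] for i in to_verify])
```

In practice, the single coder's assigned code could also be included as a feature, since the stated goal is to predict whether that specific code is correct rather than whether the answer is generally hard to code.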
