Abstract

pygamma-agreement: Gamma (γ) measure for inter/intra-annotator agreement in Python

Highlights

  • Over the last few decades, it has become easier to collect large audio recordings in naturalistic conditions and large corpora of text from the Internet

  • Depending on the difficulty of the annotation task and the expertise of the annotators, the annotations they produce can involve a degree of subjective interpretation

  • If that consensus is deemed robust, we infer that the annotation task is well defined and less prone to interpretation, and that the annotations covering the rest of the corpus are reliable (Gwet, 2012)



Introduction

Over the last few decades, it has become easier to collect large audio recordings in naturalistic conditions and large corpora of text from the Internet. This broadens the scope of questions that can be addressed in speech and language research. Human annotation is needed to reliably describe the events contained in these corpora (e.g., Wikipedia articles, conversations, child babbling, animal vocalizations, or even environmental sounds). These events can be tagged either at a particular point in time or over a period of time. An objective measure of the agreement (and subsequent disagreement) between annotators is therefore desirable.
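As a rough illustration, the sketch below shows how such interval annotations can be compared with pygamma-agreement. The gamma measure it computes follows Mathet et al. (2015): γ = 1 − (observed disorder of the best unit alignment) / (expected disorder under chance), where the expected disorder is estimated by resampling the annotations. The snippet assumes the library's documented Continuum API and pyannote.core's Segment class; the annotator names and time spans are invented for the example, and exact signatures may differ between versions.

    # Minimal sketch using pygamma-agreement's documented API
    # (exact signatures may vary between library versions).
    from pyannote.core import Segment
    from pygamma_agreement import Continuum

    # A continuum holds every annotator's units: each unit is a
    # time interval (Segment) plus an optional category label.
    continuum = Continuum()

    # Toy annotations (invented for illustration): two annotators
    # segment the same recording into labeled time spans.
    continuum.add("annotator_a", Segment(0.0, 4.0), "speech")
    continuum.add("annotator_a", Segment(4.0, 9.0), "silence")
    continuum.add("annotator_b", Segment(0.0, 4.5), "speech")
    continuum.add("annotator_b", Segment(4.5, 9.0), "silence")

    # gamma = 1 - observed_disorder / expected_disorder, where the
    # expected disorder is estimated by resampling the continuum.
    gamma_results = continuum.compute_gamma()
    print(f"gamma = {gamma_results.gamma:.3f}")

A γ of 1 indicates perfect agreement, values near 0 indicate chance-level agreement, and negative values indicate agreement worse than chance.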


