Abstract

Introduction: Intra- and inter-rater concordance studies are important for measuring the reliability or reproducibility of evaluations (interviews or rater-administered scales) in psychiatry.

Objective: To present some principles regarding the validation process of diagnostic interviews or rater-administered scales, and regarding the use and interpretation of the most useful statistical tests.

Method: Literature review.

Results: Concordance is understood as the degree of agreement or disagreement among evaluations made of the same subject, either successively by one evaluator or by two or more interviewers. This process is part of instrument validation (scale reliability) and serves to identify possible cases or to confirm the presence of a mental disorder. Inter-rater concordance refers to the case in which two or more psychiatrists interview the same person independently and almost simultaneously; this allows estimating the degree of agreement, convergence, or concordance (and of disagreement, divergence, or discordance) among the evaluations and the resulting diagnoses. Intra-rater concordance is the degree of agreement between diagnoses made by the same rater at different times. Cohen's kappa is used to estimate concordance; in general, values higher than 0.50 are expected. To estimate Cohen's kappa reliably, it is necessary to know in advance the expected prevalence of the mental disorder, the number of evaluations or raters, and the number of possible diagnostic categories.
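
As an illustration of the statistic discussed above, below is a minimal sketch of Cohen's kappa for two raters, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each rater's marginal category frequencies. The rating data and category labels in the example are hypothetical, not taken from the article.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e) for two raters
    who classified the same subjects into nominal categories."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement: fraction of subjects
    # on which the two raters assigned the same category.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal
    # frequencies for each category, summed over categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two psychiatrists independently classify
# 10 subjects as "case" or "non" (non-case) after interviews.
a = ["case", "case", "non", "non", "case", "non", "non", "case", "non", "non"]
b = ["case", "non", "non", "non", "case", "non", "case", "case", "non", "non"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # kappa = 0.58
```

In this example the raters agree on 8 of 10 subjects (p_o = 0.80), but because chance agreement is substantial (p_e = 0.52), kappa is 0.58, just above the 0.50 threshold mentioned above; this is why kappa, rather than raw percent agreement, is used to assess concordance.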
