Abstract

Inter-observer agreement and reliability in hysteroscopic image assessment remain uncertain, and the factors that may influence them have so far been studied only in relation to the experience of hysteroscopists. We aimed to assess the effect of clinical information and of prior execution of the exam on observer agreement and reliability in the analysis of hysteroscopic video-recordings. Ninety hysteroscopies were video-recorded and randomized into a group analyzed without clinical information (Group 1) and a group analyzed with clinical information (Group 2). The videos were independently analyzed by three hysteroscopists with regard to lesion location, dimension, and type, as well as the decision to perform a biopsy. One of the hysteroscopists had performed all the exams beforehand. Proportions of agreement (PA) and kappa statistics (κ) with 95% confidence intervals (95% CI) were used. In Group 2, there was a higher proportion of normal diagnoses (p<0.001) and a lower proportion of recommended biopsies (p=0.027). Observer agreement and reliability were better in Group 2, with PA and κ ranging, respectively, from 0.73 (95% CI 0.62, 0.83) and 0.44 (95% CI 0.26, 0.63) for image quality to 0.94 (95% CI 0.88, 0.99) and 0.85 (95% CI 0.65, 0.95) for the decision to perform a biopsy. Having performed the exams before analyzing the video-recordings did not significantly affect the results. With clinical information, agreement and reliability in the overall analysis of hysteroscopic video-recordings may reach almost perfect levels, and this was not significantly affected by prior execution of the exams. However, uncertainty remains in the analysis of specific endometrial cavity abnormalities.
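
For readers unfamiliar with the agreement measures reported above, the sketch below shows how a pairwise proportion of agreement (PA) and Cohen's kappa, κ = (p_o − p_e)/(1 − p_e), with a bootstrap 95% CI could be computed in Python. The observer ratings are hypothetical and the bootstrap interval is one common choice; the abstract does not state which software or CI method the authors actually used.

# Minimal sketch, assuming two observers give binary "biopsy" decisions on 90 videos.
# Data are simulated for illustration only, not taken from the study.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

obs_a = rng.integers(0, 2, size=90)                        # observer A: yes/no
obs_b = np.where(rng.random(90) < 0.8, obs_a, 1 - obs_a)   # observer B, ~80% raw agreement

pa = np.mean(obs_a == obs_b)                 # proportion of agreement
kappa = cohen_kappa_score(obs_a, obs_b)      # chance-corrected agreement

# Nonparametric bootstrap 95% CI for kappa (resampling the 90 videos with replacement)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(obs_a), size=len(obs_a))
    boot.append(cohen_kappa_score(obs_a[idx], obs_b[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"PA = {pa:.2f}, kappa = {kappa:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")

With three observers, the same computation would typically be repeated for each pair, or a multi-rater statistic such as Fleiss' kappa used instead; the abstract does not specify which approach was taken.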
