Abstract

Simple Summary: There is a common joke in pathology: put three pathologists in a room and you will get three different answers. The saying reflects the fact that pathology can be subjective; pathologists' diagnoses can be influenced by many different biases, including the presence or absence of animal information and medical history. Statistics, by comparison, is a much more objective field. This study aimed to develop a probability-based tool using statistics obtained by analyzing 338 histopathology slides of canine and feline urinary bladders, and then to assess whether the tool improved agreement among the test pathologists. Four pathologists diagnosed 25 canine and feline bladder slides three times: first without animal or clinical information, then with this information, and finally using the probability tool. The results showed large differences in the pathologists' interpretations of bladder slides, with kappa agreement values (the lower value for digital slide images, the higher for glass slides) of 7–37% without any animal or clinical information, 23–37% with animal signalment and history, and 31–42% when our probability tool was used. This study provides a starting point for the use of probability-based tools in standardizing pathologist agreement in veterinary pathology.

Inter-pathologist variation is widely recognized across human and veterinary pathology and is often compounded by missing animal or clinical information on pathology submission forms. Variation in pathologists' threshold levels for resident inflammatory cells in the tissue of interest can further decrease inter-pathologist agreement. This study applied a predictive modeling tool to bladder histology slides assessed by four pathologists: first without animal or clinical information, then with this information, and finally using the predictive tool. All three assessments were performed twice, first on digital whole-slide images (WSI) and then on glass slides. The results showed marked variation in the pathologists' interpretations of bladder slides, with kappa agreement values (digital WSI and glass slides, respectively) of 7–37% without any animal or clinical information, 23–37% with animal signalment and history, and 31–42% when our predictive tool was applied. Concurrence of the test pathologists with the reference diagnosis was 60% overall. This study provides a starting point for the use of predictive modeling in standardizing pathologist agreement in veterinary pathology. It also highlights the importance of high-quality whole-slide imaging to limit the effect of digitization on inter-pathologist agreement, and the benefit of continued standardization of tissue assessment in veterinary pathology.
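For context, the agreement figures above are kappa coefficients expressed as percentages. The abstract does not state which kappa variant was computed; the worked equation below assumes the standard Cohen's kappa for a pair of raters, which corrects observed agreement for the agreement expected by chance:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where \(p_o\) is the observed proportion of slides on which two pathologists agree and \(p_e\) is the proportion of agreement expected by chance, computed from each rater's marginal diagnosis frequencies. As a hypothetical illustration, if two pathologists agree on 50% of slides (\(p_o = 0.50\)) and their diagnosis frequencies imply a chance agreement of \(p_e = 0.30\), then \(\kappa = (0.50 - 0.30)/(1 - 0.30) \approx 0.29\), i.e., 29%.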
