Abstract

Background

The aim of this study was to assess validation evidence for a sedation scale for dogs. We hypothesized that the chosen sedation scale would be unreliable when used by different raters and would show poor discrimination between sedation protocols. A sedation scale (range 0–21) was used to score 62 dogs scheduled to receive sedation at two veterinary clinics in a prospective trial. Scores recorded by a single observer were used to assess internal consistency and construct validity of the scores. To assess inter-rater reliability, video recordings of sedation assessments were randomized and blinded for viewing by 5 raters untrained in the scale. Videos were also edited to allow assessment of inter-rater reliability of an abbreviated scale (range 0–12) by 5 different raters.

Results

Both sedation scales exhibited excellent internal consistency and very good inter-rater reliability (full scale, intraclass correlation coefficient [ICCsingle] = 0.95; abbreviated scale, ICCsingle = 0.94). The full scale discriminated between the most common protocols: dexmedetomidine-hydromorphone (median [range] sedation score, 11 [1–18], n = 20) and acepromazine-hydromorphone (5 [0–15], n = 36; p = 0.02).

Conclusions

The hypothesis was rejected. The full and abbreviated scales showed excellent internal consistency and very good reliability between multiple untrained raters, and the full scale differentiated between levels of sedation.
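The reliability figures above are intraclass correlation coefficients (ICCs) computed from the multi-rater video scores. The abstract does not state which ICC model was used; purely as an illustrative sketch, the code below computes a two-way random-effects, absolute-agreement ICC for a single rater (ICC(2,1)) and for the average of k raters (ICC(2,k)) from a dogs × raters score matrix, following the standard Shrout and Fleiss ANOVA formulation. The example scores are invented and are not the study data.

```python
import numpy as np

def icc_two_way_random(scores: np.ndarray) -> tuple[float, float]:
    """Two-way random-effects, absolute-agreement ICC (Shrout & Fleiss).

    `scores` is an (n_subjects, n_raters) matrix; returns (ICC(2,1), ICC(2,k)).
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # one mean per dog (subject)
    col_means = scores.mean(axis=0)   # one mean per rater

    # ANOVA sums of squares and mean squares
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    icc_single = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
    icc_average = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
    return icc_single, icc_average

# Hypothetical example: 6 dogs scored 0-21 by 5 raters (invented values).
scores = np.array([
    [11, 12, 10, 11, 13],
    [ 5,  4,  6,  5,  5],
    [18, 17, 18, 19, 17],
    [ 1,  2,  1,  0,  2],
    [ 9,  8, 10,  9,  9],
    [15, 14, 16, 15, 14],
])
icc1, icck = icc_two_way_random(scores)
print(f"ICC(single) = {icc1:.2f}, ICC(average of 5 raters) = {icck:.2f}")
```

With perfectly consistent raters the two forms converge; in practice ICC(2,k) exceeds ICC(2,1) because averaging several raters' scores cancels part of each rater's individual error.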

Highlights

  • The aim of this study was to assess validation evidence for a sedation scale for dogs

  • Dogs scheduled to be sedated for a diagnostic procedure or before general anesthesia were enrolled over a 12-week period at two clinics after written informed client consent was obtained

  • The higher value for ICCaverage reflects the improved reliability obtained by averaging scores across multiple observers (see the sketch below)
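The study's ICCaverage values are not quoted in this summary, but the direction of the highlight follows from the standard Spearman-Brown relationship between single- and average-measures ICC. As a rough illustrative check only, plugging the reported full-scale ICCsingle of 0.95 and the 5 video raters into that relationship gives:

```python
# Spearman-Brown relation: ICC_average = k * ICC_single / (1 + (k - 1) * ICC_single)
k = 5               # number of raters who scored the videos
icc_single = 0.95   # full-scale ICC(single) reported in the abstract
icc_average = k * icc_single / (1 + (k - 1) * icc_single)
print(f"ICC(average of {k} raters) ≈ {icc_average:.3f}")  # ≈ 0.990
```

Averaging over several raters always yields an ICC at least as high as the single-rater value, which is why the average-measures coefficient is cited as evidence that a panel of observers is more reliable than any single observer.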

Summary

Introduction

The aim of this study was to assess validation evidence for a sedation scale for dogs. A sedation scale (range 0–21) was used to score 62 dogs scheduled to receive sedation at two veterinary clinics in a prospective trial. To assess inter-rater reliability, video recordings of sedation assessments were randomized and blinded for viewing by 5 raters untrained in the scale. Videos were also edited to allow assessment of inter-rater reliability of an abbreviated scale (range 0–12) by 5 different raters. Measurement scales for quantifying sedation in dogs have not been formally assessed for validity and reliability of their scores. In the context of measuring sedation, establishing evidence for the validity and reliability of the scores is essential to ensure appropriate scale sensitivity when evaluating levels of sedation and acceptable agreement between raters. Using an appropriately developed scale also facilitates comparison of results between studies, thereby supporting reproducibility.


