Abstract

Working-dog organizations often use behavioral ratings by experts to evaluate a dog's likelihood of success. However, these experts are frequently under severe time constraints. One way to alleviate the pressure on limited organizational resources would be to use non-experts to assess dog behavior. Here, in populations of military working dogs (Study 1) and explosive-detection dogs (Study 2), we evaluated the reliability and validity of behavioral ratings assessed by minimally trained non-experts from videotapes. Analyses yielded evidence for generally good levels of inter-observer reliability and criterion validity (indexed by convergence between the non-expert ratings and ratings made previously by experts). We found some variation across items in Study 2: reliability and validity were significantly lower for three of the 18 items, and one item had reliability and validity estimates that were heavily influenced by the behavioral test environment. There were no differences in reliability and validity based on the age of the dog. Overall, the results suggest that for most items, ratings made by minimally trained non-experts can serve as a viable alternative to expert ratings, freeing the limited resources of highly trained staff. This article is part of a Special Issue entitled: Canine Behavior.
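
The abstract does not name the statistics used to index reliability and validity. As an illustrative sketch only: inter-observer agreement on behavioral ratings like these is commonly quantified with an intraclass correlation coefficient (ICC), and criterion validity against expert ratings with a simple correlation. The snippet below shows one such computation in Python using the pingouin library; the column names ("dog", "rater", "score") and all data values are hypothetical, not taken from the studies.

    # Hypothetical sketch of inter-observer reliability and criterion
    # validity; not the authors' actual analysis. All data are made up.
    import pandas as pd
    import pingouin as pg

    # Three non-expert raters each score three dogs (toy example)
    ratings = pd.DataFrame({
        "dog":   ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
        "rater": ["r1", "r2", "r3"] * 3,
        "score": [4, 5, 4, 2, 2, 3, 5, 5, 4],
    })

    # Inter-observer reliability: ICC across the non-expert raters
    icc = pg.intraclass_corr(data=ratings, targets="dog",
                             raters="rater", ratings="score")
    print(icc[["Type", "ICC", "CI95%"]])

    # Criterion validity: correlate mean non-expert ratings with
    # (hypothetical) expert ratings of the same dogs
    mean_nonexpert = ratings.groupby("dog")["score"].mean()
    expert = pd.Series({"A": 4, "B": 2, "C": 5})
    print(mean_nonexpert.corr(expert))

In practice the choice of ICC form (e.g., consistency vs. absolute agreement) matters and would depend on the study design; pingouin's output reports the standard ICC variants so the appropriate one can be selected.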
