Imaging technology can aid the automatic extraction of measurements from beef carcasses, which can be used for objective grading. Many abattoirs, however, rely on manual grading because the infrastructure and cost required make such technology unfeasible. This study explores 3-dimensional (3D) imaging technology, which requires limited infrastructure, and its ability to predict carcass weight, conformation class and fat class for non-invasive, objective classification. Time-of-flight near-infrared cameras captured 3D point clouds of beef carcasses, on-line in one commercial abattoir in Scotland, over a 6-month period. Thirty-five 3D images were captured per carcass and processed using machine vision software. Seventy-four measurements were extracted from each point cloud. Removal of extreme outliers resulted in 285,109 datapoints for 17,250 carcasses. Coefficients of variation (CV) for each measurement on a per-animal basis were low and consistent, and measurements were averaged across images. Using a training and validation dataset (70:30), multiple linear regression models predicted EUROP conformation class, fat class and carcass weight. Stepwise models included fixed effects (sex, breed type and kill date, plus cold carcass weight for conformation and fat class) and 3D image measurements. Including 3D measurements resulted in prediction accuracies of 70%, 50% and 23% for cold carcass weight, conformation class and fat class, respectively. Mapping predictions onto the traditional EUROP grid used in the UK showed that 99% of conformation classes and 93% of fat classes were classified within the correct or neighbouring grade. The results of this study indicate the potential for non-invasive, in-abattoir technology requiring limited infrastructure to predict carcass traits objectively.
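The modelling pipeline summarised above (a 70:30 train/validation split followed by a multiple linear regression fitted on the training set and evaluated on the validation set) can be sketched as follows. This is a minimal illustration using synthetic data, not the study's code: the feature matrix, sample size and response are hypothetical stand-ins for the 3D image measurements and carcass traits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the per-carcass measurement matrix
# (the study extracted 74 measurements per carcass; 5 are used here).
n, p = 1000, 5
X = rng.normal(size=(n, p))
true_beta = rng.normal(size=p)
# Hypothetical continuous response, e.g. cold carcass weight.
y = X @ true_beta + rng.normal(scale=0.5, size=n)

# 70:30 training/validation split, as in the study.
idx = rng.permutation(n)
cut = int(0.7 * n)
train, valid = idx[:cut], idx[cut:]

# Ordinary least squares fit on the training set (with intercept term).
Xt = np.column_stack([np.ones(train.size), X[train]])
beta, *_ = np.linalg.lstsq(Xt, y[train], rcond=None)

# Predict on the held-out validation set and compute R-squared,
# one common way to express "prediction accuracy" for such models.
Xv = np.column_stack([np.ones(valid.size), X[valid]])
pred = Xv @ beta
ss_res = np.sum((y[valid] - pred) ** 2)
ss_tot = np.sum((y[valid] - y[valid].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(float(r2), 3))
```

The study additionally used stepwise selection over fixed effects (sex, breed type, kill date) and the 3D measurements, and mapped continuous class predictions back onto the discrete EUROP grid; neither step is shown in this sketch.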