Abstract

The body measurement of livestock is an important task in precision livestock farming. To reduce the cost of manual measurement, an increasing number of studies have proposed non-contact body measurement methods using depth cameras. However, these methods use only 3D data to construct geometric features for body measurements, making them prone to error on incomplete and noisy point clouds. This paper introduces a 2D-3D fusion body measurement method, developed to exploit the full potential of raw scanned data, including high-resolution RGB images and 3D spatial information. Keypoints for body measurement are detected on RGB images with a deep learning model. These keypoints are then projected onto the surface of the livestock point cloud using the intrinsic parameters of the camera. Combining interpolation with a pose normalization step, 9 body measurements of cattle and 5 body measurements of pigs (covering body length, body width, body height, and heart girth) are obtained. To verify the feasibility of this method, experiments were performed on data from 103 cattle and 13 pigs. Compared with manual measurements, the MAPEs (mean absolute percentage errors) of 5 cattle body measurements and 1 pig body measurement are below 10%. Body widths are more susceptible to non-standard posture: the MAPEs of 2 cattle body widths exceed 20%, and the MAPE of 1 pig body width reaches 30%. In comparison with a previous girth measurement method, the presented method is more accurate and robust on the cattle dataset. The same approach can be adapted for non-contact body measurement of other livestock species.
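The projection of RGB keypoints onto the point cloud can be illustrated with the standard pinhole back-projection formula, where a pixel (u, v) with depth z maps to camera-frame coordinates via the intrinsics (fx, fy, cx, cy). This is a minimal sketch of that step, not the paper's implementation; the function name and the assumption of a dense, aligned depth map are illustrative.

```python
import numpy as np

def backproject_keypoints(keypoints_uv, depth, fx, fy, cx, cy):
    """Lift 2D pixel keypoints to 3D camera coordinates using a depth map
    and pinhole intrinsics: X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z.

    keypoints_uv : iterable of (u, v) pixel coordinates (u = column, v = row)
    depth        : HxW depth map aligned with the RGB image, in metres
    """
    points = []
    for u, v in keypoints_uv:
        # Look up depth at the nearest pixel (a real pipeline might
        # interpolate or reject invalid/zero depth readings here).
        z = depth[int(round(v)), int(round(u))]
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return np.array(points)
```

With the keypoints lifted into 3D, distances between them (or geodesic paths over the mesh, as for heart girth) can be computed in metric units.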
