Abstract

The acoustic signals of sheep grazing carry a wealth of information. To date, the information in acoustic signals on grazing behavior and intake has been thoroughly explored and utilized. However, information on grass growth conditions (grass conditions) has received little attention, although it is crucial for making rotational grazing decisions. This study seeks to efficiently mine and process the grass condition information in acoustic signals to obtain a high-performing recognition model. First, the acoustic signals collected under three grass conditions were divided into many segment samples. Second, six types of formal samples were constructed from every segment sample. Finally, the log-Mel features of the six formal samples were separately fed into a convolutional neural network (CNN) model or a recurrent neural network (RNN) model to identify the grass conditions. Before the log-Mel features were fed into the model, two length-unification methods, each evaluated at multiple specified lengths, were used to pre-process the formal samples. The results showed that the combination of the fixed chewing and biting connection (FCB) sample and the CNN model performed the best, with an accuracy of 90.24%. The method of filling or truncating the sample’s waveform tended to outperform scaling the sample’s log-Mel feature when the specified length was longer than 8 s. The choice of specified length had a substantial effect on the accuracy of the model with the filling method. The application of this study could provide a data reference for constructing a more rational rotational grazing strategy.
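The waveform-based length unification mentioned above (filling or truncating each sample to a specified length before log-Mel extraction) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 16 kHz sample rate and zero-padding strategy are assumptions for the example.

```python
import numpy as np

def unify_length(waveform: np.ndarray, target_len: int) -> np.ndarray:
    """Fill (zero-pad at the end) or truncate a 1-D waveform to target_len samples."""
    if len(waveform) >= target_len:
        return waveform[:target_len]          # truncate long samples
    pad = target_len - len(waveform)
    return np.pad(waveform, (0, pad))         # zero-fill short samples

# Hypothetical 16 kHz audio with an 8 s specified length (one of the
# specified lengths compared in the study).
sr = 16000
target = 8 * sr
short_clip = np.random.randn(5 * sr)   # 5 s clip -> filled to 8 s
long_clip = np.random.randn(10 * sr)   # 10 s clip -> truncated to 8 s

assert unify_length(short_clip, target).shape == (target,)
assert unify_length(long_clip, target).shape == (target,)
```

After this step, every sample yields a log-Mel feature of identical shape, so the CNN or RNN input dimensions stay fixed across samples.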
