Attaching sensors to the bodies of animals to automate activity recognition and gain insight into their behaviors can help improve their living conditions. Whereas earlier hard-coded algorithms struggled to classify the complex time series obtained from accelerometer data, recent advances in deep learning have substantially improved animal activity recognition. However, a comparative analysis of the generalization capabilities of various models combined with different input types has not yet been carried out. This study experimented with two techniques for transforming the segmented accelerometer data to make them more orientation-independent: calculating the magnitude (L2-norm) of the three-axis accelerometer vector, and calculating the Discrete Fourier Transform of both the three-axis data and the vector magnitude. Three deep learning models were trained on these data: a Multilayer Perceptron, a Convolutional Neural Network, and an ensemble merging both, referred to as a hybrid Convolutional Neural Network. Besides mixed cross-validation, every combination of model and input type was assessed with goat-wise leave-one-out cross-validation to evaluate its generalization capability. Using orientation-independent data transformations gave promising results. A hybrid Convolutional Neural Network with the L2-norm as input combined the higher classification accuracy of a Convolutional Neural Network with the lower standard deviation of a Multilayer Perceptron. Most misclassifications occurred for behaviors with similar accelerometer traces and for minority classes, which future work could address by assembling larger and more balanced datasets.
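The abstract names the two transformations but not their exact implementation; the following is a minimal NumPy sketch of both, assuming fixed-length windows of raw x, y, z samples (window length, sampling rate, and the absence of any further normalization are placeholder assumptions, not details from the paper):

```python
import numpy as np

def orientation_independent_inputs(segment: np.ndarray) -> dict:
    """Derive the two orientation-independent representations from one
    fixed-length window of raw three-axis accelerometer samples.

    segment : array of shape (n_samples, 3) holding x, y, z readings.
    """
    # 1) L2-norm per sample: collapses the three axes into a single
    #    magnitude signal that does not depend on sensor orientation.
    magnitude = np.linalg.norm(segment, axis=1)       # shape (n_samples,)

    # 2) DFT magnitude spectra of the three axes plus the magnitude
    #    signal; taking |rfft| discards phase, and with it the exact
    #    alignment of the movement within the window.
    channels = np.column_stack([segment, magnitude])  # shape (n_samples, 4)
    spectra = np.abs(np.fft.rfft(channels, axis=0))   # (n_samples//2 + 1, 4)

    return {"l2_norm": magnitude, "dft": spectra}

# Hypothetical usage on a 2-second window sampled at 100 Hz.
rng = np.random.default_rng(0)
window = rng.normal(size=(200, 3))
inputs = orientation_independent_inputs(window)
print(inputs["l2_norm"].shape, inputs["dft"].shape)   # (200,) (101, 4)
```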
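Goat-wise leave-one-out cross-validation corresponds to a leave-one-group-out protocol, where each fold holds out every window from one animal. A minimal sketch, assuming scikit-learn and placeholder data (the goat count, class count, and feature dimensions below are illustrative, not the study's):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical segmented dataset: X holds one feature vector per window,
# y the behavior label, and groups the ID of the goat each window came from.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 101 * 4))    # e.g. flattened DFT spectra
y = rng.integers(0, 5, size=600)       # five behavior classes (placeholder)
groups = rng.integers(0, 6, size=600)  # six goats (placeholder)

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups)):
    held_out = np.unique(groups[test_idx])
    # Train on all other goats, test on the held-out one; this measures
    # how well a model generalizes to an animal it has never seen.
    print(f"fold {fold}: held-out goat(s) {held_out}, "
          f"{len(train_idx)} train / {len(test_idx)} test windows")
```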