A new vision-based fall detector is proposed that uses the tsfresh tool to generate secondary features from the motion parameters of an object's bounding box and performs classification in a sliding-window mode. The generated features are shown to be more effective than the primary ones. Using the auto-sklearn library and a combined dataset compiled from the UR Fall Detection and CAUCAFall datasets, the best-performing human fall detection model is identified. This model, based on a gradient boosting classifier, achieves 96% accuracy, comparable to well-known detection algorithms, while using only two primary motion parameters to generate the secondary features. A PCA-based class separability study shows that for the secondary features, 99% of the variance is captured by the first four principal components, whereas for the primary features, the first ten principal components contain only 80% of the variance. Furthermore, generating the secondary features and making predictions takes only a few seconds per sequence, highlighting the practical applicability of the proposed approach in real-time fall monitoring systems.
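The pipeline above (sliding-window secondary features from two primary motion parameters, followed by a PCA variance check) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two primary parameters (here, bounding-box aspect ratio and centroid vertical speed) and the window settings are assumed, the data is synthetic, and a handful of hand-rolled window statistics stand in for tsfresh's much larger feature set.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical primary motion parameters, one value per video frame.
# The paper uses two primary parameters; these particular two are assumed.
n_frames = 300
primary = pd.DataFrame({
    "aspect_ratio": 1.0 + 0.1 * rng.standard_normal(n_frames),  # bbox width/height
    "v_speed": rng.standard_normal(n_frames),                   # centroid vertical speed
})

def window_features(df: pd.DataFrame, win: int = 30, step: int = 10) -> pd.DataFrame:
    """Compute secondary features over sliding windows.

    A tiny stand-in for tsfresh.extract_features: five summary
    statistics per primary parameter per window.
    """
    rows = []
    for start in range(0, len(df) - win + 1, step):
        w = df.iloc[start:start + win]
        feats = {}
        for col in df.columns:
            x = w[col].to_numpy()
            feats[f"{col}__mean"] = x.mean()
            feats[f"{col}__std"] = x.std()
            feats[f"{col}__min"] = x.min()
            feats[f"{col}__max"] = x.max()
            feats[f"{col}__abs_energy"] = np.sum(x ** 2)
        rows.append(feats)
    return pd.DataFrame(rows)

secondary = window_features(primary)

# PCA variance check: how many components capture 99% of the variance?
pca = PCA().fit(secondary)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_99 = int(np.searchsorted(cum_var, 0.99) + 1)
print(secondary.shape, n_99)
```

In the same spirit, the resulting feature matrix would then be fed to a classifier (the paper settles on gradient boosting via auto-sklearn), with one prediction per window.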