Abstract

The motion estimation process has gained significant attention in several video applications such as video compression and global motion estimation. Deep learning (DL) models that perform effectively in computer vision tasks can be employed for motion estimation. However, they suffer from high computational complexity and large parameter counts, because motion estimation in the pixel domain is not effective in practical applications. Instead, a block‐based motion estimation process has been developed in which blocks of sequential images are compared. To that end, this paper presents the Anas platyrhynchos optimizer with deep learning‐enabled block‐based motion estimation (APODL‐BBME) model. The proposed model estimates motion using block‐based concepts and DL approaches. To accomplish this, the training and testing frames are separated into non‐overlapping blocks, and the blocks are filtered via the bilateral filtering (BF) approach. In addition, a histogram of gradients (HOG) and a densely connected network (DenseNet) model are employed to extract features, which are then fed into a bidirectional long short‐term memory (BiLSTM) model to classify the input features. Finally, the APO algorithm is applied to optimally tune the hyperparameters of the BiLSTM model, which helps to improve the overall motion estimation efficacy and constitutes the novelty of the work. To demonstrate the enhanced performance of the APODL‐BBME model, a comprehensive analysis is carried out, and the comparative results show that the APODL‐BBME model outperforms recent motion estimation approaches.
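The following is a minimal sketch of the block‐based pipeline the abstract describes: frames are split into non‐overlapping blocks, each block is smoothed with a bilateral filter, HOG features are extracted per block, and the per‐block feature sequence is classified by a BiLSTM head. The block size, filter and HOG parameters, and the BiLSTM dimensions are illustrative assumptions, not the authors' configuration; the DenseNet feature branch and the APO hyperparameter tuning step are omitted for brevity.

```python
# Sketch of a block-based motion-estimation pipeline in the spirit of APODL-BBME.
# All numeric settings below are assumed for illustration only.
import cv2
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog

BLOCK = 16  # assumed non-overlapping block size


def split_into_blocks(frame: np.ndarray, block: int = BLOCK) -> np.ndarray:
    """Split a grayscale frame into non-overlapping block x block tiles."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block              # drop edge remainders
    tiles = frame[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.swapaxes(1, 2).reshape(-1, block, block)


def preprocess_block(block_img: np.ndarray) -> np.ndarray:
    """Bilateral filtering followed by HOG feature extraction for one block."""
    filtered = cv2.bilateralFilter(block_img, d=5, sigmaColor=50, sigmaSpace=50)
    return hog(filtered, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(1, 1), feature_vector=True)


class BiLSTMClassifier(nn.Module):
    """BiLSTM head that classifies a sequence of per-block feature vectors."""

    def __init__(self, feat_dim: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                            # x: (batch, seq_len, feat_dim)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])                   # classify from the last step


if __name__ == "__main__":
    frame = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in frame
    feats = np.stack([preprocess_block(b) for b in split_into_blocks(frame)])
    model = BiLSTMClassifier(feat_dim=feats.shape[1])
    logits = model(torch.tensor(feats, dtype=torch.float32).unsqueeze(0))
    print(logits.shape)                              # (1, n_classes)
```

In a fuller reproduction, DenseNet features would be concatenated with the HOG descriptors before the BiLSTM, and the hidden size, learning rate, and related hyperparameters would be tuned by the APO algorithm rather than fixed by hand.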
