Abstract

Even though optical flow approaches based on convolutional neural networks have achieved remarkable performance with respect to both accuracy and efficiency, large displacements and motion occlusions remain challenging for most existing learning-based models. To address these issues, we propose a self-attention-based multiscale feature learning optical flow computation method with occlusion feature map prediction. First, we exploit a self-attention-based multiscale feature learning module to compensate for large-displacement optical flows; this module captures long-range dependencies from the input frames. Second, we design a simple but effective self-learning module that acquires an occlusion feature map, and the predicted occlusion map is used to correct the optical flow estimates in occluded areas. Third, we explore a hybrid loss function that integrates photometric and smoothness losses into the classical endpoint error (EPE)-based loss to ensure the accuracy and robustness of the presented network. Finally, we compare the proposed method with state-of-the-art approaches on the MPI-Sintel and KITTI test databases. The experimental results demonstrate that the proposed method achieves competitive accuracy and robustness and produces better results than competing methods under large displacements and motion occlusions.
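The hybrid loss described above combines a supervised EPE term with unsupervised photometric and smoothness terms. The following is a minimal NumPy sketch of such a combination; the specific formulations (L1 photometric difference, first-order flow smoothness) and the weights `w_photo` and `w_smooth` are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def epe_loss(flow_pred, flow_gt):
    # Average endpoint error: mean Euclidean distance between the
    # predicted and ground-truth flow vectors (arrays of shape (H, W, 2)).
    return np.mean(np.linalg.norm(flow_pred - flow_gt, axis=-1))

def photometric_loss(frame1, frame2_warped):
    # L1 difference between the first frame and the second frame warped
    # by the predicted flow (brightness-constancy assumption).
    return np.mean(np.abs(frame1 - frame2_warped))

def smoothness_loss(flow):
    # First-order smoothness: penalize spatial gradients of the flow field.
    dx = np.mean(np.abs(np.diff(flow, axis=1)))
    dy = np.mean(np.abs(np.diff(flow, axis=0)))
    return dx + dy

def hybrid_loss(flow_pred, flow_gt, frame1, frame2_warped,
                w_photo=0.1, w_smooth=0.1):
    # Weighted sum of the three terms; the weights here are placeholder
    # values for illustration, not tuned values from the paper.
    return (epe_loss(flow_pred, flow_gt)
            + w_photo * photometric_loss(frame1, frame2_warped)
            + w_smooth * smoothness_loss(flow_pred))
```

In practice each term would be computed on framework tensors (e.g., per pyramid level) so gradients flow back through the network; the NumPy version above only illustrates the structure of the combined objective.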
