Abstract

In this paper, we propose BRAFT, an improved deep network architecture based on Recurrent All-Pairs Field Transforms (RAFT) for optical flow estimation. BRAFT extracts per-pixel features and, exploiting the characteristics of optical flow, computes dense visual similarity with a block-wise strength-weakness correlation to build a more precise 4D correlation volume. Trained on a single dataset, the proposed method achieves better results than the original RAFT. Using the end-point error as the performance measure, the error of the proposed method is 1.7% lower than that of RAFT on the KITTI 2015 benchmark and 1.7% lower on MPI Sintel (final). In addition, most of the errors produced by BRAFT are small, so the proposed method is better suited to extracting the edges of moving objects.
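
The abstract does not give implementation details of the block-wise strength-weakness correlation. As background, the following is a minimal sketch of how the all-pairs 4D correlation volume in the RAFT baseline is typically built from per-pixel feature maps; the function name `correlation_volume`, the tensor shapes, and the 1/sqrt(D) scaling are illustrative assumptions, not the authors' implementation.

```python
import torch


def correlation_volume(fmap1, fmap2):
    """Sketch of an all-pairs 4D correlation volume (RAFT-style baseline).

    fmap1, fmap2: per-pixel feature maps of shape (B, D, H, W).
    Returns a volume of shape (B, H, W, H, W) where entry [b, i, j, k, l]
    is the scaled dot product between the feature at pixel (i, j) of
    frame 1 and the feature at pixel (k, l) of frame 2.
    """
    B, D, H, W = fmap1.shape
    f1 = fmap1.view(B, D, H * W)                  # (B, D, HW)
    f2 = fmap2.view(B, D, H * W)                  # (B, D, HW)
    corr = torch.matmul(f1.transpose(1, 2), f2)   # (B, HW, HW) dot products
    corr = corr.view(B, H, W, H, W)
    return corr / D ** 0.5                        # scale by sqrt(feature dim)


# Usage with random features; shapes are illustrative only.
if __name__ == "__main__":
    f1 = torch.randn(1, 256, 46, 62)
    f2 = torch.randn(1, 256, 46, 62)
    print(correlation_volume(f1, f2).shape)  # torch.Size([1, 46, 62, 46, 62])
```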
