Abstract

In this paper, we present a method for generating bird's eye view video from egocentric RGB videos. Working with egocentric views is difficult because the view is highly warped and prone to occlusions. A bird's eye view, on the other hand, has consistent scaling in at least the two dimensions it shows. Moreover, most state-of-the-art systems for tasks such as path prediction are built for bird's eye views of the subjects. We present a deep learning-based approach that translates egocentric RGB images captured from the dashcam of a car into a bird's eye view. This is a view-translation task, and we perform two experiments: the first uses an image-to-image translation method, and the second uses video-to-video translation. We compare our results against homographic transformation; our SSIM values are better by margins of 77% and 14.4%, and our RMSE errors are lower by 40% and 14.6%, for image-to-image and video-to-video translation, respectively. We also visually show the efficacy and limitations of each method, with helpful insights for future research. Compared to previous works that use homography or LIDAR-based 3D point clouds, our approach is more generalizable and does not require expensive equipment.
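As a hedged illustration of the evaluation protocol described above, the sketch below shows how SSIM and RMSE could be computed between a generated bird's eye view frame and its ground truth. The use of scikit-image and the function name are illustrative assumptions, not part of the paper's pipeline.

```python
# Minimal sketch of the SSIM/RMSE comparison described above.
# Assumptions (not from the paper): scikit-image provides the SSIM metric,
# and predicted and ground-truth BEV frames are same-sized RGB uint8 arrays.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate_bev(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Return (SSIM, RMSE) for one predicted vs. ground-truth BEV frame."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    # SSIM over the color image; channel_axis=-1 treats the last dim as channels.
    score = ssim(pred, gt, channel_axis=-1, data_range=255.0)
    # RMSE over all pixels and channels.
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    return score, rmse
```

Higher SSIM and lower RMSE would indicate a generated view closer to the ground-truth bird's eye view.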

Highlights

  • Egocentric videos, commonly referred to as first-person videos, are captured from the point of view of a subject

  • We present an end-to-end method for translating egocentric views from RGB cameras, such as those installed on vehicles, into bird's eye views of the subject vehicle's surrounding environment

  • One of the biggest hurdles is that egocentric views suffer from severe perspective distortion, whereas a bird's eye view has consistent scaling



Introduction

Egocentric videos, commonly referred to as first-person videos, are captured from the point of view of a subject (in our case, an autonomous vehicle). Egocentric videos are easy to capture and are accessible to the vehicle in real time, yet they are notoriously hard for a computer to comprehend and work with. This is because egocentric videos are prone to occlusions, and perspective introduces a significant warping effect that makes objects closer to the camera look inflated. Another drawback of the egocentric view is the nonlinear apparent motion of objects. With advancements in autonomous vehicle technology, it becomes important that we devise a way to overcome the shortcomings of the egocentric perspective while retaining its real-time accessibility [1,2,3].
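To make the homography baseline mentioned in the abstract concrete, the following is a minimal sketch of inverse perspective mapping with OpenCV. The point correspondences and output size are illustrative assumptions; real values depend on the camera's calibration and mounting, which is precisely why such a fixed transform generalizes poorly.

```python
# Minimal sketch of the homographic (inverse perspective mapping) baseline
# that the abstract compares against. The source/destination points below
# are illustrative assumptions; real values come from camera calibration.
import cv2
import numpy as np

def egocentric_to_bev(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    # Four points on the road plane in the dashcam image (assumed trapezoid)...
    src = np.float32([[w * 0.40, h * 0.60],   # top-left of road region
                      [w * 0.60, h * 0.60],   # top-right
                      [w * 0.95, h * 0.95],   # bottom-right
                      [w * 0.05, h * 0.95]])  # bottom-left
    # ...mapped to a rectangle in the bird's eye view.
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (w, h))
```

Because this warp assumes a flat road plane and a fixed camera pose, it distorts anything above the ground plane; a learned view-translation model avoids that assumption.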

