Abstract

This paper proposes a Track-before-Detect framework for multibody motion segmentation (named TbD-SfM). Our contribution relies on a tightly coupled track-before-detect strategy intended to reduce the complexity of existing Multibody Structure-from-Motion approaches. The proposed algorithm variant is designed with a future embedded implementation for dynamic scene analysis in mind, while improving processing-time performance. This generic motion segmentation approach can be transposed to several transportation sensor systems, since no constraints are imposed on the segmented motions (6-DOF model). The tracking scheme is analyzed and its performance is evaluated under thorough experimental conditions, including full-scale driving scenarios from known and publicly available datasets. Results on challenging scenarios, including multiple simultaneous moving objects observed from a moving camera, are reported and discussed.

Highlights

  • The increasing introduction of Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS) into the marketplace is essential in the design of Intelligent Transportation Systems (ITS)

  • Feature points are tracked along sequences of 640 × 480 images acquired at a rate of 15 frames per second

  • 8 frames were processed with 4 sliding windows; an average of 1450 feature points are observed per frame


Summary

Introduction

The increasing introduction of Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS) into the marketplace is essential in the design of Intelligent Transportation Systems (ITS). These areas have shown active development towards unmanned transportation solutions (car autonomy at SAE Level 4). In this context, perception is a critical task, since it provides meaningful, complete and reliable information about the vehicle surroundings [1,2]. Visual Simultaneous Localization and Mapping (VSLAM) methods are well suited for inferring ego-localization while simultaneously reconstructing the environment structure [6]. Another well-known technique for monocular vision applications is Structure-from-Motion (SfM). This method estimates the camera pose from the image motion and the 3D structure of the scene, up to a scale factor. The motions are computed using the SfM formulation [8].
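The two-view geometry underlying SfM can be illustrated with a minimal sketch: given point correspondences between two frames, the normalized eight-point algorithm estimates the fundamental matrix, from which relative pose can later be recovered up to a scale factor. This is a standard building block, not the paper's TbD-SfM pipeline; the synthetic scene and all names below are illustrative assumptions.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix F from N >= 8 correspondences
    (x1[i] <-> x2[i], each of shape (N, 2)) with the normalized
    eight-point algorithm, so that x2h^T F x1h ~= 0."""
    def normalize(pts):
        # Translate the centroid to the origin and scale the mean
        # distance to sqrt(2) (Hartley normalization).
        c = pts.mean(axis=0)
        d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
        s = np.sqrt(2) / d
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: a valid fundamental matrix is singular.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization.
    return T2.T @ F @ T1

# Illustrative synthetic scene: 20 random 3D points viewed by two
# cameras (identity intrinsics), the second one translated and rotated.
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
t = np.array([1.0, 0.0, 0.0])
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
x1 = X[:, :2] / X[:, 2:]          # projection in camera 1
Xc2 = (X - t) @ R.T
x2 = Xc2[:, :2] / Xc2[:, 2:]      # projection in camera 2
F = eight_point(x1, x2)
h = lambda p: np.column_stack([p, np.ones(len(p))])
# Epipolar residuals x2h^T F x1h should vanish on noiseless data.
res = np.abs(np.einsum('ij,jk,ik->i', h(x2), F, h(x1)))
```

With noiseless correspondences the epipolar residuals are numerically zero; in practice the estimate is wrapped in a robust scheme (e.g. RANSAC) to reject outlier tracks.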

