Abstract

In recent years, light detection and ranging (LiDAR) sensors have been widely used in applications such as robotics and autonomous driving. However, LiDAR sensors have relatively low resolution, take considerable time to acquire laser range measurements, and require significant resources to process and store large-scale point clouds. To tackle these issues, many depth image sampling algorithms have been proposed, but their performance is unsatisfactory in complex on-road environments, especially when the sampling rate of the measuring equipment is relatively low. Although region-of-interest (ROI)-based sampling has achieved promising results for LiDAR sampling in on-road environments, the choice of ROI sampling rate has not been thoroughly investigated, which has limited reconstruction performance. To address this problem, this article formulates and solves a budget distribution optimization problem that finds optimal sampling rates according to the characteristics of each region. A simple yet effective mean absolute error (MAE)-aware model of reconstruction error was developed and used to derive optimal sampling rates analytically. In addition, a practical LiDAR sampling framework for autonomous driving was developed. Experimental results demonstrate that the proposed method outperforms previous approaches in both object and overall scene reconstruction performance.
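The abstract does not give the paper's actual MAE-aware error model or its closed-form solution, but the flavor of the budget distribution problem can be illustrated with a minimal sketch. Assume, purely for illustration, a hypothetical per-region error model e_i(n_i) = c_i / n_i, where n_i is the number of samples assigned to region i and c_i reflects how hard that region is to reconstruct. Minimizing the total error subject to a fixed sample budget N then yields, via Lagrange multipliers, an allocation proportional to sqrt(c_i); the function name and model below are assumptions, not the paper's method:

```python
import math

def allocate_budget(c, total):
    """Distribute a fixed sampling budget across regions.

    Illustrative model (NOT the paper's): per-region error
    e_i(n_i) = c_i / n_i. Minimizing sum_i c_i / n_i subject to
    sum_i n_i = total gives, by Lagrange multipliers, the
    closed-form optimum n_i proportional to sqrt(c_i).
    """
    weights = [math.sqrt(ci) for ci in c]
    s = sum(weights)
    return [total * w / s for w in weights]

# Harder regions (larger c_i) receive proportionally more samples.
allocation = allocate_budget([1.0, 4.0, 9.0], total=600)
```

Under this toy model, a region with c_i = 4 gets twice the samples of a region with c_i = 1 (sqrt(4)/sqrt(1) = 2), capturing the intuition that difficult-to-reconstruct ROIs (e.g., objects) deserve a larger share of the fixed measurement budget than flat road surfaces.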
