Abstract

Optical lens systems generally introduce non-linear distortion artifacts that impose significant limitations on the direct interpretation of images. Image processing can correct for these artifacts, but because the required distortion correction is computationally intensive, it is usually performed offline. Offline processing is not an option for image-based applications that operate interactively, however, where the real-time display of distortion-corrected images can be vital. To this end, we propose a new technique to correct for arbitrary geometric lens distortion that uses the parallel processing power of a commercial graphics processing unit (GPU). By offloading the distortion correction to the GPU, we relieve the central processing unit (CPU) of this computationally demanding task. We implemented the full distortion correction algorithm on the GPU, achieving a display rate of over 30 frames/sec for fully processed images of 1024 × 768 pixels without the need for any additional digital image processing hardware.
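The core idea described above, correcting geometric lens distortion by an independent per-pixel inverse mapping, is exactly the kind of embarrassingly parallel work that maps well onto a GPU. The sketch below is not the authors' implementation; it is a minimal CPU illustration, assuming a Brown-style radial distortion model with hypothetical coefficients `k1` and `k2`: for each output (corrected) pixel we compute where it came from in the distorted input image and sample that location.

```python
def undistort(image, k1, k2):
    """Correct radial lens distortion by inverse per-pixel mapping.

    Hypothetical sketch: `image` is a 2-D list of pixel values, and
    (k1, k2) are assumed Brown-model radial distortion coefficients.
    Each output pixel is computed independently, which is what makes
    the algorithm suitable for a GPU fragment program.
    """
    h, w = len(image), len(image[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0  # distortion centre
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Normalised coordinates relative to the distortion centre.
            nx, ny = (x - cx) / cx, (y - cy) / cy
            r2 = nx * nx + ny * ny
            scale = 1.0 + k1 * r2 + k2 * r2 * r2
            # Sample the distorted source pixel (nearest-neighbour here;
            # a GPU would use hardware bilinear texture filtering).
            sx = int(round(nx * scale * cx + cx))
            sy = int(round(ny * scale * cy + cy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = image[sy][sx]
    return out
```

On a GPU, the double loop disappears: the same body runs once per fragment, and the input image is bound as a texture so the sampling step uses filtered hardware lookups, which is what makes real-time rates at 1024 × 768 feasible.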
