Abstract
Region covariance is a robust feature descriptor that combines even the simplest image features, such as intensity and gradients, into a well-performing descriptor for image regions. Beyond its robustness, it requires many identical, computationally heavy operations on different parts of the input data, which makes it a good candidate for parallel execution. In this manuscript, we present a real-time parallel implementation of the region covariance descriptor which, to the best of our knowledge, is the first in the literature. We benchmarked it against existing implementations and achieved a 6x speedup over a vectorized CPU-parallel implementation, providing the throughput required for real-time processing. Additionally, we improved the existing integral image calculation method on CUDA, reducing memory usage by 50% while achieving the fastest computation speed among existing solutions, and we improved the covariance matrix comparison step by using a distance metric that is lightweight to compute and easy to implement.
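The descriptor itself is the covariance matrix of per-pixel feature vectors taken over a region. The following is a minimal NumPy sketch of that idea (not the paper's CUDA implementation); the feature set (pixel coordinates, intensity, gradient magnitudes) and the function name region_covariance are illustrative assumptions.

```python
import numpy as np

def region_covariance(gray, x0, y0, w, h):
    """Covariance descriptor of the rectangular region (x0, y0, w, h)."""
    # Per-pixel features: x, y, intensity, |Ix|, |Iy|
    gy, gx = np.gradient(gray.astype(np.float64))
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]
    feats = np.stack([
        xs.ravel(),
        ys.ravel(),
        gray[y0:y0 + h, x0:x0 + w].ravel(),
        np.abs(gx[y0:y0 + h, x0:x0 + w]).ravel(),
        np.abs(gy[y0:y0 + h, x0:x0 + w]).ravel(),
    ])                    # shape: (5, n_pixels)
    return np.cov(feats)  # 5x5 covariance matrix describing the region

# Example: descriptor of a 32x32 patch of a random test image
img = np.random.rand(128, 128)
C = region_covariance(img, 16, 16, 32, 32)
print(C.shape)  # (5, 5)
```

Because the same per-pixel feature extraction and summation are repeated over many overlapping regions, the computation maps naturally onto data-parallel hardware, which is the property the manuscript exploits.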