Abstract

Rapid computation of the Hough transform is necessary in many computer vision applications. One of the major approaches to fast Hough transform computation is based on using a small random sample of the data set rather than the full set. Two different algorithms within this family are the randomized Hough transform (RHT) and the probabilistic Hough transform (PHT). There have been contradictory views on the relative merits and drawbacks of the RHT and the PHT. In this paper, a unified theoretical framework for analyzing the RHT and the PHT is established. The performance of the two algorithms is characterized both theoretically and experimentally. Clear guidelines are provided for selecting the algorithm that is most suitable for a given application. We show that, when considering the basic algorithms, the RHT is better suited to the analysis of high-quality, low-noise edge images, while for the analysis of noisy, low-quality images the PHT should be selected.
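To illustrate the sampling idea behind this family of methods (not the authors' exact formulation), the following is a minimal RHT-style sketch for straight-line detection: random pairs of edge points are mapped to (rho, theta) line parameters and votes are accumulated sparsely in a dictionary. The sample count, bin sizes, and vote threshold are illustrative assumptions only.

```python
import math
import random
from collections import defaultdict

def rht_lines(edge_points, n_samples=2000, rho_step=1.0,
              theta_step=math.radians(1.0), min_votes=20):
    """Sketch of a randomized Hough transform for lines.

    edge_points: list of (x, y) tuples from an edge detector.
    Returns a list of (rho, theta, votes) for accumulator cells that
    received at least min_votes votes (threshold is an assumption).
    """
    acc = defaultdict(int)
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = random.sample(edge_points, 2)
        if (x1, y1) == (x2, y2):
            continue  # degenerate pair, no unique line
        # Normal form of the line through the two points:
        #   x*cos(theta) + y*sin(theta) = rho
        dx, dy = x2 - x1, y2 - y1
        theta = math.atan2(dy, dx) + math.pi / 2.0  # normal direction
        # Canonicalize theta to [0, pi) so both point orderings vote
        # in the same accumulator cell.
        if theta >= math.pi:
            theta -= math.pi
        elif theta < 0:
            theta += math.pi
        rho = x1 * math.cos(theta) + y1 * math.sin(theta)
        # Sparse accumulation: only cells that actually receive votes exist.
        key = (round(rho / rho_step), round(theta / theta_step))
        acc[key] += 1
    return [(k * rho_step, t * theta_step, v)
            for (k, t), v in acc.items() if v >= min_votes]
```

In contrast, a PHT-style variant would draw a random subset of the edge points once and then run the conventional one-point-to-many-cells voting on that subset; the key practical difference highlighted in the paper is how the two sampling strategies behave as image quality and noise levels change.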
