Abstract

Innovations and applications in automation, robotics, and computer vision are enhanced by image processing techniques such as object detection, face recognition, and image compression. This work uses line detection by the line Hough transform (LHT). The LHT is a robust technique that produces clear output for a given edge-detected image even in the presence of appreciable noise. Normally, the LHT requires a large memory for the rho and theta calculations; to reduce this memory requirement, the rho and theta calculations are implemented here with CORDIC-algorithm-based functional units. A MATLAB model of the LHT with variable word length is developed. The number of bits used to represent the pixel values in the edge-detected image is varied from 10 bits to 20 bits to trade off detection quality against the time required. The simulation gives a computation time of 18.1029 s for 10 bits, 18.2982 s for 15 bits, and 19.22 s for 20 bits. The minimum word length of 10 bits is therefore adopted, and the same architecture is implemented in TSMC 90 nm technology. This implementation requires 4174 MB of memory and 5 ms of computation time, compared with 8384 MB and 2 ms previously. Thus the CORDIC-based LHT uses less memory, at the cost of a somewhat longer computation time, than the previous implementations.
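The abstract does not give the implementation details, but the core operation it describes — evaluating rho = x·cos(theta) + y·sin(theta) for each edge pixel using CORDIC rotations instead of multipliers or lookup tables — can be sketched as follows. This is a minimal illustrative model (function name, iteration count, and floating-point arithmetic are choices made here, not taken from the paper; a hardware unit would use fixed-point shifts and adds):

```python
import math

def cordic_rho(x, y, theta, iterations=16):
    """Illustrative CORDIC rotation-mode sketch of rho = x*cos(theta) + y*sin(theta).

    Rotating the vector (x, y) through -theta leaves rho in the x
    component, scaled by the constant CORDIC gain K, which is divided
    out at the end. Each iteration needs only shifts, adds, and a
    table of the elementary angles atan(2^-k), which is what makes the
    scheme attractive for small hardware functional units.
    Valid for |theta| within the CORDIC convergence range (about 99.9 deg).
    """
    # Pre-computed elementary angles and the accumulated gain K.
    angles = [math.atan(2.0 ** -k) for k in range(iterations)]
    gain = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * k)) for k in range(iterations))

    z = -theta  # residual angle still to be rotated through
    for k in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotation direction this step
        x, y = x - d * y * 2.0 ** -k, y + d * x * 2.0 ** -k
        z -= d * angles[k]
    return x / gain                          # undo the CORDIC gain
```

In an accumulator-based LHT, this routine would be called once per edge pixel per quantized theta, with the resulting rho used to index the (rho, theta) vote array.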
