Abstract

Image datasets are growing rapidly alongside advances in imaging technology, so efficient solutions are needed to achieve high, real-time performance when processing large image datasets. Parallel processing is an increasingly attractive way to improve performance, both on existing distributed architectures and on commodity computers, offering speedup, efficiency, reliability, incremental growth, and flexibility. We present such an alternative and demonstrate the effectiveness of these methods by accelerating computations on a small cluster of PCs compared to a single CPU. Our paper focuses on applying edge detection to large image datasets, a fundamental and challenging task in image processing and computer vision. Five techniques, namely Sobel, Prewitt, LoG, Canny, and Roberts, are compared in a simple experimental setup that uses OpenCV library functions for image pixel manipulation. A Gaussian blur is applied first to reduce high-frequency components, suppressing the noise to which edge detection is sensitive. Overall, this work is part of a broader investigation of image segmentation methods on large image datasets, but the results presented here are relevant in their own right and show the effectiveness of our approach.
