Abstract
Today, images and videos are everywhere; they come from cameras, mobile phones and other devices, and they are used to depict objects and scenes in a wide range of settings (airports, hospitals, public areas, sporting events, etc.). This makes image and video processing an essential tool across many computer vision domains. However, the performance of these algorithms is limited by their high computational intensity and energy consumption. In this work, we propose a new framework that allows users to select the computing units (CPU and/or GPU) in a smart and efficient way when processing a single image, multiple images, a single video or multiple videos in real time. The framework assigns computation to the CPU and/or GPU depending on the type of media to be processed and the complexity of the algorithm. It provides several image and video functions on GPU, such as silhouette extraction, interest point extraction, edge detection, and sparse and dense optical flow estimation. These functions are exploited in different applications such as vertebra segmentation in X-ray and MR images, gaze estimation, and real-time event detection and localization. Experimental results obtained by applying the framework to different use case applications show speedups ranging from 5× to 116× compared with sequential CPU implementations. In addition to this performance gain, the parallel and heterogeneous implementations offer lower power consumption as a result of the faster processing.
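To make the selection idea concrete, the following C++ sketch shows the kind of CPU/GPU assignment heuristic the abstract describes, driven by media type and algorithm complexity. It is a minimal illustration under assumed names and thresholds (selectDevice, algorithmComplexity, the frame-count cutoff), not the authors' actual API.

    #include <cstddef>

    enum class Device { CPU, GPU, Hybrid };

    enum class MediaType { SingleImage, MultipleImages, SingleVideo, MultipleVideos };

    // Rough, illustrative cost model: heavier algorithms and larger workloads
    // favor the GPU, while small single-image tasks stay on the CPU to avoid
    // host-device transfer overhead. Thresholds here are placeholders.
    Device selectDevice(MediaType media, double algorithmComplexity, std::size_t frameCount) {
        if (media == MediaType::SingleImage && algorithmComplexity < 1.0)
            return Device::CPU;        // cheap task: transfer cost would dominate
        if (media == MediaType::MultipleVideos || frameCount > 1000)
            return Device::Hybrid;     // split work across CPU and GPU units
        return Device::GPU;            // default for compute-heavy kernels
    }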