Abstract

This paper introduces a new super-resolution algorithm based on machine learning, along with a novel hybrid implementation for next-generation mobile devices. The proposed super-resolution algorithm employs a two-dimensional polynomial regression method that uses only the input image's properties for the learning task. Model selection is applied to determine the optimal polynomial degree, with regularization used to avoid overfitting. Although it is widely believed that machine learning algorithms are not suitable for real-time implementation, this paper shows that specific hypothesis representations can indeed be integrated into real-time mobile applications. To this end, the growing availability of GPUs in modern mobile devices is exploited. More precisely, by using the mobile GPU as a co-processor in a hybrid pipelined implementation, a significant performance speedup is achieved along with superior quantitative results.
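The abstract does not give the exact formulation, but the core idea it describes, fitting a two-dimensional polynomial to local image data with a regularized least-squares solution and selecting the polynomial degree via model selection, can be illustrated with a minimal sketch. The function names (poly_features, fit_patch, select_degree, upscale_patch), the ridge penalty, and the degree-selection criterion below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def poly_features(xs, ys, degree):
    """Build a 2-D polynomial design matrix: all terms x^i * y^j with i + j <= degree."""
    cols = [xs**i * ys**j for i in range(degree + 1)
                          for j in range(degree + 1 - i)]
    return np.stack(cols, axis=-1)

def fit_patch(patch, degree, lam):
    """Ridge-regularized least-squares fit of a 2-D polynomial to one image patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    X = poly_features(xs.ravel().astype(float), ys.ravel().astype(float), degree)
    t = patch.ravel().astype(float)
    # Closed-form ridge solution: (X^T X + lam * I)^{-1} X^T t
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ t)

def select_degree(patch, degrees=(1, 2, 3, 4), lam=1e-3):
    """Pick the degree minimizing a regularized reconstruction error (a simple proxy
    for the model-selection step described in the abstract)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best_deg, best_err = degrees[0], np.inf
    for d in degrees:
        w_coef = fit_patch(patch, d, lam)
        X = poly_features(xs.ravel().astype(float), ys.ravel().astype(float), d)
        residual = np.mean((X @ w_coef - patch.ravel()) ** 2)
        err = residual + lam * np.sum(w_coef ** 2)   # penalize overly complex models
        if err < best_err:
            best_deg, best_err = d, err
    return best_deg

def upscale_patch(patch, factor, degree, lam=1e-3):
    """Evaluate the fitted polynomial on a finer grid to super-resolve the patch."""
    w_coef = fit_patch(patch, degree, lam)
    h, w = patch.shape
    ys, xs = np.mgrid[0:h:1.0 / factor, 0:w:1.0 / factor]
    X_hi = poly_features(xs.ravel(), ys.ravel(), degree)
    return (X_hi @ w_coef).reshape(ys.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lowres = rng.random((8, 8))              # stand-in for a low-resolution patch
    deg = select_degree(lowres)
    hires = upscale_patch(lowres, factor=2, degree=deg)
    print(deg, hires.shape)                  # chosen degree and (16, 16) output
```

In a real pipeline the closed-form ridge solve and the polynomial evaluation on the fine grid are independent per patch, which is what makes a hybrid CPU/GPU pipelined implementation attractive: the per-patch fits and evaluations can be offloaded to the mobile GPU while the CPU handles scheduling and I/O.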
