Abstract

Efficient performance prediction at the source code level is essential for reducing software development turnaround time. In this paper, we introduce a new prediction model that combines several machine learning techniques, including k-nearest neighbors (KNN), clustering, similarity measures, and sample and attribute weighting, with multiple linear regression to predict the execution time of Digital Signal Processing (DSP) software at the source code level. Prediction at the source code level tends to under-predict the performance of some testing samples and over-predict that of others. We therefore propose a new algorithm, called the MAX/MIN algorithm, to select the best predicted execution time. To validate the new model, we experimentally measure the execution time of a set of functions selected from the PHY DSP benchmark, running them on a TI C64 DSP processor. The average absolute relative prediction error between the execution time computed by the new model and the actual measured execution time is observed to be less than 10%.
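The abstract does not give the details of the prediction pipeline or of the MAX/MIN selection rule, but the overall idea can be illustrated with a minimal, self-contained sketch. The following is hypothetical throughout: it assumes a single source-level feature per sample, a plain unweighted KNN regressor, ordinary least-squares regression, and a MAX/MIN rule that simply picks the maximum candidate when under-prediction is expected and the minimum when over-prediction is expected. None of these specifics come from the paper.

```python
# Hypothetical sketch of the combined prediction scheme described in the
# abstract: produce candidate execution-time predictions with KNN and with
# linear regression, then apply a MAX/MIN-style selection between them.
# Feature choice, weighting, and the exact selection rule are assumptions.

def knn_predict(train, x, k=3):
    """Mean execution time of the k training samples nearest to feature x."""
    nearest = sorted(train, key=lambda s: abs(s[0] - x))[:k]
    return sum(t for _, t in nearest) / len(nearest)

def linreg_predict(train, x):
    """Ordinary least-squares fit of execution time on one feature."""
    n = len(train)
    mx = sum(f for f, _ in train) / n
    my = sum(t for _, t in train) / n
    sxx = sum((f - mx) ** 2 for f, _ in train)
    sxy = sum((f - mx) * (t - my) for f, t in train)
    slope = sxy / sxx
    return my + slope * (x - mx)

def max_min_select(candidates, mode):
    """MAX/MIN-style choice among candidate predictions (assumed rule):
    take the maximum when under-prediction is expected, the minimum when
    over-prediction is expected."""
    return max(candidates) if mode == "max" else min(candidates)

# Toy training data: (source-level feature, measured execution time).
train = [(100, 210.0), (200, 400.0), (300, 615.0), (400, 805.0)]
x = 250
candidates = [knn_predict(train, x), linreg_predict(train, x)]
print(max_min_select(candidates, "max"))
```

In this sketch each regressor contributes one candidate per test sample, and the selection step arbitrates between them; the paper's actual model additionally applies clustering, similarity, and weighting to refine the candidates.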
