This work presents a machine-learning approach to improving loop tiling in compute-intensive programs. Loop tiling is a key optimization that improves performance by enhancing data locality and reducing cache misses, but conventional methods often struggle to determine the optimal tile size, a parameter that strongly affects program performance. The proposed technique combines a Multi-Output Generalized Regression Neural Network (MOGRNN) with Linear Regression to predict optimal tile sizes for diverse computational workloads. The study collects an extensive dataset from 22 benchmark programs covering a wide range of computational patterns and problem sizes, enriched with both static and dynamic program features. Through careful preprocessing and a dual-model analysis, the approach captures both the linear and the more intricate non-linear relationships present in the data. Extensive evaluation on an Intel Core i7 CPU shows that the method improves prediction accuracy for optimal tile sizes and enhances overall program performance. The approach offers a promising direction for further research in code optimization.
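To make the tile-size parameter concrete, the following minimal C sketch shows loop tiling applied to a matrix multiplication; it is an illustration rather than a kernel taken from the paper, and the function name `matmul_tiled` and the `TILE` value of 32 are assumptions. The tile size controls how much data each block of iterations touches, and choosing it well is exactly the problem the proposed model addresses.

```c
#include <stddef.h>

/* Illustrative tile size; the paper's model predicts an optimal value
 * per program and problem size rather than using a fixed constant. */
#define TILE 32

/* Tiled (blocked) matrix multiplication: C += A * B for n x n matrices
 * stored in row-major order. C is assumed to be zero-initialized. */
void matmul_tiled(size_t n, const double *A, const double *B, double *C)
{
    for (size_t ii = 0; ii < n; ii += TILE)
        for (size_t kk = 0; kk < n; kk += TILE)
            for (size_t jj = 0; jj < n; jj += TILE)
                /* Process one TILE x TILE block at a time so the operand
                 * sub-blocks stay resident in cache, reducing misses. */
                for (size_t i = ii; i < ii + TILE && i < n; i++)
                    for (size_t k = kk; k < kk + TILE && k < n; k++)
                        for (size_t j = jj; j < jj + TILE && j < n; j++)
                            C[i * n + j] += A[i * n + k] * B[k * n + j];
}
```

A tile that is too large overflows the cache and reintroduces misses, while one that is too small wastes loop overhead, which is why the optimal value depends on both the program's access pattern and the target machine.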