Abstract
In this paper, we investigate reinforcement learning as an approach for selecting code variants. Our approach is based on the observation that code variants can usually be converted into one another by code transformations. Actor-critic proximal policy optimization is identified as a suitable reinforcement learning algorithm. To study its applicability, a software framework is implemented and used to perform experiments on three different hardware platforms, using a class of explicit solution methods for systems of ordinary differential equations as an example application.

Keywords: Reinforcement learning, Proximal policy optimization, Autotuning, Code variant selection, ODE methods, PIRK methods
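To make the stated approach concrete, the following is a minimal sketch of how actor-critic proximal policy optimization could drive variant selection: the agent picks one of several code variants, the reward is the negative measured runtime, and the clipped PPO objective updates a shared actor-critic network. This is not the paper's framework; PyTorch, the network sizes, and the measure_runtime placeholder (standing in for an actual benchmark harness) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

N_VARIANTS = 8   # hypothetical number of code variants
STATE_DIM = 4    # hypothetical problem/platform feature vector size

def measure_runtime(variant: int, state: torch.Tensor) -> float:
    # Placeholder: stands in for compiling and timing the chosen variant.
    return float(variant + 1) * 0.01 * (1.0 + state.sum().item() ** 2)

class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh())
        self.policy = nn.Linear(64, N_VARIANTS)  # actor head: logits over variants
        self.value = nn.Linear(64, 1)            # critic head: state value

    def forward(self, s):
        h = self.body(s)
        return self.policy(h), self.value(h).squeeze(-1)

net = ActorCritic()
opt = torch.optim.Adam(net.parameters(), lr=3e-4)
CLIP = 0.2

for update in range(200):
    # Collect a batch of one-step episodes (variant selection is bandit-like here).
    states = torch.rand(32, STATE_DIM)
    with torch.no_grad():
        logits, values = net(states)
        dist = torch.distributions.Categorical(logits=logits)
        actions = dist.sample()
        old_logp = dist.log_prob(actions)
    rewards = torch.tensor([-measure_runtime(a.item(), s)
                            for a, s in zip(actions, states)])
    adv = rewards - values  # one-step advantage estimate from the critic
    adv = (adv - adv.mean()) / (adv.std() + 1e-8)

    # Several epochs over the batch with the PPO clipped surrogate objective,
    # plus value-function and entropy-bonus terms.
    for _ in range(4):
        logits, values = net(states)
        dist = torch.distributions.Categorical(logits=logits)
        ratio = torch.exp(dist.log_prob(actions) - old_logp)
        surr = torch.min(ratio * adv,
                         torch.clamp(ratio, 1 - CLIP, 1 + CLIP) * adv)
        loss = (-surr.mean()
                + 0.5 * (values - rewards).pow(2).mean()
                - 0.01 * dist.entropy().mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Over training, the policy shifts probability mass toward the variants with the lowest measured runtimes; in a real autotuning setting the state would encode problem size and platform characteristics rather than random features.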