Application mapping is an early-stage design step aimed at improving the performance of Networks-on-Chip (NoCs). Mapping is an NP-hard problem, and solving it with traditional supervised neural networks requires a massive amount of high-quality labeled data. In this article, a reinforcement learning–based neural framework is proposed to learn heuristics for the application mapping problem. The proposed reinforcement learning–based mapping algorithm (RL-MAP) has actor and critic networks. The actor is a policy network that produces mapping sequences; the critic estimates the communication cost of these sequences, and the actor updates its policy distribution in the direction suggested by the critic. RL-MAP is trained without labeled data to predict permutations of the cores that minimize the overall communication cost. The solutions are then further improved with the 2-opt local search algorithm. The performance of RL-MAP is compared with several well-known heuristic algorithms, the Neural Mapping Algorithm (NMA), and the message-passing neural network–pointer network-based genetic algorithm (MPN-GA). Results show that RL-MAP improves considerably on the heuristic algorithms in both communication cost and runtime. The communication cost of the solutions generated by RL-MAP is nearly equal to that of MPN-GA and 4.2% lower than that of NMA, while requiring less runtime.
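As an illustrative sketch of the final refinement step only (not the authors' implementation), the Python code below applies 2-opt local search, read here as pairwise swaps of tile assignments, which is the common interpretation for mapping problems, to a core-to-tile permutation on a 2D mesh. The communication-cost model (traffic volume weighted by Manhattan hop distance) and all names are assumptions for illustration; the paper's exact cost formulation may differ.

```python
import numpy as np

def comm_cost(perm, traffic, mesh_w):
    """Communication cost of a mapping: sum over core pairs of traffic
    volume times the Manhattan hop distance between their tiles.
    perm[c] is the tile index assigned to core c on a mesh of width
    mesh_w; traffic is an (assumed symmetric) n x n volume matrix."""
    cost = 0.0
    n = len(perm)
    for i in range(n):
        for j in range(i + 1, n):
            if traffic[i, j] == 0:
                continue
            ri, ci = divmod(perm[i], mesh_w)
            rj, cj = divmod(perm[j], mesh_w)
            cost += traffic[i, j] * (abs(ri - rj) + abs(ci - cj))
    return cost

def two_opt(perm, traffic, mesh_w):
    """Refine a mapping by swapping the tile assignments of two cores
    whenever the swap lowers the communication cost; repeat until no
    improving swap exists, i.e., a local optimum is reached."""
    perm = list(perm)
    best = comm_cost(perm, traffic, mesh_w)
    improved = True
    while improved:
        improved = False
        for i in range(len(perm) - 1):
            for j in range(i + 1, len(perm)):
                perm[i], perm[j] = perm[j], perm[i]
                cand = comm_cost(perm, traffic, mesh_w)
                if cand < best:
                    best, improved = cand, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo swap
    return perm, best

# Example: 4 cores on a 2x2 mesh with a made-up traffic matrix.
traffic = np.array([[0, 5, 0, 2],
                    [5, 0, 3, 0],
                    [0, 3, 0, 4],
                    [2, 0, 4, 0]])
mapping, cost = two_opt([0, 1, 2, 3], traffic, mesh_w=2)
```

This sketch uses a first-improvement strategy (accept the first cost-reducing swap and rescan); a best-improvement variant would evaluate all swaps per pass and apply the single best one, trading more evaluations per pass for potentially fewer passes. Either way the loop terminates, since the cost strictly decreases and the set of permutations is finite.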