Abstract

Reinforcement learning (RL) is one of the three fundamental paradigms of machine learning and has driven substantial progress toward general-purpose learning systems. However, simulating agent-environment interactions in RL models on conventional electronic computers consumes tremendous computing resources, posing a significant challenge to the efficiency of RL. Here, we propose a universal framework that uses a photonic integrated circuit (PIC) to simulate these interactions and improve algorithmic efficiency. High-parallelism, high-precision on-chip optical interaction calculations are implemented with the assistance of link calibration in the hybrid-architecture PIC. By introducing similarity information into the reward function of the RL model, PIC-RL successfully accomplishes a perovskite materials synthesis task within a 3472-dimensional state space, yielding a notable 56% improvement in efficiency. Our results validate the effectiveness of simulating RL algorithm interactions on the PIC platform, highlighting its potential to boost computing power for large-scale and sophisticated RL tasks.
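The abstract does not specify the form of the similarity-augmented reward; as a minimal sketch, one plausible shaping scheme adds a weighted similarity term (here, cosine similarity between the current state and a target state) to the task reward. The function name, the choice of cosine similarity, and the `weight` parameter are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def similarity_shaped_reward(base_reward, state, target_state, weight=0.5):
    """Illustrative reward shaping: augment the task reward with a
    similarity term between the current state and a target state.

    The similarity metric (cosine) and weighting used here are
    assumptions; the paper's actual scheme is not given in the abstract.
    """
    state = np.asarray(state, dtype=float)
    target = np.asarray(target_state, dtype=float)
    # Cosine similarity in [-1, 1]; small epsilon guards against zero norms.
    sim = float(state @ target /
                (np.linalg.norm(state) * np.linalg.norm(target) + 1e-12))
    return base_reward + weight * sim
```

Such shaping rewards states that resemble a known good outcome (e.g. a desired material composition), which can reduce the number of environment interactions needed in a high-dimensional state space.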
