Reinforcement learning (RL) achieves outstanding performance in various fields but requires large amounts of data to do so. While exploration techniques address this data requirement, conventional exploration methods have limitations: complex hardware implementation and a significant hardware burden. Herein, an in‐memory RL system leveraging the intrinsic 1/f noise of synaptic ferroelectric field‐effect transistors (FeFETs) for efficient exploration is proposed. The electrical characteristics of the fabricated FeFETs, including low‐power operation capability, verify their suitability for neuromorphic systems. The proposed system achieves performance comparable to the conventional exploration method without additional circuits. The intrinsic 1/f noise of the FeFETs facilitates efficient exploration and offers significant advantages: efficient hardware implementation and simple adjustment of the 1/f noise level for optimal performance. This approach effectively addresses the challenges of conventional exploration methods. The operating mechanism of the exploration method utilizing the 1/f noise is systematically analyzed. The proposed in‐memory RL system demonstrates robustness and reliability against device‐to‐device variation and the initial conductance distribution. This work provides further insight into exploration methods for RL, paving the way for advanced in‐memory RL systems.
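The core idea (noise-driven exploration in place of an explicit scheme such as ε-greedy) can be illustrated with a minimal software sketch. The code below is an assumption-laden stand-in, not the paper's implementation: a synthetic 1/f (pink) noise trace, generated by spectral shaping of white noise, models the FeFET read-current fluctuations, and actions are chosen greedily over the noise-perturbed Q-values. The class name `NoisyGreedyAgent` and the `noise_scale` parameter are hypothetical.

```python
import numpy as np

def pink_noise(n_steps, rng):
    """One temporal 1/f (pink) noise trace, generated by shaping white
    noise in the frequency domain so that power ~ 1/f."""
    white = rng.standard_normal(n_steps)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_steps)
    freqs[0] = freqs[1]           # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)    # amplitude ~ 1/sqrt(f)  ->  power ~ 1/f
    trace = np.fft.irfft(spectrum, n_steps)
    return trace / trace.std()

class NoisyGreedyAgent:
    """Greedy action selection on Q-values perturbed by per-action 1/f
    noise traces (a software stand-in for the FeFETs' intrinsic noise)."""

    def __init__(self, n_actions, n_steps, noise_scale=0.5, seed=0):
        rng = np.random.default_rng(seed)
        # One independent noise trace per action (per synaptic device).
        self.noise = np.stack([pink_noise(n_steps, rng)
                               for _ in range(n_actions)])
        self.noise_scale = noise_scale
        self.t = 0

    def act(self, q_values):
        # The noise read out at step t perturbs each Q-value, so the
        # argmax occasionally picks a non-greedy action -- exploration
        # with no extra circuitry or epsilon schedule.
        noisy_q = np.asarray(q_values) + self.noise_scale * self.noise[:, self.t]
        self.t += 1
        return int(np.argmax(noisy_q))
```

In this sketch, tuning `noise_scale` plays the role the abstract attributes to adjusting the FeFETs' 1/f noise level: larger values explore more, and a value of zero recovers purely greedy behavior.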