The acceleration of urbanization has made traffic congestion increasingly severe, creating an urgent need for effective traffic signal control strategies that improve road efficiency. This paper proposes an adaptive traffic signal control method based on offline reinforcement learning (offline RL) to address the limitations of traditional fixed-time signal control. By monitoring key parameters such as real-time traffic flow and queue length, the proposed method dynamically adjusts signal phases and their durations in response to rapidly changing traffic conditions. At the core of this work is SD3-Light, a model that leverages offline RL to predict optimal signal phase sequences and their durations from real-time intersection state features. In addition, this paper constructs a comprehensive offline dataset, which enables the model to be trained without online interaction with live traffic, thereby reducing costs and improving the model's generalization ability. Experiments on real-world traffic datasets demonstrate that the proposed method reduces average travel time, and comparisons with several existing methods highlight the clear advantages of our approach in enhancing traffic management efficiency.
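To make the control setting concrete, the sketch below shows the decision interface such a method operates on: a policy maps intersection state features (here, queue lengths per phase) to a signal phase and a green duration. The state representation, phase names, and the queue-proportional rule are illustrative assumptions standing in for the learned SD3-Light policy, not the paper's actual model.

```python
from dataclasses import dataclass


@dataclass
class IntersectionState:
    # Hypothetical state feature: total queue length (vehicles)
    # served by each candidate signal phase.
    queues: dict


def select_phase(state, min_green=10, sec_per_vehicle=2, max_green=60):
    """Illustrative stand-in for a learned policy: serve the phase with
    the longest queue, with a green time scaled to that queue and
    clipped to [min_green, max_green] seconds."""
    phase = max(state.queues, key=state.queues.get)
    duration = min(max_green, max(min_green, sec_per_vehicle * state.queues[phase]))
    return phase, duration


state = IntersectionState(queues={"NS-through": 18, "EW-through": 7,
                                  "NS-left": 3, "EW-left": 1})
phase, duration = select_phase(state)
print(phase, duration)  # NS-through 36
```

An adaptive controller re-evaluates this decision at every cycle as queues change, whereas a fixed-time plan keeps the same phase sequence and durations regardless of demand; a learned offline-RL policy replaces the hand-written rule with one trained on logged data.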