This paper studies a dynamic reconfigurable intelligent surface (RIS)-assisted broadcast communication system in which a transmitter broadcasts information via a RIS to multiple receivers with time-varying locations. The goal is to minimize the maximum bit error rate (BER) across the receivers by optimizing the RIS phase shifts, subject to a given discrete phase shift constraint. Unlike most existing works, which require channel state information (CSI), our approach needs only the receivers' location information, because instantaneous CSI is very difficult to estimate in RIS-assisted communications and statistical CSI does not apply to the dynamic scenario. The resulting optimization problem is hard to tackle: without CSI, the BERs at the receivers cannot be computed from classical CSI-dependent analytical expressions, and exhaustive search for the optimal discrete phase shifts is computationally prohibitive. To address this, the optimization problem is reformulated as a Markov decision process (MDP), in which the BERs are measured by the Monte Carlo method, and a deep reinforcement learning (DRL) approach is proposed to solve it. Furthermore, to handle the high-dimensional action space of the MDP, a novel action-composition-based proximal policy optimization (PPO) algorithm is developed. Simulation results verify the effectiveness of the proposed PPO-based DRL approach.
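Since the BERs are measured by the Monte Carlo method rather than computed from closed-form CSI-dependent expressions, the estimator can be sketched as follows. This is a minimal illustration only: the `bpsk_awgn` link below (BPSK over AWGN at an assumed SNR) is a hypothetical stand-in for the actual end-to-end RIS channel simulator used in the paper.

```python
import numpy as np

def monte_carlo_ber(simulate_rx, num_bits=100_000, seed=0):
    """Estimate BER by sending random bits through a simulated link.

    simulate_rx: callable mapping a {0,1} bit array to detected bits
                 (a hypothetical stand-in for the end-to-end RIS link).
    """
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=num_bits)   # random payload bits
    detected = simulate_rx(bits)               # simulated transmission
    return np.mean(bits != detected)           # empirical error rate

def bpsk_awgn(bits, snr_db=4.0, rng=np.random.default_rng(1)):
    """Illustrative channel: BPSK over AWGN at an assumed Eb/N0."""
    symbols = 1.0 - 2.0 * bits                 # map 0 -> +1, 1 -> -1
    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
    received = symbols + noise_std * rng.normal(size=bits.size)
    return (received < 0).astype(int)          # hard-decision detection

ber = monte_carlo_ber(bpsk_awgn)
```

In the min-max objective, this estimator would be run once per receiver, with the largest of the resulting BER estimates driving the reward.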
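The motivation for action composition is that a RIS with N elements and K discrete phase levels has a joint action space of size K**N, far too large to enumerate in a single categorical policy head. A minimal sketch of the factored-sampling idea (the composition step only, not the full PPO update; the per-element `logits` array is an assumed policy-network output, not the paper's architecture) is:

```python
import numpy as np

def sample_composed_action(logits, rng):
    """Sample a joint RIS phase configuration from a factored policy.

    logits: (N, K) array of per-element scores for N RIS elements and
            K discrete phase levels (an assumed policy-network output).
    Returns the joint action (one phase index per element) and its joint
    log-probability, the sum of per-element log-probabilities.
    """
    # Per-element softmax (numerically stabilized).
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Compose the joint action element by element instead of sampling
    # from the K**N-sized joint action space directly.
    action = np.array([rng.choice(probs.shape[1], p=p) for p in probs])
    log_prob = np.log(probs[np.arange(len(action)), action]).sum()
    return action, log_prob

rng = np.random.default_rng(0)
N, K = 8, 4   # e.g. 8 RIS elements with 2-bit (4-level) phase shifts
action, log_prob = sample_composed_action(rng.normal(size=(N, K)), rng)
```

The joint log-probability obtained this way is exactly what a PPO-style importance ratio needs, while the policy output grows as N*K rather than K**N.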