There is a growing demand for redirected walking (RDW) techniques and their applications. RDW controllers are commonly used to select appropriate RDW methods and determine the amount of manipulation to apply. There are three types of RDW controllers: scripted controllers, generalized controllers, and predictive controllers. The scripted controller pre-scripts the mapping between the real and virtual environments. The generalized controller applies RDW methods and manipulation quantities according to a predefined procedure based on the user's position relative to the real space. This approach can be reused in any environment; however, it is not fully optimized. The predictive controller predicts the user's future path from the user's behavior and plans RDW manipulation accordingly. This approach is expected to be both effective and versatile; however, it has not yet been sufficiently developed. This paper proposes a novel RDW controller based on reinforcement learning (RL) that offers both planning capability and versatility. Our simulation experiments indicate that, in real environments containing many obstacles, the proposed method reduces the number of reset manipulations, a key indicator of RDW controller effectiveness, compared with a generalized controller. However, the experimental results also showed that the gain output by the proposed method oscillates. A user study showed that the proposed RDW controller reduces the number of resets compared with the conventional generalized controller, and that no adverse effects such as cybersickness were observed in association with the oscillation of the output gain. The simulation and user studies demonstrate that the proposed RL-based RDW controller outperforms existing generalized controllers and can be applied to real users.