Multi-objective evolutionary algorithms (MOEAs) are widely employed to solve multi-objective optimization problems (MOPs). However, the choice of crossover operator strongly affects an algorithm's ability to balance population diversity and convergence. To improve performance, this paper proposes a multi-state reinforcement-learning-based multi-objective evolutionary algorithm, MRL-MOEA, which uses reinforcement learning (RL) to select crossover operators. In MRL-MOEA, a state model is built from the distribution of individuals in the objective space, and different crossover operators are designed for the transitions between states. Moreover, during evolution the population may still converge inadequately in certain regions, leaving sparse areas on a regular Pareto front (PF). To address this issue, a weight-vector adjustment strategy is devised to achieve a uniform distribution along the PF. Experimental results on the WFG and DTLZ benchmark suites, with the number of objectives ranging from 3 to 10, demonstrate that MRL-MOEA is competitive with other algorithms.
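The RL-based operator selection described above can be illustrated with a minimal tabular Q-learning sketch. The state indices, operator names, learning parameters, and reward signal below are illustrative assumptions, not MRL-MOEA's exact state model or operator set.

```python
import random

class OperatorSelector:
    """Sketch of an RL-based crossover-operator selector (tabular Q-learning).

    States would correspond to population distributions in the objective
    space; the reward would reflect improvement in convergence/diversity.
    Both are placeholders here, not the paper's exact design.
    """

    def __init__(self, n_states, operators, alpha=0.1, gamma=0.9, eps=0.1):
        # Q-table: one row per population state, one column per operator
        self.q = [[0.0] * len(operators) for _ in range(n_states)]
        self.operators = operators
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def select(self, state):
        # epsilon-greedy choice: mostly exploit the best-valued operator,
        # occasionally explore a random one
        if random.random() < self.eps:
            return random.randrange(len(self.operators))
        row = self.q[state]
        return max(range(len(row)), key=row.__getitem__)

    def update(self, state, action, reward, next_state):
        # standard Q-learning backup toward reward + discounted best future value
        best_next = max(self.q[next_state])
        self.q[state][action] += self.alpha * (
            reward + self.gamma * best_next - self.q[state][action]
        )

# Toy usage: repeatedly rewarding operator "DE" in state 0 makes it preferred there.
sel = OperatorSelector(n_states=3, operators=["SBX", "DE", "uniform"], eps=0.0)
for _ in range(100):
    sel.update(state=0, action=1, reward=1.0, next_state=1)
```

In a full MOEA loop, `select` would be called each generation to pick the crossover operator, and `update` would be called after offspring evaluation with a reward derived from the population's quality change.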