Multi-objective reinforcement learning (MORL) addresses the challenge of optimizing policies in environments with multiple conflicting objectives. Traditional approaches often rely on scalar utility functions with predefined preference weights, which limits their adaptability and sample efficiency. To overcome this, we propose the Dynamic Preference Inference Network (DPIN), a novel method designed to improve sample efficiency by dynamically estimating the agent's decision preferences over trajectories. DPIN leverages a neural network to predict the most favorable preference distribution for each trajectory, enabling more effective policy updates and improving overall performance in complex MORL tasks. Extensive experiments in various benchmark environments demonstrate that DPIN significantly outperforms existing state-of-the-art methods, achieving higher scalarized returns and hypervolume. Our findings highlight DPIN's ability to adapt to varying preferences, reduce sample complexity, and provide robust solutions in multi-objective settings.
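To make the core idea concrete, the following is a minimal sketch of a DPIN-style component, assuming the abstract's description: a small network maps a trajectory summary to a softmax distribution over objectives, and that inferred preference is used to linearly scalarize the trajectory's vector-valued return before a policy update. The class and function names, the trajectory featurization, and the linear scalarization are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a preference inference network in the spirit of DPIN.
# Names and the trajectory featurization are assumptions for illustration only.
import torch
import torch.nn as nn


class PreferenceInferenceNet(nn.Module):
    """Maps a trajectory summary vector to a distribution over objectives."""

    def __init__(self, feature_dim: int, num_objectives: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_objectives),
        )

    def forward(self, traj_features: torch.Tensor) -> torch.Tensor:
        # Softmax makes the predicted preference weights a valid distribution.
        return torch.softmax(self.net(traj_features), dim=-1)


def scalarize_returns(vector_returns: torch.Tensor,
                      preferences: torch.Tensor) -> torch.Tensor:
    """Linear scalarization: weight each objective's return by the inferred
    preference and sum over objectives, (batch, K) -> (batch,)."""
    return (vector_returns * preferences).sum(dim=-1)


if __name__ == "__main__":
    batch, feature_dim, num_objectives = 8, 32, 3
    dpin = PreferenceInferenceNet(feature_dim, num_objectives)

    # Placeholder trajectory features and per-objective returns; in a real
    # MORL setup these would come from rollouts of the agent's policy.
    traj_features = torch.randn(batch, feature_dim)
    vector_returns = torch.randn(batch, num_objectives)

    prefs = dpin(traj_features)                       # (8, 3) preference weights
    scalar_returns = scalarize_returns(vector_returns, prefs)  # (8,)
    print(prefs.shape, scalar_returns.shape)
```

In this sketch the scalarized return could feed a standard policy-gradient or actor-critic update; how DPIN trains the preference network and integrates it into the policy update is detailed in the full paper.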