Multi-agent reinforcement learning requires numerous interactions with the environment and with other agents to learn an optimal policy. The teacher–student framework is one paradigm that can improve learning performance by allowing agents to seek advice from one another. However, recent studies reveal a limitation in how knowledge is shared: advising is strictly peer-to-peer, so a student receives advice from only one teacher at a time. Some methods allow a student to accept multiple pieces of advice, but they typically rely on pre-trained teachers or teachers with fixed policies, making them unsuitable for settings in which agents learn and advise one another simultaneously. Simultaneous learning with multiple sources of advice therefore remains largely unexplored. Furthermore, most prior work shares knowledge in the form of raw experience samples, a practice vulnerable to security breaches in which attackers could infer details about the environment. To address these challenges, we propose a federated advisory framework that combines a federated learning structure with deep reinforcement learning to aggregate advice from multiple sources, ensuring that the shared advice is never sample-based. Experimental comparisons with leading advisory learning techniques confirm that our approach significantly improves learning performance.
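
The abstract does not specify the aggregation rule, so the following is only a minimal sketch of what non-sample-based advice aggregation could look like, assuming advisors share the parameters of their local Q-networks and a coordinator performs a FedAvg-style weighted average. All identifiers here (`AdvisorUpdate`, `aggregate_advice`, the step-count weighting) are hypothetical illustrations, not the paper's actual method or API.

```python
# Sketch: FedAvg-style aggregation of advisor parameters instead of
# raw experience samples. Hypothetical names and weighting scheme.
import numpy as np
from dataclasses import dataclass

@dataclass
class AdvisorUpdate:
    params: dict    # layer name -> weight array from an advisor's Q-network
    num_steps: int  # local training steps, used here as the aggregation weight

def aggregate_advice(updates):
    """Return the step-weighted average of advisor parameters."""
    total = sum(u.num_steps for u in updates)
    return {
        key: sum(u.params[key] * (u.num_steps / total) for u in updates)
        for key in updates[0].params
    }

# Usage: three advisors with toy single-layer "networks".
advisors = [
    AdvisorUpdate({"w": np.array([0.2, 0.4])}, num_steps=100),
    AdvisorUpdate({"w": np.array([0.6, 0.0])}, num_steps=300),
    AdvisorUpdate({"w": np.array([0.1, 0.5])}, num_steps=100),
]
global_advice = aggregate_advice(advisors)
print(global_advice["w"])  # aggregated weights a student could adopt as advice
```

Because only model parameters cross agent boundaries, no environment transitions are ever exposed, which is the privacy property the abstract claims; whether the actual framework weights advisors by training steps, performance, or another criterion is not stated here.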