Multi-agent collaborative navigation is prevalent in modern transportation systems, including delivery logistics, warehouse automation, and personalised tourism, where multiple moving agents (e.g., robots and people) must meet at a common destination from different starting points. However, traditional methods struggle to optimise routes for multiple moving agents efficiently and accurately: they face challenges in dynamically adjusting the common destination in response to changing traffic conditions, encoding the real-world map, and training quickly. We therefore propose a generic Multi-Agent Collaborative Navigation System (MACNS) to address these challenges. First, we formalise the problem as a Markov Decision Process (MDP), which we further develop into a training environment where a Deep Reinforcement Learning (DRL) agent can learn patterns efficiently. Second, the proposed framework integrates a Graph Neural Network (GNN) into the policy network of Proximal Policy Optimisation (PPO) for the homogeneous decision-making of each individual agent, showing excellent generalisation, extensibility and convergence speed. Finally, we demonstrate how MACNS can be applied and implemented in a real-world use case: tour-group gathering and pick-up. Extensive simulations and real-world tests validate the effectiveness of the MACNS-based use case, showcasing its superiority over other state-of-the-art PPO-based methods.
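To make the core architectural idea concrete, the sketch below shows one plausible way to embed a GNN encoder into a shared PPO actor-critic policy, as the abstract describes. This is a minimal illustration under our own assumptions, not the authors' implementation: the module names, layer sizes, two-layer GCN encoder, and the use of PyTorch Geometric are all assumptions.

```python
# A minimal sketch (not the MACNS code) of a GNN-based PPO policy:
# a GCN encodes the road-network graph, and shared actor/critic heads
# produce a per-agent action distribution and a state value.
# All names, sizes, and the choice of PyTorch Geometric are assumptions.
import torch
import torch.nn as nn
from torch.distributions import Categorical
from torch_geometric.nn import GCNConv, global_mean_pool

class GNNPolicyNet(nn.Module):
    def __init__(self, node_feat_dim: int, hidden_dim: int, n_actions: int):
        super().__init__()
        # Two GCN layers encode the map graph (nodes = intersections,
        # edges = road segments) into node embeddings.
        self.conv1 = GCNConv(node_feat_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        # Homogeneous heads: every agent shares the same actor/critic,
        # mirroring the shared decision-making described in the abstract.
        self.actor = nn.Linear(hidden_dim, n_actions)   # action logits
        self.critic = nn.Linear(hidden_dim, 1)          # state value

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)  # one embedding per observed graph
        return Categorical(logits=self.actor(g)), self.critic(g).squeeze(-1)

# Example rollout step for one agent observing a 5-node map graph.
if __name__ == "__main__":
    x = torch.randn(5, 8)                                    # 5 nodes, 8 features each
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # directed road edges
    batch = torch.zeros(5, dtype=torch.long)                 # single graph in the batch
    net = GNNPolicyNet(node_feat_dim=8, hidden_dim=64, n_actions=4)
    dist, value = net(x, edge_index, batch)
    action = dist.sample()  # PPO stores action, log-prob, and value for its update
    print(action.item(), dist.log_prob(action).item(), value.item())
```

Because all agents run the same policy network, adding or removing agents does not change the parameter count, which is one way a design like this can support the extensibility the abstract claims.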