This paper proposes a novel robust differential game scheme to solve the collision avoidance problem for networked multi-agent systems (MASs) subject to linear dynamics, external disturbances, and limited observation capabilities. Compared with existing differential game approaches that consider only obstacle-avoidance objectives, we explicitly incorporate a trajectory-optimization objective by penalizing deviations from reference trajectories, based on the artificial potential field (APF) concept. It is proved that the strategies of the agents, each defined by an individual optimization problem, converge to a local robust Nash equilibrium (R-NE) and, under a fixed strongly connected topology, further converge to the global R-NE. Additionally, to cope with the limited observation capabilities of MASs, local robust feedback control strategies are constructed based on the best approximate cost function and distributed robust Hamilton–Jacobi–Isaacs (DR-HJI) equations, which, unlike the traditional Riccati-equation formulation, do not require global information about the agents. The feedback gains of the control strategies are found via an ant colony optimization (ACO) algorithm with a non-dominated sorting structure and convergence guarantees. Finally, simulation results verify the efficacy and robustness of the proposed scheme: the agents reach their target positions collision-free with reduced arrival times, even in the presence of external disturbances.