A Global Audio Telepresence (GOAT) system requires a microphone array to capture spatial audio signals at the far end and a loudspeaker array to reconstruct the sound field at the near end, seamlessly immersing near-end users in the remote audio scene with full ambience. In this paper, we propose a learning-based GOAT system (L-GOAT) built on the model-matching principle, in which a deep neural network (DNN) serves as a set of nonlinear filters for the GOAT system. Training minimizes the matching error between the signals reproduced by the DNN and the desired signals obtained by filtering with the far-end acoustic transfer functions (ATFs). Extensive simulations were carried out for multi-source scenarios in two rooms with different reverberation times. The L-GOAT system was implemented with a five-microphone linear array in the far-end room and a six-loudspeaker array in the near-end room. Objective evaluation metrics, including the Perceptual Evaluation of Speech Quality (PESQ), the Short-Time Objective Intelligibility (STOI), and the matching error, were used to validate the efficacy of the GOAT systems. The proposed learning-based approach demonstrated superior performance over a conventional digital signal processing (DSP)-based method.
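To make the model-matching objective concrete, the following is a minimal NumPy sketch of the matching-error loss described above. All dimensions, the linear placeholder standing in for the DNN, and the random stand-in transfer functions are illustrative assumptions, not the paper's actual setup; a real system would use measured ATFs, room transfer functions, and a trained network.

```python
import numpy as np

# Hypothetical dimensions: 5 far-end mics, 6 near-end loudspeakers,
# C control points where the reproduced field is evaluated.
rng = np.random.default_rng(0)
T, M, L, C = 256, 5, 6, 8

mic_signals = rng.standard_normal((T, M))   # far-end microphone captures
atf = rng.standard_normal((C, M)) * 0.1     # stand-in far-end ATFs (frequency-flat gains)
desired = mic_signals @ atf.T               # desired field at the control points

# A DNN would map mic_signals -> loudspeaker driving signals;
# a placeholder linear map stands in for the network output here.
W_dnn = rng.standard_normal((L, M)) * 0.1
driving = mic_signals @ W_dnn.T             # (T, L) loudspeaker driving signals

# Near-end room transfer functions from loudspeakers to control points.
rtf = rng.standard_normal((C, L)) * 0.1
reproduced = driving @ rtf.T                # reproduced field at the control points

# Model-matching loss that network training would minimize.
matching_error = np.mean((reproduced - desired) ** 2)
print(matching_error)
```

In a training loop, `W_dnn` would be replaced by the DNN's forward pass and `matching_error` back-propagated to update its weights.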