Gas Distribution Mapping (GDM) is a valuable tool for monitoring the distribution of gases in a wide range of applications, including environmental monitoring, emergency response, and industrial safety. While GDM is actively researched in the context of gas-sensitive mobile robots (Mobile Robot Olfaction), there is potential for broader applications using sensor networks. This study aims to address the lack of deep learning approaches in GDM and explore their potential for improved mapping of gas distributions. In this paper, we introduce Gas Distribution Decoder (GDD), a learning-based GDM method. GDD is a deep neural network for spatial interpolation between sparsely distributed sensor measurements, trained on an extensive data set of realistically shaped synthetic gas plumes derived from actual airflow measurements. As access to ground truth representations of gas distributions remains a challenge in GDM research, we make our data sets, along with our models, publicly available. We test and compare GDD with state-of-the-art models on synthetic and real-world data. Our findings show that GDD significantly outperforms existing models, achieving a 35% improvement in accuracy on synthetic data, measured by the Root Mean Squared Error (RMSE) over the entire distribution map. Notably, GDD reconstructs the edges and characteristic shapes of gas plumes more faithfully than traditional models. These capabilities open new possibilities for more accurate and efficient environmental monitoring, and we hope to inspire other researchers to explore learning-based GDM.
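As a point of reference for the evaluation metric, the sketch below illustrates how the RMSE over an entire distribution map could be computed from a predicted and a ground-truth concentration grid, with sparse sensor readings encoded on the same grid. The grid size, masking scheme, and baseline predictor are illustrative assumptions for demonstration, not the actual data format or model used by GDD.

```python
import numpy as np

# Illustrative sketch only: shapes, the sensor-masking scheme, and the
# constant-fill baseline are assumptions, not the GDD pipeline itself.

def rmse_map(pred: np.ndarray, truth: np.ndarray) -> float:
    """Root Mean Squared Error over the entire gas distribution map."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Hypothetical example: a 30x30 concentration grid with a handful of
# sparsely placed in-situ sensor readings serving as the network input.
rng = np.random.default_rng(0)
truth = rng.random((30, 30))            # stand-in for a ground-truth plume
sensor_mask = np.zeros((30, 30), bool)  # True where a sensor is located
sensor_mask[rng.integers(0, 30, 10), rng.integers(0, 30, 10)] = True
sparse_input = np.where(sensor_mask, truth, 0.0)  # unmeasured cells unknown

# A model such as GDD would map `sparse_input` to a dense prediction;
# here a naive constant-fill baseline stands in purely for illustration.
baseline_pred = np.full((30, 30), sparse_input[sensor_mask].mean())
print(f"Baseline RMSE over the map: {rmse_map(baseline_pred, truth):.3f}")
```

Evaluating the error over every grid cell, rather than only at sensor locations, is what makes the metric sensitive to how well plume edges and shapes are reconstructed.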