Test suites are undoubtedly crucial for fault localization techniques. By running a single test case from the test suite, a fault localization technique obtains the dynamic coverage information of the program statements as a vector, which can be analyzed to identify the correlations between statements and program failures. This vector is commonly referred to as a model-domain test sample. There are two types of model-domain test samples: passing and failing. However, the number of passing test samples often far exceeds the number of failing ones, which causes a class imbalance. Previous studies have shown that the imbalance of model-domain test samples can hamper the effectiveness of fault localization techniques, and it is difficult in practice to create additional failing test cases in the input domain. With the rapid development of deep learning, Generative Adversarial Networks (GANs) have achieved promising results in many fields, which brings a new perspective for augmenting model-domain failing test samples. We therefore propose a new method named MAG: Model-domain failing test Augmentation with Generative Adversarial Networks. MAG first constructs a Generative Adversarial Network and generates vectors, taking an abstract representation of execution information as input. It then utilizes influential global and local contexts to enhance the generated vectors and forms model-domain test samples with failing labels, thereby augmenting the set of failing samples. Unlike traditional methods that generate test cases directly in the input domain, MAG augments failing test samples in the model domain, which improves the effectiveness of existing fault localization techniques. Experimental results show that MAG significantly improves the effectiveness of 13 typical fault localization approaches and outperforms two representative data optimization techniques.
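To make the augmentation idea concrete, the sketch below shows a minimal GAN that learns from real failing coverage vectors and samples synthetic ones labeled as failing. This is not the MAG implementation: the vector length N_STMTS, latent size, network shapes, and training schedule are illustrative assumptions, and the context-based enhancement step described in the paper is omitted.

```python
# Minimal sketch (assumed details, not the authors' code) of GAN-based
# augmentation of model-domain failing test samples, using PyTorch.
import torch
import torch.nn as nn

N_STMTS = 128      # number of program statements (assumed)
LATENT_DIM = 32    # size of the generator's noise input (assumed)

class Generator(nn.Module):
    """Maps random noise to a synthetic statement-coverage vector in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_STMTS), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how likely a coverage vector is to be a real failing sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_STMTS, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def augment_failing_samples(real_failing, n_new, epochs=200):
    """Train a GAN on real failing coverage vectors, then sample n_new synthetic ones."""
    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce = nn.BCELoss()
    real_labels = torch.ones(real_failing.size(0), 1)
    fake_labels = torch.zeros(real_failing.size(0), 1)

    for _ in range(epochs):
        # Discriminator step: distinguish real failing vectors from generated ones.
        z = torch.randn(real_failing.size(0), LATENT_DIM)
        fake = gen(z).detach()
        loss_d = bce(disc(real_failing), real_labels) + bce(disc(fake), fake_labels)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: produce vectors the discriminator accepts as real.
        z = torch.randn(real_failing.size(0), LATENT_DIM)
        loss_g = bce(disc(gen(z)), real_labels)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    with torch.no_grad():
        synthetic = gen(torch.randn(n_new, LATENT_DIM))
    # Binarize to coverage (1 = statement executed) and attach the failing label.
    return (synthetic > 0.5).float(), torch.ones(n_new)

# Example: 10 real failing coverage vectors, augmented with 50 synthetic ones
# that can be appended to the spectrum before running a fault localization formula.
real = (torch.rand(10, N_STMTS) > 0.7).float()
new_vectors, new_labels = augment_failing_samples(real, n_new=50)
```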