Abstract

In the last few years, numerous works on Generative Adversarial Networks (GANs) have been proposed. At the same time, their evaluation has received little attention, and the few proposed evaluation methods have not yet been studied rigorously. By way of example, the metrics used to evaluate GANs have been developed and tested almost exclusively on image data, while models operating on time series data have not been studied at all. In fact, it is still unclear what the advantages and disadvantages of each approach are and how the approaches differ in performance, e.g., which metric can reliably detect common GAN failure modes such as mode collapse or mode dropping. Different tests have been introduced by [6] to evaluate GAN metrics for images. Inspired by this work, we extensively study the numerous evaluation metrics proposed in the literature for time series data and compare them to each other in a structured way. To the best of our knowledge, this is the first work that studies the existing evaluation metrics of GANs for time series data and assesses their performance against different evaluation criteria. Moreover, we introduce MiVo, a new evaluation metric that computes the similarity between a set of real and a set of generated time series by matching each real time series with a synthetic one and each synthetic time series with a real one. We show that this bidirectional check makes it easy to detect training problems such as those mentioned above. At the same time, the method is computationally much more efficient, as it does not involve any machine learning model and hence requires no training.
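To illustrate the idea of a bidirectional check between real and generated time series, the sketch below matches each series in one set to its nearest neighbour in the other. Note that the distance function (Euclidean), the aggregation (mean of minima), and the function name are assumptions chosen for illustration, not the actual definition of MiVo:

```python
import numpy as np

def bidirectional_similarity(real, synth):
    """Illustrative bidirectional nearest-neighbour check.

    real, synth: arrays of shape (n_real, T) and (n_synth, T)
    holding time series of equal length T.
    """
    # Pairwise Euclidean distances between every real/synthetic pair.
    dists = np.linalg.norm(real[:, None, :] - synth[None, :, :], axis=-1)
    # For each real series, the distance to its closest synthetic one.
    # Mode dropping inflates this term: a dropped mode leaves some real
    # series far from every synthetic one.
    real_to_synth = dists.min(axis=1).mean()
    # For each synthetic series, the distance to its closest real one.
    # Unrealistic samples inflate this term, since they lie far from
    # every real series.
    synth_to_real = dists.min(axis=0).mean()
    return real_to_synth, synth_to_real
```

A collapsed generator that reproduces only one mode keeps the synthetic-to-real term small while the real-to-synthetic term grows, which is why checking both directions matters. No model has to be trained to compute either term.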
