Abstract

Despite recent breakthroughs in the domain of implicit generative models, evaluating these models remains challenging. No single metric captures overall performance, and the various existing metrics each offer only partial information. This issue is further compounded for unintuitive data types such as time series, where manual inspection is infeasible. This deficiency hinders the confident application of modern implicit generative models to time series data. To alleviate this problem, we propose two new metrics, the InceptionTime Score (ITS) and the Fréchet InceptionTime Distance (FITD), to assess the quality of class-conditional generative models on time series data. We conduct extensive experiments on 80 different datasets to study the discriminative capabilities of the proposed metrics alongside two existing evaluation metrics: Train on Synthetic Test on Real (TSTR) and Train on Real Test on Synthetic (TRTS). Our evaluations reveal that the proposed metrics, i.e., ITS and FITD in combination with TSTR, can accurately assess class-conditional generative model performance and detect common issues in implicit generative models. Our findings suggest that the proposed evaluation framework can be a valuable tool for confidently applying modern implicit generative models in time series analysis.
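
The metric names suggest direct analogues of the Inception Score (IS) and the Fréchet Inception Distance (FID) in which the Inception image classifier is replaced by an InceptionTime network for time series. The following is a minimal sketch of those two underlying computations on precomputed network outputs, assuming NumPy/SciPy and the illustrative function names and array shapes given below; it is not the authors' reference implementation.

import numpy as np
from scipy.linalg import sqrtm

def inception_time_score(probs, eps=1e-12):
    # probs: (n_samples, n_classes) class probabilities p(y|x) produced by the
    # pretrained classifier on generated samples (assumed input format).
    p_y = probs.mean(axis=0, keepdims=True)  # marginal label distribution p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))  # exp of the mean KL(p(y|x) || p(y))

def frechet_inception_time_distance(real_feats, gen_feats):
    # real_feats, gen_feats: (n_samples, n_features) embeddings of real and
    # generated series, e.g. penultimate-layer activations of a pretrained
    # InceptionTime network (assumed input format).
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_g = np.cov(gen_feats, rowvar=False)
    # Frechet distance between the two fitted Gaussians:
    # ||mu_r - mu_g||^2 + Tr(sigma_r + sigma_g - 2 (sigma_r sigma_g)^(1/2))
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop negligible imaginary parts from sqrtm
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

Under this analogy, a higher ITS and a lower FITD correspond to better generated samples; both quantities depend on the choice of feature network, which is one reason the paper studies them alongside TSTR and TRTS.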
