Abstract

We present V2V, a novel deep learning framework, as a general-purpose solution to the variable-to-variable (V2V) selection and translation problem for multivariate time-varying data (MTVD) analysis and visualization. V2V leverages a representation learning algorithm to identify transferable variables and utilizes Kullback-Leibler divergence to determine the source and target variables. It then uses a generative adversarial network (GAN) to learn the mapping from the source variable to the target variable via the adversarial, volumetric, and feature losses. V2V takes pairs of time steps of the source and target variables as input for training. Once trained, it can infer unseen time steps of the target variable given the corresponding time steps of the source variable. Several multivariate time-varying data sets with different characteristics are used to demonstrate the effectiveness of V2V, both quantitatively and qualitatively. We compare V2V against histogram matching and two other deep learning solutions (Pix2Pix and CycleGAN).
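To make the variable-selection step more concrete, the sketch below illustrates one way Kullback-Leibler divergence could be used to rank candidate source-target variable pairs from their value histograms. This is an illustrative sketch only, not the paper's implementation: the abstract states that KL divergence determines the source and target variables, but the function names (`kl_divergence`, `rank_variable_pairs`), the bin count, the divergence direction, and the use of per-variable value histograms are assumptions made for this example.

```
# Illustrative sketch: ranking candidate (source, target) variable pairs by the
# KL divergence between their value histograms. Binning, direction, and naming
# are assumptions; the paper's actual selection procedure may differ.
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q) for two histograms, normalized to probability distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def rank_variable_pairs(volumes: dict, bins: int = 64):
    """Rank (source, target) pairs by histogram KL divergence.

    `volumes` maps a variable name to one representative time step (a 3D array).
    A smaller divergence suggests the source distribution is closer to the
    target, which is one plausible criterion for picking a transferable pair.
    """
    names = list(volumes)
    hists = {}
    for name in names:
        values = volumes[name].ravel()
        hists[name], _ = np.histogram(values, bins=bins,
                                      range=(values.min(), values.max()))
    pairs = []
    for src in names:
        for tgt in names:
            if src != tgt:
                pairs.append((kl_divergence(hists[src], hists[tgt]), src, tgt))
    return sorted(pairs)  # smallest divergence first

if __name__ == "__main__":
    # Synthetic stand-ins for MTVD variables (hypothetical names and shapes).
    rng = np.random.default_rng(0)
    demo = {
        "temperature": rng.normal(0.5, 0.10, size=(32, 32, 32)),
        "pressure":    rng.normal(0.5, 0.12, size=(32, 32, 32)),
        "vorticity":   rng.exponential(0.3, size=(32, 32, 32)),
    }
    for div, src, tgt in rank_variable_pairs(demo)[:3]:
        print(f"{src} -> {tgt}: KL = {div:.4f}")
```

Running the example prints the three lowest-divergence pairs among the synthetic variables; in the framework described by the abstract, the chosen pair would then be fed to the GAN-based translation stage.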
