Abstract

In recent years, generative artificial intelligence has been used to create data in support of scientific analysis. For example, Generative Adversarial Networks (GANs) have been trained on Monte Carlo simulated input and then used to generate data for the same problem. The advantage is that a GAN creates data in a significantly reduced computing time: N training events can yield NG generated events, with a gain factor G greater than one. This appears to violate the principle that one cannot get information for free. GANs are not the only way to amplify data, so this process is referred to here as data amplification and is studied using information-theoretic concepts. It is shown that a gain greater than one is possible whilst keeping the information content of the data unchanged. This leads to a mathematical bound, 2 log(Generated Events) ≥ 3 log(Training Events), which depends only on the numbers of generated and training events. This study determined the conditions on both the underlying and reconstructed probability distributions that ensure the bound holds. In particular, the resolution of variables in the amplified data is not improved by the process, but the increased sample size can still improve statistical significance. The bound was confirmed using computer simulation and by analysis of GAN-generated data from the literature.
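The bound can be illustrated with a simple toy model. The Python sketch below is an illustrative stand-in, not the paper's method: it replaces the GAN with a parametric bootstrap on a one-dimensional Gaussian and checks that, once 2 log(N_G) ≥ 3 log(N) holds, the statistical noise added by the finite generated sample drops below the irreducible training uncertainty. The function names, the Gaussian toy distribution, and the chosen sample sizes are all assumptions made for illustration.

```python
# Minimal numerical sketch of the bound 2 log(N_G) >= 3 log(N), i.e.
# N_G >= N**1.5. NOT the paper's construction: a parametric bootstrap
# (fit a Gaussian to N training events, then sample N_G events from the
# fit) stands in for the GAN. The mean of the generated sample carries
# two variance terms, ~1/N from training and ~1/N_G from generation;
# once N_G >= N**1.5 the second term is negligible, so the information
# content of the amplified sample matches that of the training sample.
import numpy as np

rng = np.random.default_rng(0)

def amplified_mean_error(n_train, n_gen, n_trials=1000):
    """Spread of the sample mean of n_gen 'amplified' events drawn from
    a Gaussian fitted to n_train training events (truth is N(0, 1))."""
    means = np.empty(n_trials)
    for i in range(n_trials):
        train = rng.normal(0.0, 1.0, n_train)
        mu_hat, sigma_hat = train.mean(), train.std(ddof=1)
        means[i] = rng.normal(mu_hat, sigma_hat, n_gen).mean()
    return means.std()

N = 1000                                  # training events
for n_gen in (N, int(N**1.5), 50 * N):    # gain G = 1, sqrt(N), 50
    err = amplified_mean_error(N, n_gen)
    print(f"N_G = {n_gen:>6}: mean error {err:.4f} "
          f"(training-only floor {1 / np.sqrt(N):.4f})")
```

Run as-is, the error on the amplified sample's mean falls towards the 1/√N training floor as N_G grows, and pushing N_G beyond N^{3/2} brings essentially no further change, consistent with the abstract's statement that resolution is not improved while the larger sample can still sharpen statistical tests.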