Abstract
The paper discusses four paradigm shifts in statistics since the 1920s with a view to comparing their similarities and differences, and evaluating their effectiveness in giving rise to ‘learning from data' about phenomena of interest. The first is Fisher's 1922 recasting of Karl Pearson's descriptive statistics into a model‐based statistical induction that dominates current statistics (frequentist and Bayesian). A crucial departure was Fisher's replacing the curve‐fitting perspective guided by goodness‐of‐fit measures with a model‐based perspective guided by statistical adequacy: the validity of the probabilistic assumptions comprising the prespecified statistical model. Statistical adequacy is pivotal in securing trustworthy evidence since it underwrites the reliability of inference. The second is the nonparametric turn in the 1970s, which aimed to broaden the model‐based approach by replacing its distribution assumption with weaker mathematical conditions relating to the unknown density function underlying the data. The third and fourth grew out of a two‐pronged development initiated in Artificial Intelligence (AI) in the 1990s that gave rise to Data Science (DS) and Graphical Causal (GC) modeling. The primary objective of the paper is to compare and evaluate the competing approaches against a refined/enhanced version of Fisher's model‐based approach in terms of their effectiveness in giving rise to genuine “learning from data”; excellent goodness‐of‐fit/prediction is neither necessary nor sufficient for statistical adequacy, or so it is argued.
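To make the abstract's closing claim concrete, a minimal sketch (not from the paper, and assuming numpy and statsmodels are available) of how excellent goodness‐of‐fit can coexist with statistical inadequacy: a linear trend regression fitted to data with strongly autocorrelated errors yields a near‐perfect R², yet standard misspecification diagnostics on the residuals flag a violation of the model's probabilistic assumptions.

```python
# Sketch: high goodness-of-fit does not imply statistical adequacy.
# The errors follow an AR(1) process, violating the independence assumption
# of the simple Normal linear regression model that is fitted below.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson, jarque_bera

rng = np.random.default_rng(0)
n = 200
t = np.arange(n, dtype=float)

# AR(1) errors with strong temporal dependence.
u = np.zeros(n)
for i in range(1, n):
    u[i] = 0.9 * u[i - 1] + rng.normal(scale=1.0)

y = 1.0 + 0.5 * t + u          # the trend dominates, so the fit looks excellent
X = sm.add_constant(t)
fit = sm.OLS(y, X).fit()

print(f"R-squared: {fit.rsquared:.3f}")                  # typically > 0.99
print(f"Durbin-Watson: {durbin_watson(fit.resid):.2f}")  # far below 2: residual dependence
jb_stat, jb_pvalue, _, _ = jarque_bera(fit.resid)        # Normality check on residuals
print(f"Jarque-Bera p-value: {jb_pvalue:.3f}")
# Despite the near-perfect fit, the dependence flagged by the diagnostics means
# the estimated model is statistically misspecified, undermining its inferences.
```

The sketch only illustrates the general point; the paper's own notion of statistical adequacy involves validating the full set of probabilistic assumptions of the prespecified statistical model, not just the two diagnostics shown here.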