Abstract

This chapter is an invitation to the central themes of the book: confidence, likelihood, probability and confidence distributions. We sketch the historical backgrounds and trace various sources of influence leading to the present and somewhat bewildering state of ‘modern statistics’, which, perhaps to the confusion of many researchers working in the applied sciences, is still filled with controversies and partly conflicting paradigms regarding even basic concepts.

Introduction

The aim of this book is to prepare for a synthesis of the two main traditions of statistical inference: those of the Bayesians and of the frequentists. Sir Ronald Aylmer Fisher worked out the theory of frequentist statistical inference from around 1920. From 1930 onward he developed his fiducial argument, which was intended to yield Bayesian-type results without the often ill-founded prior distributions needed in Bayesian analyses. Unfortunately, Fisher went wrong on the fiducial argument. We think, nevertheless, that it is a key to obtaining a synthesis of the two partly competing statistical traditions.

Confidence, likelihood and probability are words used to characterise uncertainty in most everyday talk, and also in more formal contexts. The Intergovernmental Panel on Climate Change (IPCC), for example, concluded in 2007, “Most of the observed increase in global average temperature since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations” (Summary for Policymakers, IPCC, 2007). They codify ‘very likely’ as having probability between 0.90 and 0.95, according to expert judgment. In its 2013 report the IPCC is firmer and more precise in its conclusion. The Summary for Policymakers states, “It is extremely likely that more than half of the observed increase in global surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together” (IPCC, 2013, p. 17). By ‘extremely likely’ they mean more than 95% certainty. We would have used ‘confidence’ rather than ‘likelihood’ to quantify degree of belief based on available data; we will use the term ‘likelihood’ in the technical sense usual in statistics.

Confidence, likelihood and probability are pivotal words in the science of statistics. Mathematical probability models are used to build likelihood functions that lead to confidence intervals. Why do we need three words, and actually additional words such as credibility and propensity, to measure uncertainty and the frequency of chance events?
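To fix ideas, here is a minimal worked example of that last point, assuming the standard textbook setting of n observations x_1, …, x_n from a normal distribution with mean μ and known standard deviation σ; the model and notation are our illustration, not taken from the chapter itself. The probability model yields a likelihood, the likelihood is maximised at the sample mean, and inverting the familiar pivot gives both a 95% confidence interval and a full confidence distribution for μ:

\[
L(\mu) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}} \exp\Bigl\{-\frac{(x_i - \mu)^2}{2\sigma^2}\Bigr\},
\qquad \hat{\mu} = \bar{x},
\]
\[
\bar{x} \pm 1.96\,\frac{\sigma}{\sqrt{n}} \quad \text{(a 95\% confidence interval)},
\qquad
C(\mu) = \Phi\!\Bigl(\frac{\sqrt{n}\,(\mu - \bar{x})}{\sigma}\Bigr) \quad \text{(a confidence distribution)}.
\]

Evaluating C at the two interval endpoints gives Φ(−1.96) = 0.025 and Φ(1.96) = 0.975, so the interval is exactly the set of μ values carrying between 2.5% and 97.5% confidence; this is the sense in which probability models, likelihood functions and confidence fit together.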
