Three schools of statistical methodology - 'frequentist', 'Bayesian', and the lesser-known 'likelihoodist' - contribute to statistical inference. The frequentist school is dominant, with the Bayesian school close behind, while the 'likelihoodist' school, often referred to as 'Fisherian', operates somewhat under the radar.
Ronald A. Fisher, a central figure in modern statistics, was vehemently opposed to Bayesian principles. Notably, although never a professor of statistics, Fisher significantly influenced the field from the Department of Eugenics at University College London.
Within the prevailing paradigm of statistical data analysis, probability-based inference takes precedence, even within the Bayesian school. Fisher's seminal work established the modern concept of likelihood, leading to the widely used method of Maximum Likelihood Estimation (MLE). Neyman and Pearson further enriched this framework with the Likelihood Ratio Test (LRT), which their fundamental lemma shows to be the most powerful test of simple hypotheses and, under certain conditions, the uniformly most powerful test.
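To make the idea concrete, the following is a minimal Python sketch of MLE, assuming SciPy is available; the normal model, the simulated data, and all numerical values are illustrative assumptions, not material from this abstract.

```python
import numpy as np
from scipy import optimize, stats

# Simulated data: the true mean, standard deviation, and sample size
# are arbitrary choices for illustration.
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=100)

def neg_log_likelihood(params):
    """Negative log-likelihood of a normal model N(mu, sigma^2)."""
    mu, log_sigma = params          # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    return -np.sum(stats.norm.logpdf(x, loc=mu, scale=sigma))

# MLE: the parameter values that maximize the likelihood of the observed data
fit = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"MLE: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```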
The basic elements of statistical testing - Type I and Type II errors, alpha and beta values, and null and alternative hypotheses - culminate in the widely used Null Hypothesis Significance Testing (NHST) scheme. However, this conventional probability-based scheme has been likened to a "two-headed monster," because it fuses Fisher's significance testing with the Neyman-Pearson decision framework, a tension reflected in the Fisher vs. Neyman-Pearson argument over the basis of the test.
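A small simulation makes the role of alpha tangible. The sketch below is a hypothetical illustration, again assuming SciPy; the sample size and number of replications are arbitrary. It checks that a t-test at alpha = 0.05 rejects a true null hypothesis about 5% of the time, i.e., at the nominal Type I error rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sims, n = 0.05, 10_000, 30
rejections = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)  # H0: mean = 0 is true
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    rejections += p_value < alpha                    # count Type I errors
print(f"Empirical Type I error rate: {rejections / n_sims:.3f}")  # about 0.05
```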
While the majority of statisticians favor probability-based inference, there are scenarios where calculating probabilities is difficult. In such cases, likelihood-based inference shines, especially in complex statistical models such as nonlinear regression or mixed-effects models. Fisher's legacy extends beyond likelihood-based methods to foundational contributions to least squares, t-tests, ANOVA, and linear regression, while post-Fisher MLE underpins logistic regression, survival analysis, and mixed-effects models. The ongoing discourse surrounding these methodologies reflects the evolution of statistical thinking and the nuanced choices statisticians face in data analysis.
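As a hedged sketch of one such post-Fisher application, the following fits a logistic regression by maximum likelihood using statsmodels; the dose-response data, coefficients, and sample size are invented for illustration and do not come from this abstract.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical dose-response data for a binary outcome
rng = np.random.default_rng(2)
dose = rng.uniform(0.0, 10.0, size=200)
p = 1.0 / (1.0 + np.exp(-(-3.0 + 0.6 * dose)))  # assumed true logistic curve
y = rng.binomial(1, p)

X = sm.add_constant(dose)             # design matrix: intercept + dose
fit = sm.Logit(y, X).fit(disp=False)  # coefficients estimated by MLE
print(fit.params)                     # estimated intercept and slope
print(fit.llf)                        # maximized log-likelihood
```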
The author will present several examples of the application of likelihood-based inference in the field of clinical pharmacology, such as revisiting the cut-off value of 3.84 (the 95th percentile of the chi-squared distribution with 1 degree of freedom) used to select the better of two nested models.
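The cut-off itself is easy to reproduce. The sketch below, assuming SciPy, computes the 95th percentile of the chi-squared distribution with 1 degree of freedom and applies it in a likelihood ratio comparison of two nested models differing by one parameter; the log-likelihood values are hypothetical placeholders.

```python
from scipy import stats

# The conventional cut-off: twice the difference in maximized
# log-likelihoods between two nested models (one extra parameter)
# is compared with chi-squared(1) at the 95th percentile.
cutoff = stats.chi2.ppf(0.95, df=1)
print(f"Cut-off: {cutoff:.2f}")  # 3.84

# Hypothetical maximized log-likelihoods of two nested models
ll_reduced, ll_full = -250.0, -247.0        # assumed values for illustration
lrt_statistic = 2 * (ll_full - ll_reduced)  # 6.0
p_value = stats.chi2.sf(lrt_statistic, df=1)
print(f"LRT = {lrt_statistic:.1f}, p = {p_value:.4f}")
# LRT exceeds 3.84, so at alpha = 0.05 the larger model would be preferred
```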
[References]
1. Fisher RA. Statistical Methods and Scientific Inference. 3e. Hafner Press. 1973.
2. Lehmann EL. Fisher, Neyman, and the Creation of Classical Statistics. Springer. 2011.
3. Edwards AWF. Likelihood. Cambridge University Press. 1972.
4. Royall R. Statistical Evidence. Chapman & Hall. 1997.
5. Pawitan Y. In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford University Press. 2001.
6. Rohde CA. Introductory Statistical Inference with the Likelihood Function. Springer. 2014.
7. Held L, Sabanés Bové D. Likelihood and Bayesian Inference. Springer. 2020.
8. Bland M. An Introduction to Medical Statistics. 4e. Oxford University Press. 2015.
9. Cox DR, Donnelly CA. Principles of Applied Statistics. Cambridge University Press. 2011.
10. Jeffreys H. Theory of Probability. 3e. Oxford University Press. 1961.
11. Held L, Ott M. How the maximal evidence of p-values against point null hypotheses depends on sample size. Am. Stat. 2016;70:335-41.