An R-squared measure of goodness of fit for some common nonlinear regression models
Key takeaways
Bibliography: Colin Cameron, A., Windmeijer, F.A.G., 1997. An R-squared measure of goodness of fit for some common nonlinear regression models. Journal of Econometrics 77, 329–342. https://doi.org/10.1016/S0304-4076(96)01818-0
Authors:: A. Colin Cameron, Frank A.G. Windmeijer
Collections:: Methods
First-page: 1
For regression models other than the linear model, R-squared type goodness-of-fit summary statistics have been constructed for particular models using a variety of methods. We propose an R-squared measure of goodness of fit for the class of exponential family regression models, which includes logit, probit, Poisson, geometric, gamma and exponential. This R-squared is defined as the proportionate reduction in uncertainty, measured by Kullback-Leibler divergence, due to the inclusion of regressors. Under further conditions concerning the conditional mean function it can also be interpreted as the fraction of uncertainty explained by the fitted model.
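As an illustrative sketch of the definition (not code from the paper), the KL-based R-squared for a Poisson model is one minus the ratio of the Kullback-Leibler divergence of the fitted model to that of the intercept-only model. The data, names, and group-dummy design below are invented; group sample means are used as the fitted values because they are the Poisson ML fits for a saturated group-dummy design.

```python
import numpy as np

def poisson_kl(y, mu):
    """Poisson KL divergence (deviance): 2 * sum[y*ln(y/mu) - (y - mu)].
    The y*ln(y/mu) term is taken as 0 when y == 0."""
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    term = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / mu), 0.0)
    return 2.0 * np.sum(term - (y - mu))

def kl_r_squared(y, mu_hat):
    """R^2_KL = 1 - K(y, mu_hat) / K(y, ybar): the proportionate
    reduction in KL divergence due to the regressors."""
    y = np.asarray(y, float)
    baseline = np.full(len(y), y.mean())  # intercept-only fit
    return 1.0 - poisson_kl(y, mu_hat) / poisson_kl(y, baseline)

# Toy counts in two groups; each observation's fitted mean is its
# group's sample mean (the Poisson ML fit with a dummy per group).
y = np.array([1, 2, 3, 5, 6, 7])
groups = np.array([0, 0, 0, 1, 1, 1])
mu_hat = np.array([y[groups == g].mean() for g in groups])

print(round(kl_r_squared(y, mu_hat), 3))  # ≈ 0.82
```

Any other estimation method could supply `mu_hat` instead; the paper only restricts to ML for its formal results.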
Reading notes
Imported on 2024-05-06 13:34
⭐ Important
- & This R-squared is defined as the proportionate reduction in uncertainty, measured by Kullback-Leibler divergence, due to the inclusion of regressors (p. 1)
- & This measure can be applied to a range of commonly-used nonlinear regression models: the normal for continuous dependent variable y ∈ (-∞,∞); exponential, gamma and inverse-Gaussian for continuous y ∈ (0,∞); logit, probit and other Bernoulli regression models for discrete y = 0, 1; binomial (m trials) for discrete y = 0, 1,..., m; Poisson and geometric for discrete y = 0, 1, 2, ... (p. 2)
- & The R-squared we propose is the proportionate reduction in this potentially recoverable information achieved by the fitted regression model. This measure can be used for fitted means obtained by any estimation method. In the following proposition we restrict attention to ML estimation (which minimizes K(y, μ̂)). (p. 5)
- This is not a good property in my view. An R² should penalise additional variables being added to the model:
- & and is nondecreasing as regressors are added. (p. 14)
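The nondecreasing property can be checked numerically. Below is a hedged sketch with invented data: refining the grouping plays the role of adding a regressor under ML estimation, and the KL-based R² never falls when the partition is refined.

```python
import numpy as np

def poisson_kl(y, mu):
    """Poisson KL divergence (deviance): 2 * sum[y*ln(y/mu) - (y - mu)].
    The y*ln(y/mu) term is taken as 0 when y == 0."""
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    term = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / mu), 0.0)
    return 2.0 * np.sum(term - (y - mu))

def kl_r2_grouped(y, labels):
    """KL-based R^2 with Poisson ML fits for a saturated group design:
    each observation's fitted mean is its group's sample mean."""
    y, labels = np.asarray(y, float), np.asarray(labels)
    mu_hat = np.array([y[labels == g].mean() for g in labels])
    baseline = np.full(len(y), y.mean())
    return 1.0 - poisson_kl(y, mu_hat) / poisson_kl(y, baseline)

y = np.array([1, 2, 3, 5, 6, 7])
coarse = ["a", "a", "a", "b", "b", "b"]        # one regressor
fine = ["a1", "a1", "a2", "b1", "b1", "b2"]    # same regressor plus an extra split

r2_coarse = kl_r2_grouped(y, coarse)
r2_fine = kl_r2_grouped(y, fine)
assert r2_fine >= r2_coarse  # nondecreasing as regressors are added
print(round(r2_coarse, 3), round(r2_fine, 3))
```

This mirrors the behaviour of the ordinary least-squares R², which likewise never decreases when regressors are added; hence the common preference for adjusted or penalised variants.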