Maximum Likelihood is Better than Multiple Imputation: Part II
Key takeaways
Bibliography: Allison, P., 2015. Maximum Likelihood is Better than Multiple Imputation: Part II. Statistical Horizons. URL https://statisticalhorizons.com/ml-is-better-than-mi/ (accessed 5.15.23).
Authors:: Paul Allison
Collections:: To Read, Methods
First-page::
In my July 2012 post, I argued that maximum likelihood (ML) has several advantages over multiple imputation (MI) for handling missing data: ML is simpler to implement (if you have the right software). Unlike multiple imputation, ML has no potential incompatibility between an imputation model and an analysis model. ML produces a deterministic result rather than […]
content: "@allisonMaximumLikelihoodBetter2015" -file:@allisonMaximumLikelihoodBetter2015
Reading notes
With MI, on the other hand, the only way to get asymptotic efficiency is to do an infinite number of imputations, something that is clearly not possible. You can get pretty close to full efficiency for the parameter estimates with a relatively small number of imputations (say, 10), but efficient estimation of standard errors and confidence intervals typically requires a much larger number of imputations.
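The efficiency loss from a finite number of imputations can be quantified with Rubin's (1987) well-known approximation: with m imputations and a fraction of missing information γ, the relative efficiency of the point estimates is roughly (1 + γ/m)⁻¹. A minimal sketch (the function name and the γ = 0.3 scenario are illustrative choices, not from the post):

```python
def relative_efficiency(gamma: float, m: int) -> float:
    """Rubin's approximate relative efficiency of MI point estimates,
    (1 + gamma/m)^(-1), where gamma is the fraction of missing
    information and m is the number of imputations."""
    return 1.0 / (1.0 + gamma / m)

# With 30% missing information, even m = 10 recovers ~97% efficiency
# for the point estimates -- consistent with the note above that a
# small m is "pretty close" for parameter estimates, while standard
# errors and confidence intervals need a larger m to stabilize.
for m in (5, 10, 20, 100):
    print(f"m = {m:3d}: RE = {relative_efficiency(0.3, m):.4f}")
```

This illustrates why the point estimates converge quickly in m while full asymptotic efficiency is only reached as m → ∞.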
The bottom line is that ML seems like the better way to go for handling missing data in both large and small samples. But there's still a big niche for MI. ML requires a parametric model that can be estimated by maximizing the likelihood, and to do that you usually need specialized software. Most structural equation modeling packages can do FIML for linear models, but not for non-linear models. As far as I know, Mplus is the only commercial package that can do FIML for logistic, Poisson, and Cox regression.