Files in this item

File: Tan_MingYangJeremy.pdf (762 kB)
Description: (no description provided)
Format: PDF (application/pdf)
Description

Title: Critical study of AIC model selection techniques
Author(s): Tan, Ming Yang Jeremy
Director of Research: Oono, Yoshitsugu
Doctoral Committee Chair(s): Dahmen, Karin A.
Doctoral Committee Member(s): Oono, Yoshitsugu; Thaler, Jonathan J.; Wiss, J.E.
Department / Program: Physics
Discipline: Physics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Akaike Information Criterion (AIC)
Cosmological Model Selection
Information Theory
Model Stability
Abstract: The Akaike Information Criterion (AIC) has been widely used as a statistical criterion to compare the appropriateness of different parametric models underlying a particular dataset. Under suitable conditions, the AIC is an unbiased estimator of the Kullback-Leibler divergence D(T||A) of a candidate model A with respect to the truth T, where T is defined as an underlying process consisting of a signal with stochastic noise; a particular empirical dataset is a single realization of this process. Thus, a model with a smaller AIC is ranked as a better model, since it has a smaller Kullback-Leibler discrepancy with T. However, an important question is whether the difference between the AIC values of two candidate models is statistically significant. This has been partially addressed in terms of a probability of models by using Akaike weights. It is also important to remember that the AIC itself is statistically estimated from the available data. We explored the impact of the possible errors in the estimated AIC in the context of comparing models of the late-time acceleration of the universe with SNIa data, using a parametric bootstrap technique to study the reliability of the estimated AIC. In the specific example that we study, we find that the estimator uncertainty in the AIC differences can be significant. Therefore, AIC-based studies should not only pay attention to the Akaike-weight-based criterion of reliability, but should also consider the impact of the estimator uncertainty of the AIC. We also examined the AIC model selection strategy and compared it to other model selection techniques. In our comparison, we showed that the AIC's 2k cost term does not properly account for model complexity, in contrast to other model selection techniques. Besides complexity, an alternative model selection criterion based on model stability was proposed and studied.
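As context for the abstract, the following is a minimal sketch of the kind of parametric-bootstrap check on an AIC difference described above, assuming AIC = -2 ln L_max + 2k; the function names, fitting helpers, and data generator are illustrative placeholders, not the dissertation's actual pipeline.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of a parametric bootstrap for the AIC difference between two
# candidate models. All names (nll_A, nll_B, gen_data, ...) are illustrative
# placeholders, not the dissertation's actual code or data pipeline.

def aic(min_neg_log_like, k):
    # AIC = -2 ln L_max + 2k, with k the number of fitted parameters.
    return 2.0 * min_neg_log_like + 2.0 * k

def fit(neg_log_like, data, k):
    # Fit by minimizing the negative log-likelihood neg_log_like(params, data).
    res = minimize(neg_log_like, x0=np.zeros(k), args=(data,))
    return res.x, res.fun

def bootstrap_delta_aic(nll_A, nll_B, k_A, k_B, gen_data, data,
                        n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)

    # Fit both models to the observed data and record the observed Delta AIC.
    theta_A, fA = fit(nll_A, data, k_A)
    _, fB = fit(nll_B, data, k_B)
    observed_delta = aic(fA, k_A) - aic(fB, k_B)

    # Draw synthetic datasets from a best-fit model (here model A, one possible
    # choice) and refit both models to each replicate.
    deltas = np.empty(n_boot)
    for i in range(n_boot):
        synthetic = gen_data(theta_A, rng)
        _, a = fit(nll_A, synthetic, k_A)
        _, b = fit(nll_B, synthetic, k_B)
        deltas[i] = aic(a, k_A) - aic(b, k_B)

    # The scatter of the bootstrap Delta AIC values estimates the estimator
    # uncertainty in the AIC difference discussed in the abstract.
    return observed_delta, deltas.std(ddof=1)
```

Under this reading, an observed Delta AIC that is small compared with the bootstrap scatter would not by itself be treated as decisive evidence for one model over the other.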
Issue Date: 2012-06-27
URI: http://hdl.handle.net/2142/31923
Rights Information: Copyright 2012 Ming Yang Jeremy Tan
Date Available in IDEALS: 2012-06-27; 2014-06-28
Date Deposited: 2012-05

