Investigating the instructional sensitivity of distractors in items exhibiting DIF
Evans, John Andrew
Permalink
https://hdl.handle.net/2142/21011
Description
Title
Investigating the instructional sensitivity of distractors in items exhibiting DIF
Author(s)
Evans, John Andrew
Issue Date
1996
Doctoral Committee Chair(s)
Ackerman, Terry A.
Department of Study
Education
Discipline
Education
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Education, Tests and Measurements
Statistics
Psychology, Psychometrics
Language
eng
Abstract
When examinees from identifiable subgroups have comparable ability but different probabilities of correctly responding to an item, the item is said to exhibit differential item functioning (DIF). If the amount of DIF is practically significant, and attributable to features that are irrelevant to the test construct, the item may bias the ability estimates of some group of examinees.
The presence of DIF is a necessary but not a sufficient condition for item bias. Simply concluding that an item is biased (i.e., observing a larger-than-expected proportion of examinees from a selected group responding incorrectly to the item) fails to identify which component(s) of the item are irrelevant to the construct being examined.
Recognizing the need to identify plausible item features that substantively explain DIF, this analysis coupled established, reliable, dimensionally sensitive IRT procedures with log-linear modeling to provide additional practical interpretation of the dimensionality of an item and its distractors.
The purpose of this research is to propose an empirical methodology for identifying items that have the potential to impair the validity of a single (unidimensional) construct measure. The proposed methodology is an outgrowth of the multidimensional DIF paradigm of Shealy and Stout (1993a). The use of statistical multidimensional assessment procedures to assist in the DIF detection process was proposed by Douglas, Roussos, and Stout (1995).
This work specifically considers the relationship between relevant (i.e., an examinee's ability and opportunity to learn) and non-relevant (i.e., race) examinee characteristics and their interaction with item option selection. The methodology utilized both item response theory (IRT) techniques and hierarchical log-linear analysis. This combination of techniques presents the practitioner with a methodology for evaluating the validity of an item in order to determine whether to include or exclude the item from the exam as a whole.
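To make the DIF definition above concrete, the sketch below computes a Mantel-Haenszel common odds ratio for a single item across matched ability strata. Note that MH is a standard screening statistic shown here only for illustration; it is not the dissertation's method, which couples IRT-based dimensionality procedures with hierarchical log-linear models. All counts and names below are hypothetical.

```python
# Illustrative Mantel-Haenszel DIF check for one item (not the
# dissertation's procedure). Examinees are grouped into strata of
# comparable ability; within each stratum we tabulate correct/incorrect
# responses for a reference and a focal group.

def mantel_haenszel_odds_ratio(strata):
    """strata: list of (ref_correct, ref_wrong, focal_correct, focal_wrong)
    tuples, one per matched ability level. Returns the MH common odds
    ratio; values near 1.0 suggest comparable performance (no DIF)."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n  # reference-correct x focal-incorrect
        den += b * c / n  # reference-incorrect x focal-correct
    return num / den

# Fabricated counts at three ability strata (low, medium, high):
strata = [
    (40, 10, 30, 20),
    (60, 5, 45, 15),
    (80, 2, 70, 8),
]
print(round(mantel_haenszel_odds_ratio(strata), 2))  # prints 3.38
```

An odds ratio well above 1.0, as in this fabricated example, would flag the item for the kind of follow-up analysis the abstract describes: examining the item and its distractors to decide whether the performance difference reflects a construct-irrelevant feature.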