Files in this item

Neo_TeckYongLawrence.pdf (316 kB), PDF (no description provided)
Description

Title: Effective cognitive diagnostic assessment with computerized adaptive testing
Author(s): Neo, Teck Yong Lawrence
Advisor(s): Chang, Hua-Hua
Department / Program: Educational Psychology
Discipline: Educational Psychology
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: M.A.
Genre: Thesis
Subject(s): Cognitive Diagnostic Assessment
Computerized Adaptive Testing
Abstract: Educators are seeking mechanisms for reporting diagnostic information about the strengths and weaknesses of each student. Cognitive diagnostic assessment (CDA) is a form of assessment that can help educators discover their students' strengths and weaknesses. In CDA, examinees are classified according to the specific attributes (abilities, skills, and knowledge) that they possess. Numerous models exist for cognitive diagnostic assessment; a higher-order latent trait model was used in this study. This model accounts for the local dependencies among the attributes by using a higher-order latent trait to model attribute mastery. Another benefit of using a higher-order latent trait model is that it allows concurrent estimation of each examinee's higher-order latent trait and his or her mastery of the attributes. Equipping educators with reports on an individual examinee's general ability together with his or her mastery states with respect to the attributes could lead to better tailored remediation. For example, more scaffolding might need to be provided to examinees with lower general ability, whereas those with higher general ability might require a different pedagogical approach. Computerized adaptive testing (CAT) is a mode of testing that has gained popularity in recent years. It tailors the test to each examinee's ability; that is, each examinee receives items that are neither too easy nor too difficult. Consequently, CAT is more efficient than paper-and-pencil testing, and advances in technology have made it a viable option for test administration. These advantages make CAT an attractive mode for administering CDA. The key to adaptive testing is the item selection rule, which should pick the item that most closely matches the examinee's current ability estimate.
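To make the idea of "picking the item that matches the examinee's ability" concrete, the following sketch uses maximum Fisher information under a two-parameter logistic (2PL) IRT model, a standard IRT-based CAT selection rule. This is purely illustrative background, not the thesis's proposed indices (VAS, SHAS, or their adapted versions); the item pool and parameter values are hypothetical.

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta,
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta_hat, item_pool, administered):
    """Pick the unadministered item with maximum information at the
    current ability estimate theta_hat."""
    candidates = [i for i in range(len(item_pool)) if i not in administered]
    return max(candidates,
               key=lambda i: fisher_information(theta_hat, *item_pool[i]))

# hypothetical item pool as (a, b) pairs: discrimination, difficulty
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.1), (1.0, 2.0)]
print(select_item(0.0, pool, administered={2}))  # → 0
```

With item 2 already administered, item 0 is the most informative remaining item at theta = 0; in a live CAT the ability estimate would be updated after each response and the selection repeated.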
Most CATs are based on item response theory (IRT) models. IRT-based CAT and CDA adopt different evaluation frameworks for the examinees, which implies that IRT-based CAT methods cannot be directly applied to the administration of CDA. Thus, new methods must be developed for cognitive diagnostic computerized adaptive testing (CD-CAT). The efficiency of adaptive testing is known to depend heavily on the ability of the item selection rule to pick the most appropriate item for the examinee at every stage of the test. However, problems can arise if the statistically most appropriate items are selected at every stage without consideration of non-statistical constraints such as item exposure rates and content balancing. In practice, tests usually cover several content areas and require balanced content coverage, so content-balancing constraints are important for test construction. In addition, items that become known to many examinees lose their power to distinguish examinees by ability: examinees with low ability might answer these over-exposed items correctly because they could prepare the answers beforehand. In short, for an item selection rule to be practical, it must also handle non-statistical constraints. Thus, the focus of this study is on item selection rules with mechanisms for managing non-statistical constraints. This study examines the efficiency of two new item selection rules. A higher-order latent trait model was used for the study. Besides being able to account for the local dependencies between the attributes, the model allows for simultaneous estimation of the examinee's mastery state with respect to specific attributes and his or her higher-order latent trait.
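The two non-statistical constraints named above can each be handled with simple, well-known mechanisms. The sketch below is one illustrative combination, not the thesis's method: content balancing by always drawing from the content area furthest behind its target share, and "randomesque" exposure control, which picks at random among the top few candidates instead of always administering the single best item. All names and values are hypothetical.

```python
import random

def pick_content_area(targets, counts):
    """Content balancing: return the content area whose share of the
    items administered so far is furthest below its target proportion."""
    total = max(sum(counts.values()), 1)
    return min(targets, key=lambda c: counts.get(c, 0) / total - targets[c])

def randomesque_select(scores, top_k=3, rng=random):
    """Exposure control: choose at random among the top_k best-scoring
    items, so no single item is administered to every examinee."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return rng.choice(ranked[:top_k])

# two content areas with a 50/50 target; area "B" is behind after 3 items
targets = {"A": 0.5, "B": 0.5}
counts = {"A": 2, "B": 1}
print(pick_content_area(targets, counts))  # → B

# item scores could come from any selection index (information, KL, ...)
scores = {"i1": 0.9, "i2": 0.5, "i3": 0.7, "i4": 0.1}
print(randomesque_select(scores, top_k=3))  # one of i1, i3, i2
```

In a full CD-CAT these two filters would be applied before the selection index chooses among the remaining eligible items.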
Two new item selection rules (VAS and SHAS), which are based on an attribute-specific item discrimination index, are proposed in Chapter 2. The study suggests that the adapted Hybrid Kullback-Leibler index and the adapted versions of the new indices (A_VAS and A_SHAS) are better suited for providing diagnostic feedback when a short test is used. These three adapted indices were able to recover the individual attributes with high accuracy. After 24 items had been administered, the three indices had correctly classified (examinees' classifications matching their latent classes) about 73% of the examinees, while about 87% of examinees were classified correctly or "almost" correctly (at least seven of eight attributes correctly classified). In terms of the accuracy of the general ability estimation, however, the three indices produced high bias and mean squared errors; a longer test would be needed to obtain a more accurate estimate of the general ability.
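The two reported accuracy measures (exact match, and "almost" correct with at least seven of eight attributes recovered) could be computed as below. This is a hypothetical helper for illustration, not code from the study; the example uses four attributes with a near-miss tolerance of one for brevity.

```python
def classification_accuracy(estimated, true, near_miss=1):
    """Return (exact_rate, near_rate) over examinees' attribute mastery
    patterns, each a list of 0/1 mastery indicators.
    exact: estimated pattern equals the true latent class exactly;
    near:  at most near_miss attributes are misclassified."""
    n = len(true)
    exact = sum(e == t for e, t in zip(estimated, true))
    near = sum(
        sum(a == b for a, b in zip(e, t)) >= len(t) - near_miss
        for e, t in zip(estimated, true)
    )
    return exact / n, near / n

true_patterns = [[1, 1, 0, 1], [0, 1, 1, 1], [1, 0, 0, 0]]
est_patterns  = [[1, 1, 0, 1], [0, 1, 1, 0], [1, 1, 0, 1]]
exact, near = classification_accuracy(est_patterns, true_patterns)
print(exact, near)  # → 0.333... 0.666...
```

Here one of three examinees is classified exactly, and a second misses only its last attribute, so it counts toward the near-match rate; in the study's setting the patterns would have eight attributes and near_miss=1 reproduces the "at least seven of eight" criterion.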
Issue Date: 2011-08-26
URI: http://hdl.handle.net/2142/26324
Rights Information: Copyright 2011 Teck Yong Lawrence Neo
Date Available in IDEALS: 2013-08-27
Date Deposited: 2011-08

