Files in this item

WEI-DISSERTATION-2016.pdf (4MB) — PDF (Restricted Access)

Description

Title: Bridging the gap between research and practice: construction and validation of a CDA-informed English reading test for China's twelfth graders
Author(s): Wei, Junli
Director of Research: Bowles, Melissa; Davidson, Frederick
Doctoral Committee Chair(s): Bowles, Melissa
Doctoral Committee Member(s): Christianson, Kiel; Jang, Eunice Eunhee; Zhang, Jinming
Department / Program: Educational Psychology
Discipline: Educational Psychology
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Cognitive diagnostic assessment
CDA-informed test
Cognitive diagnostic models
Second language reading
Test development
Validation
Abstract: Assessment should be made more useful for promoting students' learning success. Although recent research advances in cognitive diagnostic assessment (CDA) move in this direction, most CDA studies of second language (L2) reading identify examinees' skill mastery using existing tests that were not originally designed for diagnostic purposes (e.g., Buck, Tatsuoka, & Kostin, 1997; Kasai, 1997). This has led to inaccurate and/or unsatisfactory diagnostic inferences, as many skills are rarely, if ever, measured in such tests (e.g., Alderson, 2010; Jang, 2009). By constructing a CDA-informed test, this study aims to diagnose Chinese twelfth graders' strengths and weaknesses in reading comprehension and thus contribute to enhanced instruction and learning. A total of 1,311 students and their English teachers from one high school in China participated in this research. Using the Cognitive Design System (CDS) approach (Embretson, 1994, 1998), this study integrated a cognitive framework into the entire test development process from the outset, addressing three key issues: what to diagnose, how to diagnose, and how to use the diagnostic information. An integrated mixed methods research design (Greene, 2007) was carried out over four stages to address these issues. In the first stage, this study built a cognitive model, identified eight skills, and specified their hierarchical relationships. The cognitive model was built and iteratively refined by integrating information from a thorough literature review, students' think-aloud protocols, and opinions from content experts. The statistical analyses also indicate that the cognitive model is appropriate for this study: the hierarchy consistency index (HCI) was .66, and the skills specified in the model explained about 60% of the variance in item difficulty in the regression analysis.
In the second stage, test specifications were designed to provide a generative requirement for creating test tasks, and the Q-matrix guided the writing of each specific item. Thirty multiple-choice items were initially created and refined iteratively through pilot tests. In the third and fourth stages, the test response data were analyzed, and both quantitative and qualitative evidence was used to support test inferences. Quantitatively, the response data were first analyzed with the conventional reparameterized unified model (RUM) and then with the reduced reparameterized unified model (r-RUM), which incorporated the attribute hierarchy into the data analysis (the new model is referred to as the rRUM-AH). As shown in the empirical and simulation studies, both the MCMC chain and the burn-in period were shorter for the rRUM-AH than for the RUM, indicating that convergence is comparatively easier to reach with the rRUM-AH. In addition, model-data fit and skill classification accuracy for the rRUM-AH were compared with those of several other cognitive diagnostic models (CDMs) with attribute hierarchies to determine which model was best for this study. Furthermore, qualitative evidence (e.g., interviews, classroom observations, and surveys) was collected to evaluate diagnostic feedback on learning. Situated at the intersection of theories of L2 reading, cognition, and measurement, this dissertation narrates a validation process for developing a CDA-informed English reading test, thereby strengthening validity arguments and enhancing understanding of the complexity of CDA test construction. As one of the first studies to develop a CDA-informed L2 reading test, it is also unique in incorporating the hierarchical structure of the reading skills into test design, development, and validation.
Issue Date: 2016-07-13
Type: Thesis
URI: http://hdl.handle.net/2142/92943
Rights Information: Copyright 2015 Junli Wei
Date Available in IDEALS: 2016-11-10
Date Deposited: 2016-08

