|Title:||Retellings and Semi-Structured Interviews for Assessing Reading Comprehension of Standardized Test Passages and of the Illinois Inventory of Educational Progress Passages as Compared to Scores on Multiple-Choice Test Items|
|Doctoral Committee Chair(s):||Pearson, P. David|
|Department / Program:||Education|
|Degree Granting Institution:||University of Illinois at Urbana-Champaign|
|Subject(s):||Education, Language and Literature; Education, Tests and Measurements|
|Abstract:||The literature in reading comprehension points to a disparity between current knowledge about the reading process and current assessment practices. Prevalent theoretical views favor a constructivist cognitive framework in which text meanings are determined by a composite of reader-related and text-related factors. In contrast, current assessment practices maintain single, pre-specified correct text interpretations, with literal-level understanding as their primary focus. In addition, the literature points to the irrelevance of test scores in making instructional decisions.
Twenty-eight third graders and 25 sixth graders, all of whom had been part of the Reading Assessment Initiatives in the State of Illinois project, participated in the study. They had been tested with both a traditional and a novel multiple-choice test. For the present study, subjects re-read narrative and expository passages, retold the passages, and answered semi-structured interview questions individually. Protocols were scored holistically and analytically. The resulting ratings and rankings were contrasted with those obtained in multiple-choice tests. Qualitative (case studies) and quantitative (correlations and frequencies) results indicated that subjects' ratings and rankings differed with the type of assessment used. Results also pointed to discrepancies in the use of text characteristics and salient text information by subjects and by test questions. The results support the notion that different assessment formats assess either different processes or different components of one process. In a survey, subjects indicated a preference for novel assessment formats (interviews or multiple-choice test items with more than one correct answer) over traditional multiple-choice formats with a single correct answer. The study raises crucial issues such as the importance of task demands, text characteristics, reader factors, and source of responses. Future research should continue to explore alternative assessment procedures in order to bridge the present gap between assessment, theory, and practice.|
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1988.
|Date Available in IDEALS:||2014-12-15|