Files in this item

8511589.pdf (6MB), PDF (application/pdf), Restricted to U of Illinois
(no description provided)

Description

Title: Techniques for Assessing the Quality of Ratings
Author(s): Burns, Mary Daugherty
Department / Program: Education
Discipline: Education
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Education, Tests and Measurements
Abstract: This thesis identified and studied nine analytical techniques used to compute individual rater reliability estimates. These techniques were analyzed to compare their reliability estimates, to assess their validity, and to determine the effects of modifying the underlying distribution.
To study these analytical techniques, two categories of data were analyzed. The first category consisted of simulated rating data in which individual rater reliabilities were predetermined to serve as the criterion and the underlying distribution was fixed. Data were simulated for three underlying distributions: normal, bimodal, and negatively skewed. The second category consisted of empirical rating data for "novice" raters and for "expert" raters who served as the criterion.
Analyses of the simulated rating data indicated that the individual rater reliability estimates, computed by each analytical technique, were not comparable to the simulated reliabilities for each rater. This was true regardless of the underlying distribution. The underlying distribution affected the magnitude of the individual rater reliability estimates and the agreement between each analytical technique and the criterion. Generally, the analytical techniques were more in agreement with one another than any one technique was with the criterion. In terms of rank ordering raters, none of the analytical techniques ranked the raters identically to the predetermined simulation. However, all the techniques, regardless of the underlying distribution, were able to identify the two most reliable raters, but were unable to identify the rater with the lowest reliability.
Due to the limited reliability achieved by the "expert" rater criterion, no conclusion was drawn regarding the validity of the individual rater reliability techniques as applied to the empirical rating data.
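The simulation design described in the abstract can be sketched in code. The nine analytical techniques are not named in this abstract, so the sketch below illustrates only the general setup as an assumption: each simulated rater's score is a weighted mix of a "true" score and noise, so the rater's correlation with the truth is approximately a predetermined reliability, and a simple estimate (here a Pearson correlation, a hypothetical stand-in for the dissertation's techniques) is compared against the known value. The reliability values and sample size are illustrative, not taken from the thesis.

```python
import math
import random

def simulate_ratings(true_reliability, true_scores, rng):
    """Rater score = w * true score + sqrt(1 - w^2) * noise, so the rater's
    correlation with the true scores is approximately true_reliability."""
    w = true_reliability
    return [w * t + math.sqrt(1 - w * w) * rng.gauss(0, 1) for t in true_scores]

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(1985)
# Normal underlying distribution; the thesis also used bimodal and skewed ones.
true_scores = [rng.gauss(0, 1) for _ in range(500)]

# Hypothetical predetermined rater reliabilities serving as the criterion.
for true_rel in (0.9, 0.7, 0.4):
    ratings = simulate_ratings(true_rel, true_scores, rng)
    est = pearson(ratings, true_scores)
    print(f"predetermined={true_rel:.1f}  estimated={est:.2f}")
```

Because the estimates are computed from a finite sample, they scatter around the predetermined values, which mirrors the abstract's finding that estimated and simulated reliabilities need not agree exactly.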
Issue Date: 1985
Type: Text
Description: 232 p.
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1985.
URI: http://hdl.handle.net/2142/68972
Other Identifier(s): (UMI)AAI8511589
Date Available in IDEALS: 2014-12-15
Date Deposited: 1985
