Abstract

The validity of the Kappa coefficient of chance-corrected agreement has been questioned when the prevalence of specific rating scale categories is low and agreement between raters is high. The researchers proposed the Lambda Coefficient of Rater-Mediated Agreement as an alternative to Kappa to address these concerns. Lambda corrects for chance agreement based on specific assumptions about raters and the rater-mediated assessment process, including rater-specific tendencies toward strict or lenient ratings. Actual ratings of teacher profiles from an interrater reliability exercise confirmed the shortcomings of Kappa when used within the teacher performance evaluation process. The rater data also demonstrated the robustness of Lambda and Gwet’s AC-1 to the data conditions known to be problematic for Kappa. All of the alternative chance-corrected agreement coefficients evaluated showed less variability across the 57 raters than Kappa. Simulation results further demonstrated the robustness of the Lambda Coefficient of Rater-Mediated Agreement to the data conditions that are problematic for Kappa.
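
To make the data condition described above concrete, the sketch below (not from the paper; the contingency table and the use of Python are illustrative assumptions) computes Cohen's Kappa and Gwet's AC-1 for two raters on a hypothetical binary rating task with high observed agreement but low prevalence of one category. Standard published formulas for the two coefficients are used; the paper's Lambda coefficient is not reproduced here because its formula is defined in the full text rather than in this abstract.

```python
# Minimal sketch, assuming a hypothetical 2x2 table of two raters' binary ratings.
# Uses the standard formulas for Cohen's Kappa and Gwet's AC-1; the paper's
# Lambda Coefficient of Rater-Mediated Agreement is NOT implemented here.

def cohen_kappa(table):
    """Cohen's Kappa from a KxK contingency table of counts."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n                       # observed agreement
    row = [sum(table[i]) / n for i in range(k)]                        # rater 1 marginals
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]   # rater 2 marginals
    p_e = sum(row[i] * col[i] for i in range(k))                       # chance agreement
    return (p_o - p_e) / (1 - p_e)

def gwet_ac1(table):
    """Gwet's AC-1 from a KxK contingency table of counts."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n
    row = [sum(table[i]) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    pi = [(row[i] + col[i]) / 2 for i in range(k)]                     # average category prevalence
    p_e = sum(p * (1 - p) for p in pi) / (k - 1)                       # chance agreement under AC-1
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: 91% observed agreement, but the second category is rare.
table = [[90, 5],
         [4, 1]]
print("Cohen's Kappa:", round(cohen_kappa(table), 3))
print("Gwet's AC-1:  ", round(gwet_ac1(table), 3))
```

On this hypothetical table, observed agreement is 91%, yet Kappa falls to roughly 0.13 while AC-1 stays near 0.90, which is the low-prevalence, high-agreement pattern the abstract identifies as problematic for Kappa and better handled by the alternative coefficients.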
