Abstract
The validity of the Kappa coefficient of chance-corrected agreement has been questioned when the prevalence of specific rating scale categories is low and agreement between raters is high. The researchers proposed the Lambda Coefficient of Rater-Mediated Agreement as an alternative to Kappa to address these concerns. Lambda corrects for chance agreement based on specific assumptions about raters and the rater-mediated assessment process, including rater-specific tendencies toward strict or lenient ratings. Actual ratings of teacher profiles from an interrater reliability exercise confirmed the shortcomings of Kappa. The rater data also demonstrated the robustness of Lambda-1, Lambda-2, Gwet’s AC1, and Gwet’s AC2 to the data conditions that are problematic for Kappa. All four alternative chance-corrected agreement coefficients showed less variability across the 45 raters than Kappa. However, AC2 was undetermined for 39 of the 45 raters. Simulation data further demonstrated the robustness of the Lambda Coefficient of Rater-Mediated Agreement to the data conditions that are problematic for Kappa.
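The data condition the abstract describes, high observed agreement combined with a highly skewed category prevalence, is the well-known "Kappa paradox." A minimal sketch below illustrates it using the standard formulas for Cohen's Kappa and Gwet's AC1; the Lambda coefficient proposed in the paper is not reproduced here, and the 2x2 contingency table and its counts are hypothetical, chosen only to show how Kappa drops while AC1 does not.

```python
def _marginals(table):
    """Observed agreement and row/column marginal proportions for a square table."""
    n = sum(sum(row) for row in table)
    q = len(table)
    p_o = sum(table[k][k] for k in range(q)) / n                        # observed agreement
    row = [sum(table[k]) / n for k in range(q)]                         # rater 1 marginals
    col = [sum(table[i][k] for i in range(q)) / n for k in range(q)]    # rater 2 marginals
    return p_o, row, col, q


def cohen_kappa(table):
    """Cohen's Kappa: chance agreement from the product of the raters' marginals."""
    p_o, row, col, q = _marginals(table)
    p_e = sum(row[k] * col[k] for k in range(q))
    return (p_o - p_e) / (1 - p_e)


def gwet_ac1(table):
    """Gwet's AC1: chance agreement from mean category prevalences."""
    p_o, row, col, q = _marginals(table)
    pi = [(row[k] + col[k]) / 2 for k in range(q)]                      # mean prevalence per category
    p_e = sum(p * (1 - p) for p in pi) / (q - 1)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical ratings of 100 subjects by two raters; one category dominates.
table = [[80, 10],   # rater 1 = category A
         [5, 5]]     # rater 1 = category B

p_o, *_ = _marginals(table)
print(f"Observed agreement: {p_o:.2f}")              # 0.85
print(f"Cohen's Kappa:      {cohen_kappa(table):.3f}")  # ~0.32 despite 85% agreement
print(f"Gwet's AC1:         {gwet_ac1(table):.3f}")     # ~0.81, robust to the skewed prevalence
```

Under this skewed prevalence, Kappa's chance-agreement term is inflated by the large marginals of the dominant category, which is the behavior the paper's rater and simulation data document; AC1, and by the paper's account Lambda-1 and Lambda-2, remain close to the observed agreement.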