In this study, the performance of the Lambda Coefficient of Rater-Mediated Agreement was evaluated against other chance-corrected agreement coefficients. Lambda is grounded in rater-mediated assessment theory and was developed as an alternative to Kappa (Cohen, 1960) and other chance-corrected agreement coefficients. Lambda has two variations. The general form, referred to as Lambda-1, is calculated similarly to most chance-corrected agreement coefficients, such as Kappa (Lambert et al., 2021). Lambda-2 differs from Lambda-1 in the calculation of the proportion of expected chance agreement: when known population proportions are available, Lambda-2 applies them in the calculation of expected chance agreement. In total, six coefficients were calculated on generated data that varied the amount and location of agreement and disagreement between ratings across two-, three-, and four-point rating scales. Exact agreement specifications ranged from 75% to 95% across 115 planned data conditions, and the simulations adjusted prevalence indices according to those specifications (Xie, 2013). Results demonstrated the robustness of Lambda-1 and Lambda-2 to data conditions that are problematic for other coefficients. Both variations of Lambda produced benchmark agreement results that retained meaning that may be diminished under other coefficients.
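As a rough illustration of the distinction drawn above (not the paper's implementation), chance-corrected coefficients share the general form (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is expected chance agreement. Cohen's Kappa estimates p_e from the raters' own marginal proportions, whereas a coefficient that uses known population proportions plugs those in instead. The sketch below assumes a simple squared-proportions model of chance agreement for the population-based variant; the paper's actual Lambda-2 formula may differ.

```python
# Illustrative sketch only: the population-based variant below is an
# assumption about how known proportions could enter p_e, not the paper's
# published Lambda-2 formula.
from collections import Counter

def observed_agreement(r1, r2):
    """Proportion of items on which the two raters agree exactly."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohen_kappa(r1, r2):
    """Cohen's Kappa: p_e estimated from each rater's marginal proportions."""
    n = len(r1)
    p_o = observed_agreement(r1, r2)
    m1, m2 = Counter(r1), Counter(r2)
    p_e = sum((m1[c] / n) * (m2[c] / n) for c in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

def chance_corrected_known_props(r1, r2, pop_props):
    """Same chance-corrected form, but p_e computed from supplied known
    population proportions (squared-proportions assumption)."""
    p_o = observed_agreement(r1, r2)
    p_e = sum(p * p for p in pop_props.values())
    return (p_o - p_e) / (1 - p_e)
```

The two functions differ only in how p_e is obtained, which is the point of contrast between Kappa-style coefficients and a variant that exploits known population proportions.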