S6E02 Judging Inter-Rater Reliability

In this week’s episode, Greg and Patrick talk about different ways of assessing inter-rater agreement and reliability among two or more raters, and why doing so matters. Along the way they also discuss the summer Olympics, underdogs, monologue face-offs, Quincy Wilson, Boomers, the Soviet judge, biopsy subjectivity, the secret to college admissions reliability, skipping conference dinners, ripping a dive, Patrick’s silver medal, the trifactor model, the Good Cop parent, temper tantrums, and intellectual Sugar Daddies.
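The episode itself contains no code, but as a minimal sketch of one agreement index in the readings below (Cohen’s 1960 kappa for two raters assigning items to nominal categories), here is a short Python example. The ratings, variable names, and the cohens_kappa helper are all made up for illustration and are not from the episode.

```python
# Minimal sketch: Cohen's (1960) kappa for two raters and nominal categories.
# The ratings below are invented purely for illustration.
from collections import Counter

rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no", "no",  "yes", "no", "yes", "yes", "no", "yes", "yes"]

def cohens_kappa(a, b):
    n = len(a)
    # Observed agreement: proportion of items on which the two raters match.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of the raters' marginal proportions,
    # summed over all categories either rater used.
    freq_a, freq_b = Counter(a), Counter(b)
    categories = set(a) | set(b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    # Kappa rescales observed agreement relative to what chance alone would give.
    return (p_o - p_e) / (1 - p_e)

print(f"kappa = {cohens_kappa(rater_a, rater_b):.3f}")
```

With these made-up ratings, observed agreement is .80 but chance agreement is .52, so kappa comes out to about .58, a noticeably less flattering number than raw percent agreement.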

Related Episodes

  • S2E11: The Replication … Dilemma with Samantha Anderson
  • S1E09: Grumpy Old Man & Village Idiot Argue About Reliability

Recommended Readings

Bauer, D. J., Howard, A. L., Baldasaro, R. E., Curran, P. J., Hussong, A. M., Chassin, L., & Zucker, R. A. (2013). A trifactor model for integrating ratings across multiple informants. Psychological Methods, 18, 475.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Curran, P. J., Georgeson, A. R., Bauer, D. J., & Hussong, A. M. (2021). Psychometric models for scoring multiple reporter assessments: Applications to integrative data analysis in prevention science and beyond. International Journal of Behavioral Development, 45, 40–50.

Gisev, N., Bell, J. S., & Chen, T. F. (2013). Interrater agreement and interrater reliability: Key concepts, approaches, and applications. Research in Social and Administrative Pharmacy, 9, 330–338.

Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8, 23.

Tinsley, H. E., & Weiss, D. J. (2000). Interrater reliability and agreement. In Handbook of applied multivariate statistics and mathematical modeling (pp. 95–124). Academic Press.

Warrens, M. J. (2015). Five ways to look at Cohen’s kappa. Journal of Psychology & Psychotherapy, 5.

 
