Measurement of Agreement for Categorical Data

Open Access
- Author:
- Yang, Jingyun
- Graduate Program:
- Statistics
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- June 11, 2007
- Committee Members:
- Vernon Michael Chinchilli, Committee Chair/Co-Chair
Runze Li, Committee Chair/Co-Chair
Damla Senturk, Committee Member
Tonya Sharp King, Committee Member
- Keywords:
- multivariate kappa
kappa
Agreement
- Abstract:
- Measurements of agreement are used to assess the reproducibility of a new assay or instrument, the acceptability of a new or generic process or methodology, and method comparison. Examples include the agreement when two methods or two raters simultaneously assess a response or when one rater makes the same assessment at two times, the agreement of a newly developed method with a gold standard method, and the agreement of observed values with predicted values. Traditionally, the kappa and weighted kappa coefficients are used to measure agreement when the responses are categorical, and the concordance correlation coefficient is used when the responses are continuous. Cohen's kappa and weighted kappa coefficients have received many criticisms since they were proposed, and they may fail to work well in certain situations. As a result, researchers have suggested investigating alternative methods for measuring agreement. In this dissertation, we investigate several alternatives to Cohen's kappa and weighted kappa coefficients. Their properties and asymptotic distributions are presented, and simulation results are provided to compare their performance with that of Cohen's kappa and weighted kappa coefficients.
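For context, a minimal sketch of the standard textbook definitions that the abstract refers to (the notation below is not taken from the dissertation itself): Cohen's kappa compares observed agreement with the agreement expected by chance, and the weighted kappa gives partial credit for near-agreement on ordinal scales.

```latex
% Standard definitions of Cohen's kappa and weighted kappa;
% p_{ij} is the observed proportion in cell (i,j) of the k x k rating table,
% with row and column marginals p_{i.} and p_{.j}.
\[
  \kappa \;=\; \frac{p_o - p_e}{1 - p_e},
  \qquad
  p_o = \sum_{i=1}^{k} p_{ii},
  \qquad
  p_e = \sum_{i=1}^{k} p_{i\cdot}\, p_{\cdot i}
\]
% The weighted kappa uses agreement weights 0 <= w_{ij} <= 1 with w_{ii} = 1,
% e.g. quadratic weights w_{ij} = 1 - (i - j)^2 / (k - 1)^2 for ordinal categories.
\[
  \kappa_w \;=\; \frac{\sum_{i,j} w_{ij}\, p_{ij} \;-\; \sum_{i,j} w_{ij}\, p_{i\cdot}\, p_{\cdot j}}
                      {1 \;-\; \sum_{i,j} w_{ij}\, p_{i\cdot}\, p_{\cdot j}}
\]
```

With identity weights (w_{ii} = 1, w_{ij} = 0 otherwise), the weighted kappa reduces to Cohen's kappa; the choice of weights is what adapts the coefficient to ordinal response scales.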