If (32) holds, we have \(\kappa_0=\kappa_1\) and all of the new kappa coefficients produce the same value. Theorem 5 shows that the kappa coefficient in (11) is a weighted average of the kappa coefficients in (18) and (19); its proof follows from simplifying the expression on the right-hand side of (26). The values of the new kappa coefficients can be strictly ordered in two different ways, and in practice one ordering is more likely to occur than the other. Tables 1 and 2, together with the corresponding values in Table 3, give examples of the more likely ordering. In this ordering, Cohen's kappa attains the minimum value, and the values of the new kappa coefficients increase as the disagreement between the "presence" categories becomes more severe. The strict ordering of their values suggests that the new kappa coefficients measure the same concept, but to different degrees. At first glance it is unclear how the kappa coefficient in (19) should be interpreted.
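The behaviour described above can be sketched numerically. The snippet below implements a weighted kappa in which disagreements among the "presence" categories \(A_1,\ldots,A_{c-1}\) carry weight \(u\) and disagreements involving the "absence" category \(A_c\) carry weight 1. This weight matrix and the example table are illustrative assumptions, not the paper's scheme (8) or its Tables 1-2; under this weighting, \(u=1\) recovers Cohen's (unweighted) kappa and \(u=0\) recovers the kappa of the table collapsed to presence versus absence.

```python
import numpy as np

def weighted_kappa(counts, w):
    """Weighted kappa: 1 - (observed weighted disagreement) /
    (expected weighted disagreement), where w[s, t] is the
    disagreement weight for the category pair (s, t)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()                              # counts -> joint proportions
    e = np.outer(p.sum(axis=1), p.sum(axis=0))   # expected under independence
    return 1.0 - (w * p).sum() / (w * e).sum()

def kappa_u(counts, u):
    """One-parameter family (illustrative weights, not the paper's (8)):
    disagreement among 'presence' categories A_1..A_{c-1} weighs u,
    disagreement involving the 'absence' category A_c weighs 1."""
    c = len(counts)
    w = np.ones((c, c))
    w[:c - 1, :c - 1] = u        # presence-vs-presence block
    np.fill_diagonal(w, 0.0)     # agreement carries no weight
    return weighted_kappa(counts, w)

# Hypothetical 3x3 table of joint ratings by two raters
# (A_1, A_2 = presence, A_3 = absence); not the paper's data.
table = np.array([[20, 5, 3],
                  [4, 18, 2],
                  [1, 2, 15]])
for u in (0.0, 0.5, 1.0):
    print(f"kappa_{u:.1f} = {kappa_u(table, u):.4f}")
```

Because \(\kappa_u\) is a ratio of two functions that are linear in \(u\), its values over \(u\in[0,1]\) are strictly monotone whenever the endpoint values differ, which matches the strict ordering discussed above.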

An interpretation of the coefficient in (19) is presented in the next section. Table 3 contains point and interval estimates of (12) for the data in Tables 1 and 2, for five values of the parameter \(u\). The values in Table 3 show that, for the data in Tables 1 and 2, the coefficient in (12) increases in the parameter \(u\). This property is formally proved in Theorem 6 in the next section. If \(\kappa_0 > \kappa_1\), then \(\kappa_u\) is decreasing and concave downward for \(u\in[0,1]\). The weighting scheme in (8) makes sense if we expect the disagreement between the classifiers on the "presence" categories \(A_1,\ldots,A_{c-1}\) to be similar for all pairs of these categories, and if the disagreement among \(A_1,\ldots,A_{c-1}\) is less severe than the disagreement between \(A_1,\ldots,A_{c-1}\) on the one hand and \(A_c\) on the other.
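Interval estimates like those in Table 3 can also be approximated without an analytic variance formula, via a simple percentile bootstrap that resamples the \(n\) rated items from the observed joint distribution. The sketch below uses the same illustrative presence/absence weighting and a hypothetical table; the weight matrix, data, and function names are assumptions for illustration, not the paper's (8) or Tables 1-2.

```python
import numpy as np

rng = np.random.default_rng(42)

def kappa_u(counts, u):
    """Weighted kappa with illustrative weights: disagreement among the
    'presence' categories weighs u, disagreement involving the last
    ('absence') category weighs 1 (an assumption, not the paper's (8))."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    c = len(p)
    w = np.ones((c, c))
    w[:c - 1, :c - 1] = u
    np.fill_diagonal(w, 0.0)
    e = np.outer(p.sum(axis=1), p.sum(axis=0))
    return 1.0 - (w * p).sum() / (w * e).sum()

def bootstrap_ci(counts, u, B=2000, level=0.95):
    """Percentile-bootstrap interval: resample the n items from the
    observed cell proportions and recompute kappa_u each time."""
    counts = np.asarray(counts)
    n = int(counts.sum())
    cell_probs = counts.ravel() / n
    reps = [kappa_u(rng.multinomial(n, cell_probs).reshape(counts.shape), u)
            for _ in range(B)]
    lo, hi = np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])
    return kappa_u(counts, u), (lo, hi)

# Hypothetical data; the paper's Tables 1-2 are not reproduced here.
table = np.array([[20, 5, 3],
                  [4, 18, 2],
                  [1, 2, 15]])
est, (lo, hi) = bootstrap_ci(table, u=0.5)
print(f"kappa_0.5 = {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The percentile bootstrap is a generic substitute for a delta-method interval; with a smooth statistic such as \(\kappa_u\), repeating this for a grid of \(u\) values reproduces the layout of a table of point and interval estimates.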

. . .