Fleiss' kappa

Fleiss' kappa is a statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items. This contrasts with other kappas, such as Cohen's kappa, which only work when assessing the agreement between two raters. The measure calculates the degree of agreement in classification over that which would be expected by chance; it is scored as a number no greater than 1, where 1 indicates complete agreement and values at or below 0 indicate no agreement beyond chance. There is no generally agreed-upon measure of significance, although guidelines have been given.

Introduction

Fleiss' kappa is a generalisation of Scott's pi statistic,[2] a statistical measure of inter-rater reliability.[1] It is also related to Cohen's kappa statistic. Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings (see nominal data) to a fixed number of items. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly.

Agreement can be thought of as follows: if a fixed number of people assign categorical ratings to a number of items, then the kappa gives a measure of how consistent the ratings are. The kappa, \kappa, can be defined as

(1): \kappa = \frac{\bar{P} - \bar{P_e}}{1 - \bar{P_e}}

The factor 1 - \bar{P_e} gives the degree of agreement that is attainable above chance, and \bar{P} - \bar{P_e} gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then \kappa = 1. If there is no agreement among the raters (other than what would be expected by chance) then \kappa \le 0.
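As a quick numerical illustration (a minimal Python sketch; the function name and the sample values of \bar{P} and \bar{P_e} are hypothetical, not from the source), equation (1) can be evaluated directly once the mean observed agreement and the mean chance agreement are known:

    # Equation (1): kappa from the mean observed agreement p_bar and the
    # mean agreement expected by chance p_e_bar (sample values are made up).
    def kappa_from_agreement(p_bar: float, p_e_bar: float) -> float:
        return (p_bar - p_e_bar) / (1.0 - p_e_bar)

    print(kappa_from_agreement(1.0, 0.25))   # complete agreement      -> 1.0
    print(kappa_from_agreement(0.25, 0.25))  # chance-level agreement  -> 0.0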

An example of the use of Fleiss' kappa may be the following: consider fourteen psychiatrists who are asked to examine ten patients. Each psychiatrist gives one of five possible diagnoses to each patient. Fleiss' kappa can be computed from the resulting rating matrix (see the worked example below) to show the degree of agreement among the psychiatrists above the level of agreement expected by chance.

Equations

Let "N" be the total number of subjects, let "n" be the number of ratings per subject, and let "k" be the number of categories into which assignments are made. The subjects are indexed by "i" = 1, ... "N" and the categories are indexed by "j" = 1, ... "k". Let "n""ij" represent the number of raters who assigned the "i"-th subject to the "j"-th category.

First calculate p_j, the proportion of all assignments which were to the j-th category:

(2): p_j = \frac{1}{N n} \sum_{i=1}^N n_{ij}, \qquad 1 = \frac{1}{n} \sum_{j=1}^k n_{ij}

Now calculate P_i, the extent to which raters agree for the i-th subject:

(3): P_i = \frac{1}{n(n - 1)} \sum_{j=1}^k n_{ij} (n_{ij} - 1)

         = \frac{1}{n(n - 1)} \sum_{j=1}^k (n_{ij}^2 - n_{ij})

         = \frac{1}{n(n - 1)} \left( \sum_{j=1}^k n_{ij}^2 - n \right)

Now compute \bar{P}, the mean of the P_i's, and \bar{P_e}, which go into the formula for \kappa:

(4): \bar{P} = \frac{1}{N} \sum_{i=1}^N P_i

             = \frac{1}{N n (n - 1)} \left( \sum_{i=1}^N \sum_{j=1}^k n_{ij}^2 - N n \right)

(5): \bar{P_e} = \sum_{j=1}^k p_j^2
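The equations above translate directly into code. The following Python sketch (the function and variable names are illustrative, not from the original paper) assumes the ratings are supplied as an N x k matrix of counts n_{ij}, with each row summing to the number of ratings per subject n:

    from typing import Sequence

    def fleiss_kappa(counts: Sequence[Sequence[int]]) -> float:
        """Fleiss' kappa for an N x k matrix of category counts per subject."""
        N = len(counts)        # number of subjects
        k = len(counts[0])     # number of categories
        n = sum(counts[0])     # ratings per subject (assumed equal for every row)

        # Equation (2): p_j, proportion of all assignments made to category j.
        p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]

        # Equation (3): P_i, extent to which raters agree on subject i.
        P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]

        # Equations (4) and (5): mean observed and mean chance agreement.
        P_bar = sum(P) / N
        P_e_bar = sum(pj * pj for pj in p)

        # Equation (1).
        return (P_bar - P_e_bar) / (1.0 - P_e_bar)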

Worked example

In the following example, fourteen raters (n) assign ten subjects (N) to a total of five categories (k). The categories are presented in the columns, while the subjects are presented in the rows.

Data

The rating matrix n_{ij} is shown below: each row is a subject, each column a category, and each cell gives the number of raters (out of 14) who assigned that subject to that category.

           Cat 1   Cat 2   Cat 3   Cat 4   Cat 5     P_i
   1         0       0       0       0      14     1.000
   2         0       2       6       4       2     0.253
   3         0       0       3       5       6     0.308
   4         0       3       9       2       0     0.440
   5         2       2       8       1       1     0.330
   6         7       7       0       0       0     0.462
   7         3       2       6       3       0     0.242
   8         2       5       3       2       2     0.176
   9         6       5       2       1       0     0.286
  10         0       2       2       3       7     0.286
 Total      20      28      39      21      32
  p_j      0.143   0.200   0.279   0.150   0.229

N = 10, n = 14, k = 5

Sum of all cells = 140; sum of P_i = 3.780

Calculations

For example, taking the first column,

p_1 = \frac{0+0+0+0+2+7+3+2+6+0}{140} = 0.143

And taking the second row,

P_2 = \frac{1}{14(14 - 1)} \left(0^2 + 2^2 + 6^2 + 4^2 + 2^2 - 14\right) = 0.253

In order to calculate \bar{P}, we need to know the sum of the P_i:

\sum_i P_i = 1.000 + 0.253 + \cdots + 0.286 + 0.286 = 3.780

Over the whole sheet,

\bar{P} = \frac{1}{10} (3.780) = 0.378

\bar{P_e} = 0.143^2 + 0.200^2 + 0.279^2 + 0.150^2 + 0.229^2 = 0.213

\kappa = \frac{0.378 - 0.213}{1 - 0.213} = 0.210
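As a cross-check (a sketch reusing the hypothetical fleiss_kappa function from the Equations section), feeding the rating matrix from the data table above into the code reproduces the figure obtained by hand:

    # Rows are the ten subjects, columns the five categories.
    ratings = [
        [0, 0, 0, 0, 14],
        [0, 2, 6, 4, 2],
        [0, 0, 3, 5, 6],
        [0, 3, 9, 2, 0],
        [2, 2, 8, 1, 1],
        [7, 7, 0, 0, 0],
        [3, 2, 6, 3, 0],
        [2, 5, 3, 2, 2],
        [6, 5, 2, 1, 0],
        [0, 2, 2, 3, 7],
    ]
    print(f"{fleiss_kappa(ratings):.3f}")  # 0.210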

Significance

Landis and Koch (1977) gave the following table for interpreting \kappa values.[3]

  \kappa            Interpretation
  < 0               Poor agreement
  0.01 – 0.20       Slight agreement
  0.21 – 0.40       Fair agreement
  0.41 – 0.60       Moderate agreement
  0.61 – 0.80       Substantial agreement
  0.81 – 1.00       Almost perfect agreement

This table is, however, by no means universally accepted; Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful,[4] as the number of categories and subjects will affect the magnitude of the value: the kappa will be higher when there are fewer categories.[5]
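Where such descriptive labels are wanted despite the caveats above, the mapping is a simple lookup. The helper below is an illustrative Python sketch (the function name is hypothetical) using the Landis and Koch thresholds from the table:

    def landis_koch_label(kappa: float) -> str:
        """Map a kappa value to the Landis and Koch (1977) descriptive label."""
        if kappa < 0:
            return "Poor agreement"
        for upper, label in [(0.20, "Slight agreement"),
                             (0.40, "Fair agreement"),
                             (0.60, "Moderate agreement"),
                             (0.80, "Substantial agreement")]:
            if kappa <= upper:
                return label
        return "Almost perfect agreement"

    print(landis_koch_label(0.210))  # Fair agreement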

See also

* Cohen's kappa
* Pearson product-moment correlation coefficient
* Joseph L. Fleiss

Notes

1. Fleiss, J. L. (1971), pp. 378–382
2. Scott, W. (1955), pp. 321–325
3. Landis, J. R. and Koch, G. G. (1977), pp. 159–174
4. Gwet, K. (2001)
5. Sim, J. and Wright, C. C. (2005), pp. 257–268

References

* Fleiss, J. L. (1971) "Measuring nominal scale agreement among many raters." Psychological Bulletin, Vol. 76, No. 5, pp. 378–382.
* Gwet, K. (2001) Statistical Tables for Inter-Rater Agreement. Gaithersburg: StatAxis Publishing.
* Landis, J. R. and Koch, G. G. (1977) "The measurement of observer agreement for categorical data." Biometrics, Vol. 33, pp. 159–174.
* Scott, W. (1955) "Reliability of content analysis: The case of nominal scale coding." Public Opinion Quarterly, Vol. 19, No. 3, pp. 321–325.
* Sim, J. and Wright, C. C. (2005) "The kappa statistic in reliability studies: Use, interpretation, and sample size requirements." Physical Therapy, Vol. 85, No. 3, pp. 257–268.

Further reading

* Fleiss, J. L. and Cohen, J. (1973) "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability." Educational and Psychological Measurement, Vol. 33, pp. 613–619.
* Fleiss, J. L. (1981) Statistical Methods for Rates and Proportions. 2nd ed. New York: John Wiley, pp. 38–46.

External links

* [http://ourworld.compuserve.com/homepages/jsuebersax/kappa.htm Kappa: Pros and Cons] contains a good bibliography of articles about the coefficient.

* [http://justus.randolph.name/kappa Online Kappa Calculator] calculates a variation of Fleiss' kappa.

