Kappa statistics


About This Presentation

Cohen's Kappa


Slide Content

DATABASE MANAGEMENT
Kappa statistics
DR. AMEY DHATRAK

Kappa statistics
◦ The kappa statistic is frequently used to test interrater reliability (the extent to which data collectors (raters) assign the same score to the same variable).
◦ Why do we need kappa?
◦ Some studies involve a degree of subjective interpretation by observers, and measurements often differ between raters:
◦ intraobserver variation
◦ intrasubject variation
◦ For example:
◦ Interpreting x-ray results: two radiologists reading the same chest x-ray for signs of pneumoconiosis
◦ Two laboratory scientists counting radioactively marked cells from the same liver tissue
◦ Often the same rater differs when measuring the same thing on a different occasion

Kappa-agreement
◦ Without good agreement, results are difficult to interpret.
◦ Measurements are unreliable or inconsistent.
◦ We need a measure of agreement: kappa.
◦ Remember:
Kappa is the extent to which the observed agreement exceeds that expected by chance alone (i.e., percent agreement observed − percent agreement expected by chance alone) [numerator], relative to the maximum by which the observers could hope to improve their agreement (i.e., 100% − percent agreement expected by chance alone) [denominator].

Formula

\kappa = \frac{\sum_i p_{ii} - \sum_i p_{i+} p_{+i}}{1 - \sum_i p_{i+} p_{+i}}
Example 1

Chance-expected cell counts take the form row total × column total / overall total:
44 × 60 / 100 = 26.4
31 × 40 / 100 = 12.4

Two raters with binary measure:

                            Rater 1
                            No    Biomarker present
Rater 2  No                 15     5
         Biomarker present   4    35
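Applying the same row total × column total / overall total rule to the biomarker table above yields the chance-agreement figure used in Example 2. A short illustrative sketch (not from the slides):

```python
# Chance-expected count for a cell = row total * column total / overall total.
table = [[15, 5],   # Rater 2 = No:                15 agree "No", 5 disagree
         [4, 35]]   # Rater 2 = Biomarker present:  4 disagree, 35 agree "present"
n = 59
row_totals = [20, 39]   # Rater 2 marginals
col_totals = [19, 40]   # Rater 1 marginals
expected_diagonal = [r * c / n for r, c in zip(row_totals, col_totals)]
# [20*19/59, 39*40/59] = [6.44, 26.44] -> chance agreement p_e = 32.88/59 = 0.557
```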
Example 2 (slightly different method)

Cohen's Kappa Statistic (κ) measures agreement between raters beyond that expected by chance:

\kappa = \frac{\sum_i p_{ii} - \sum_i p_{i+} p_{+i}}{1 - \sum_i p_{i+} p_{+i}}

where p_{i+} and p_{+i} represent the marginal probabilities and i = 1, 2 indexes the score.

Two raters with binary measure

                            Rater 1
                            No    Biomarker present    Marginal total
Rater 2  No                 15     5                   20
         Biomarker present   4    35                   39
Marginal total              19    40                   59

Two raters with binary measure (table as above)

Observed agreement:
\sum_i p_{ii} = (15 + 35)/59 = 0.847

Chance-expected agreement:
\sum_i p_{i+} p_{+i} = (20 \times 19 + 40 \times 39)/59^2 = 0.557

Two raters with binary measure (table as above)

\sum_i p_{ii} = 0.847
\sum_i p_{i+} p_{+i} = 0.557

\kappa = \frac{\sum_i p_{ii} - \sum_i p_{i+} p_{+i}}{1 - \sum_i p_{i+} p_{+i}} = \frac{0.847 - 0.557}{1 - 0.557} = 0.654

Confidence intervals for kappa
◦ Given that the most frequently desired confidence level is 95%, the formula uses 1.96 as the constant.
◦ The confidence interval runs from κ − 1.96 × SEκ to κ + 1.96 × SEκ.
◦ To obtain the standard error of kappa (SEκ), the standard large-sample approximation should be used:

SE_\kappa = \sqrt{\frac{p_o (1 - p_o)}{n (1 - p_e)^2}}

where p_o is the observed agreement, p_e the chance-expected agreement, and n the number of paired ratings.
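A sketch of the interval computation for the worked example, assuming the large-sample standard-error approximation above (illustrative, not from the slides):

```python
import math

p_o, p_e, n = 0.847, 0.557, 59   # observed agreement, chance agreement, pairs
kappa = (p_o - p_e) / (1 - p_e)  # ~0.654
se = math.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))  # assumed SE formula, ~0.106
ci = (kappa - 1.96 * se, kappa + 1.96 * se)             # ~ (0.447, 0.862)
```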

Thank You