RESEARCH METHODOLOGY - TYPES OF RELIABILITY

JeevaRathi · Feb 02, 2025

About This Presentation

DIFFERENT TYPES OF RELIABILITY


Slide Content

TYPES OF RELIABILITY
A. Jeevarathinam, Assistant Professor, Department of Home Science, V.V.Vanniaperumal College for Women, Virudhunagar

Types of Reliability - Test-retest reliability
Test-retest reliability measures the stability of the scores of a stable construct obtained from the same person on two or more separate occasions. How it is measured: a group of participants completes a questionnaire designed to measure personality traits; they repeat the questionnaire days, weeks, or months apart; the correlation coefficient between the two sets of scores is then calculated and interpreted.

Test-retest reliability
The correlation between the two sets of scores is given by the Pearson formula:

r = (N Σxy − (Σx)(Σy)) / √[(N Σx² − (Σx)²)(N Σy² − (Σy)²)]

Here N is the number of score pairs; for example, if 10 students took the test and retest, then N would be 10. The Greek symbol sigma (Σ) means "the sum of". xy means we multiply x by y, where x and y are a student's test and retest scores; Σxy means each of the 10 pairs is multiplied together and the 10 products are then summed (not the sum of the test scores multiplied by the sum of the retest scores).
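The slide's worked data table did not survive extraction, but as a minimal sketch of the computation described above (assuming the standard Pearson formula and hypothetical scores for the 10 students):

```python
# Test-retest reliability via the Pearson correlation coefficient.
# Scores are hypothetical stand-ins for the slide's lost data table.
import math

test   = [72, 65, 88, 90, 54, 77, 81, 69, 93, 60]   # x: first administration
retest = [70, 68, 85, 92, 50, 75, 83, 71, 90, 62]   # y: second administration

n = len(test)                                        # N = 10 students
sum_x  = sum(test)
sum_y  = sum(retest)
sum_xy = sum(x * y for x, y in zip(test, retest))    # sum of the products
sum_x2 = sum(x * x for x in test)
sum_y2 = sum(y * y for y in retest)

r = (n * sum_xy - sum_x * sum_y) / math.sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2)
)
print(f"test-retest reliability: r = {r:.3f}")
```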

Test-retest reliability - how it is interpreted
A correlation coefficient of 1 indicates a perfect positive correlation, while -1 indicates a perfect negative correlation. A coefficient above 0.9 indicates excellent reliability; between 0.8 and 0.9, good reliability; between 0.7 and 0.8, acceptable reliability.
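As a small illustrative sketch, those bands can be expressed directly in code (the label for coefficients at or below 0.7 is not given on the slide and is an assumption here):

```python
# Map a correlation coefficient to the reliability bands listed above.
# The label for coefficients at or below 0.7 is not given on the slide;
# "below the acceptable threshold" is an assumption.
def interpret_reliability(r: float) -> str:
    if r > 0.9:
        return "excellent"
    if r > 0.8:
        return "good"
    if r > 0.7:
        return "acceptable"
    return "below the acceptable threshold"

print(interpret_reliability(0.85))  # -> good
```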

Inter-rater reliability
In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, and inter-coder reliability) measures the agreement between subjective ratings by multiple raters, inspectors, judges, or appraisers. It measures how likely two or more judges are to give the same rating to an individual event or person.

Inter-rater reliability - example

Inter-rater reliability - steps

Inter-rater reliability - steps (continued)

Inter-rater reliability - inference
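The example, steps, and inference slides above were images whose content was not captured, and the slide text does not name a specific agreement statistic. As a hedged sketch, Cohen's kappa is one common measure of agreement between two raters assigning categorical ratings; the ratings below are hypothetical:

```python
# Inter-rater agreement between two raters via Cohen's kappa.
# Cohen's kappa is an assumption here (the slides do not name a statistic);
# the pass/fail ratings are hypothetical.
from collections import Counter

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # p_o: observed agreement

# p_e: agreement expected by chance, from each rater's marginal proportions
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(
    (counts_a[cat] / n) * (counts_b[cat] / n)
    for cat in set(rater_a) | set(rater_b)
)

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.3f}")
```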

Parallel forms reliability
Parallel forms reliability (also called equivalent forms reliability) uses one set of questions divided into two equivalent sets ("forms"), where both sets contain questions that measure the same construct, knowledge, or skill. The two sets of questions are given to the same sample of people within a short period of time, and an estimate of reliability is calculated from the two sets.

Parallel forms reliability - example
The Sound Recognition Test is a test for a condition known as auditory agnosia, which affects a person's ability to recognize familiar environmental sounds, such as a bell, a whistle, or crowd sounds. There are two forms of the test, A and B, with 13 items per form. Scoring allows up to 3 points per item, making 39 the highest possible score. A group of normal five-year-old children was selected and given form A; the next day, they were given form B. The accompanying table shows the data and the scheme for calculating the reliability coefficient.

Parallel forms reliability - steps
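The data table for the Sound Recognition Test example was not captured, but as a minimal sketch of the calculation (hypothetical form A and form B scores within the test's 0-39 range, correlated with Pearson's r):

```python
# Parallel forms reliability: correlate scores on form A with scores on form B.
# The original data table was not captured, so these scores are hypothetical,
# kept within the Sound Recognition Test's 0-39 scoring range.
import statistics

form_a = [35, 30, 38, 27, 33, 36, 29, 31]   # day 1: form A scores
form_b = [34, 28, 37, 29, 32, 35, 27, 33]   # day 2: form B scores

r = statistics.correlation(form_a, form_b)  # Pearson's r (Python 3.10+)
print(f"parallel forms reliability: r = {r:.3f}")
```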

THANK YOU