RELIABILITY AND VALIDITY OF RESEARCH TOOLS.pptx

Supriya Batwalkar, Feb 29, 2024

About This Presentation

Nursing research - Reliability and validity of research


Slide Content

RELIABILITY AND VALIDITY OF RESEARCH TOOLS Supriya

Reliability: Reliability is defined as the ability of an instrument to produce reproducible results, because without the ability to reproduce results no truth can be known.

Reliability: “The reliability of an instrument is the degree of consistency with which the instrument measures the target attribute” - Polit and Hungler. “Reliability constitutes the ability of a measuring instrument to produce the same answer on successive occasions when no change has occurred in the thing being measured” - Burroughs.

Features

Approaches

Stability: Stability of an instrument is the extent to which similar results or responses are obtained on two or more separate occasions. The assessment of an instrument’s stability involves evaluating its test-retest reliability: investigators administer the same measure to a sample twice and then compare the scores. Usually an interval of about two weeks is kept between the two assessments, because too short an interval (less than two weeks) allows the first administration to influence the second scores, while too long an interval risks loss of subjects or unexpected change in the variable under study.

Stability, test-retest: Test-retest reliability is the degree to which scores are consistent over time. It indicates the score variation that occurs from testing session to testing session as a result of errors of measurement. Same test, different times; it only works if the phenomenon is unchanging. Example: administering the same questionnaire at two different times.

Stability: The coefficient of correlation is computed to confirm the test’s reliability. The possible values for a correlation coefficient range from −1.00 to +1.00. A reliability coefficient above 0.80 is usually considered good; however, 0.70 can be considered acceptable.

Reliability: A number of statistical formulae are used to compute reliability. For the stability (test-retest) method, Pearson’s correlation coefficient is used to estimate reliability.
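As an illustration (not part of the original slides), a minimal Python sketch of the test-retest estimate: Pearson’s correlation between two administrations of the same measure, using hypothetical scores.

```python
import numpy as np

# Hypothetical scores from the same 8 respondents on two occasions,
# roughly two weeks apart (illustrative data only).
time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])
time2 = np.array([13, 14, 10, 19, 18, 12, 13, 17])

# Pearson's correlation between the two administrations is the
# test-retest (stability) reliability estimate.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability r = {r:.2f}")
# A coefficient above 0.80 is usually considered good; 0.70 is acceptable.
```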

Equivalence: The focus of equivalence is the comparison of two versions of the same paper-and-pencil instrument, or of two observers measuring the same events.

Inter-item reliability (internal consistency): The association among answers to a set of questions designed to measure the same concept. Cronbach’s alpha is a statistic commonly used to measure inter-item reliability; it is based on the average of all the possible correlations of all the split halves of the set of questions on a questionnaire.
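A minimal sketch, assuming a respondents-by-items score matrix, of how Cronbach’s alpha can be computed from item and total-score variances (the data and the helper function are hypothetical, not from the slides):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item questionnaire answered by 6 respondents.
scores = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
    [3, 2, 3, 3, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```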

Parallel-form and split-half reliability: Split-half reliability is especially appropriate when the test is very long. The most commonly used method to split the test into two is the odd-even strategy. Since longer tests tend to be more reliable, and since split-half reliability represents the reliability of a test only half as long as the actual test, a correction (the Spearman–Brown formula) is applied to estimate the reliability of the full-length test.
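A hedged sketch of the odd-even split-half approach, with the Spearman–Brown correction applied to estimate the reliability of the full-length test (hypothetical item data; the correction formula is the standard textbook one, not reproduced on the slides):

```python
import numpy as np

# Hypothetical respondents-by-items matrix for a 10-item test (1 = correct).
scores = np.array([
    [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
])

# Odd-even strategy: total the odd-numbered and even-numbered items separately.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)

# The correlation between the halves is the reliability of a half-length test.
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown correction: estimated reliability of the full-length test.
r_full = (2 * r_half) / (1 + r_half)
print(f"Half-test r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```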

Inter-observer reliability: Correspondence between measures made by different observers. Inter-rater (inter-observer) reliability is used to assess the degree to which different raters or observers give consistent estimates of the same phenomenon.
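The slides do not name a statistic for inter-rater agreement; as one common choice (an assumption, not from the slides), a brief sketch using Cohen’s kappa for two observers rating the same categorical events, with hypothetical ratings:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings of the same 10 observations by two observers.
rater_a = ["present", "absent", "present", "present", "absent",
           "present", "absent", "absent", "present", "present"]
rater_b = ["present", "absent", "present", "absent", "absent",
           "present", "absent", "present", "present", "present"]

# Cohen's kappa corrects raw percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")
```

For continuous ratings, a correlation between the two observers’ scores could be used instead.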

Homogeneity (internal consistency): An instrument may be said to be internally consistent, or homogeneous, to the extent that its items measure the same trait. Scales designed to measure an attribute are ideally composed of items that measure that attribute and nothing else. On a scale to measure married women’s attitude towards family planning, it would be inappropriate to include items that measure their attitude towards breastfeeding.

Homogeneity (internal consistency): The most widely used index for evaluating internal consistency is coefficient alpha (Cronbach’s alpha). With the split-half method, the internal consistency of a tool can be calculated with the help of the Spearman–Brown formula. Most statistical software can calculate alpha. Coefficient alpha is interpreted like other reliability coefficients: it normally ranges from 0.00 to +1.00, and higher values reflect higher internal consistency. A high value means that the items in the instrument consistently measure the same construct.
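For reference, the two quantities named above in their standard textbook form (a sketch; these formulas are not reproduced on the slides):

```latex
% Cronbach's coefficient alpha for k items, where \sigma_i^2 is the variance
% of item i and \sigma_X^2 is the variance of the total score:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)

% Spearman--Brown correction: reliability of the full-length test, given the
% correlation r_{hh} between its two halves:
r_{\text{full}} = \frac{2\, r_{hh}}{1 + r_{hh}}
```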

Reliability interpretation

Validity: In general, validity is an indication of how sound your research is. Validity applies to both the design and the methods of research.

VALIDITY: “Any research can be affected by different kinds of factors which, while extraneous to the concerns of the research, can invalidate the findings” (Seliger and Shohamy, 1989).

Validity: “The degree to which an instrument measures what it is supposed to measure.” The accuracy with which a test measures whatever it is intended to measure.

Taxonomy of validity

Features of validity

Types of validity

Face validity: This is concerned with how a measure or procedure appears. Face validity does not depend upon established theories for support.

Content validity: It is based on the extent to which a measurement reflects the specific intended domain of content. All major aspects of the content must be adequately covered by the test items, and in the correct proportions.

Criterion validity: It is also referred to as instrumental validity. It is used to demonstrate the accuracy of a measure or procedure by comparing it with another measure or procedure that has already been demonstrated to be valid.

Construct validity: It is the extent to which the test may be said to measure a theoretical construct or trait. Examples of constructs are anxiety, intelligence, verbal fluency, and dominance. It refers to the extent to which a test reflects and measures a hypothesized trait.

Predictive validity: The test is correlated against a criterion that will become available in the future; it is the extent to which a test can predict the future performance of subjects.

Concurrent validity: It is determined by establishing the relationship, or discrimination, between scores on the measuring tool and a criterion available at the same time, in the present situation.
