RNB31103 BIOSTATISTICS
VALIDITY AND RELIABILITY OF QUESTIONNAIRE
Y.S. PEK
NURSING PROGRAMME
VALIDITY
•How well a survey measures what it is
expected to measure.
[Figure: illustration contrasting a "Not Valid" and a "Valid" measurement]
ASSESSMENT OF VALIDITY
•Validity of a questionnaire is measured
commonly in three ways
–Face validity
–Content validity
–Criterion validity
Face validity
•A quick, superficial review of survey items by untrained judges
–Example: distributing the questionnaire to untrained individuals to see whether they think the items look okay
–A very casual, informal method
–Many do not consider this a true measure of validity at all
Content Validity
•A subjective measure of how appropriate the items appear to a set of reviewers who have knowledge of the subject matter
–Usually consists of an organized review of the survey's contents to ensure that it contains everything it should and doesn't include anything that it shouldn't
–Still a qualitative rather than an objective method
Criterion Validity
•Measure of how well one instrument stacks up
against another instrument or predictor
–Concurrent validity: assess your questionnaire against a "gold standard" measured at the same time
–Predictive validity: assess the ability of your
instrument to forecast future events, behaviour,
attitudes, or outcomes
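As a rough illustration only, the following Python sketch checks concurrent validity by correlating questionnaire totals against a "gold standard" measure taken on the same respondents; the scores, variable names, and the use of SciPy's pearsonr are assumptions added for this example and are not part of the original slides.

from scipy.stats import pearsonr  # SciPy is assumed to be installed

# Hypothetical data: questionnaire total scores and a "gold standard"
# clinician-rated score for the same 8 respondents.
questionnaire_scores = [14, 22, 9, 30, 17, 25, 12, 20]
gold_standard_scores = [15, 20, 10, 28, 18, 27, 11, 19]

# Concurrent validity: correlation between the new instrument and the standard.
r, p_value = pearsonr(questionnaire_scores, gold_standard_scores)
print(f"concurrent validity: r = {r:.2f}, p = {p_value:.3f}")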
RELIABILITY
•The degree of stability exhibited when a measurement
is repeated under identical conditions.
[Figure: illustration contrasting a "Reliable" and a "Not Reliable" measurement]
Assessment of Reliability
•Reliability is assessed in 3 forms
–Test-retest reliability
–Alternate-form reliability
–Internal consistency reliability (Cronbach’s Alpha)
Test-retest Reliability
•Most common form in surveys.
•Measured by having the same respondents complete a
survey at two different points in time to see how stable
the responses are.
•Usually quantified with a correlation coefficient (r value).
•In general, r values are considered good if r ≥ 0.70.
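A minimal sketch of how test-retest reliability might be quantified, assuming hypothetical total scores from the same 8 respondents surveyed at two time points; NumPy and the example data are assumptions, and the 0.70 cut-off follows the rule of thumb above.

import numpy as np

# Hypothetical data: the same 8 respondents surveyed at two points in time.
time_1 = np.array([22, 30, 18, 25, 27, 20, 33, 24])
time_2 = np.array([21, 31, 20, 24, 28, 19, 32, 25])

# Pearson correlation between the two administrations.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest r = {r:.2f}")
print("acceptable stability" if r >= 0.70 else "questionable stability")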
Internal consistency reliability
•Applied not to one item, but to groups of items that are
thought to measure different aspects of the same
concept
•Cronbach's coefficient alpha
–Measures internal consistency reliability among a group of items combined to form a single scale
–It is a reflection of how well the different items complement each other in their measurement of different aspects of the same variable or quality
–Interpreted like a correlation coefficient (≥ 0.70 is good)
Cronbach's Coefficient Alpha
•It is most commonly used when you have multiple Likert questions in a survey/questionnaire that form a scale or subscale.
•Cronbach's alpha simply provides you with an overall reliability coefficient for a set of variables (e.g. questions).
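A minimal sketch of computing Cronbach's alpha directly from its usual formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), using hypothetical Likert responses; NumPy and the example data are assumptions added here, and in practice the same statistic is what SPSS reports in its Reliability Analysis procedure.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                              # number of items in the scale
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 6 respondents answering 4 Likert items (1-5)
# that are meant to form a single subscale.
likert = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(likert):.2f}")  # >= 0.70 is usually considered good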
RESULTS
Kline, P. (2000). The handbook of psychological testing (2nd ed.). London: Routledge, p. 13.
George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference, 11.0 update (4th ed.). Boston: Allyn & Bacon.
RESULTS
Inter-observer Reliability
•How well two evaluators agree in their assessment
of a variable.
•Use a correlation coefficient to compare data between observers.
•If the correlation is statistically significant, the inter-observer reliability is good.
Inter-observer reliability
•Cohen's Kappa can be used when you have two raters (also known as "judges" or "observers") who are responsible for measuring a variable on a categorical scale.
•Cohen's kappa (κ) is a measure of inter-rater agreement for categorical scales when there are two raters.
•Kappa (κ) values increasingly greater than 0 (zero) represent increasingly better-than-chance agreement between the two raters, up to a maximum value of +1, which indicates perfect agreement (i.e., they agreed on everything).
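A minimal sketch of Cohen's kappa for two raters, written out from the definition (observed agreement corrected for the agreement expected by chance); the wound-classification data and function name are hypothetical, and in practice a library routine such as scikit-learn's cohen_kappa_score gives the same result.

from collections import Counter

def cohens_kappa(rater_1, rater_2):
    """Cohen's kappa for two raters assigning categorical labels to the same cases."""
    n = len(rater_1)
    # Observed agreement: proportion of cases on which the raters agree.
    p_observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n
    # Chance agreement: expected if each rater labelled cases independently
    # according to their own marginal category frequencies.
    freq_1, freq_2 = Counter(rater_1), Counter(rater_2)
    categories = set(freq_1) | set(freq_2)
    p_chance = sum((freq_1[c] / n) * (freq_2[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical data: two nurses classifying 10 wounds as "infected" or "clean".
nurse_a = ["infected", "clean", "clean", "infected", "clean",
           "infected", "clean", "clean", "infected", "clean"]
nurse_b = ["infected", "clean", "infected", "infected", "clean",
           "infected", "clean", "clean", "clean", "clean"]

print(f"kappa = {cohens_kappa(nurse_a, nurse_b):.2f}")  # 0 = chance level, +1 = perfect agreement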
From Altman (1999), adapted from Landis & Koch (1977).