Presentation validity and reliability of instruments.pptx
By HaiderALI851201 · 14 slides · Jun 29, 2024
Validity and Reliability of an Instrument. By Haider Ali, BS Anaesthesia and Intensive Care Sciences, 8th semester, Royal Institute of Science and Technology, Kotli, AJK.
What is Reliability? The extent to which a research instrument or research method consistently produces the same results when it is used in the same situation on repeated occasions.
Reliability means that a research instrument or research method produces stable and consistent results.
For example, if a person weighs themselves repeatedly, the weighing machine is expected to produce a similar reading each time. If it does, we can say that the weighing machine is reliable.
Test-Retest Reliability Test-retest means administering a test to a group of people at one time, administering the same test to the same group at a later time, and then looking at the test-retest correlation between the two sets of scores.
For example, it can be used to measure the intelligence of people, since intelligence does not change frequently; it remains fairly constant.
This technique is used for attributes that remain stable over time, e.g. aptitude. It is not suitable for attributes that change over time, e.g. mood.
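The test-retest idea above can be sketched in a few lines of Python. The scores and the `pearson` helper below are purely hypothetical illustrations (not from the slides); a high correlation between the two administrations suggests a reliable test.

```python
# Test-retest reliability: correlate two administrations of the same test
# on the same group of people. All scores below are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [98, 112, 105, 90, 121]   # IQ scores at the first administration
time2 = [100, 110, 103, 93, 118]  # same people, retested later

r = pearson(time1, time2)
print(f"test-retest r = {r:.2f}")  # a high r indicates a stable, reliable test
```

With these made-up scores the correlation comes out very high, which is what we would expect for a stable attribute such as intelligence.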
Inter-rater Reliability The extent to which different observers are consistent in their judgements.
Example: judges give ordinal scores of 1–10 for ice skaters.
Another example is combat sports such as boxing, in which usually three judges' scorecards are consulted to determine the winner.
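One simple way to quantify inter-rater reliability for ordinal scores like those above is the average pairwise correlation between judges. A minimal Python sketch, with hypothetical judges and ratings:

```python
# Inter-rater reliability: how consistently do different judges score the
# same performances? Judges and 1-10 ratings below are hypothetical.
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

judges = {
    "judge_a": [8, 6, 9, 5, 7],   # scores for five skaters
    "judge_b": [7, 6, 9, 4, 8],
    "judge_c": [8, 5, 10, 5, 7],
}

# Correlate every pair of judges, then average the pairwise correlations.
pairs = [pearson(judges[a], judges[b]) for a, b in combinations(judges, 2)]
print(f"mean pairwise r = {sum(pairs) / len(pairs):.2f}")
```

More formal indices (e.g. Cohen's kappa or the intraclass correlation) exist, but the averaged pairwise correlation conveys the same intuition for a slide-level sketch.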
Internal Consistency Internal consistency is a method of estimating whether different parts of a test are measuring the same variable.
Two methods are used to assess internal consistency:
1. Split-half
2. Cronbach's alpha
Split-Half: this involves splitting the items into two halves, e.g. splitting a questionnaire so that even-numbered questions are in one half and odd-numbered questions are in the other. There should be a strong correlation between the two sets of scores; if the results of the two halves differ, internal consistency is low.
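The odd/even split described above can be sketched as follows. The response matrix is hypothetical; the Spearman-Brown step at the end is a standard correction (not mentioned in the slides) that estimates full-test reliability from the half-test correlation.

```python
# Split-half reliability: correlate odd-numbered items with even-numbered
# items. item_scores[i][j] is respondent i's (hypothetical) score on item j.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

item_scores = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
]

odd_half = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6

r = pearson(odd_half, even_half)
# Spearman-Brown correction: estimated reliability of the full-length test.
full = 2 * r / (1 + r)
print(f"split-half r = {r:.2f}, Spearman-Brown corrected = {full:.2f}")
```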
Cronbach's Alpha The most common measure of internal consistency used by researchers. Alpha (α) is the mean of all possible split-half correlations for a set of items. It shows whether all the items in a questionnaire measure the same variable. A value of α ≥ 0.7 is conventionally considered acceptable.
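Cronbach's alpha can also be computed directly from item variances via the standard formula α = (k / (k − 1)) · (1 − Σσ²ᵢ / σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the total scores. A minimal sketch with a hypothetical response matrix:

```python
# Cronbach's alpha from the standard item-variance formula.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one row per respondent, one column per item."""
    k = len(item_scores[0])          # number of items
    items = list(zip(*item_scores))  # transpose: one tuple per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 4-item questionnaire answered by four respondents.
responses = [
    [4, 5, 4, 4],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
]

alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")  # α >= 0.7 is conventionally acceptable
```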
What is Validity? The degree to which an instrument measures what it intends to measure; in other words, how accurately a method measures what it is intended to measure.
For example, a thermometer is designed to measure body temperature; it cannot be used to check blood pressure. If a thermometer is used to measure body temperature, we say it is a valid instrument. But if a thermometer is used to check blood pressure, we say it is an invalid instrument.
Similarly, a test designed to measure job satisfaction is supposed to measure job satisfaction, not employees' performance.
Types of Validity Validity has four types:
1. Face Validity
2. Content Validity
3. Criterion Validity (concurrent, predictive)
4. Construct Validity (convergent, discriminant)
Face Validity Face validity refers to the extent to which a test appears to measure what it claims to measure, based on face value. Example: a researcher develops a questionnaire to measure the depression level of employees working in private organizations. The researcher's colleagues then look over the questions and believe the questionnaire is valid purely on face value.
Face validity means the content of the test is relevant and appropriate in its appearance.
It is the weakest & simplest form of validity.
Content Validity The extent to which the measurement covers all aspects of the concept being measured. Example: a researcher aims to measure the English language ability of college-level students. The researcher develops a test that contains reading, writing, and speaking components, but no listening component.
Listening is an essential aspect of language ability, so the test lacks content validity to measure English language ability.
Criterion Validity Criterion validity evaluates how accurately a test measures the outcome it was designed to measure.
The outcome could be a behaviour, performance, etc. Example: a researcher wants to know whether a college entrance exam can predict the future academic performance of newly enrolled students. First-semester GPA can serve as the criterion variable, as it is an accepted measure of academic performance.
After the first semester, the researcher can compare students' entrance-exam scores with their GPAs. If the two sets of scores correlate strongly, the entrance exam has criterion validity.
Types of Criterion Validity 1. Concurrent Validity Concurrent validity applies when the scores on the test and the criterion variable are obtained at the same time.
The score on the new test is correlated with another test that is already considered valid. A high correlation between the new test and the criterion variable indicates concurrent validity. 2. Predictive Validity Predictive validity applies when the criterion variable is measured after the scores on the test.
Example: a researcher examines how the results of a job recruitment test can be used to predict the future performance of employees.
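The entrance-exam example above amounts to a predictive-validity check: correlate the predictor collected first with the criterion measured later. A sketch with hypothetical exam scores and first-semester GPAs:

```python
# Criterion (predictive) validity: do entrance-exam scores, collected at
# enrolment, track first-semester GPA measured later? Data are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

exam_scores = [62, 81, 74, 55, 90, 68]        # predictor, measured first
first_gpa = [2.4, 3.5, 3.0, 2.1, 3.8, 2.7]    # criterion, measured later

r = pearson(exam_scores, first_gpa)
print(f"exam vs GPA r = {r:.2f}")  # strong correlation -> predictive validity
```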
Construct Validity A construct is a theoretical concept or idea that is usually not directly measurable, for example self-esteem, motivation, or anxiety.
Construct validity concerns the extent to which your test or measure accurately assesses the construct it is supposed to. Convergent Validity: the extent to which measures of the same or similar constructs actually correspond to each other. Discriminant Validity: the extent to which two measures of unrelated constructs (e.g. anxiety and self-esteem) that should be unrelated, very weakly related, or negatively related actually are so in practice.
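The convergent/discriminant distinction can be illustrated numerically: two scales for the same construct should correlate strongly, while scales for unrelated constructs should not. All scale names and scores below are hypothetical.

```python
# Construct validity sketch: convergent vs discriminant correlations.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

anxiety_a = [12, 18, 9, 15, 21, 7]      # anxiety scale A (hypothetical)
anxiety_b = [11, 17, 10, 14, 20, 8]     # anxiety scale B (hypothetical)
self_esteem = [30, 22, 33, 25, 19, 35]  # self-esteem scale (hypothetical)

convergent = pearson(anxiety_a, anxiety_b)      # expect: high positive
discriminant = pearson(anxiety_a, self_esteem)  # expect: low or negative
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```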