LESSON 10: ESTABLISHING VALIDITY AND RELIABILITY OF RESEARCH INSTRUMENT - DAGAMI.pptx
dagamijessamaedagle
About This Presentation
Instrument is the general term that researchers use for a measurement device (survey, test, questionnaire, etc.). To help distinguish between instrument and instrumentation, consider that the instrument is the device and instrumentation is the course of action (the process of developing, testing, and using the device).
Instruments fall into two broad categories, researcher-completed and subject-completed, distinguished by those instruments that researchers administer versus those that are completed by participants. Researchers choose which type of instrument, or instruments, to use based on the research question.
Usability refers to the ease with which an instrument can be administered, interpreted by the participant, and scored/interpreted by the researcher. Example usability problems include:
Students are asked to rate a lesson immediately after class, but there are only a few minutes before the next class begins (problem with administration).
Students are asked to keep self-checklists of their after school activities, but the directions are complicated and the item descriptions confusing (problem with interpretation).
Teachers are asked about their attitudes regarding school policy, but some questions are worded poorly, which results in low completion rates (problem with scoring/interpretation).
Attending to validity and reliability concerns (discussed below) will help alleviate usability issues. For now, we can identify five usability considerations:
How long will it take to administer?
Are the directions clear?
How easy is it to score?
Do equivalent forms exist?
Have any problems been reported by others who used it?
It is best to use an existing instrument, one that has been developed and tested numerous times, such as can be found in the Mental Measurements Yearbook. We will turn to why next.
Part II: Validity
Validity is the extent to which an instrument measures what it is supposed to measure and performs as it is designed to perform. It is rare, if not impossible, for an instrument to be 100% valid, so validity is generally measured in degrees. As a process, validation involves collecting and analyzing data to assess the accuracy of an instrument. There are numerous statistical tests and measures to assess the validity of quantitative instruments, a process that generally involves pilot testing. The remainder of this discussion focuses on external validity and content validity.
External validity is the extent to which the results of a study can be generalized from a sample to a population. Establishing external validity for an instrument, then, follows directly from sampling. Recall that a sample
Size: 829.47 KB
Language: en
Added: Mar 02, 2025
Slides: 51 pages
Slide Content
Establishing Validity and Reliability of Research Instrument 1
Topic Learning Outcomes: Describe the concept of validity; Explain the different types of validity; Describe the concept of reliability; Explain factors affecting the reliability of a research instrument; Illustrate methods of determining the reliability of an instrument; Differentiate validity and reliability 2
Reliability and validity are concepts used to evaluate the quality of research. They indicate how well a method, technique, or test measures something. Reliability is about the consistency of a measure, and validity is about the accuracy of a measure. 3
The questionnaire is one of the most widely used tools for collecting data, especially in social science research. The main objective of a questionnaire in research is to obtain relevant information in the most reliable and valid manner. 4
It’s important to consider reliability and validity when you are creating your research design, planning your methods, and writing up your results, especially in quantitative research. Failing to do so can lead to several types of research bias and seriously affect your work. https://www.scribbr.com/methodology/reliability-vs-validity/ 5
What is Validity? 6
Validity refers to how accurately a method measures what it is intended to measure. If research has high validity, that means it produces results that correspond to real properties, characteristics, and variations in the physical or social world. 7
Validity explains how well the collected data covers the actual area of investigation. Validity basically means “measure what is intended to be measured” 8
Types Of Validity (Quantitative Research) 9
Types of Validity: Face and Content Validity; Concurrent and Predictive Validity; Construct Validity 10
Face and Content Validity. Face Validity: whether a test appears, on a surface-level examination, to measure what it is supposed to measure. Content Validity: whether a test actually measures the content or material it is supposed to measure, based on expert judgment and analysis of the test items. 11
II. Concurrent and Predictive Validity: Predictive validity is judged by the degree to which an instrument can forecast an outcome. Concurrent validity is judged by how well an instrument compares with a second assessment conducted at the same time. 12
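Both forms can be expressed as a correlation between the instrument's scores and a criterion measure. The sketch below is a minimal, hypothetical illustration (the scores and variable names are invented, not taken from the presentation): concurrent validity correlates the instrument with an established assessment taken at the same time, and predictive validity correlates it with an outcome measured later.

```python
# Sketch: estimating concurrent and predictive validity coefficients.
# All scores below are invented for illustration.
from scipy.stats import pearsonr

new_instrument = [12, 15, 9, 20, 17, 14, 11, 18]     # scores on the instrument being validated
established_test = [14, 16, 10, 19, 18, 13, 12, 17]  # second assessment taken at the same time
later_outcome = [55, 70, 40, 88, 80, 60, 50, 82]     # outcome measured afterwards (e.g., a final grade)

r_concurrent, _ = pearsonr(new_instrument, established_test)  # concurrent validity coefficient
r_predictive, _ = pearsonr(new_instrument, later_outcome)     # predictive validity coefficient

print(f"Concurrent validity: r = {r_concurrent:.2f}")
print(f"Predictive validity: r = {r_predictive:.2f}")
```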
III. Construct Validity: A more sophisticated technique for establishing the validity of an instrument, based upon statistical procedures. It is determined by ascertaining the contribution of each construct to the total variance observed in a phenomenon. The greater the variance attributable to the constructs, the higher the validity of the instrument. 13
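Factor analysis is one statistical procedure commonly used for this. The sketch below is an assumed illustration only (the simulated respondents, the eight items, and the two-construct structure are invented) of how the share of total variance attributable to each construct can be estimated.

```python
# Sketch: variance attributable to constructs, estimated with factor analysis.
# The respondent data are simulated: two latent constructs drive eight questionnaire items.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
construct_a = rng.normal(size=n)                      # first latent construct
construct_b = rng.normal(size=n)                      # second latent construct
noise = rng.normal(scale=0.5, size=(n, 8))
items = np.column_stack([construct_a] * 4 + [construct_b] * 4) + noise  # items 1-4 load on A, 5-8 on B

z = StandardScaler().fit_transform(items)             # standardize the items
fa = FactorAnalysis(n_components=2, random_state=0).fit(z)

# Proportion of total standardized variance accounted for by each construct:
# sum of squared factor loadings divided by the number of items.
ssl = (fa.components_ ** 2).sum(axis=1)
for i, share in enumerate(ssl / z.shape[1], start=1):
    print(f"Construct {i}: {share:.1%} of total variance")
```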
What is Reliability? 14
Reliability refers to how consistently a method measures something. If the same result can be consistently achieved by using the same methods under the same circumstances, the measurement is considered reliable. 15
The Wording of Questions: A slight ambiguity in the wording of questions or statements can affect the reliability of a research instrument, as respondents may interpret the questions differently, sometimes resulting in different responses. The Physical Setting: Any change in the physical setting at the time of the repeat interview may affect the responses given by a respondent, which may affect reliability. 17
The Respondent's Mood: A change in a respondent's mood when responding to questions or writing answers in a questionnaire can affect the reliability of that instrument. The Interviewer's Mood: Just as the mood of a respondent could change from one interview to another, so could the mood, motivation and interaction of the interviewer, which could affect the responses given by respondents, thereby affecting the reliability of the research instrument. 18
The Regression Effect of an Instrument: When a research instrument is used to measure attitudes towards an issue, some respondents, after having expressed their opinion, may feel that they have been either too negative or too positive towards the issue. The second time they may express their opinion differently, thereby affecting reliability. 19
The Nature of Interactions: In an interview situation, the interaction between the interviewer and the interviewee can affect responses significantly. During the repeat interview the responses given may be different due to a change in interaction, which could affect reliability. 20
Methods of Determining the Reliability of a Research Instrument (Quantitative Research) 21
External Consistency Procedures. Test/Retest: A commonly used method for establishing the reliability of a research tool. In the test/retest procedure an instrument is administered once and then again, under the same or similar conditions, and the two sets of scores are compared. The closer the ratio of the test score to the retest score is to 1, the higher the reliability of the instrument. As an equation: Test score / Retest score = 1 (equivalently, Test score - Retest score = 0) for a perfectly reliable instrument. 22
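As a minimal sketch (the scores are invented and not from the presentation), the ratio view from the slide can be computed directly; in practice the two administrations are also commonly correlated, with a higher correlation indicating a more stable instrument.

```python
# Sketch: test/retest reliability from two administrations of the same instrument.
# The scores below are invented for illustration.
from scipy.stats import pearsonr

test = [20, 18, 25, 30, 22, 27, 19, 24]    # first administration
retest = [21, 17, 24, 31, 23, 26, 20, 25]  # second administration, same respondents

# Ratio view from the slide: a perfectly reliable instrument gives
# test / retest = 1 for each respondent (equivalently, test - retest = 0).
ratios = [t / r for t, r in zip(test, retest)]
print(f"Mean test/retest ratio: {sum(ratios) / len(ratios):.3f}")

# Common companion statistic: the correlation between the two administrations.
r, _ = pearsonr(test, retest)
print(f"Test-retest correlation: r = {r:.2f}")
```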
Parallel Forms of the Same Test: The researcher constructs two instruments that are intended to measure the same phenomenon. The two instruments are administered to two similar populations, and the results obtained from one test are compared with those obtained from the other. If they are similar, it is assumed that the instrument is reliable. 23
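A short, hypothetical sketch of that comparison follows; the scores and the choice of comparing means and standard deviations are assumptions for illustration, since the slide does not prescribe a specific statistic.

```python
# Sketch: comparing results from two parallel forms given to two similar groups.
# Scores are invented for illustration.
import statistics

form_a = [15, 18, 22, 19, 25, 17, 21, 20]  # scores from the group that took Form A
form_b = [16, 17, 23, 18, 24, 18, 20, 21]  # scores from the group that took Form B

print(f"Form A: mean = {statistics.mean(form_a):.2f}, sd = {statistics.stdev(form_a):.2f}")
print(f"Form B: mean = {statistics.mean(form_b):.2f}, sd = {statistics.stdev(form_b):.2f}")
# Similar means and spreads across the two forms are taken as evidence of reliability.
```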
Internal Consistency Procedures. The Split-half Technique: Designed to correlate one half of the items with the other half, and appropriate for instruments that are designed to measure attitudes towards an issue. The questions are divided in half in such a way that any two questions intended to measure the same aspect fall into different halves. The scores obtained by administering the two halves are correlated, and reliability is calculated using the product moment correlation. Because the product moment correlation is calculated on the basis of only half the instrument, it needs to be corrected to assess reliability for the whole instrument, known as the stepped-up reliability (obtained with the Spearman-Brown formula). 24
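A minimal sketch of the procedure (the item responses and the odd/even split are invented assumptions): the half scores are correlated with the product moment (Pearson) correlation r, and the Spearman-Brown formula r_full = 2r / (1 + r) steps the result up to the full-length instrument.

```python
# Sketch: split-half reliability with the Spearman-Brown correction.
# The item responses are invented for illustration.
from scipy.stats import pearsonr

# Rows = respondents, columns = items (e.g., 1-5 attitude ratings).
responses = [
    [4, 5, 3, 4, 5, 4, 3, 4],
    [2, 3, 2, 2, 3, 2, 3, 2],
    [5, 4, 5, 5, 4, 5, 4, 5],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [1, 2, 1, 2, 1, 2, 2, 1],
    [4, 4, 4, 5, 4, 4, 5, 4],
]

# Split the items into two halves (odd-numbered vs even-numbered items).
half1 = [sum(row[0::2]) for row in responses]
half2 = [sum(row[1::2]) for row in responses]

# Product moment correlation between the half scores.
r_half, _ = pearsonr(half1, half2)

# Spearman-Brown "stepped up" reliability for the whole instrument.
r_full = 2 * r_half / (1 + r_half)
print(f"Half-test correlation: {r_half:.2f}")
print(f"Spearman-Brown corrected reliability: {r_full:.2f}")
```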
Understanding Reliability vs Validity 25
Reliability vs Validity:
What does it tell you? Reliability: the extent to which the results can be reproduced when the research is repeated under the same conditions. Validity: the extent to which the results really measure what they are supposed to measure.
How is it assessed? Reliability: by checking the consistency of results across time, across different observers, and across parts of the test itself. Validity: by checking how well the results correspond to established theories and other measures of the same concept.
How do they relate? Reliability: a reliable measurement is not always valid; the results might be reproducible, but they are not necessarily correct. Validity: a valid measurement is generally reliable; if a test produces accurate results, they should be reproducible. 27
Reliability and validity are closely related, but they mean different things. A measurement can be reliable without being valid. However, if a measurement is valid, it is usually also reliable. 28
THANK YOU FOR LISTENING MAY GOD CONTINUE TO BLESS US ALL ….