Reliability


About This Presentation

Reliability by Dr. Shazia Zamir


Slide Content

RELIABILITY BY DR. SHAZIA ZAMIR

RELIABILITY. According to the Merriam-Webster Dictionary: "Reliability is the extent to which an experiment, test, or measuring procedure yields the same results on repeated trials." According to Hopkins & Antes (2000): "Reliability is the consistency of observations yielded over repeated recordings either for one subject or a set of subjects." More generally, reliability is the degree to which a score is stable and consistent when measured at different times (test-retest reliability), in different ways (parallel-form and alternate-form reliability), or with different items within the same scale (internal consistency).

RELIABILITY means "repeatability" or "consistency". It is the ability of an instrument to consistently measure what it is supposed to measure. A measure is considered reliable if it gives us the same result over and over again (assuming that what we are measuring isn't changing!).

Types of Reliability. Test-Retest Reliability: assesses the consistency of a measure from one time to another. The same test is administered twice; if the results of both administrations are similar, this constitutes test-retest reliability. Students may remember items from the first administration, or may mature between administrations, which creates a problem for test-retest reliability.

We estimate test-retest reliability when we administer the same test to the same sample on two different occasions. This approach assumes that there is no substantial change in the construct being measured between the two occasions. The amount of time allowed between measures is critical. If we measure the same thing twice, the correlation between the two observations will depend in part on how much time elapses between the two measurement occasions. The shorter the time gap, the higher the correlation; the longer the time gap, the lower the correlation, because the closer in time the occasions are, the more similar the factors that contribute to error. Since this correlation is the test-retest estimate of reliability, you can obtain considerably different estimates depending on the interval.
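In practice, the test-retest coefficient is simply the Pearson correlation between the two sets of scores. Here is a minimal sketch in Python; the scores are hypothetical placeholder data, and the same calculation also estimates parallel-form reliability when the two columns come from two parallel forms:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by
    the product of their standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / (x.std() * y.std())

# Hypothetical scores for ten students on two administrations.
first_admin  = [12, 15, 9, 18, 14, 11, 16, 10, 13, 17]
second_admin = [13, 14, 10, 17, 15, 10, 18, 9, 12, 16]
print(f"test-retest reliability r = {pearson_r(first_admin, second_admin):.2f}")
```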

Parallel-Form Reliability: assesses the consistency of the results of two tests constructed in the same way from the same content domain. The test designer develops two similar forms of the test; if the results of the two administrations are similar, this indicates parallel-form reliability.

Internal Consistency Reliability: assesses the consistency of results across items within a test; it is the correlation of individual item scores with the score on the entire test. Inter-Rater or Inter-Observer Reliability: used to assess the degree to which different raters or observers give consistent estimates of the same phenomenon.
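The slide does not name specific coefficients, but two standard estimators for these types are Cronbach's alpha (internal consistency) and Cohen's kappa (inter-rater agreement). A sketch of both, using made-up data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal consistency: scores is a 2D array with one row per
    examinee and one column per item."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def cohen_kappa(rater_a, rater_b):
    """Inter-rater agreement corrected for chance agreement."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    observed = np.mean(a == b)                  # proportion of exact agreement
    expected = sum(np.mean(a == c) * np.mean(b == c)
                   for c in np.union1d(a, b))   # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical data: five examinees answering four items (scored 0/1),
# and two raters grading the same six essays on a 1-3 scale.
items = [[1, 1, 0, 1], [0, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1], [1, 0, 1, 1]]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
print(f"Cohen's kappa    = {cohen_kappa([1, 2, 3, 2, 1, 3], [1, 2, 3, 3, 1, 3]):.2f}")
```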

Factors Affecting Reliability. The reliability of a test is an important characteristic because we use test results for decisions about students' educational advancement, for job selection, and much more. The methods for establishing the reliability of tests have been discussed above; here we focus on the different factors that may affect the reliability of a test. The degree of effect of each factor varies from situation to situation: controlling a factor may improve reliability, while leaving it uncontrolled may lower the consistency of the scores produced. Some of the factors that directly or indirectly affect test reliability are given below.

Test Length: As a rule, adding more homogeneous questions to a test will increase the test's reliability. The more observations there are of a specific trait, the more accurate the measure is likely to be; adding more questions to a psychological test is similar to adding finer distinctions to a measuring tape (the sketch after this slide quantifies the rule). Heterogeneity of Scores: Heterogeneity refers to the differences among the scores obtained from a class: some students get high scores and some get low scores, and the difference could be due to many reasons, such as income level, the students' intelligence, or parents' qualifications. Whatever the reason for the variability, the greater the variability (range) of test scores, the higher the reliability. Increasing the heterogeneity of the examinee sample increases variability (individual differences), and thus reliability increases.
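The slide states the test-length rule without a formula. The standard way to quantify it (not named on the slide) is the Spearman-Brown prophecy formula: if a test with reliability r is lengthened by a factor k with comparable items, the predicted reliability is kr / (1 + (k - 1)r). A minimal sketch:

```python
def spearman_brown(r, k):
    """Predicted reliability when a test with reliability r is
    lengthened (or shortened) by a factor k."""
    return (k * r) / (1 + (k - 1) * r)

# Doubling a test whose reliability is 0.70 raises the predicted
# reliability to about 0.82; halving it drops it to about 0.54.
print(f"{spearman_brown(0.70, 2):.2f}")    # 0.82
print(f"{spearman_brown(0.70, 0.5):.2f}")  # 0.54
```

Note the diminishing returns: each additional doubling of length gains less reliability than the previous one.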

Difficulty: A test that is too difficult or too easy reduces reliability (e.g., few test-takers answer correctly, or nearly all do); a moderate level of difficulty increases test reliability. The Test Itself: the overall look of the test may affect students' scores. A test should normally be written in a readable font size and style, and its language should be simple and understandable. Test Administration: after developing the test, the test developer may have to prepare a manual for its administration; timing, environment, invigilation, and test anxiety also affect students' performance while attempting the test, so uniform administration of the test leads to increased reliability. Test Scoring: marking is another source of variation in students' scores. Normally there are many raters rating students' responses on a test; objective-type items and a marking rubric for essay-type/supply-type items help to produce consistent scores.
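One way to see why moderate difficulty matters: for an item scored 0/1, the item's score variance is p(1 - p), where p is the proportion of examinees answering correctly. That variance, which feeds the score variability reliability depends on, peaks at p = 0.5 and vanishes for items that everyone passes or everyone fails. A quick illustration:

```python
# Variance of a 0/1-scored item is p * (1 - p); it is largest at
# moderate difficulty (p = 0.5) and shrinks as items become
# very easy or very hard.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p = {p:.1f}  item variance = {p * (1 - p):.2f}")
```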

Activity: Develop a test of English for sixth-grade students, administer it twice with a gap of six weeks, and find the relationship between the students' scores on the first and second administrations.
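For the analysis step of this activity, a minimal sketch, assuming the two score lists below are placeholder data to be replaced with the real results:

```python
import numpy as np

# Replace these placeholders with the actual scores from the two
# administrations, keeping the same student order in both lists.
first_administration  = [34, 28, 41, 25, 38, 30, 36, 27]
second_administration = [36, 26, 40, 27, 35, 31, 38, 25]

# The Pearson correlation between the two administrations is the
# test-retest reliability estimate.
r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Correlation between 1st and 2nd administration: r = {r:.2f}")
```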