DEFINITION In research, the term reliability means "repeatability" or "consistency". Reliability refers to whether you get the same answer when you use an instrument to measure something more than once.
RELIABILITY TYPES There are four types of reliability that you can explore:
Inter-rater reliability: In instances where there are multiple scorers or 'raters' of a test, the degree to which the raters' observations and scores are consistent with each other (see the sketch after this list)
Parallel forms reliability: In instances where two different forms of a measurement exist, the degree to which the results on the two forms are consistent
Test-retest reliability: The degree to which the measurement's results are consistent over time
Internal consistency: The degree to which every test item measures the same construct
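To make inter-rater reliability concrete, here is a minimal sketch (with hypothetical scores, not drawn from the source) that computes Cohen's kappa, a widely used chance-corrected agreement statistic for two raters scoring the same items.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(rater_a, rater_b)
    # Observed agreement: proportion of items both raters scored identically.
    p_observed = np.mean(rater_a == rater_b)
    # Expected agreement by chance, from each rater's marginal category proportions.
    p_expected = sum(
        np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical pass/fail scores from two raters on 10 test items.
rater_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 3))  # values near 1 mean strong agreement
```

Values near 1 indicate that the raters agree far more often than chance alone would produce; values near 0 indicate agreement no better than chance.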
METHODS OF ESTIMATING RELIABILITY There are a number of ways of estimating the reliability of an instrument. The various procedures can be classified into two groups:
External consistency procedures
Internal consistency procedures
External Consistency Procedures External consistency procedures compare findings from two independent processes of data collection as a means of verifying the reliability of the measure. Test-retest and parallel forms reliability, for example, are estimated this way.
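As an illustration of an external consistency procedure, the sketch below estimates test-retest reliability as the Pearson correlation between two administrations of the same instrument; the participant scores are hypothetical.

```python
import numpy as np

# Hypothetical scores for 8 participants measured twice, two weeks apart.
time_1 = np.array([12, 15, 9, 20, 14, 18, 11, 16])
time_2 = np.array([13, 14, 10, 19, 15, 17, 12, 15])

# Test-retest reliability is commonly reported as the correlation between
# the two administrations of the same instrument.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest r = {r:.3f}")  # values near 1 indicate stable measurement
```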
Internal Consistency Procedures This form of reliability is used to judge the consistency of results across items on the same test.
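A commonly used internal consistency statistic is Cronbach's alpha. The sketch below computes it for a small, hypothetical set of Likert-scale responses (the data and the 4-item scale are illustrative only).

```python
import numpy as np

def cronbachs_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 5 people to a 4-item scale (1-5 ratings).
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]
print(round(cronbachs_alpha(scores), 3))  # higher values mean the items hang together
```

The closer alpha is to 1, the more consistently the items appear to measure the same underlying construct.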