Principles of assessment

9,091 views 27 slides Jul 11, 2020

About This Presentation

Types of Validity, Types of Reliability


Slide Content

Principles of Assessment
Presented by: Muhammad Munsif ([email protected])
Presented to: Dr Anwer
M.Phil Education (Eve), University of Education

Assessment
The word ‘assess’ comes from the Latin verb ‘assidere’, meaning ‘to sit with’. Assessment is a systematic process of gathering, interpreting, and acting upon data related to student learning and experience, for the purpose of developing a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experience.

Assessment
In education, the term assessment refers to the wide variety of methods or tools that educators use to evaluate, measure, and document the academic readiness, learning progress, skill acquisition, or educational needs of students (Google). Assessment in education is the process of gathering, interpreting, recording, and using information about pupils’ responses to an educational task (Harlen, Gipps, Broadfoot & Nuttall, 1992).

Types of Assessment
The term assessment is generally used to refer to all activities teachers use to help students learn and to gauge student progress. For convenience, assessment can be divided into the following categories: placement, formative, summative, and diagnostic assessment.

Principle 1 - Assessment should be valid
Validity ensures that assessment tasks and associated criteria effectively measure student attainment of the intended learning outcomes at the appropriate level. Validity refers to the evidence base that can be provided about the appropriateness of the inferences, uses, and consequences that come from assessment (McMillan, 2001a).

Principle 1 (Validity)
Validity refers to whether the test is actually measuring what it claims to measure (Arshad, 2004). Validity is “the extent to which inferences made from assessment results are appropriate, meaningful, and useful in terms of the purpose of the assessment” (Gronlund, 1998).

Types of Validity
Face validity, content validity, construct validity, concurrent validity, predictive validity.

Quick Overview (Types of Validity)
Face Validity: Mousavi (2009) defines face validity as the degree to which a test looks right and appears to measure the knowledge or abilities it claims to measure, based on the subjective judgment of the examinees who take it, the administrative personnel who decide on its use, and other psychometrically unsophisticated observers.

Content Validity
Content validity “is concerned with whether or not the content of the test is sufficiently representative and comprehensive for the test to be a valid measure of what it is supposed to measure” (Henning, 1987). The most important step in ensuring content validity is to make sure all content domains are represented in the test.

Construct Validity
A construct is a psychological concept used in measurement. Construct validity is the most obvious reflection of whether a test measures what it is supposed to measure, as it directly addresses the question of what is being measured.

Concurrent Validity
Concurrent validity is the use of another, more reputable and recognized test to validate one’s own test. For example, suppose you come up with your own new test and would like to determine its validity. Using concurrent validity, you would look for a reputable test and compare your students’ performance on your test with their performance on that reputable and acknowledged test.

Predictive Validity
Predictive validity is closely related to concurrent validity in that it too generates a numerical value. For example, the predictive validity of a university language placement test can be determined several semesters later by correlating the scores on the test with the GPAs of the students who took it.
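Both concurrent and predictive validity reduce to a correlation between two sets of scores. A minimal sketch of the placement-test example above, with invented student data and a hand-rolled Pearson correlation so only the standard library is needed:

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: placement-test scores for six students and the
# same students' GPAs several semesters later.
placement = [55, 62, 70, 74, 81, 90]
gpa = [2.1, 2.4, 2.9, 3.0, 3.4, 3.8]

print(round(pearson_r(placement, gpa), 3))  # → 0.998
```

A coefficient near 1.0 indicates the placement test ranks students much as their later GPAs do, i.e. strong predictive validity; values near zero would suggest the test has little predictive power.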

Principle 2 - Assessment should be reliable and consistent
Assessment needs to be reliable, and this requires clear and consistent processes for the setting, marking, grading, and moderation of assignments. According to Brown (2010), a reliable test can be described as follows:
- Is consistent in its conditions across two or more administrations
- Gives clear directions for scoring/evaluation
- Has uniform rubrics for scoring/evaluation
- Lends itself to consistent application of those rubrics by the scorer
- Contains items/tasks that are unambiguous to the test-taker

Reliability
Reliability means the degree to which an assessment tool produces stable and consistent results. Reliability essentially denotes ‘consistency, stability, dependability, and accuracy of assessment results’ (McMillan, 2001a, p. 65, in Brown et al., 2008).

Types of Reliability Assessment
Test-Retest Reliability: The same test is re-administered to the same people; the correlation between the two sets of scores is expected to be high.
Parallel/Equivalent Forms Reliability: Two similar tests are administered to the same sample of persons. Unlike test-retest, this is protected from the influence of memory, as the same questions are not asked in the second of the two tests.

Inter-Rater Reliability: Two or more judges or raters are involved in grading. A score is a more reliable and accurate measure if two or more raters agree on it.
Intra-Rater Reliability: Intra-rater reliability is the consistency of grading by a single rater over time. When a rater grades tests at different times, he or she may become inconsistent in grading for various reasons.
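Inter-rater agreement is often quantified with Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance. A small sketch, using invented pass/fail grades from two hypothetical raters:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label at random,
    # given each rater's own label frequencies.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail grades assigned by two raters to ten essays.
rater_1 = ["P", "P", "F", "P", "F", "P", "P", "F", "P", "P"]
rater_2 = ["P", "P", "F", "F", "F", "P", "P", "F", "P", "P"]

print(round(cohens_kappa(rater_1, rater_2), 3))  # → 0.783
```

Kappa of 1.0 means perfect agreement and 0 means no better than chance; the raters above agree on 9 of 10 essays, but because both assign mostly passes, chance agreement is high and kappa is noticeably lower than the raw 90% agreement.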

Types of Reliability Assessment
Split-Half Reliability: A test is administered once to a group and, after the students have returned it, is divided into two equal halves, which are then correlated. The halves are often determined by the number assigned to each item, with one half consisting of the odd-numbered items and the other half of the even-numbered items.
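The odd/even procedure above can be sketched as follows. Since each half is only half as long as the full test, the half-test correlation is conventionally stepped up with the Spearman-Brown formula; the right/wrong item matrix below is invented for illustration:

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sqrt(sum((x - mx) ** 2 for x in xs))
                  * sqrt(sum((y - my) ** 2 for y in ys)))

def split_half_reliability(item_scores):
    """item_scores: one row per student, one 0/1 entry per item.
    Correlate odd-item totals with even-item totals, then apply the
    Spearman-Brown correction to estimate full-length reliability."""
    odd = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)

# Hypothetical 1/0 (right/wrong) scores: five students, six items.
scores = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
]
print(round(split_half_reliability(scores), 3))
```

Values closer to 1.0 indicate the two halves rank students consistently, i.e. the items hang together as a single measure.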

Test Administration Reliability
This involves the conditions in which the test is administered. Unreliability occurs due to outside interference such as noise, variations in photocopying, temperature variations, the amount of light in various parts of the room, and even the condition of the desks and chairs.

Principle 3 - Information about assessment should be explicit, accessible and transparent
Clear, accurate, consistent and timely information on assessment tasks and procedures should be made available to students, staff and other external assessors or examiners.

Principle 4 - Assessment should be inclusive and equitable
As far as is possible without compromising academic standards, inclusive and equitable assessment should ensure that tasks and procedures do not disadvantage any group or individual.

Principle 5 - Assessment should be an integral part of programme design and should relate directly to the programme aims and learning outcomes
Assessment tasks should primarily reflect the nature of the discipline or subject, but should also ensure that students have the opportunity to develop a range of generic skills and capabilities.

Principle 6 - The amount of assessed work should be manageable
The scheduling of assignments and the amount of assessed work required should provide a reliable and valid profile of achievement without overloading staff or students.

Principle 7 - Formative and summative assessment should be included in each program
Formative and summative assessment should be incorporated into programs to ensure that the purposes of assessment are adequately addressed. Many programs may also wish to include diagnostic assessment.

Principle 8 - Timely feedback that promotes learning and facilitates improvement should be an integral part of the assessment process
Students are entitled to feedback on submitted formative assessment tasks, and on summative tasks where appropriate. The nature, extent and timing of feedback for each assessment task should be made clear to students in advance.

Principle 9 - Staff development policy and strategy should include assessment
All those involved in the assessment of students must be competent to undertake their roles and responsibilities.