Mar 12, 2025
About This Presentation
Various definitions of assessment and the role it plays in teaching and learning:
Assessment involves the use of empirical data on student learning to refine programs and improve student learning. (Assessing Academic Programs in Higher Education by Allen 2004)
Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning. (Learner-Centered Assessment on College Campuses: shifting the focus from teaching to learning by Huba and Freed 2000)
Assessment is the systematic basis for making inferences about the learning and development of students. It is the process of defining, selecting, designing, collecting, analyzing, interpreting, and using information to increase students' learning and development. (Assessing Student Learning and Development: A Guide to the Principles, Goals, and Methods of Determining College Outcomes by Erwin 1991)
Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development. (Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education by Palomba and Banta 1999)
The fundamental components of assessment are:
Formulating Statements of Intended Learning Outcomes - statements describing intentions about what students should know, understand, and be able to do with their knowledge when they graduate.
Developing or Selecting Assessment Measures - designing or selecting data-gathering measures to assess whether or not our intended learning outcomes have been achieved. These include:
Direct assessments - projects, products, papers/theses, exhibitions, performances, case studies, clinical evaluations, portfolios, interviews, and oral exams - which ask students to demonstrate what they know or can do with their knowledge.
Indirect assessments - self-report measures such as surveys - in which respondents share their perceptions about what graduates know or can do with their knowledge.
Creating Experiences Leading to Outcomes - ensuring that students have experiences both in and outside their courses that help them achieve the intended learning outcomes.
Discussing and Using Assessment Results to Improve Teaching and Learning - using the results to improve individual student performance.
Westminster has translated these fundamental components into an assessment cycle that includes four stages: Plan-Do-Check-Act.
Plan - What do I want students to learn?
This stage includes the first fundamental component of assessment: Formulating Statements of Intended Learning Outcomes
Do - How do I teach effectively?
This stage includes the second and third fundamental components: Developing or Selecting Assessment Measures & Creating Experiences Leading to Outcomes.
Check - Are my outcomes being met?
This stage involves the fourth fundamental component: Discussing and Using Assessment Results to Improve Teaching and Learning.
Slide Content
PRINCIPLES OF HIGH-QUALITY ASSESSMENT
By Cyril L. Austria, T-1, Tagbac Elementary School, Ragay District
Objectives:
- identify what constitutes high-quality assessments
- list down the productive and unproductive uses of tests
- classify the various types of tests
ASSESSMENT is used to determine what a student knows or can do. Assessment requires the gathering of evidence of student performance over a period of time to measure learning and understanding.
EVALUATION is used to determine the worth or value of a course or program. Evaluation uses assessment information to support decisions on maintaining, changing, or discarding instructional or programmatic practices.
The Overall Goal of Assessment: to improve student learning. Assessment provides students, parents/guardians, and teachers with valid information concerning student progress and their attainment of the expected curriculum.
Characteristics of High-Quality Assessment: provide results that demonstrate and improve targeted student learning, and inform instructional decision making.
Characteristics of High-Quality Assessment. The criteria that must be considered to ensure the quality of a test: 1. Clear and Appropriate Learning Targets
Characteristics of High-Quality Assessment. A learning target is a clear description of what students know and are able to do.
Characteristics of High-Quality Assessment. Five Learning Targets as Categorized by Stiggins and Conklin: a. Knowledge learning target: the ability of the student to master a substantive subject matter
b. Reasoning learning target: the ability to use knowledge to reason and solve problems
c. Skill learning target: the ability to demonstrate achievement-related skills, such as conducting experiments, playing basketball, and operating computers
d. Product learning target: the ability to create achievement-related products, such as written reports, oral presentations, and art products
e. Affective learning target: the attainment of affective traits, such as attitudes, values, interests, and self-efficacy
The criteria that must be considered to ensure the quality of a test: 2. Appropriateness of Assessment Methods
The criteria that must be considered to ensure the quality of a test: 3. Validity. Test validity is the degree to which a test measures what it is supposed to measure.
Factors that influence the validity of the test:
- appropriateness of test items
- directions
- reading vocabulary
- sentence structure
How Validity Is Determined. Different types of evidence are used in determining validity. i. Content-related validity: a test or exam can be built with high content validity by identifying the subject matter topics and behavioral outcomes to be measured, building a table of specifications, and constructing a test that closely fits the specifications.
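The table of specifications mentioned above is essentially a proportional allocation of items to topics. A minimal sketch, with entirely hypothetical topics, weights, and test length:

```python
# A table of specifications (TOS) allocates test items to topics in
# proportion to their instructional emphasis. The topics and weights
# below are hypothetical examples, not from the source material.
topics = {"Fractions": 0.40, "Decimals": 0.35, "Percentages": 0.25}
total_items = 40

# Each topic gets a share of items proportional to its weight
tos = {topic: round(weight * total_items) for topic, weight in topics.items()}
print(tos)  # allocation of the 40 items across topics
```

In practice the weights usually come from the number of class hours spent on each topic, and rounding may need a small manual adjustment so the allocations sum exactly to the intended test length.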
ii. Criterion -related validity determines the relationship between an assessment and another measure of the same trait.
iii. Construct-related validity determines whether an assessment is a meaningful measure of an unobservable trait or characteristic, such as intelligence, reading comprehension, honesty, motivation, attitude, or learning style.
iv. Instructional-related validity determines to what extent the domain of content in the test is taught in class.
b. Test Validity Enhancers. The following are suggestions for enhancing the validity of classroom assessments:
i. Prepare a table of specifications (TOS).
ii. Construct appropriate test items.
iii. Formulate directions that are brief, clear, and concise.
iv. Consider the reading vocabulary of the examinees.
v. Make the sentence structure of your test items simple.
vi. Never have an identifiable pattern of answers.
vii. Arrange test items from easy to difficult.
viii. Provide adequate time for students to complete the assessment.
ix. Use different methods to assess the same thing.
x. Use the test only for its intended purpose.
The criteria that must be considered to ensure the quality of a test: 4. Reliability. Reliability refers to the consistency of a measure. A test is considered reliable if it yields the same result repeatedly; it is the extent to which a test is dependable, self-consistent, and stable.
Types of Reliability. Test-Retest Method (Test of Stability): used to assess the consistency of a measure. The test is administered twice at two different points in time. This assumes that there will be no change in the quality or construct being measured. Best used for constructs that are stable over time, such as intelligence.
Types of Reliability. Split-Half Method: the test is administered once, and the items are divided into two halves. The two halves of the test must be similar but not identical in content and difficulty.
Factors that affect test reliability:
1. the scorer's inconsistency because of his/her subjectivity
2. limited sampling because of incidental inclusion and accidental exclusion of some materials in the test
3. changes in the individual examinee himself/herself and his/her instability during the examination
4. the testing environment
a. How Reliability Is Determined. Ways of establishing test reliability:
- length of the test
- difficulty of the test
- objectivity of the scorer
b. The Concept of Error in Assessment. An observed score is the sum of the examinee's true score (real ability or skill) plus some degree of error:
Observed Score = True Score + Error
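The classical true-score model above can be illustrated with a quick simulation (hypothetical true score and error spread): each testing occasion adds random error to the same underlying ability, so individual observed scores scatter, but their average converges toward the true score.

```python
# Simulating Observed = True + Error for one examinee tested many times.
# The true score (80) and error spread (sd = 5) are hypothetical.
import random

random.seed(0)  # fixed seed for a reproducible simulation
true_score = 80.0

# 1000 repeated testings, each with normally distributed random error
observed = [true_score + random.gauss(0, 5) for _ in range(1000)]

mean_observed = sum(observed) / len(observed)
print(f"true score = {true_score}, mean observed = {mean_observed:.1f}")
```

This is why the reliability enhancers below stress longer tests and repeated assessment: aggregating more observations shrinks the error component relative to the true-score component.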
Sources of Error
Internal error: health, mood, motivation, test-taking skills, anxiety, fatigue, general ability
External error: directions, luck, item ambiguity, heat in the room, lighting, sample of items, observer differences and test interpretation, scoring
Test Reliability Enhancers. Things that should be considered in enhancing the reliability of classroom assessments:
i. Use a sufficient number of items or tasks; a longer test is more reliable.
ii. Use independent raters or observers who can provide similar scores for the same performances.
iii. Make sure the assessment procedures and scoring are objective.
iv. Continue assessment until the results are consistent.
v. Eliminate or reduce the influence of extraneous events or factors.
vi. Assess the difficulty level of the test.
vii. Use shorter assessments more frequently rather than a few long assessments.
The criteria that must be considered to ensure the quality of a test: 5. Fairness. This pertains to the intent that each question should be made as clear as possible to the examinees and that the test is free of any biases.
The criteria that must be considered to ensure the quality of a test: 6. Positive Consequences. Learning assessments provide students with effective feedback and potentially improve their motivation and/or self-esteem.
The criteria that must be considered to ensure the quality of a test: 7. Practicality and Efficiency
- Administrability requires that a test can be administered with ease, clarity, and uniformity. Directions must be specific so that students and teachers will understand exactly what they must do.
- Scorability demands that a good test should be easy to score.
- Test results should readily be available to both students and teachers for remedial and follow-up measures.
- The teacher's familiarity with the method, the time required, and the cost are also considered.
Productive Uses of Tests. Learning Analysis: tests are used to identify the reasons or causes why students do not learn, and the solutions to help them learn. Improvement of the Teacher: in a reliable grading system, the class average is the grade the teacher has earned.
Productive Uses of Tests. Improvement of Curriculum: poor performance on a test may indicate that
- the teacher is not explaining the material effectively
- the textbook is not clear
- the students are not properly taught
- the students do not see the meaningfulness of the materials
Productive Uses of Tests. If the entire class does poorly, the curriculum needs to be revised or special units need to be developed for the class to continue. Improvement of Instructional Materials: tests measure how effective instructional materials are in bringing about intended outcomes.
Productive Uses of Tests. Individualization: effective tests always indicate differences in students' learning; these can serve as bases for individual help. Selection: when enrollment opportunity or any other opportunity is limited, a test can be used to screen those who are more qualified.
Productive Uses of Tests. Placement: tests can be used to determine to which category a student belongs. Guidance and Counseling: tests help teachers and counselors guide students in assessing future academic and career possibilities.
Productive Uses of Tests. Research: tests can be a feedback tool to find effective methods of teaching and to learn more about students, their interests, goals, and achievements. Identification of Exceptional Children: tests can reveal exceptional students inside the classroom.
Productive Uses of Tests. Selling and Interpreting the School to the Community: effective tests help the community understand what the students are learning. They are also used to diagnose general schoolwide weaknesses and strengths that require community or government support.
Productive Uses of Tests. Evaluation of the Learning Program.
Unproductive Uses of Tests. Grading: tests should not be used as the only determinant in grading a student. Ridiculing: this means using tests to deride students. Threatening: tests lose their validity when used as disciplinary measures.
Unproductive Uses of Tests. Unannounced Testing. Allocating Funds. Labeling. Tracking: students are grouped according to deficiencies.
Classifications of Tests 1. Administration a. Individual: given orally and requires the examinees' constant attention, since the manner of answering may be as important as the score. Examples: the Wechsler Adult Intelligence Scale; a PowerPoint presentation used as a performance test in a speech class.
b. Group: for measuring cognitive skills and achievement. Most tests in school are group tests, where different test takers take the test as a group.
Classifications of Tests 2. Scoring a. Objective: independent scorers agree on the number of points an answer should receive, e.g., multiple choice and true or false. b. Subjective: answers can be scored in various ways, e.g., essays and performance tests.
Classifications of Tests 3. Sort of Response Being Emphasized a. Power: allows examinees a generous time limit to be able to answer every item. b. Speed: severely limited time constraints, but the items are easy and only a few examinees are expected to make errors.
Classifications of Tests 4. Type of Response the Examinees Must Make a. Performance: requires students to perform a task; administered individually; measures how the examinee performs in each task. b. Paper-and-Pencil Test: examinees are asked to write on paper.
Classifications of Tests 5. What Is Measured a. Sample Test: a limited representative test designed to measure the total behavior of the examinee. b. Sign Test: a diagnostic test designed to obtain diagnostic signs that suggest some form of remediation is needed.
Classifications of Tests 6. Nature of the Group Being Compared a. Teacher-Made Test: covers the subject taught by the same teacher who constructed the test. b. Standardized Test: constructed by test specialists working with curriculum experts and teachers.
Other Types of Tests 1. Mastery tests measure the level of learning of a given set of materials and the level attained. 2. Discriminatory tests distinguish the differences between students or groups of students and indicate the areas where students need help.
Other Types of Tests 3. Recognition tests require students to choose the right answer from given options. 4. Recall tests require students to supply the correct answer from memory. 5. Specific recall tests require short responses that are fairly objective.
Other Types of Tests 6. Free recall tests require students to construct their own complex responses; there are no single right answers, but a given answer might be better than another. 7. Maximum performance tests require students to obtain the best score possible.
Other Types of Tests 8. Typical performance tests measure the typical, usual, or average performance. 9. Written tests depend on the ability of the students to understand, read, and write.
Other Types of Tests 10. Oral examinations depend on the examinee's ability to speak; logic is also required. 11. Language tests require instructions and questions to be presented in words.
Other Types of Tests 12. Non-language tests are administered by means of pantomime, painting, or signs and symbols. 13. Structured tests have very specific, well-defined instructions and expected outcomes.
Other Types of Tests 14. Projective tests present ambiguous stimuli or questions designed to elicit highly individualized responses. 15. Product tests emphasize only the final answer.
Other Types of Tests 16. Process tests focus on how the examinees attack, solve, or work out a problem. 17. External reports are tests where a ratee is evaluated by another person.
Other Types of Tests 18. Internal reports are self-evaluations. 19. Open-book tests depend on one's understanding and ability to express one's ideas and evaluate concepts.
Other Types of Tests 20. Closed-book tests depend heavily on the memory of the examinees. 21. Non-learning format tests determine how much information the students know.
Other Types of Tests 22. Learning format tests require the students to apply previously learned materials. 23. Convergent format tests purposely lead the examinees to a single correct answer.
Other Types of Tests 24. Divergent format tests lead the examinees to several possible answers. 25. Scale measurements distribute ratings along a continuum.
Other Types of Tests 26. Test measurements refer to items being dichotomous: either right or wrong, but not both. 27. Pre-tests measure how much is known about a material before it is presented.
Other Types of Tests 28. Post-tests measure how much has been learned after a learning material has been given. 29. Sociometrics reveal the interrelationship among members or the social structure of a group.
Other Types of Tests 30. Anecdotal records reveal episodes of behavior that may indicate a profile of the student.
COMPARISON BETWEEN TEACHER-MADE TESTS AND STANDARDIZED TESTS
Directions for administration and scoring
- Teacher-made: usually, no uniform directions are specified.
- Standardized: specific instructions standardize the administration and scoring procedures.
Sampling of content
- Teacher-made: both content and sampling are determined by the classroom teacher.
- Standardized: content is determined by curriculum and subject-matter experts.
Construction
- Teacher-made: may be hurriedly done because of time constraints; often no test blueprints or item revision; quality of tests may be quite poor.
- Standardized: uses meticulous construction procedures that include objectives and test blueprints, item analysis, and item revisions.
Norms
- Teacher-made: only local classroom norms are available.
- Standardized: in addition to local norms, national, school district, and school building norms are available.
Purpose and use
- Teacher-made: best suited for measuring particular objectives set by the teacher and for intraclass comparisons.
- Standardized: best suited for measuring broader curriculum objectives and for interclass, school, and national comparisons.
1. Explain why validity implies reliability. 2. Give some other qualities that you believe contribute to making good assessments. 3. List down your personal experiences of unfair assessments.
THANK YOU
Reference: Learning Assessment 1 by Ronald S.P. Elicay, Ph.D., Arnulfo Aaron R. Reganit, Ed.D., and Cresencia C. Laguerta, M.S.