evaluation methods in clinical teaching .pptx

navyavijayan10 197 views 113 slides Sep 04, 2024

About This Presentation

Various evaluation methods in clinical teaching.


Slide Content

Evaluation in Clinical Teaching. Mrs. Navya Vijayan, Lecturer, St. Thomas College of Nursing, Kattanam

Clinical teaching, also known as bedside teaching, is a unique learning strategy in nursing. Clinical teaching provides the opportunity for the acquisition and demonstration of instructional competence by beginning professional educators.

Define clinical teaching: Clinical teaching is individualized or group teaching given to nursing students in the clinical area by nurse educators, staff nurses, and clinical nurse managers.

Importance of clinical teaching: It can effectively help learners to improve clinical skills, judgement, communication, and professionalism. It can also integrate learners' theoretical knowledge with their practical and communication competencies.

Process of clinical teaching

Define evaluation; purpose of clinical evaluation; principles of clinical evaluation; participants in clinical evaluation; clinical evaluation process; methods of clinical evaluation; various types of evaluation tools and methods.

MEANING of evaluation: ‘To evaluate’ means ‘to ascertain the value or amount of, and to appraise carefully’. Evaluation is concerned with the provision of learning experiences and with increasing the capability to perform certain functions. It is a vital process in delivering a teaching activity.

Evaluation of clinical teaching involves collecting evidence from learners and evaluators for the purpose of improving the effectiveness of the teaching-learning process. A successful evaluation generates learner outcomes that are valid and reliable and that indicate directions and actions for development. Evaluation is carried out from the perspectives of the learner, the teacher, the patient, and the institutional administrators in the health care system.

Define evaluation: “The process of determining to what extent the educational objectives are being realized.” --Ralph Tyler. “The process of judging or calculating the quality, importance, amount, or value of something.” --Crow

Importance of Evaluation in Clinical Teaching: It maintains and promotes the academic quality of clinical teaching. It is a tool to measure systematically the standards of teaching and the overall benefits for the students, in line with their learning objectives. Evaluation effectively helps to train and teach learners to become competent professionals for future healthcare services.

Objectives of clinical teaching evaluation Provide a reliable form for assessing the quality of clinical teaching. Provide a method for providing feedback to faculty and students. Provide a self-assessment tool for clinical teachers to identify opportunities for faculty development.

PURPOSES OF EVALUATION 1. It is essential for sound educational decision-making. 2. To attain educational goals and to ascertain whether they have been reached. 3. To provide an adequate teaching-learning situation. 4. It clarifies the aims of education. 5. It helps in the improvement of the curriculum. 6. It appraises the status of, and changes in, pupil behaviour.

PURPOSES OF EVALUATION 7. It familiarizes the teacher with the nature of pupil learning, development, and progress. 8. It appraises the teacher's/supervisor's competence. 9. It serves as a method of improvement. 10. It encourages student learning by measuring achievement and informing students of their success. 11. It determines how far the objectives of teaching a particular subject are being realized, and whether the teacher's methods and the experiences he organizes for learners are effective.

Philosophy of evaluation Each learner should receive an education that allows him or her to develop maximal potential. Education should enable the learner to contribute to society and to gain personal satisfaction in doing so. Fullest recognition of the individual requires recognition of his or her essential individuality, along with some rational appraisal by self and others.

The evaluation process is complex in composition, difficult to carry out, and filled with error, which can be reduced but never eliminated. Hence evaluation can never be considered final. Composite assessment by a group of individuals is less likely to be in error than assessment made by a single person. Every form of appraisal will have critics, which is a spur to change and improvement. A conscientious group of individuals has to put in effort to develop more reliable and valid appraisal methods of evaluation and to reduce errors.

Psychology of evaluation A student will be ready to have his or her abilities evaluated when he or she understands and accepts the values and objectives involved. Learners tend to carry on those activities which have success associated with their results, i.e., the law of effect. Individuals learn better when they are constantly appraised in a meaningful manner as to how well they are doing. Motivation of the student is important: a person's top performance on a test is directly related to his or her motivation. Learning is most efficient when there is activity on the part of the learner.

Domains of evaluation The establishment of formalized education programs in clinical teaching created the need for formalized evaluation programs, and more systematic and scientific procedures were required to evaluate these programs. Seels and Richey define the domain of evaluation as “the process of determining the adequacy of instruction and learning”. The domain of evaluation examines the instruction, making changes where needed (formative evaluation), and determines the effectiveness of the instruction and its impact upon learners (summative evaluation).

1. Problem Analysis “Problem analysis involves determining the nature and parameters of the problem by using information-gathering and decision-making strategies”. Instructional designers must collect data using appropriate data collection instruments to identify the problem and its causes in order to provide solutions to the problem.

2. Criterion-referenced Measurement “Criterion-referenced measurement involves techniques for determining learner mastery of pre-specified content”. The purpose of criterion-referenced measurement is to determine whether the student learned the intended information and whether the goal of the materials has been reached.

3. Formative Evaluation “Formative evaluation involves gathering information on adequacy and using this information as a basis for further development”. The purpose is to identify errors in the instructional materials, identify issues affecting learning outcomes, diagnose learning problems of users, and revise and improve the quality of materials and learning. Formative evaluation is conducted during the development or improvement of a program or product. It may be implemented by an internal or external evaluator, or by both.

Formative evaluation consists of three phases: one-to-one evaluation, small-group evaluation, and field trial. Each phase is described below. One-to-one evaluation – the instructional designer works directly with individual learners to obtain data for revision of the materials. Small-group evaluation – the instructional designer randomly selects eight to twenty learners who represent the target population to review the effectiveness of the teaching. Field trial – the instructor uses a learning context that resembles the intended setting to determine whether the instruction is administratively possible to implement.

4. Summative Evaluation “Summative evaluation involves gathering information on adequacy and using this information to make decisions about the effectiveness of teaching”. It is conducted after the implementation of the program. The principle behind summative evaluation is to determine whether to continue use of the current teaching method.

There are four levels of summative evaluation according to Kirkpatrick:  reaction, learning, transfer and results (Kirkpatrick, 1994). 

Kirkpatrick’s Four Levels of Training Evaluation This is a key tool for evaluating the efficacy of training within an organization, and the model is globally recognized as one of the most effective evaluations of training. The Kirkpatrick model consists of four levels: reaction, learning, behavior, and results. It can be used to evaluate either formal or informal learning, with any style of training.

Level 1: Reaction The first level is learner-focused. It measures whether the learners have found the training to be relevant to their role, engaging, and useful. There are three parts to this: Satisfaction: Is the learner happy with what they have learned during their training? Engagement: How much did the learner get involved in and contribute to the learning experience? Relevance: How much of this information will learners be able to apply on the job? Reaction is generally measured with a survey, completed after the training has been delivered. This survey is often called a ‘smile sheet’, and it asks the learners to rate their experience within the training and offer feedback.

Steps for implementing level 1: reaction Use an online questionnaire. Set aside time at the end of training for learners to fill out the survey. Provide space for written answers, rather than multiple choice. Pay attention to verbal responses given during training.

Create questions that focus on the learner’s takeaways. Use information from previous surveys to inform the questions that you ask. Let learners know at the beginning of the session that they will be filling this out; this allows them to consider their answers throughout and give more detailed responses. Reiterate the need for honesty in answers – learners should give their true opinions rather than merely polite responses!

Level 2: Learning This level focuses on whether or not the learner has acquired the knowledge, skills, attitude, confidence, and commitment that the training program is focused on. These 5 aspects can be measured either formally or informally. For accuracy in results, pre and post-learning assessments should be used.

Steps for implementing level 2: learning Conduct assessments before and after training for a more complete idea of how much was learned. Questionnaires and surveys can take a variety of formats, from exams to interviews to assessments. In some cases, a control group can be helpful for comparing results.

The scoring process should be defined and clear and must be determined in advance in order to reduce inconsistencies. Make sure that the assessment strategies are in line with the goals of the program. Don’t forget to include thoughts, observations, and critiques from both instructors and learners – there is a lot of valuable content there.

Level 3: Behavior This step is crucial for understanding the true impact of the training. It measures behavioral changes after learning and shows if the learners are taking what they learned in training and applying it as they do their job. The results of this assessment will demonstrate not only if the learner has correctly understood the training, but it also will show if the training is applicable in that specific workplace. This is because, often, when looking at behavior within the workplace, other issues are uncovered. If a person does not change their behavior after training, it does not necessarily mean that the training has failed.

Steps for implementing level 3: behavior The most effective time period for implementing this level is 3 – 6 months after the training is completed. Any evaluations done too soon will not provide reliable data. Use a mix of observations and interviews to assess behavioral change. Be aware that opinion-based observations should be minimized or avoided, so as not to bias the results.

Level 4: Results This level focuses on whether or not the targeted outcomes resulted from the training program, alongside the support and accountability of organizational members.

Steps for implementing level 4: results Before starting this process, know exactly what is going to be measured throughout, and share that information with all participants. If possible, use a control group. Don’t rush the final evaluation – it’s important that participants are given enough time to effectively fold in the new skills.

5. Confirmative Evaluation Confirmative evaluation is a continuous type of evaluation used to determine the long-term organizational impact of the implementation. It is conducted by a team of unbiased evaluators. The evaluators use tools such as interviews, surveys, performance assessments, and knowledge tests to gather information. Confirmative evaluation should take place at least six months to a year after the initial implementation of the teaching method.

Characteristics of good evaluation instrument It should show how far the educational objectives have been achieved. It has to measure the knowledge and overall personality development of the individual learner. Evaluation is a continuous process; therefore the instrument should support formative, summative, and terminal evaluation. The evaluation technique should be reliable and valid, to identify how far changes have taken place among the students in the teaching-learning process. Validity: the accuracy with which a test measures whatever it is intended to measure. Reliability: the consistency with which a test measures what it attempts to measure.

Characteristics of good evaluation instrument Objectivity: a test is objective when the scorer's personal judgment does not affect the scoring; it eliminates the fixed opinions or judgments of the person who scores it. Practicability (usability): the overall simplicity of use of a test, for both the test constructor and the learner. Practicability depends upon various factors, such as ease of administration, scoring, interpretation, and economy. Ease of interpretation: the raw scores of a test should be easily converted into meaningful derived scores. Economy: economy refers to the cost as well as the time required for administering and scoring. Comparability: a test possesses comparability if scores resulting from its use can be interpreted in terms of a common base that has a natural or accepted meaning.

Characteristics of good evaluation instrument Relevance: the degree to which the criteria established for selecting the items conform to the aims of the measuring instrument. Equilibrium/Equity: achievement of the correct proportion among questions allotted to each of the objectives and teaching content. Specificity: the items in a test should be specific to the objectives. Discrimination: the discriminating power of a test item refers to the degree to which it discriminates between good and poor students in a given group or on a variable; learners with superior ability should answer the item correctly more often than learners who do not have such ability. Efficiency: the test should ensure the greatest possible number of independent answers per unit of time.

Characteristics of good evaluation instrument Time: sufficient time to answer the items should be provided, to avoid hurry, guessing, taking risks or chances, etc. Length: the number of items in the test should depend upon the objectives and content of the topic. Test usefulness: grading or ranking of the students should be possible with the items in the test. Precise and clear: items should be precise and clear so that students can answer well and score marks. Comprehensiveness: the total content and objectives have to be kept in mind while preparing items for the test. Adequacy: a measuring instrument should be adequate to measure the desired outcome as well as abide by the objectives.

Characteristics of good evaluation instrument Balanced and fair: the test should include items measuring both the objectives and the content. Ease of administration: provision should be made for the preparation, distribution, and collection of test materials. Instructions should be simple, clear, and concise; practice exercises should be illustrated; and illustrations should be clear-cut and easily tied to the appropriate test items. Ease of scoring: simple scoring is good; complex algebraic calculations should not be needed to obtain the scores.

Process of clinical evaluation

Phase I: Preparation Determine the objectives and competencies to be evaluated. Identify evaluation methods and tools. Choose the clinical site. Orient students to the evaluation plan. Focus on objectivity in evaluation.

Phase II: Clinical Activity Orient student and staff to the student role Provide students with the clinical opportunities Ensure patient safety Observe and collect evaluation data

Provide student feedback to enhance learning. Document findings and maintain the privacy of records. Communicate with students regarding any deficiencies.

Phase III: Final Data Interpretation and Presentation Interpret data in a fair, consistent, and reasonable manner. Assign grades. Provide a summative evaluation conference (ensure privacy and respect confidentiality).

Roles of an evaluator during the evaluation process I. Preparation Phase Choosing the clinical setting and patient assignment as a part of the evaluation process. Determining the standards and measurement tools, which should be reasonable, consistent and applied equally, and established and communicated before implementation.

II. Clinical Activity Phase In both obtaining and analyzing clinical evaluation data, faculty need to make professional judgements about the performance of the students. Because of the subjective nature of evaluation, there may be concern that the evaluation is biased.

III. Final Data Interpretation and presentation Clinical Evaluation Conference Student response Working with students with questionable performance and supporting at risk students Unsatisfactory performance Student reactions Dismissing an unsafe student from the clinical practice

OBSERVATIONAL METHOD

1. Observational techniques : Checklists Rating scales Anecdotal Records Critical incidents

The advantages of observation Provides a continuous check on the progress of the student. Errors and problems can be immediately detected and corrective action taken quickly. Observation techniques are not so time consuming. Observational data provide teachers with valuable information which could not be obtained in any other way.

Tips for making valid observations Plan in advance what is to be observed. The observer must be aware of sampling errors. Coordinate the observation with your teaching; otherwise there is a danger that invalid observations will result. Record and summarize the observations immediately after they have occurred. Make no interpretation regarding the observation until later, to prevent interference with objectivity in gathering observational data.

Types of observational tools CHECKLIST A checklist consists of steps, activities, or behaviours which the observer records when an incident occurs. A checklist enables the observer to note whether or not a trait or characteristic is present. Checklists can be used when the components of competence are specified. A two-column format is often used.

Advantages of Checklist They are adaptable to most subject-matter areas. They are useful in evaluating activities that involve a procedure or process and some aspects of personal-social adjustment. They are useful for evaluating procedures which can be divided into separate actions.

When properly prepared, they constrain the observer to direct his attention to clearly specified traits. They allow inter-individual comparisons to be made on a common set of traits. A checklist is a simple method of recording.

Construction of Checklist Express each item in clear, simple language Avoid lifting verbatim from the text. Avoid negative statements Make sure each item is clearly yes or no. Review each item independently.

Disadvantage of checklist Quality of the observed trait or the degree to which the attribute is present cannot be assessed.

2. ANECDOTAL RECORD Definition (by Randall): “An anecdotal record is a record of some significant item of conduct, a record of an episode in the life of a student, a word picture of a student in action, a word snapshot at the moment of the incident, any narration of events which may be significant about his personality.”

Characteristics of Anecdotal records They should contain factual description of what happened. The interpretation and recommendation should be noted independently. Each anecdotal record should contain a single incident. The incident recorded should be that which is significant to the student’s growth and development.

Merits of Anecdotal records They help in clinical service practices. They provide a factual record of the observation of a single significant incident in a student's life. They stimulate teachers to use the records and to contribute to them. They provide specific and exact descriptions of personality and minimize generalizations.

Demerits of Anecdotal records They tend to be less reliable than other observational tools, as they are less formal. They are time consuming to write. It is difficult to maintain objectivity. The observer tends to record only undesirable incidents. They present only a verbal description of the incident. They do not reveal causes.

3. CRITICAL INCIDENT TECHNIQUE This is a method of assessing the student's analytic and problem-solving competencies. Rivers and Gosnell defined a critical incident as “one that makes a significant difference in the outcome of an activity.” The critical incident technique is effective for formative evaluation; it enables the learner and the teacher to assess the learner's behaviours in relation to their impact on the outcome of an action.

4. RATING SCALE A rating scale is a method by which we systematize the expression of opinion concerning a trait. A rating scale resembles a checklist, but instead of merely indicating the presence or absence of a trait or characteristic, it enables us to indicate the degree to which the trait is present.

Advantages of Rating Scale It is a standard device for recording qualitative and quantitative judgements about observed performance. Rating scales measure specified outcomes or objectives which are significant. They evaluate procedures. They evaluate products. They evaluate personal-social development.

They help teachers rate their students periodically. They can be used with a large number of students. They are adaptable and flexible. They are efficient and economical.

Disadvantages of Rating Scale Since the scales are standardized items, they sometimes may not be consistent with the objectives. There is a lack of uniformity in how terms are interpreted by the evaluators. There are several common sources of error in rating scales. Errors may be due to: ambiguity; the personality of the rater (halo effect, personal bias, logic error, attitude of the rater); and lack of opportunity for adequate observation.

Types of rating scales Numerical Rating Scale Graphic Rating Scale Ranking

Written communication methods Nurses notes Problem oriented records Nursing care studies Process recording

Nurses Notes According to Phaneuf, the quality of care provided to the patient is assessed by using the patient's charts as a source of information. The reasons for this are: The chart is a service instrument essential for the safety of the patient.

It serves as a major means of communication between the various professionals involved in the care. It provides legal documentation of the care provided. Recording is one essential function of the nurse. The chart is readily available to authorized nurses for the purpose of auditing.

Problem Oriented Records A problem-oriented record is a systematic record of a patient's health problems. According to Weed, it has four components, as follows: Database – all appropriate information about the patient for assessing his condition.

Problem list – a listing of the conditions, symptoms, or circumstances identified from the database which have implications for the patient's health. Initial plans – diagnostic and therapeutic orders for each problem listed. Progress notes.

Nursing Care studies Schweer defines a nursing care study as “a problem solving activity whereby the student undertakes the comprehensive assessment of a particular patient’s problems leading to planning, implementing, and evaluation of nursing care measures.”

The student’s written description of actions implicit in meeting patient needs enables the evaluator to determine ability in cognitive and affective domains and also the ability to establish meaningful relationships among the steps of the process.

Process Recording Schweer defines process recording as “the verbatim serial reproduction of the verbal and nonverbal communication between two individuals for the purpose of assessing interaction on a continuum leading towards mutual understanding and interpersonal relationships.”

There are four main components: client communication (subjective data); nurse communication (objective data); the nurse's interpretation of the patient's communication (nursing diagnosis); and the implications of the communication for nursing action (intervention).

Oral communication methods Nursing patient care conference Team conference

Nursing Patient Care Conference These are problem-solving group discussions about some facet of clinical practice, in which the student presents a patient situation to the peer group for critical analysis of the plan of action or its implications.

The peers evaluate the action, raise relevant questions and propose alternatives as appropriate. In some instances the presentation may be preceded by nursing rounds in which the participants have the opportunity to observe the patients whose nursing care will be discussed.

Nursing Team Conference The team conference is a small-group activity that serves as an effective medium for evaluating clinical practice. It involves managerial decision making, defined by Feinstein as “decisions for therapeutic interventions or to prevent or alter disease.”

The most common team activity is problem solving, in which, through group process, plans for patient care are developed. The learner is evaluated in terms of his or her participation in the group: reporting observations, making relationships among data, making proposals for action, and evaluating actions as they are reported.

Standardized patient examination: According to Borbasi and Koop, the standardised patient examination, referred to as the objective structured clinical examination (OSCE), can be described as using “pretend patients” in an artificial environment designed to simulate actual clinical conditions.

Standardised patients can provide feedback to students and help ensure competence before students begin practice in the real world. Multiple evaluators can observe and test students in the performance of numerous skills during brief examination periods.

OTHER METHODS ALSO USED IN EVALUATION OF CLINICAL TEACHING

Survey A survey is a method of gathering information from students and instructors regarding satisfaction with clinical teaching, using a set of questions. Surveys gather quantitative data on attitudes, beliefs, opinions, and behaviors related to students' learning experiences as well as to teaching. They tend to be relatively cost-effective to administer and allow educators to systematically collect both quantitative data (using closed-ended questions) and qualitative data (using open-ended questions) from the learners.

Questionnaire A questionnaire is a specific set of written questions which aims to extract specific information from the chosen respondents. The questions and answers are structured in order to gather information not only about attitudes and preferences but also about the knowledge and skills acquired by the students.

Interview Interviews conducted for clinical teaching evaluation are typically qualitative but may also include some quantitative questions. They include interviews with the instructor, learners, health care personnel, family, and other concerned persons.

Focus groups Gather information from a group of students having the same clinical experience. Focus groups yield more in-depth information on perceptions, insights, attitudes, experiences, or beliefs, and produce qualitative rather than quantitative information.

TEACHER AND EVALUATION Evaluate student understanding of the clinical scenario. Assess whether students are able to bring together theory and practice. Make adjustments to the teaching and learning process.

Student and evaluation Engaging with learning goals. Providing feedback to peers, and receiving feedback from teachers and peers.

Conclusion Educators /clinical instructors can identify strengths and weaknesses in the clinical teaching education system and develop strategies to enhance student learning. Evaluation is also important in ensuring accountability in the training process and promoting innovation.