PROGRAM EVALUATION

40 slides, Oct 07, 2023

About This Presentation

AMRITA SCHOOL OF DENTISTRY


Slide Content

Seminar No. 13

CONTENTS

INTRODUCTION

Program evaluation is a way to evaluate the specific projects and activities community groups may take part in, rather than to evaluate an entire organization or comprehensive community initiative. An evaluation study addresses the quality of medical care, utilization and coverage of health services, benefits to community health in terms of morbidity and mortality reduction, and improvement in the health status of the recipients of care.

DEFINITIONS

Program: any set of related activities undertaken to achieve an intended outcome; any organized public health action.
Evaluation: a systematic process to assess the achievement of the stated objectives of a programme, its adequacy, efficiency, and its acceptance by all parties involved.
Monitoring: a planned, systematic process of observation that closely follows a course of activities and compares what is happening with what is expected to happen.

DIFFERENCE BETWEEN MONITORING AND EVALUATION

NEED FOR EVALUATION OF HEALTH SERVICES

To review the implementation of services provided by health programmes, so as to identify problems and recommend necessary revisions of the programme.
To assess progress towards desired health status at national or state levels and identify reasons for any gap.
To contribute towards better health planning.
To document results achieved by projects funded by donor agencies.
To know whether desired health outcomes are being achieved and to identify remedial measures.
To improve health programmes and the health infrastructure.
To guide the allocation of resources in current and future programmes.
To render health activities more relevant, more efficient and more effective.

Evaluation is done by

TYPES OF EVALUATION

STEPS OF EVALUATION

1. Determine what is to be evaluated: there are three types of evaluation.

2. Establishment of standards and criteria: standards and criteria must be established to determine how well the desired objectives have been attained.

Components of Evaluation

Effectiveness: the extent to which the underlying problem is prevented or alleviated. The ultimate measures of effectiveness are the reduction in morbidity and mortality rates.
Efficiency: a measure of how well resources (money, manpower, material and time) are utilized to achieve a given effectiveness.
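The distinction between effectiveness and efficiency can be made concrete with a small calculation. The sketch below uses entirely invented figures (baseline and current morbidity rates, programme cost) purely to illustrate the two measures as defined above.

```python
# Hypothetical illustration of effectiveness vs. efficiency.
# All figures are invented for demonstration only.

baseline_morbidity = 120.0   # cases per 1,000 population before the programme
current_morbidity = 78.0     # cases per 1,000 population after the programme
programme_cost = 250_000.0   # total resources spent (currency units)

# Effectiveness: extent to which the underlying problem is alleviated,
# expressed here as the relative reduction in the morbidity rate.
effectiveness = (baseline_morbidity - current_morbidity) / baseline_morbidity

# Efficiency: how well resources are used to achieve that effectiveness,
# expressed here as cases averted (per 1,000 population) per unit of cost.
cases_averted = baseline_morbidity - current_morbidity
efficiency = cases_averted / programme_cost

print(f"Effectiveness: {effectiveness:.0%} reduction in morbidity")
print(f"Efficiency: {cases_averted:.0f} cases averted per 1,000 population "
      f"for {programme_cost:,.0f} spent")
```

The same programme can be highly effective yet inefficient (or vice versa), which is why both components are assessed separately.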

Relevance: relevance (or requisiteness) relates to the appropriateness of the service; whether it is needed at all.
Adequacy: implies that sufficient attention has been paid to certain previously determined courses of action.
Accessibility: the proportion of the given population that can be expected to use a specified facility, service, etc. Barriers to accessibility may be physical (e.g., distance, travel time), economic (e.g., travel cost, fee charged), or social and cultural (e.g., caste or language barriers).
Acceptability: a service may be accessible but not acceptable to all, e.g., screening for rectal cancer.
Impact: an expression of the overall effect of a programme, service or institution on health status and socio-economic development. For example, as a result of malaria control in India, not only did the incidence of malaria drop, but all aspects of life (agricultural, industrial and social) showed an improvement.

Example: evaluating the effectiveness of a vaccine program. Some possible endpoints for measuring success:
1. Number (or proportion) of people immunized
2. Number (or proportion) of people at (high) risk who are immunized
3. Number (or proportion) of people immunized who show a serologic response
4. Number (or proportion) of people immunized and later exposed in whom clinical disease does not develop
5. Number (or proportion) of people immunized and later exposed in whom clinical or subclinical disease does not develop
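Each endpoint above is a count or a proportion computed from programme data. As a minimal sketch, with all counts invented for illustration, the proportions might be computed as follows:

```python
# Hypothetical counts for a vaccine programme; every number is invented.
population = 10_000
high_risk = 2_500
immunized = 7_200
high_risk_immunized = 2_000
serologic_responders = 6_840           # immunized people with serologic response
exposed_immunized = 400                # immunized people later exposed
exposed_no_clinical_disease = 376      # of those exposed, free of clinical disease

endpoints = {
    "1. proportion immunized": immunized / population,
    "2. proportion of high-risk group immunized": high_risk_immunized / high_risk,
    "3. proportion with serologic response": serologic_responders / immunized,
    "4. proportion exposed but free of clinical disease":
        exposed_no_clinical_disease / exposed_immunized,
}

for name, value in endpoints.items():
    print(f"{name}: {value:.1%}")
```

Note how the denominator narrows from endpoint 1 to endpoint 4: later endpoints are closer to true biologic protection but harder to measure.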

3. Planning the methodology: a format must be prepared for gathering the desired information.
4. Gathering information: the type and amount of information required will depend on the purpose of the evaluation.
Epidemiological evaluation: the independent variable is the health service; the dependent variable is the reduction in adverse health effects.

Evaluation using individual data:

Randomized design: eliminates the problem of selection bias. For ethical and practical reasons, randomizing patients to receive no care is not considered; instead, different types of care are assigned and then evaluated. Example: the evaluation of multiphasic screening in South-East London led to withholding the vast outlay of resources that would have been required to mount a national programme.

Demerits of randomized designs: RCTs are logistically complex and extremely expensive; they raise ethical problems; and they take a long time to complete, so the relevance of the findings is questionable. An alternative approach is outcomes research.

Non-randomized designs:
Before-after design (historic controls): data obtained in the two periods are not comparable in terms of quality and completeness, and any difference may be due to the programme or to other factors that changed over time, such as housing, nutrition and lifestyle. A problem of selection also exists.
Simultaneous non-randomized design (program vs. no program): a cohort study in which the type of health care being studied represents the "exposure". The problem is how to select the exposed and non-exposed groups for study.
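The weakness of the before-after design can be shown numerically. In the sketch below, all rates (and the assumed secular decline, which in practice is unknowable) are invented: the observed decline mixes the programme effect with background improvements in housing, nutrition and lifestyle.

```python
# Minimal before-after (historic controls) sketch.
# All figures are invented; rates are per 1,000 live births.

rate_before = 52.0   # infant mortality rate in the period before the programme
rate_after = 41.0    # infant mortality rate in the period after the programme

observed_change = rate_before - rate_after  # what the design actually measures

# The design cannot separate programme effect from secular trend.
# Suppose (hypothetically) that background improvements alone would
# have lowered the rate by this much over the same period:
assumed_secular_decline = 6.0

apparent_programme_effect = observed_change - assumed_secular_decline

print(f"Observed decline: {observed_change} per 1,000")
print(f"Apparent programme effect after removing an assumed secular "
      f"decline of {assumed_secular_decline}: {apparent_programme_effect} per 1,000")
```

Because the secular component cannot actually be measured in this design, the observed decline of 11 per 1,000 overstates the programme's contribution by an unknown amount; this is exactly the interpretive problem the slide describes.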

Comparison of utilizers and non-utilizers: a group of people who use a health service is compared with a group who do not. A problem of self-selection exists; this can be addressed by characterizing the prognostic profile of people in both groups. One cannot, however, ask someone not to utilize the programme.

Comparison of eligible and non-eligible populations: the assumption is that eligibility or non-eligibility (e.g., by employer or census tract of residence) is not related to either prognosis or outcome, so no selection bias is introduced. Eligibility may, however, relate to socioeconomic status.

Combination designs: combine the before-after and program/no-program designs, comparing morbidity levels in people who receive care and those who do not.
Case-control studies: the case-control design has been applied primarily to etiologic studies; when appropriate data are obtainable, it can serve as a useful, but limited, surrogate for randomized trials. Because this design requires definition and specification of cases, it is most applicable to studies of prevention of specific diseases. The "exposure" is then the specific preventive or other health measure being assessed. In most health services research, stratification by disease severity and by other possible prognostic factors is essential for appropriate interpretation of the findings.
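In a case-control evaluation of a health measure, the usual summary statistic is the odds ratio for "exposure" (here, receipt of the preventive measure) among cases versus controls. The sketch below uses invented counts; in a real analysis, as the slide notes, these would be stratified by disease severity and other prognostic factors.

```python
# Hypothetical case-control sketch: "exposure" = receipt of a preventive
# health measure (e.g., a screening service). All counts are invented.
# Cases = people who developed the disease outcome; controls = people who did not.

cases_exposed, cases_unexposed = 30, 70
controls_exposed, controls_unexposed = 55, 45

# Odds ratio: odds of exposure among cases divided by odds among controls.
odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

print(f"Odds ratio: {odds_ratio:.2f}")
# An odds ratio below 1 suggests the preventive measure is associated with
# lower odds of disease, subject to confounding and selection caveats.
```

This shows why the design is only a "limited surrogate" for a trial: the odds ratio is interpretable as a protective effect only if exposure is unrelated to other determinants of outcome.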

Evaluation using group data:

Outcomes research denotes studies comparing the effects of two or more health care interventions or modalities (such as treatments, forms of health care organization, or type and extent of insurance coverage and provider reimbursement) on health or economic outcomes. It uses large data sets derived from large populations.
Advantages: it refers to real-world populations, so the issue of representativeness or generalizability is minimized; because the data already exist, analysis can be completed and results generated rapidly; sample size is not a problem except when smaller sub-groups are examined; and it is cost-effective.

Disadvantages: data gathered for fiscal and administrative purposes may not suit the research questions addressed in the study; questions that would be framed with today's more complete knowledge could not have been anticipated when the data were collected; data on the independent and dependent variables may be limited; data on possible confounders may be inadequate or absent; certain variables that are relevant today were not included in the original data set; the investigator may have to create surrogate variables or change the original question; and the investigator becomes progressively more removed from the individuals being studied.

5. Analysis of results: the analysis and interpretation of data should take place within the shortest time feasible, so that the evaluation results can be discussed and acted upon.
6. Taking action: for evaluation to be truly productive, actions designed to support, strengthen or otherwise modify the services involved need to be taken.
7. Re-evaluation: evaluation is an ongoing process aimed mainly at rendering health activities more relevant, more efficient and more effective.

Two indices used in ecologic studies of health services: *Avoidable deaths are defined as preventable, amenable, or both, where each death is counted only once.

Example: The five-year RCH Phase II was launched in 2005 with a vision of achieving the outcomes envisioned in the Millennium Development Goals, the National Population Policy 2000 (NPP 2000), the Tenth Plan, the National Health Policy 2002 and Vision 2020 India: minimizing regional variations in RCH and population stabilization through an integrated, focused, participatory programme that meets the unmet needs of the target population and provides assured, equitable, responsive, quality services.
Goal: "Health For All"
Objective: population stabilization by 2045
Programme: comprehensive RCH services
Monitoring & evaluation: RCH indicators/feedback data


The CDC assembled an Evaluation Working Group composed of experts in the fields of public health and evaluation, whose work resulted in the "Recommended Framework for Program Evaluation in Public Health Practice" by Bobby Milstein, Scott Wetterhall and the CDC Evaluation Working Group (1997), later adapted for the Community Tool Box.

The steps of the framework taken in any evaluation are:
Step 1: Engage stakeholders.
Step 2: Describe the program.
Step 3: Focus the evaluation design.
Step 4: Gather credible evidence.
Step 5: Justify conclusions.
Step 6: Ensure use and share lessons learned.
The second element of the framework is a set of 30 standards for assessing the quality of evaluation activities, organized into four groups: utility, feasibility, propriety and accuracy.

The UNDP Guidelines have been updated to reflect feedback from trainings and interviews, and recent changes in UNDP, bringing them into line with the new UNDP Evaluation Policy and the United Nations Sustainable Development Cooperation Framework (UNSDCF). The following documents are of particular importance for the UNDP evaluation architecture:
UNDP, 2019, Revised UNDP Evaluation Policy
UNDP, 2020, Social and Environmental Standards
UNDP, 2018, Gender Equality Strategy 2018-2021
The Sustainable Development Goals (SDGs), the 2030 Agenda and the UNDP Strategic Plan 2018-2021

CONCLUSION

Monitoring and evaluating transitions in global health programs can bring conceptual clarity to the transition process, provide a mechanism for accountability, facilitate engagement with local stakeholders, and inform the management of transition through learning; further investment and stronger methodological work are needed. Monitoring and evaluation of projects can be a powerful means to measure their performance, track progress towards achieving desired goals, and demonstrate that systems are in place that support organisations in learning from experience and adaptive management.

REFERENCES

Gordis L. Epidemiology. 4th ed. Philadelphia: Elsevier Saunders; 2009.
Park K. Park's Textbook of Preventive and Social Medicine. 22nd ed.
CDC. Framework for program evaluation in public health. MMWR 1999;48(RR-11).
WHO/UNFPA. Programme Manager's Planning, Monitoring & Evaluation Toolkit. Division for Oversight Services; August 2004.
UNICEF. A UNICEF Guide for Monitoring and Evaluation: Making a Difference? Evaluation Office, New York; 1991.
https://www.ruralhealthinfo.org/toolkits/health-promotion/4/types-of-evaluation
https://stacks.cdc.gov/view/cdc/5204
https://mainweb-v.musc.edu/vawprevention/research/programeval.shtml
https://www.cdc.gov/std/program/pupestd/types%20of%20evaluation.pdf