Judgment of causality and causal inference
Lecture 6: Judgment of Causality and Causal Inference
By Tilaye Workineh (BSc, MPH/Epid, Asst. Prof. of Epidemiology), AHMC
Outline
• Introduction
• Role of chance, bias and confounding
• Bradford Hill criteria to establish causality
• Summary
Introduction
• Definition of epidemiology: "… and determinants of diseases and other health-related problems in human populations, and the application …"
• One of the major purposes of epidemiological studies is to discover the causes of disease.
• Analytic epidemiology focuses on evaluating whether an association between an exposure and a disease is causal.
Evaluating Causation
If we observe an association between an exposure and an outcome, we have to ask:
• Is the association valid? Do the study findings reflect a true relationship between the exposure and the disease?
• Is the association causal? Is there sufficient evidence to infer that a causal association exists between the exposure and the disease?
Evaluating Causation (continued)
To say the association is valid:
• Internal validity: the results of the observation are correct for the particular group being studied (the sample). e.g. the mean is not a valid measure of central tendency if the data contain extreme observations.
• External validity: is inference beyond the sample possible? Do the results of the study apply (are they generalizable) to people who were not in it, e.g. the target population? Generalization is not warranted if the sample size or sampling method is inadequate.
• Internal validity must always be the primary objective, since an invalid result cannot be generalized.
• Thus, internal validity should never be compromised in an attempt to achieve generalizability.
• Errors that distort validity can occur randomly (random error), because of systematic error (bias), or because of confounding.
• In an epidemiologic study, there are at least three possible explanations for an invalid association between exposure and outcome: chance, bias, and confounding.
• These explanations are not mutually exclusive; more than one can be present in the same study.
Assessing an Association Between Exposure and Outcome (two-stage flowchart)
• Stage I: could the observed association be due to bias, confounding, or chance? If yes, it is probably not causal.
• Stage II: if not, apply the guidelines for causal inference to judge whether it could be causal.
(Figure: steps of research and the errors committed at each stage.)
Non-Causal Associations
• Artefactual associations: failure to correctly link exposure with disease on a particular occasion.
• Non-causal (indirect) relations: e.g. cholera and altitude; goiter is common in highland areas.
2. Bias
• Bias is a systematic error in an epidemiological study that results in an incorrect estimate of the association between exposure and disease.
• It is a unidirectional error, deflecting the estimate in one direction because of a systematic flaw in design or measurement.
• It is undesirable and cannot be adjusted for once it has occurred.
Types of bias:
• Selection bias
• Measurement bias
• Information bias
A. Selection bias
• Failure to represent the source population
• Inadequacy of the observed sample
• Inappropriate selection of study subjects
• Differential loss to follow-up (differential attrition), e.g. if the patients lost are from rural settings, the rural population is no longer represented
• Self-selection
• Unrepresentative nature of the sample
Selection bias in cross-sectional studies
Introduced into the study due to:
• Non-response (the major concern)
• Unrepresentative sampling
• Some people listed may not be reached
• Replacement of selected study participants (affects randomness)
• It may distort observed prevalence rates
Solutions:
• Maximize response rates (face-to-face interviews are better than mail or telephone)
• Think carefully about sampling techniques (simple random sampling, systematic random sampling)
• Gather information about the "missing" (non-respondent) group; do not replace study subjects
• Be careful about inferences
Selection bias in case-control studies
• Criteria for selection of cases and controls should be similar except for the outcome variable.
• Selection bias occurs if selection of cases or controls depends on their exposure status, or if different criteria are used for cases and controls.
Solutions:
• Selection should not be based on exposure status.
• Careful selection of control groups (ensure comparable characteristics).
Selection bias in cohort studies
• Loss to follow-up (attrition)
• Misclassification bias (outcome misclassification)
• Selection bias in the selection of exposed and non-exposed groups
Solutions:
• Maximize follow-up (exclude those at high risk of loss to follow-up)
• Intermediate follow-up examinations (measure surrogate indicators)
B. Measurement bias
• Introduced when measurement (ascertainment) of the exposure, the outcome, or both is not done correctly
• Distortion of the exposure-disease relation by the measurement procedure or protocol
• Inappropriate data collection method (tool, …)
C. Information bias
• Interviewer bias
• Recall bias
• Social desirability bias (respondents give the answers they think the interviewer wants to hear)
• Hawthorne effect (a change in performance due to the attention of being studied)
• Placebo effect
• Health worker bias
• Lead-time bias
• Length-time bias
Minimizing Bias
• Use the same method of assessment for exposure and outcome
• "Blind" the interviewer / examiner
• Choose the study design carefully
• Choose "hard" objective outcomes rather than "soft" subjective ones
• Use well-defined criteria for identification of "cases", and closed-ended answers whenever possible
• Collect data on a "dummy" variable, i.e. a variable you do not expect to differ between groups, e.g. source of water (open source such as a river vs. a deep well)
Minimizing Bias (continued)
• Use standardized measurement instruments
• Use multiple sources of information: questionnaires, direct measurements/observation, registries, case records
• Use multiple controls
• If using multiple interviewers, investigate whether there are systematic differences between them
3. Chance
• We can rarely study an entire population; inference is attempted from a sample of the population (only a full census would be free of chance error).
• There will always be random variation from sample to sample.
• In general, smaller samples have less precision, reliability, and statistical power (more sampling variability, so random error is more likely).
Chance (continued)
• Chance cannot always be excluded.
• The solutions to sampling variation are to increase the sample size and to increase the efficiency (precision) of the measurements (a small simulation follows below).
• The conventional sequence is to first assess (rule out) systematic error/bias; it makes no sense to evaluate chance as a possible explanation when a study is biased from the start (i.e. the "experiment" we set up is flawed and should be discarded).
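To make sampling variability concrete, here is a minimal Python sketch (not part of the original slides). It repeatedly draws samples of different sizes from a hypothetical population with a true disease prevalence of 20%; the prevalence and the sample sizes are assumptions chosen purely for illustration.

```python
# A minimal simulation of sampling variability (chance error).
# Assumption: a hypothetical population with a true prevalence of 20%.
import random

random.seed(1)
TRUE_PREVALENCE = 0.20

def sample_prevalence(n):
    """Draw one sample of size n and return its observed prevalence."""
    cases = sum(random.random() < TRUE_PREVALENCE for _ in range(n))
    return cases / n

for n in (50, 500, 5000):
    estimates = [sample_prevalence(n) for _ in range(1000)]
    spread = max(estimates) - min(estimates)
    print(f"n={n:5d}: estimates range over {spread:.3f} around the true value {TRUE_PREVALENCE}")
# Larger samples give estimates that cluster more tightly around the true
# prevalence, i.e. less random error and greater precision.
```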
4. Confounding
• From a Latin word meaning "to mix together".
• A confounder is a third factor that is related to both the exposure and the outcome and accounts for some or all of the observed relationship between them.
• A confounder is not a result of the exposure.
• e.g. the association between a child's birth rank (exposure) and Down syndrome (outcome): is the mother's age a confounder?
• e.g. the association between the mother's age (exposure) and Down syndrome (outcome): is birth rank a confounder?
Confounding (continued)
• Confounding is a confusion of effects; it is a nuisance and should be controlled for whenever possible.
• Age is a very common source of confounding: maternal age is correlated with birth order and is a risk factor even when birth order is low.
• Similarly, smoking is correlated with alcohol consumption and is a risk factor even for those who do not drink alcohol, so it can confound an apparent alcohol-disease association.
Controlling Confounding
There are three possible stages at which confounding can be controlled:
• At the design stage: randomisation, matching, and restriction can be used when designing a study to reduce the risk of confounding.
• At the data collection stage: use standard measuring instruments and standard procedures, ……
• At the analysis stage: stratification and multivariable (adjusted) analysis can achieve the same goal (a stratified-analysis sketch follows below).
It is preferable to address confounding at the design stage. NB: controlling confounding requires adequate knowledge of the subject and an adequate literature review.
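As an illustration of control at the analysis stage, the following is a minimal Python sketch of stratification with a Mantel-Haenszel summary risk ratio. The stratum counts are invented purely for illustration: within each age stratum the exposure has no effect, yet the crude estimate appears elevated because age is related to both exposure and disease.

```python
# Controlling confounding by stratification (Mantel-Haenszel risk ratio).
# Hypothetical counts: (exposed cases, exposed total, unexposed cases, unexposed total)
strata = [
    (10, 100, 40, 400),   # young stratum: risk 0.10 in both exposure groups
    (80, 200, 10,  25),   # old stratum:   risk 0.40 in both exposure groups
]

def risk_ratio(a, n1, c, n0):
    return (a / n1) / (c / n0)

# Crude RR: collapse the strata and ignore age.
a = sum(s[0] for s in strata); n1 = sum(s[1] for s in strata)
c = sum(s[2] for s in strata); n0 = sum(s[3] for s in strata)
print("Crude RR:", round(risk_ratio(a, n1, c, n0), 2))           # about 2.55

# Mantel-Haenszel RR: a weighted summary of the stratum-specific effects.
num = sum(a_i * n0_i / (n1_i + n0_i) for a_i, n1_i, c_i, n0_i in strata)
den = sum(c_i * n1_i / (n1_i + n0_i) for a_i, n1_i, c_i, n0_i in strata)
print("Mantel-Haenszel RR:", round(num / den, 2))                # 1.0
# The crude RR is misleadingly high; the adjusted (MH) RR equals the true
# stratum-specific effect of 1.0, so the apparent effect was due to age.
```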
Effects of Confounding
1. It may totally or partially account for the apparent effect.
2. It may mask an underlying true association.
3. It may reverse the actual direction of the association.
Effect of a Factor as a Cause
1. Independent effect
• The factor shows its effect directly.
• The effect is seen without being distorted by a confounder.
2. Mediation
• Like a confounder, a mediator is associated with both the exposure and the outcome, but it lies on the causal pathway between them.
• It is distinguished from a confounder by careful consideration of causal pathways.
• Knowledge of the biological plausibility of the mediator is necessary.
3. Interaction (effect modification)
• Two or more factors act together to cause, prevent, or control a disease (a synergistic effect).
• The effect of two or more causes acting together is often greater than would be expected by summing their individual effects.
• Example: smoking and asbestos dust vs. lung cancer (an illustrative calculation follows below).
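To show what "greater than the sum of the individual effects" means on the additive scale, here is a minimal Python sketch; the risk figures are invented for illustration and are not real smoking/asbestos data.

```python
# Checking for interaction (synergy) on the additive scale with hypothetical risks.
baseline        = 1.0   # lung-cancer risk per 1,000: neither exposure
smoking_only    = 10.0  # smoking, no asbestos
asbestos_only   = 5.0   # asbestos, no smoking
both            = 50.0  # smoking and asbestos together

excess_smoking  = smoking_only  - baseline                       # 9.0
excess_asbestos = asbestos_only - baseline                       # 4.0
expected_both   = baseline + excess_smoking + excess_asbestos    # 14.0

print(f"Expected joint risk if effects simply added: {expected_both}")
print(f"Observed joint risk:                         {both}")
if both > expected_both:
    print("Joint effect exceeds the sum of the individual effects: synergy (interaction).")
```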
Judgment of Causality
• Epidemiological studies are conducted in human beings, so it is difficult to achieve the total control over study subjects that laboratory-based studies allow.
• Scientific proof is difficult to obtain because:
1. There is no "clean" experimental environment, so hypotheses are difficult to test with absolute certainty.
2. Studies are principally observational, and interventional studies are substantially limited by ethical considerations, so it is difficult to establish proof.
Establishing a Causal Association
• Once bias, confounding, and chance have all been judged unlikely, we can conclude that a valid statistical association exists.
• We should then apply the Bradford Hill criteria to judge whether the association is causal.
Bradford Hill Criteria for Causal Judgment
• A set of epidemiological criteria for a causal association, formulated in 1965 by Austin Bradford Hill (1897-1991).
• These criteria include:
1. Strength of the association: the stronger the association, the more likely it is to be causal. The further the measure of effect is from unity, the stronger the association; as a rough guide, RR > 2 is strong and RR < 2 is weak (a worked risk-ratio calculation follows below).
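As a worked example of "strength of association", the following Python sketch computes a risk ratio and its 95% confidence interval from a hypothetical 2x2 table; the cell counts are assumptions chosen only for illustration.

```python
# Risk ratio (RR) and 95% CI from a hypothetical 2x2 table.
import math

a, b = 30, 70    # exposed:   30 diseased, 70 not diseased
c, d = 10, 90    # unexposed: 10 diseased, 90 not diseased

risk_exposed   = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# Standard error of ln(RR) and a 95% CI (normal approximation on the log scale).
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}  (95% CI {lower:.2f} to {upper:.2f})")
# RR = 3.00 here, well above 2, which the slides treat as a 'strong' association.
```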
2. Consistency of the relationship
• The same association should be demonstrated by other studies, with different methods, in different settings, and by different investigators.
• Special methods exist for combining a number of well-designed studies: meta-analysis (a small pooling sketch follows below).
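To indicate how results from several studies can be combined, here is a minimal fixed-effect (inverse-variance) meta-analysis sketch in Python. The three study results are hypothetical, and this is one common pooling method offered as an assumption, not necessarily the approach the slides have in mind.

```python
# Fixed-effect (inverse-variance) meta-analysis on the log risk-ratio scale.
import math

# (risk ratio, standard error of log RR) for three hypothetical studies
studies = [(2.1, 0.25), (1.8, 0.30), (2.4, 0.20)]

weights   = [1 / se**2 for _, se in studies]          # inverse-variance weights
pooled_ln = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_ln)
lower = math.exp(pooled_ln - 1.96 * pooled_se)
upper = math.exp(pooled_ln + 1.96 * pooled_se)
print(f"Pooled RR = {pooled_rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
# Consistent study-level estimates pool into a precise summary estimate.
```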
3. Specificity of the association
• A single exposure leads to a single disease.
• This applies best when living organisms are the causes, e.g. Plasmodium species → malaria; HIV → AIDS.
4. Temporal relationship
• It is crucial that the cause precede the outcome (exposure → disease).
• This is usually problematic in cross-sectional and case-control designs.
5. Dose-response relationship
• The risk of disease increases with increasing exposure to a causal agent.
• e.g. cigarette smoking and lung cancer: the likelihood of developing lung cancer rises across non-smokers, 1-14 cigarettes/day, 15-24 cigarettes/day, and 25+ cigarettes/day (a small illustration follows below).
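The dose-response idea can be made concrete with a short Python sketch that computes rates and risk ratios across smoking categories; the case counts and person-years are invented for illustration and are not published figures.

```python
# Dose-response illustration: lung-cancer rates by smoking category,
# relative to non-smokers, using hypothetical counts.
categories = [
    # (label, cases, person-years)
    ("non-smokers",           7, 100_000),
    ("1-14 cigarettes/day",  47, 100_000),
    ("15-24 cigarettes/day", 86, 100_000),
    ("25+ cigarettes/day",  166, 100_000),
]

baseline_rate = categories[0][1] / categories[0][2]
for label, cases, pyrs in categories:
    rate = cases / pyrs
    print(f"{label:22s} rate = {rate*100_000:6.1f} per 100,000   RR = {rate/baseline_rate:5.1f}")
# A steadily rising RR across increasing dose categories supports (but does
# not by itself prove) a causal interpretation.
```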
6. Biological plausibility
• The hypothesis should be coherent with what is known about the disease, both biologically and from laboratory evidence.
• Knowledge of physiology, biology, and pathology should support the cause-effect relationship.
7. Study design
• The design of the studies providing the evidence is most important to consider.
8. Reversibility
• Removal of a possible cause results in a reduced risk of disease.
• e.g. cessation of cigarette smoking is associated with a reduction in the risk of lung cancer relative to those who continue smoking.
• If the cause leads to rapid irreversible changes (as in HIV infection), reversibility cannot be a condition for causality.
Judging the Evidence
• There are no completely reliable criteria for determining whether an association is causal or not.
• In judging the different aspects of causation, a correct temporal relationship is essential; once this has been established, weight should be given to plausibility, consistency, the dose-response relationship, and the strength of the association.
Summary
• Causal inference is an intelligent exercise in scientific interpretation: deciding whether an observed relationship is real, and doing so without errors.
• All judgements of cause and effect are tentative.
• Be alert for error, the play of chance, and bias.
• Causal models broaden causal perspectives.
• Apply the criteria for causality as an aid to thinking.
• Look for corroboration of causality from other scientific frameworks.