5. Judgment of causality and causal inference.pptx

melessejenbolla1 19 views 41 slides Jun 26, 2024

Slide Content

Lecture 6: Judgment of causality and causal inference
By Tilaye Workineh (BSc, MPH/Epid, Asst. Prof. of Epidemiology), A H M C

Outline
- Introduction
- Role of chance, bias and confounding
- Bradford Hill criteria to establish causality
- Summary
23-Jun-22 Judgement of causality

Introduction
Definition of Epidemiology: "…. and determinants of diseases and other health-related problems in human populations, and the application ….."
- One of the major purposes of epidemiological studies is to discover the causes of disease.
- Analytic epidemiology focuses on evaluating whether an association between an exposure and a disease is causal.

Evaluating Causation
If we observe an exposure-outcome association, we have to ask:
- Is the association valid? Do the study findings reflect the true relationship between the exposure and the disease?
- Is the association causal? Is there sufficient evidence to infer that a causal association exists between the exposure and the disease?

Evaluating causation . . .
To say the association is valid:
- Internal validity: the results of the observation are correct for the particular group (sample) being studied. E.g., the mean is not a valid measure of central tendency if the data contain extreme observations.
- External validity: could inference be possible? Do the results of the study apply ("generalize") to people who were not in it (e.g., the target population)? E.g., generalization is not allowed if the sample size or sampling method is not right.

Evaluating causation . . .
- Internal validity must always be the primary objective, since an invalid result cannot be generalized.
- Thus, internal validity should never be compromised in an attempt to achieve generalizability.
- Errors that distort validity can occur randomly (random error), because of systematic error (bias), or because of a confounding effect.

Evaluating causation . . .
In an epidemiologic study, there are at least 3 possible non-causal explanations for an observed association between exposure and outcome:
1. Chance
2. Bias
3. Confounding
These explanations are not mutually exclusive; more than one can be present in the same study.

Assessing association b/n exposure and outcome
Stage I: Could the observed association be due to bias, confounding or chance?
- If yes: probably not causal.
- If no: could it be causal?
Stage II: Apply guidelines for causal inference.

Steps of Research and errors committed [figure]

Evaluating causation . . .
Non-causal associations:
- Artefactual associations: failure to link exposure with disease on a particular occasion.
- Non-causal/indirect relations: e.g., cholera and altitude; goiter is common in highland areas.

Evaluating association . . .
2. Bias
- A systematic error in an epidemiological study that results in an incorrect estimate of the association between exposure and disease.
- It is a unidirectional error, deflecting the estimate in the direction of the systematic/measurement error.
- Undesirable: it cannot be adjusted for once it has happened.
Types of bias:
- Selection bias
- Measurement bias
- Information bias

Types of bias
A. Selection bias: failure to represent the source population
- Inadequacy of the observed sample
- Inappropriate selection of study subjects
- Differential loss to follow-up (differential attrition), e.g., if lost patients are from a rural setting (failure to represent the rural population)
- Self-selection
- Unrepresentative nature of the sample

A. Selection bias: In cross-sectional studies
Introduced into the study due to:
- Non-response bias (the major concern)
- Unrepresentative sampling: some people listed may not be reached
- Replacement of selected study participants (affects randomness)
- It might affect observed prevalence rates
Solutions:
- Maximize response rates (face-to-face is better than mail or telephone interview)
- Think about sampling techniques (SRS, systematic random sampling)
- Gather information about the 'missing' (non-respondent) groups; don't replace study subjects
- Be careful about inferences

A. Selection bias: In case-control studies
- Criteria for selection of cases and controls should be similar except for the outcome variable!
- Selection bias occurs if selection of cases or controls depends on their exposure status, or if different criteria are used for cases and controls.
Solutions:
- Selection shouldn't be based on exposure status
- Careful selection of control groups (create comparable characteristics)

A. Selection bias: In cohort studies
- Loss to follow-up (attrition)
- Misclassification bias (outcome misclassification)
- Selection bias in the selection of exposed and non-exposed groups
Solutions:
- Maximize follow-up (exclude those at high risk of loss to follow-up)
- Intermediate follow-up examinations (measure surrogate indicators)

Types of bias
B. Measurement bias:
- Introduced when measurement (ascertainment) of exposure, outcome or both is not done correctly
- Distortion of the exposure-disease relation by the measurement procedure/protocol
- Inappropriate data collection method (tool, …)

Bias . . .
C. Information bias
- Interviewer bias
- Recall bias
- Social desirability bias (the respondent tells the interviewer what he/she wants to hear)
- Hawthorne effect (change in performance due to attention)
- Placebo effect
- Health worker bias
- Lead time bias
- Length time bias

Minimizing Bias
- Use the same method of assessment for exposure and outcome
- "Blind" the interviewer/examiner
- Choose the study design carefully
- Choose "hard" objective rather than "soft" subjective outcomes
- Use well-defined criteria for identification of "cases", and closed-ended answers whenever possible
- Collect data on a "dummy" variable, i.e., a variable which you do not expect to differ between groups, e.g., source of water (open source: river vs deep hole)

Minimizing Bias
- Use standardized measurement instruments
- Use multiple sources of information: questionnaires, direct measurements/observation, registries, case records
- Use multiple controls
- If using multiple interviewers, investigate whether there are systematic differences

Evaluating causation . . .
3. Chance
- We can rarely study an entire population; inference is attempted from a sample of the population, so chance (random) error cannot be avoided.
- There will always be random variation from sample to sample.
- In general, smaller samples have less precision, reliability and statistical power (more sampling variability; random error is more likely).

Chance . . .
- Chance cannot always be excluded.
- The solutions to sampling variation are: increase the sample size; increase the efficiency of the measurement.
- The conventional sequence is to first assess (rule out) the presence of systematic error/bias.
- In other words, it makes no sense to evaluate chance as a possible explanation when a study is biased from the start (i.e., the "experiment" we have set up is flawed and hence should be discarded).
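The link between sample size and sampling variability can be demonstrated with a small simulation. This is an illustrative sketch (the true prevalence of 20% and the sample sizes are assumed values, not from the lecture): repeated samples are drawn from the same population, and the spread of the prevalence estimates shrinks as the sample grows.

```python
import random
import statistics

def prevalence_estimates(true_prev, n, trials=1000, seed=42):
    """Draw repeated samples of size n from a population with the given
    true prevalence, and return the estimated prevalence from each sample."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        cases = sum(rng.random() < true_prev for _ in range(n))
        estimates.append(cases / n)
    return estimates

small = prevalence_estimates(true_prev=0.20, n=50)
large = prevalence_estimates(true_prev=0.20, n=2000)

# Larger samples cluster more tightly around the true prevalence,
# i.e. less random (chance) error and more precision.
print(round(statistics.stdev(small), 3))  # larger spread
print(round(statistics.stdev(large), 3))  # smaller spread
```

Increasing the sample size does not remove chance error entirely; it only narrows the range within which the estimate fluctuates.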

4. Confounding
- From the Latin confundere, "to mix together".
- A third factor which is related to both exposure and outcome, and which accounts for the observed relationship between exposure and outcome.
- A confounder is not a result of the exposure.
- E.g., association between a child's birth rank (exposure) and Down syndrome (outcome): is mother's age a confounder?
- E.g., association between mother's age (exposure) and Down syndrome (outcome): is birth rank a confounder?

Confounding . . .
- Confounding is a confusion of effects; it is a nuisance and should be controlled for if possible.
- Age is a very common source of confounding: maternal age is correlated with birth order and is a risk factor even when birth order is low.

Confounding . . .
- Smoking is correlated with alcohol consumption and is a risk factor even for those who do not drink alcohol.

Controlling Confounding
Three possible stages at which to control confounding effects:
1. At the time of design: randomisation, matching and restriction can be tried when designing a study to reduce the risk of confounding.
2. At the time of data collection: use standard measuring instruments, standard procedures, ……
3. At the time of analysis: stratification and multivariable (adjusted) analysis can achieve the same.
It is preferable to try something at the time of designing the study.
NB: adequate knowledge (an adequate literature review) is needed to control confounding.
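Stratified analysis, mentioned above as an analysis-stage control, can be sketched with invented counts (all figures below are hypothetical for illustration). Here the crude risk ratio suggests an association, but within each age stratum the risk ratio is 1; a Mantel-Haenszel summary across strata recovers the adjusted answer.

```python
# Each stratum: (exposed_cases, exposed_total, unexposed_cases, unexposed_total)
# Hypothetical counts: age is linked to both exposure and outcome.
strata = {
    "young": (10, 100, 30, 300),
    "old":   (120, 300, 40, 100),
}

# Crude (unstratified) risk ratio, distorted by age
a = sum(s[0] for s in strata.values())
n1 = sum(s[1] for s in strata.values())
c = sum(s[2] for s in strata.values())
n0 = sum(s[3] for s in strata.values())
crude = (a / n1) / (c / n0)

# Mantel-Haenszel summary risk ratio (adjusted for the stratifying variable)
num = sum(a_i * n0_i / (n1_i + n0_i) for a_i, n1_i, c_i, n0_i in strata.values())
den = sum(c_i * n1_i / (n1_i + n0_i) for a_i, n1_i, c_i, n0_i in strata.values())
mh = num / den

print(round(crude, 2))  # apparent (spurious) association
print(round(mh, 2))     # no association after stratifying on age
```

With these counts the crude RR is about 1.86 while the stratum-specific and Mantel-Haenszel RRs are 1.0, i.e. the entire apparent effect was confounding.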

Effects of Confounding
1. Totally or partially accounts for the apparent effect
2. Masks the underlying true association
3. Reverses the actual direction of the association

Effect of a factor as a cause
1. Independent effect
- When a factor shows its effect directly
- The effect is seen without being distorted by a confounder

2. Mediation
- Like a confounder, a mediator is associated with both the exposure and the outcome, but it lies on the causal pathway between them.
- It is distinguished by careful consideration of causal pathways.
- Knowledge of the biological plausibility of the mediator is necessary.

3. Interaction (effect modification)
- Two or more factors acting together to cause, prevent or control a disease (synergistic effect).
- The effect of two or more causes acting together is often greater than would be expected on the basis of summing their individual effects.
- Example: smoking and asbestos dust vs lung cancer.
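The smoking-asbestos example can be made concrete with illustrative rates (the figures below are assumptions in the spirit of classic occupational-cohort data, not values from the lecture): the joint relative risk far exceeds what simply adding the two separate excess risks would predict.

```python
# Hypothetical lung-cancer rates per 100,000 person-years (illustrative only)
baseline = 11        # neither smoking nor asbestos exposure
smoking_only = 112
asbestos_only = 58
both = 601

rr_smoking = smoking_only / baseline   # ~10
rr_asbestos = asbestos_only / baseline # ~5
rr_both = both / baseline              # ~55

# Under a purely additive (no-interaction) model, the joint RR would be
# the two excess risks added to the baseline of 1.
expected_additive = rr_smoking + rr_asbestos - 1  # ~14

print(rr_both > expected_additive)  # True
```

The observed joint effect exceeding the additive expectation is the synergy (effect modification) described above.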

Judgment of Causality
- Epidemiological studies are conducted in human beings; it is difficult to achieve the total control of study subjects possible in laboratory-based studies.
- Scientific proof is difficult to obtain because:
1. There is no 'clean' experimental environment: it is difficult to test a hypothesis with absolute certainty.
2. Studies are principally observational, and interventional studies are limited substantially by ethical considerations, making it difficult to institute proof.

Establishing a Causal Association
- Once bias, confounding and chance are all determined to be unlikely, we can conclude that a valid statistical association exists.
- We should then apply the Bradford Hill criteria to establish a causal association.

Bradford Hill criteria . . . for causal judgment
- A statement of epidemiological criteria for a causal association, formulated in 1965 by Austin Bradford Hill (1897-1991).
The criteria include:
1. Strength of the association: the stronger the association, the more likely it is causal. The farther the measure of association is from unity, the stronger it is: RR > 2 is strong, RR < 2 is weak.

2. Consistency of the relationship
- The same association should be demonstrated by other studies, with different methods, settings and investigators.
- Special methods exist for combining a number of well-designed studies: meta-analysis.

3. Specificity of the association
- A single exposure leads to a single disease (Exposure → Disease).
- This applies best to living organisms as causes: Plasmodium species → malaria; HIV → AIDS.
4. Temporal relationship
- It is crucial that the cause precede the outcome.
- This is usually problematic in cross-sectional and case-control designs.

5. Dose-response relationship
- The risk of disease increases with increasing exposure to a causal agent.
- E.g., cigarette smoking: the likelihood of developing lung cancer rises across non-smokers, 1-14 cigarettes/day, 15-24 cigarettes/day and 25+ cigarettes/day.

6. Biological plausibility
- The hypothesis should be coherent with what is known about the disease, both biologically and from the laboratory.
- Knowledge of physiology, biology and pathology should support the cause-effect relationship.

7. Study design
- The quality of the study design behind the evidence is most important to consider.

8. Reversibility
- Removal of a possible cause results in a reduced risk of disease. E.g., cessation of cigarette smoking is associated with a reduction in the risk of lung cancer relative to those who continue.
- If the cause leads to rapid irreversible changes (as in HIV infection), then reversibility cannot be a condition for causality.

Judging the evidence
- There are no completely reliable criteria for determining whether an association is causal or not.
- In judging the different aspects of causation, the correct temporal relationship is essential; once this has been established, weight should be given to plausibility, consistency, the dose-response relationship and the strength of the association.

Summary
- Causal inference is an intelligent scientific interpretation exercise to determine whether an observed relationship is real or not, and it is to be done without errors.
- All judgements of cause and effect are tentative.
- Be alert for error: the play of chance and bias.
- Causal models broaden causal perspectives.
- Apply the criteria for causality as an aid to thinking.
- Look for corroboration of causality from other scientific frameworks.

The end!