BIAS_Lecture_updated explaining different types of bias

About This Presentation

This PPT explains bias in different study designs


Slide Content

Bias & Chance. Michael J Mahande, Department of Epidemiology & Biostatistics, KCMUCo

Learning objectives: understand the concept of bias; be familiar with the types of bias; be familiar with the prevention of biases; understand the role of chance; understand the need to keep a critical mind.

Overview: Most epidemiological studies aim to identify exposures that may increase or decrease the risk of developing disease (the outcome under investigation). Unfortunately, errors in the design, conduct and analysis can distort the results of any epidemiological study, offering alternative explanations for the findings. Such alternative explanations may be due to the effects of chance (random error), bias or confounding. These can produce spurious results, leading us to conclude that a valid statistical association exists when it does not, and vice versa.

NOTE: Observational studies are particularly susceptible to the effects of chance, bias and confounding. These need to be considered at both the design and analysis stages of an epidemiological study so that their effects can be minimised.

Why do we need to talk about bias? Are the results believable (internal validity)? Can results from the study participants be extrapolated to the broader population (external validity)?

Should I believe my results? Example: owning a dog/cat and giardiasis, OR = 7.3. Possible explanations: bias? chance? confounding? Or a true association (causal or non-causal)?
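To make the odds-ratio arithmetic behind such an example concrete, here is a minimal Python sketch; the 2x2 cell counts are invented for illustration (they are not the data behind the OR of 7.3 on the slide) and are merely chosen so the result lands near 7.3.

```python
# Odds ratio from a 2x2 table: OR = (a*d) / (b*c)
# a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls
def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

# Hypothetical counts for a case-control study of pet ownership and giardiasis
a, b, c, d = 44, 20, 56, 186
print(round(odds_ratio(a, b, c, d), 1))  # 7.3
```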

Errors: random error (→ low precision) and systematic error (→ low validity).

Random vs. systematic error. Random error: the variation of an observed value from the true population value due to chance. Systematic error: an error that deviates by a fixed amount from the true value of the measurement.

Quality of an estimate: precision and validity. No precision → random error. Precision but no validity → systematic error (bias).

Example, measuring height: a measuring tape held differently by different investigators → loss of precision (random error); a shrunken or faulty tape → systematic error (bias), which cannot be corrected afterwards.
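A small simulation makes the tape example concrete. The sketch below is illustrative only: it assumes a true height of 176 cm and invents two measurement processes, one with purely random noise (tape held differently each time) and one with a fixed offset (shrunken tape). Averaging many readings removes the former but not the latter.

```python
import random

TRUE_HEIGHT_CM = 176.0  # assumed true value for illustration

def measure_random_error(n, sd=2.0):
    """Tape held differently each time: zero-mean random noise only."""
    return [TRUE_HEIGHT_CM + random.gauss(0, sd) for _ in range(n)]

def measure_systematic_error(n, offset=3.0, sd=0.5):
    """Shrunken tape: every reading is shifted by the same fixed offset."""
    return [TRUE_HEIGHT_CM + offset + random.gauss(0, sd) for _ in range(n)]

random.seed(1)
for label, readings in [("random error", measure_random_error(1000)),
                        ("systematic error", measure_systematic_error(1000))]:
    mean = sum(readings) / len(readings)
    print(f"{label}: mean of 1000 readings = {mean:.1f} cm")
# The random-error mean converges to ~176 cm; the systematic-error mean stays ~3 cm off.
```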

Definition of bias: any systematic error in an epidemiological study that results in an incorrect estimate of the association between exposure and disease; systematic variation of measurements from the true value (Last J, 2009).

Types of bias: selection bias and information bias (measurement bias).

Selection bias

Selection bias: systematic errors in the process of identifying/selecting study participants or allocating individuals to different study groups. It occurs due to preferential selection of subjects related to their case/control status or their exposure status.

Selection bias includes: sampling bias; ascertainment bias (surveillance; referral/admission; diagnostic); participation bias (self-selection/volunteerism; non-response and refusal; the healthy worker effect (occupational cohort vs. general population); survival).

Selection bias in case-control studies can occur in the selection of cases, if they are not representative of all cases within the population, or in the selection of controls, if they are not representative of the population that produced the cases.

Selection bias example: how representative are hospitalised trauma patients of the population which gave rise to the cases? (OR = 6)

Selection bias example (continued): with biased selection of study subjects the observed OR shifts from 6 to 36.

Diagnostic bias: OC use → breakthrough bleeding → increased chance of detecting uterine cancer; the diagnostic approach is related to knowing the exposure status. Overestimation of cell "a" → overestimation of the OR.
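The repeated "overestimation of cell a → overestimation of the OR" statement can be checked directly from the OR formula. The counts below are hypothetical and only show the direction of the distortion.

```python
# OR = (a*d) / (b*c); inflating only a (exposed cases) inflates the OR.
def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

b, c, d = 100, 50, 100       # hypothetical fixed cells
true_a = 50                  # exposure detected equally well in cases and controls -> OR = 1.0
biased_a = 75                # extra exposed cases found through the exposure-linked work-up
print(odds_ratio(true_a, b, c, d))    # 1.0
print(odds_ratio(biased_a, b, c, d))  # 1.5, inflated purely by the extra counts in cell a
```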

Admission bias: exposed cases have a different chance of admission than controls. Example: a professor, head of a respiratory department and world authority on asbestos exposure with 145 publications on the subject, whose lung cancer cases exposed to asbestos are not representative of all lung cancer cases. Overestimation of cell "a" → overestimation of the OR.

Survival bias: when contact with the risk factor is "lethal" and leads to rapid death, only survivors of a highly lethal disease enter the study. Underestimation of cell "a" → underestimation of the OR.

Non-response bias: controls chosen among women at their homes (13,000 homes contacted → 1,060 controls); the controls were mainly housewives, with a lower chance of having had the test than women gainfully employed. Underestimation of cell "d" → underestimation of the OR.

Selection bias in cohort studies may occur if the exposed and unexposed groups are not truly comparable: the unexposed group is not correctly selected and differs from the exposed group in other, unrelated factors in addition to the exposure of interest, e.g. comparison of an occupational cohort vs. the general population.

Non-response bias (full cohort):
  Smoker:     90 lung cancer, 910 no lung cancer, 1000 total
  Non-smoker: 10 lung cancer, 990 no lung cancer, 1000 total

Non-response bias (only 10% of smokers respond):
  Smoker:     9 lung cancer, 91 no lung cancer, 100 total
  Non-smoker: 10 lung cancer, 990 no lung cancer, 1000 total

Non-response bias (50% of the cases who smoked are lost to follow-up):
  Smoker:     45 lung cancer, 910 no lung cancer, 955 total
  Non-smoker: 10 lung cancer, 990 no lung cancer, 1000 total
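The three tables above can be reproduced with a few lines of code. The sketch below uses the risk ratio as the effect measure (an assumption, since the slides do not name one) and shows that non-response unrelated to the outcome leaves the estimate intact, while loss related to both exposure and outcome biases it.

```python
def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio = risk of disease in the exposed / risk in the unexposed."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

scenarios = {
    "full cohort":                 (90, 1000, 10, 1000),
    "only 10% of smokers respond": (9,   100, 10, 1000),
    "50% of smoking cases lost":   (45,  955, 10, 1000),
}
for name, cells in scenarios.items():
    print(f"{name}: RR = {risk_ratio(*cells):.1f}")
# full cohort:                 RR = 9.0
# only 10% of smokers respond: RR = 9.0  (non-response unrelated to outcome -> no bias)
# 50% of smoking cases lost:   RR = 4.7  (loss related to exposure AND outcome -> bias)
```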

Loss to follow-up: bias due to differences in completeness of follow-up between comparison groups. Example: a study of disease risk in migrants; migrants are more likely to return to their place of origin when they have the disease → lost to follow-up → lower observed disease rate among the exposed (= migrants).

Selection bias in RCTs. Example: if subjects are allowed to choose between a new drug and an established drug, health-conscious individuals might like to try the new drug while less well informed individuals may opt for the established drug; a difference in the effects of the two drugs may then be explained by the baseline difference between the groups.

Minimising selection bias: clear definition of the study population; explicit case and control definitions; cases and controls drawn from the same population; randomly assign study participants to the treatment or control group.
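As a minimal illustration of the last point, the sketch below randomly allocates a hypothetical list of participant IDs to two equal-sized arms; the IDs, group sizes and seed are all invented for the example.

```python
import random

def randomise(participants, seed=None):
    """Shuffle the participant list and split it into two equal-sized arms."""
    rng = random.Random(seed)
    shuffled = list(participants)   # copy, so the input list is left untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

ids = [f"P{i:03d}" for i in range(1, 21)]   # 20 hypothetical participant IDs
treatment, control = randomise(ids, seed=42)
print("treatment:", treatment)
print("control:  ", control)
```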

Information bias

Information bias: systematic error in the measurement of information on exposure or outcome; differences in the accuracy of measurement/classification of exposure data between cases and controls, or of outcome data between different exposure groups. Study subjects are classified in the wrong category (exposure or outcome).

Misclassification (error in measurement): measurement error leads to assigning the wrong exposure or outcome category (observer or recall bias). Non-differential misclassification: random error, unrelated to exposure or outcome status; not a bias in itself, but it weakens the measure of association. Differential misclassification: systematic error, related to exposure or outcome status; a bias that can distort the measure of association in any direction.

Differential misclassification occurs when one group of participants is more likely to be misclassified than the other. In a cohort study: if the exposure makes individuals more or less likely to be classified as having the disease. In a case-control study: if cases are more or less likely to be classified as exposed than controls. It may lead to over- or underestimation of the association between exposure and outcome.

Non-differential misclassification occurs when both groups (cases and controls, or exposed and unexposed) are equally likely to be misclassified; it is independent of exposure or outcome status and leads to underestimation of the association between exposure and outcome.

Two main types of information bias: reporting bias (e.g. recall bias) and observer bias (e.g. interviewer bias, biased follow-up).

Recall bias: cases remember exposure differently than controls. Example: mothers of children with malformations will remember past exposures better than mothers of healthy children. Overestimation of cell "a" → overestimation of the OR.
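A hedged numerical sketch of recall bias: starting from a true 2x2 table with no association (true OR = 1), cases are assumed to recall their exposure with higher sensitivity than controls; all counts and recall probabilities are invented for illustration.

```python
def reported_table(cases_exp, cases_unexp, controls_exp, controls_unexp,
                   sens_cases, sens_controls):
    """Expected reported counts when recall sensitivity differs by case status.
    Specificity is assumed perfect: nobody reports an exposure they never had."""
    a = cases_exp * sens_cases                        # exposed cases who report the exposure
    c = cases_exp * (1 - sens_cases) + cases_unexp    # cases classified as unexposed
    b = controls_exp * sens_controls                  # exposed controls who report it
    d = controls_exp * (1 - sens_controls) + controls_unexp
    return a, b, c, d

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

# True exposure prevalence 30% in both cases and controls -> true OR = 1.0
a, b, c, d = reported_table(30, 70, 30, 70, sens_cases=0.95, sens_controls=0.70)
print(round(odds_ratio(a, b, c, d), 2))  # about 1.5: a spurious association created by recall alone
```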

Interviewer bias: the investigator asks cases and controls differently about exposure. Example: the investigator may probe listeriosis cases about consumption of soft cheese. Overestimation of cell "a" → overestimation of the OR.

Biased follow-up: the unexposed are less likely to be diagnosed with the disease than the exposed. Example: a cohort study to investigate risk factors for mesothelioma; the histological diagnosis is difficult, and the histologist is more likely to diagnose a specimen as mesothelioma if asbestos exposure is known.

Nondifferential misclassification: misclassification does not depend on the values of other variables; the exposure classification is unrelated to disease status, or the disease classification is unrelated to exposure status. Consequence: weakening of the measure of association ("bias towards the null").

Nondifferential misclassification example. Cohort study: alcohol → laryngeal cancer.

Nondifferential misclassification example (continued). Cohort study: alcohol → laryngeal cancer.
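The "bias towards the null" can be illustrated numerically. The sketch below assumes a true risk ratio of 5 and applies the same imperfect exposure sensitivity and specificity to everyone, regardless of disease status; the cohort sizes, risks and error rates are invented (they are not the figures from the alcohol and laryngeal cancer slides).

```python
def misclassified_rr(n_exp, risk_exp, n_unexp, risk_unexp, sensitivity, specificity):
    """Expected risk ratio after NONdifferential exposure misclassification:
    the same sensitivity/specificity applies whether or not disease develops."""
    cases_exp, cases_unexp = n_exp * risk_exp, n_unexp * risk_unexp
    # People *classified* as exposed: true positives + false positives
    cls_exp_total = n_exp * sensitivity + n_unexp * (1 - specificity)
    cls_exp_cases = cases_exp * sensitivity + cases_unexp * (1 - specificity)
    # People *classified* as unexposed: false negatives + true negatives
    cls_unexp_total = n_exp * (1 - sensitivity) + n_unexp * specificity
    cls_unexp_cases = cases_exp * (1 - sensitivity) + cases_unexp * specificity
    return (cls_exp_cases / cls_exp_total) / (cls_unexp_cases / cls_unexp_total)

true_rr = (50 / 1000) / (10 / 1000)   # 5.0 by construction
observed = misclassified_rr(1000, 0.05, 1000, 0.01, sensitivity=0.8, specificity=0.9)
print(true_rr, round(observed, 1))    # 5.0 2.6 -> attenuated towards the null
```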

Bias in prospective cohort studies:
- Loss to follow-up: the major source of bias in cohort studies (assume that all of those lost do, or do not, develop the outcome?)
- Ascertainment and interviewer bias: some concern; knowing the exposure may influence how the outcome is determined
- Non-response and refusals: little concern; bias arises only if related to both exposure and outcome
- Recall bias: no problem; exposure is determined at the time of enrolment

Bias in retrospective cohort and case-control studies: ascertainment bias, participation bias and interviewer bias, because exposure and disease have already occurred → differential selection/interviewing of the compared groups is possible; recall bias, because cases (or the ill) may remember exposures differently than controls (or the healthy).

Minimising information bias: standardise measurement instruments; administer instruments equally to cases and controls (exposed/unexposed); use multiple sources of information (questionnaires, direct measurements, registries, case records); use multiple controls.

Questionnaire design: favour closed, precise questions and minimise open-ended questions; seek information on the hypothesis through different questions; disguise questions on the hypothesis within a range of unrelated questions; field-test and refine; standardise interviewers' technique through training with the questionnaire.

Minimising errors in epidemiological studies. Figure (Rothman, 2002): error plotted against study size, contrasting random error (chance), which shrinks as study size increases, with systematic error (bias), which does not.

Epidemiological association: true or false? When an association is present, ask in turn: is bias in selection or measurement likely? Is confounding likely? Is chance likely? If any of these is likely, the association may be false; only when all are unlikely is a true association the likely explanation.

References
Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet 2002; 359: 248-52.
Sackett DL. Bias in analytic research. J Chron Dis 1979; 32: 51-6.
Hill AB. The environment and disease: association or causation? Proc R Soc Med 1965; 58: 295-300.
Hennekens CH. Epidemiology in Medicine. Lippincott Williams & Wilkins, 1987, first edition.
Rothman KJ. Modern Epidemiology. Lippincott Williams & Wilkins, 1998, second edition.
Giesecke J. Modern Infectious Disease Epidemiology.
Last JM. A Dictionary of Epidemiology. Oxford University Press, 2001, fourth edition.