Literature Evaluation
Literature evaluation is a skill that healthcare professionals develop with practice; it requires knowledge in several areas, including clinical trial design, outcome measures, and statistical techniques.
Reasons to read the clinical literature:
Improve patient care
Learn about research
Educate peers and students about clinical care
Systematic approach
A systematic approach to literature evaluation was first developed in 1975 as a five-step process and was later modified to seven steps.
References: Tertiary references
These include general textbooks, formularies, and computer databases, e.g., MICROMEDEX (for rare adverse effects reported during clinical trials).
Tertiary sources provide detailed background and quick-reference information.
Secondary references
When the information in tertiary sources is outdated, secondary sources are used. These are indexing, citation, and abstracting services such as MEDLINE and International Pharmaceutical Abstracts.
Available secondary references
Primary references
These consist of original studies or published reports in biomedical journals. They provide the most recent information, and critical evaluation of primary references is required.
Forming answerable questions
1. Background questions: understand the problem in general
2. Foreground questions: decision-making questions
The PICO format for foreground questions (a worked sketch follows this list)
P = Patient and problem (the population of interest: children, women, men, patients with a given condition)
I = Intervention (treatment or test)
C = Comparison intervention (control group)
O = Outcomes
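To make the format concrete, the short sketch below structures a foreground question in PICO form and joins the four elements into a basic search string. It is only an illustrative sketch: the population, drug names, and outcome are invented for this example and are not taken from the text above.

```python
# Hypothetical example: structuring a foreground question in PICO format
# and turning it into a simple search string.
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str    # P - patient and problem
    intervention: str  # I - intervention or test
    comparison: str    # C - comparison intervention / control
    outcome: str       # O - outcome of interest

    def search_string(self) -> str:
        # Combine the four elements with AND, as a basic database-style query
        return " AND ".join(
            [self.population, self.intervention, self.comparison, self.outcome]
        )

question = PicoQuestion(
    population="adults with type 2 diabetes",   # invented example values
    intervention="metformin",
    comparison="sulfonylurea",
    outcome="HbA1c reduction",
)
print(question.search_string())
```

Writing the question out this way makes it easier to notice a missing comparison group or an outcome that is too vague to search on.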
Systematic approach
To avoid poor-quality literature and wading through irrelevant information, a systematic approach is used (a rough sketch of this filter follows the list):
Retrieve: collect a broad range of articles from reliable databases (PubMed, Cochrane, Embase).
Review: scan titles and abstracts to check relevance and study design.
Reject: exclude irrelevant, outdated, or poor-quality studies.
Read: critically appraise the full text for validity, results, and applicability.
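As a rough illustration of the Retrieve-Review-Reject-Read filter, the sketch below screens a small set of hypothetical records by publication year and study design. The record fields, cut-off year, and accepted designs are assumptions chosen for the example, not rules stated in the text.

```python
# Minimal sketch of the Retrieve-Review-Reject-Read filter, assuming each
# retrieved record is a dict with hypothetical fields: title, year, design.
retrieved = [
    {"title": "Drug A vs placebo in hypertension", "year": 2022, "design": "RCT"},
    {"title": "Case report: rare rash with Drug A", "year": 1998, "design": "case report"},
    {"title": "Drug A cohort follow-up", "year": 2015, "design": "cohort"},
]

ACCEPTED_DESIGNS = {"RCT", "cohort"}   # designs considered adequate for this question
OLDEST_YEAR = 2010                     # arbitrary recency cut-off for the example

def keep(record):
    # Review / Reject: keep only recent studies with an acceptable design
    return record["year"] >= OLDEST_YEAR and record["design"] in ACCEPTED_DESIGNS

to_read = [r for r in retrieved if keep(r)]   # these go on to full critical appraisal
for r in to_read:
    print(r["title"])
```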
Selecting an article: the filtering process
1. Primary survey (initial evaluation and brief overview)
Analyze the title.
Review the list of authors.
Read the summary or abstract, beginning with the conclusion:
Is the conclusion valid and important to me?
If the results are true, how useful are they?
Do the interventions make sense?
Can the information be generalized to my patients?
2. Secondary survey
1. Introduction: the problem under study, the context of the study, and the reasons for conducting it
Importance of the topic, what is known and what is not known about it
Specific questions (objective, goal of the study, and hypothesis) to be evaluated
Study sample, primary outcome, and intervention being evaluated
Method design
Conclusions should not extend beyond the stated objective
2. Methods: research design (descriptive or comparative study)
3. Study sample: How were the subjects and controls selected? Are the inclusion and exclusion criteria sufficiently clear to describe the target population?
4. Treatment allocation: randomization and masking (blinding); a brief randomization sketch follows.
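The sketch below shows one common way treatment allocation can be randomized, using permuted blocks of four so the arms stay balanced. The subject IDs, block size, and seed are invented for illustration; in a real trial the allocation list is generated in advance and concealed from investigators, which is what masking (blinding) is meant to protect.

```python
# Minimal sketch of permuted-block randomization (block size 4) for
# treatment allocation, using hypothetical subject IDs.
import random

rng = random.Random(2024)                        # fixed seed for a reproducible illustration
subjects = [f"S{i:02d}" for i in range(1, 13)]   # 12 hypothetical subjects

allocation = {}
block = []
for s in subjects:
    if not block:
        # Start a new balanced block containing 2 treatment and 2 control slots
        block = ["treatment", "treatment", "control", "control"]
        rng.shuffle(block)
    allocation[s] = block.pop()

# In a blinded trial, neither subjects nor outcome assessors would see this table.
for subject, arm in allocation.items():
    print(subject, arm)
```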
5. Outcomes: the primary outcome (addressed in every study) and secondary outcomes (in some studies). How was the outcome measured? Was the measurement free of bias? How reproducible was the result? How were measurements standardized to minimize inter-observer variability?
6. Statistical analysis: Which statistical tests were used, and were they appropriate? (An illustrative test follows.)
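As a simple illustration of a statistical test applied to a binary primary outcome, the sketch below compares success rates between two arms with a chi-square test using SciPy. The counts are invented, and in practice the choice of test depends on the outcome type and study design.

```python
# Hypothetical comparison of a binary primary outcome (e.g., treatment success)
# between two arms using a chi-square test. Counts are invented for illustration.
from scipy.stats import chi2_contingency

#                 success  failure
table = [[40, 60],    # treatment arm
         [25, 75]]    # control arm

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A p-value below the pre-specified alpha (commonly 0.05) suggests the observed
# difference is unlikely to be explained by chance alone.
```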
7. Results: tables and figures. How many patients were eligible for the study? How many enrolled? How many completed it?
8. Discussion: comparison with other studies (similarities and differences), limitations of the study, and suggested directions for further study.
9. Conclusion: must be consistent with the study objective, justified by the study results, and must not overgeneralize the results of the study.
Types of study by content
Evaluation of a new therapy
Evaluation of a new diagnostic test
Determination of the etiology of a condition
Prediction of outcomes
Natural course of a condition
1. Is the study valid?
Did the authors answer the questions posed?
What were the characteristics of the study group?
Is it clear how the test was carried out?
Are the test results reproducible?
Was the reference standard appropriate?
Was the reference standard applied to all patients?
Was the test evaluated on an appropriate spectrum of patients?
2. What were the results?
Are sensitivity and specificity reported?
Could the results have occurred by chance?
Are confidence limits given? (A worked example follows.)
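The worked example below (with invented numbers) shows how sensitivity, specificity, and approximate 95% confidence limits can be calculated from a 2x2 table of test results against a reference standard, using a simple normal-approximation (Wald) interval.

```python
# Worked example (invented numbers): sensitivity, specificity, and approximate
# 95% confidence limits from a 2x2 table of a hypothetical diagnostic test.
import math

# Against the reference standard:
tp, fn = 90, 10      # disease present: test positive / test negative
fp, tn = 20, 80      # disease absent:  test positive / test negative

sensitivity = tp / (tp + fn)          # 90 / 100 = 0.90
specificity = tn / (tn + fp)          # 80 / 100 = 0.80

def wald_ci(p, n, z=1.96):
    # Normal-approximation 95% CI; exact methods are preferred for small samples
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

print("sensitivity:", sensitivity, wald_ci(sensitivity, tp + fn))
print("specificity:", specificity, wald_ci(specificity, tn + fp))
```

A narrow confidence interval suggests the estimate is precise; a wide one means chance could explain much of the apparent performance.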
3. Will the results help the patient?
Is the diagnostic test available, affordable, accurate, and precise?
Are the results applicable to my patient?
Do my patients have a similar mix of disease severity and competing conditions?
Will the results change case management?
Will the information gained be sufficient to change a clinical decision?
Will patients be better off as a result of performing the test?
Bias
Bias is a systematic error that can distort measurements and/or affect investigations and their results.
Bias can be reduced by:
Randomization
A control group
Blinding
Use of objective outcome measures
Types of bias
Validity
Validity refers to how accurately a method measures what it is intended to measure.
Internal validity: the extent to which the observed results represent the truth in the population being studied.
External validity: whether the study results apply to similar patients in a different setting.