Critical evaluation ppt

3,369 views 162 slides Apr 02, 2019

About This Presentation

critical evaluation of a journal


Slide Content

GOOD MORNING

CRITICAL EVALUATION PRESENTED BY: Dr. SARATH SK DEPARTMENT OF PUBLIC HEALTH DENTISTRY

Content

CRITICAL EVALUATION INTRODUCTION 'Evidence-based medicine' has become a catch-phrase in the last two decades. It means that reading, interpreting, evaluating and acting on published literature should become a routine part of clinical practice. The decisions we take should rest on an evidence base, and the techniques we employ and the skills we engage should be supported by that evidence base.

One should not accept the content of a scientific article at face value simply because the journal in which it is published has a very high reputation and the authors are considered pioneers in the area. It is a must to remain open-minded and to judge, evaluate and assess a scientific article by engaging in critical reading, which facilitates critical evaluation.

CRITICAL READING: The considered and justified examination of what others have written or said regarding the subject in question, and the ability to recognize, analyse and evaluate the reasoning and forms of argumentation in the articles. This skill is called 'critical reasoning'.

WHAT IS CRITICAL READING? Critical reading is: 1. One that goes beyond mere understanding into making a personal response to what has been written. 2. One that relates the writing in hand to various other writings and tries to compare, contrast and deal with contradictions. 3. One that does not take what is written at face value. 4. One that views research reports as a contested terrain, within which alternative views and positions may be taken.

The assumption that reading should be simple can be an obstacle when we are reading a scientific journal. If we don't engage with the matter actively, we might "get" what the author says, but we cannot evaluate it. We may know what the writer thinks about a subject, but we cannot reasonably defend agreeing or disagreeing with the author's opinion or argument.

If we don't read critically, we can't say much beyond what the text happens to say. Critical reading should be followed immediately by a critical reflective session. Skimming, annotating, asking questions, contemplating and writing an outline of the text are the different strategies employed during critical reading.

EVALUATIVE FRAMEWORKS

MARTHA PATTON'S EVALUATIVE FRAMEWORK FOR CRITICAL READING This list of questions adopts an inquisitive mode while reading. 1. What is the research question or hypothesis? 2. What is the claim or thesis of the text? 3. What is the relationship between the author and the reader? 4. What is the method by which the author attempts to answer the question?

5. What are the assumptions underlying the article? 6. How respectable is the evidence? 7. How are the references used and presented? 8. What are some of the implications of the text?

THE ART AND SCIENCE OF CRITICAL EVALUATION

Comprehensive evaluation of a scientific paper consists of two components: 1. Critical Appraisal 2. Critical Reflection

Critical Appraisal It can be defined as the assessment of the scientific quality of the paper, including its design, methods and analysis. Critical appraisal in dentistry has been described as making sense of the evidence and systematically considering its validity, results and relevance to dentistry.

As a critical appraiser one would ask questions like, 1. Is the title relevant to the study conducted? 2. Is the method used appropriate for answering the research question? 3. Is the design appropriate for the defined aim and objectives of the study? 4. Are the statistical tests used appropriate and indicated?

Critical Reflection It is to judge and discuss the implications of the study for the world outside the study. The extent to which the study findings are generalizable, and what the consequences of the study are for the larger population outside it, are the two aspects majorly covered under critical reflection.

INTERNAL AND EXTERNAL VALIDITY OF A STUDY

Validity of a study is divided and presented under two headings. 1. Internal validity: Can I believe the results of the study? This involves 'critical appraisal' skills to assess the methodological quality of the study.

2. External validity: Assuming the result of a study is internally valid, how true is it for the wider population outside the study? This involves the skills of reflecting on the importance and practical relevance of the study.

Two points to be always kept in mind by a critical evaluator 1. Results are always biased. Every study is dirty to some extent; the perfect one does not exist. The crucial task is to judge the effect any bias may have on the results and the implications of that bias for the results.

2. Statistics do not equal truth. They give us an estimate of how chance might affect study results. One should remember that a statistically significant difference means simply and only that a result is unlikely to have arisen from chance alone.
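This last point can be illustrated with a short simulation (a Python sketch; the coin-flip setup and the conventional |z| > 1.96 cut-off are illustrative assumptions): under a true null effect, chance alone still yields "significant" results roughly 5% of the time.

```python
import random

# Simulate many "null" experiments: a fair coin flipped 100 times, followed
# by a z-test of whether the observed proportion of heads differs from 0.5.
# No real effect exists, yet about 5% of experiments still cross the
# conventional |z| > 1.96 significance threshold purely by chance.
rng = random.Random(0)  # fixed seed only so the example is reproducible
n_experiments, n_flips = 1000, 100
false_positives = 0
for _ in range(n_experiments):
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    z = (heads / n_flips - 0.5) / (0.25 / n_flips) ** 0.5
    if abs(z) > 1.96:
        false_positives += 1
print(false_positives, "of", n_experiments, "null experiments were 'significant'")
```

A "significant" p-value in a single study is therefore evidence against chance, not proof of truth.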

HOW TO CRITICALLY EVALUATE A SCIENTIFIC PAPER?

1. PROLOGUE The presenter for the session should ideally introduce the topic in the discipline to which the selected article belongs. This should be brief and relevant. This is done primarily to facilitate a movement from "general" to "specific" instead of abruptly beginning the session.

2. ABOUT THE ARTICLE Identifying or tracing the source of the article (paper).

3. ABOUT THE JOURNAL • What type of journal is it? • Who is the publisher of the journal? • Under what section of the journal is the article published? • What is the ISSN number of the journal? • What is the MeSH representation?

• Is it peer reviewed or not? • What is the year of publication? • What is the issue number? • What is the volume number?

4. PEER REVIEW When an article is submitted by authors to the editor of a journal, before it is accepted for inclusion it is subjected to a process of peer review. According to the conventional view, the editors have knowledge and integrity and act as 'gatekeepers'.

5. ABOUT THE AUTHORS: Is there a mention of the authors' names? Is there a mention of designation/institutional attachments? Have they done work in the same area, or in different areas, earlier to the present one? Are they familiar figures in the literature?

Are they pioneers in any specific area? What is their track record? (a seasoned reader will know the track record of many authors) Are they new or unknown? (like the unknown sculptors)

TITLE OF THE ARTICLE

TITLE OF THE ARTICLE Does it indicate the topic and focus of the study? Does it indicate the research question? Is the title meaningful and complete? Does it reflect the aim and objectives of the study? Does it include the important variables which are intended to be measured?

• Does it give an idea of the study population and study setting (site)? • Does it give an idea about the design of the study? • Does the title look catchy? • Is it very short or overlong? • Is it too general, too specific or over-specific?

ABSTRACT

An abstract is defined as "an abbreviated, accurate representation of the contents of a document, without added interpretation or criticism and without distinction as to who wrote the abstract".

The function of an abstract is to summarize the nature of the research project, its context, how it was carried out and what its major findings were. Ideally it should require no more than one page of text, and will typically be restricted to 200 to 300 words or less.

TYPES OF ABSTRACTS Structured abstract: The content is presented under subheadings like aim and objectives, methods, results, conclusions etc. Unstructured abstract: It is a text without any subheadings, but the matter may implicitly follow the same pattern.

Abstracts are also classified as : Informative Indicative

Informative It is best for papers describing original research. It should typically contain 100-250 words. It should ideally answer the following issues. Aims and objectives Why was the research done? Methods What was done and how?

Results What were the findings? Conclusions What do the findings mean? If the paper is about a new method or apparatus: a) What are the advantages (of the method or apparatus)? b) How well does it work?

Indicative It is used for long articles such as reviews, reviews of reviews, meta-analyses (secondary research) etc. It gives the reader a general idea of the contents of the paper but little, if any, idea of specific methods or results.

Do's and Don'ts about the abstract: Under no circumstances should the abstract contain any information which is not in the following text. A short abstract should be written in a single paragraph. The sentences should be complete and should follow logically. Acronyms, abbreviations and symbols should be avoided; if used, they should be defined at first mention. Tables, diagrams and equations are not to be included in the abstract. Citations are not to be included.

Difference between an abstract and a summary of an article: A summary is not the same as the abstract. Strictly speaking, a summary restates the main findings and conclusions of a paper and is written for people who have already read the paper. It is placed after the discussion section. An abstract is an abbreviated version of the paper written for people who may never read the complete version.

KEYWORDS

In the journal's contents list the most important and specific words are chosen for listing as keywords. Keywords facilitate searching of literature databases; the keywords selected should facilitate data search through search engines when 'explode' commands are given to the databases.

Everyone faces the problem of how to deal with the ever-increasing volume of literature. Index Medicus and Excerpta Medica are index journals which can be searched electronically using Medline (the National Library of Medicine's bibliographical database) and MeSH terms (Medical Subject Headings).

PubMed, EMBASE, HealthSTAR, SSCI and SCI are other databases. 'Boolean operators' can be used to focus the search: we can use the AND, OR, NOT and AND NOT operators. The Cochrane Library is another computerized database that can be searched for systematic reviews and clinical trials.
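As an illustration of how Boolean operators focus a search, the sketch below builds a query string in Python. The helper is hypothetical; it only assembles the string and does not call any real database API.

```python
# Hypothetical helper: combine search terms with Boolean operators into one
# focused query string of the kind accepted by bibliographic databases.
def build_query(include_terms, exclude_terms=None, any_of=None):
    """AND together the required terms, OR together the alternatives,
    and NOT out the unwanted ones."""
    parts = [" AND ".join(f'"{t}"' for t in include_terms)]
    if any_of:
        parts.append("(" + " OR ".join(f'"{t}"' for t in any_of) + ")")
    query = " AND ".join(parts)
    if exclude_terms:
        query += " NOT " + " NOT ".join(f'"{t}"' for t in exclude_terms)
    return query

print(build_query(["dental caries", "fluoride"],
                  any_of=["children", "adolescents"],
                  exclude_terms=["animal"]))
# → "dental caries" AND "fluoride" AND ("children" OR "adolescents") NOT "animal"
```

The same AND/OR/NOT structure can be typed directly into the search box of PubMed or the Cochrane Library.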

Critical appraisal Questions related to abstract and keywords: 1. Is it structured or unstructured? 2. Is the abstract informative, indicative or a combination of informative and indicative? 3. Is it comprehensive in its contents (containing aim and objectives, methods, results and conclusions)?

4. Is it short, long or overlong? 5. Is it giving the gist of the whole text? 6. Does the information given in abstract match with what is present in detail text? 7. Does it contain any information which is not in the text?

8. Are there acronyms, short forms or abbreviations which are not defined? 9. Can it facilitate the reader in the selection of a pertinent study?

IMRaD It is the conventional format (modified from Greenhalgh 1997) which is highly accepted as a norm in the publication circle. It refers to the order in which the contents of an article should be presented.

I = INTRODUCTION (why was the research done?) M = METHODS (how was the study done, and how were the results analysed?) R = RESULTS (what were the findings?) D = DISCUSSION (what do the findings mean?) Critical appraisal question: Is the article following the IMRaD format?

INTRODUCTION

DO NOT DIRECTLY JUMP INTO

It should ideally introduce the literature to the reader. This section answers the questions: why was the study done, and what forms the background for the study? The introduction should have logically flowing sentences which create a movement from general (background) to specific (foreground).

The background provides the context of the study and the foreground reveals the specific research. The introduction should enable a reader to understand the current status of knowledge in the respective area. The introduction should cite a small number of important and pertinent papers already published in the journals (literature exploration).

These citations should be represented in the list of references or bibliography at the end of the article. There should be no citations which are not represented in the references.

The last few lines of the introduction should mention the research question, research hypothesis, aim and objectives of the study either explicitly or implicitly.

STRUCTURE OF INTRODUCTION OPENING ↓ BODY (LITERATURE REVIEW) ↓ TERMINATION (Need for the study/Research question)

HOW TO BEGIN INTRODUCTION? There are three standard methods of stock opening. 1. Seminar approach 2. Alarmist approach 3. Much discussion recently (MDR) approach.

The Vancouver Group, on the introduction, states: the purpose of this section is to summarize the rationale for the study or observation, give only strictly pertinent references, and not include data or conclusions from the work being reported. The central part of the introduction should cover the relevant research which forms the background to the study.

The introduction should normally lead towards an overview of what the study will actually do and should conclude with the statement of the hypothesis that the study actually intends to test.

Critical appraisal Questions related to introduction 1. Is the introduction meaningful and concise? 2. Is it built on existing literature? 3. Has it adopted a specific approach, or is it written according to a specific pattern? 4. Is it logically presented?

5. Are there omissions of some important studies in citations? 6. Are the citations relevant and pertinent to the study being reported? 7. Are these citations followed with correct references in the list of references? 8. Has it presented the need for the study?

9. Is there an implicit or explicit mention of aim and objectives? 10. Has it stated research question or research hypothesis? 11. Has it succeeded in introducing the background of the study subject to the reader?

METHODOLOGY

BLUEPRINT

This section answers the question: What was done? Methodology section can be called the "Brain" of the paper for it refers to the active area of operation of a study. It should give readers clear and correct information about the methods employed, how they were employed, on whom and when they were employed, including statistical methods utilized for analysis and the ethical guidelines which were followed.

General part of methodology: Context of the study It is concerned with two issues: 1. Study setting: Where did the study take place? 2. Study population: On whom was the actual study carried out?

I. STUDY DESIGN: A plan or scheme for setting up a study which will test the stated hypothesis and address the broader issues raised by the research question is called the study design.

1. Is it interventional (experimental) or observational? Interventional study is the one where researchers deliberately introduce or withdraw treatment or procedures as part of the study. Observational study is the one where the researchers observe what is happening as mute spectators without intervening in what is happening.

2. The Time Frame (Cross-Sectional, Retrospective, Prospective) a. Cross-sectional surveys provide a snapshot of the current state of affairs. They are used for assessing the burden of disease in a specific population or to assess health needs. b. Retrospective studies (case-control studies) move backward from effect to cause. Looking back to what has already happened is the key.

c. Prospective studies (cohort studies) move forward from cause to effect. They are an investigation into the future and involve follow-up.

3. Controlled or Uncontrolled A control group is essentially the same as the study group in all its characteristics except for the factor which is under study; positive controls and placebo controls are used.

4. Randomized or non-randomized: The key feature of a randomized intervention study is that the allocation of a particular treatment to any person who participated in the study is entirely random. The advantage is that selection bias is eliminated and one can attribute the results only to two things: the intervention or chance.
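A minimal sketch of such random allocation (the subject IDs are illustrative, and the fixed seed is used only so the example is reproducible):

```python
import random

def randomize(subjects, seed=None):
    """Allocate subjects at random to two equal arms.
    Because allocation is entirely random, selection bias is eliminated:
    any difference in outcome is attributable to the intervention or to chance."""
    pool = list(subjects)
    random.Random(seed).shuffle(pool)  # random order, independent of any subject characteristic
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (intervention arm, control arm)

# Twenty hypothetical subject IDs split into two arms of ten:
intervention, control = randomize(range(1, 21), seed=42)
```

In a real trial the allocation sequence would also be concealed from the recruiting investigators, which is a separate safeguard from randomization itself.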

5. Blinding or No Blinding: Blinding is done to eliminate the subjective bias which can arise when the participants, the investigator or the statistician have prior knowledge of the group allocation, the intention of the study and the agent used.

Which design is appropriate? • For deriving a clue of causality→ Descriptive study design • To know causality→ Analytical study design • To test causality→ Experimental study design

Critical appraisal Questions related to study design: 1. Whether observational or interventional? 2. Whether cross-sectional, retrospective or prospective? 3. Whether controlled or uncontrolled?

4. Whether randomized or non-randomized? 5. Whether blinded or non-blinded? 6. If blinded -is it single/double/triple blinded?

II. SAMPLING STRATEGY It presents the sampling method, sampling frame, sample size and the methods for assigning samples to conditions. The sample size justification, or the way the authors arrived at a specific sample size, plays a vital role in deciding the validity of the study. The inclusion and exclusion criteria used while selecting candidates need a clear mention.
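One standard way a sample size is justified is the formula for estimating a single proportion, n = z^2 * p(1 - p) / d^2; below is a sketch, with illustrative prevalence and margin values:

```python
import math

def sample_size_proportion(p, margin, z=1.96):
    """Sample size needed to estimate a prevalence p within an absolute
    margin of error d at 95% confidence: n = z^2 * p * (1 - p) / d^2."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Worst-case assumed prevalence (p = 0.5) and a +/-5% absolute margin:
n = sample_size_proportion(0.5, 0.05)
print(n)  # → 385
```

When appraising a paper, one can re-run this arithmetic with the authors' stated assumptions and check whether the reported sample size follows from them.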

III. MEASUREMENT STRATEGIES AND MEASUREMENT INSTRUMENTS What were the variables measured (primary and secondary), and how were they measured? The parameters under measurement should be defined, and the definition should be more practical than theoretical. 'Standardization' of measuring criteria can be useful and can eliminate 'measurement bias'.

It can be done by using standard indices and the instruction manuals of standard research organizations. When more than one examiner is assigned the duty of measurement, calibration of examiners is a must, and the details of the calibration techniques used to reduce intra-examiner and inter-examiner variability, using Kappa statistics, require a clear mention in this section.
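Cohen's Kappa, the statistic mentioned above, can be computed from two examiners' ratings as follows (a self-contained sketch; the example caries scores are invented for illustration):

```python
def cohens_kappa(rater_a, rater_b):
    """Agreement between two examiners corrected for chance:
    kappa = (observed agreement - expected-by-chance agreement) / (1 - expected)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each examiner's marginal proportions per category.
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical scores (1 = caries present, 0 = absent) from two examiners:
kappa = cohens_kappa([1, 1, 0, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0, 1, 1])
print(kappa)  # → 0.5
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why raw percentage agreement alone is not enough to demonstrate calibration.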

The sensitivity, reliability and specificity attained by the measurement have to be mentioned. The above measures can eliminate measurement bias and instrument bias to a great extent.
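Sensitivity and specificity follow directly from the 2x2 table of measurement results against true status; a sketch with invented counts:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN): proportion of true cases detected.
    Specificity = TN / (TN + FP): proportion of non-cases correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 45 true positives, 10 false positives,
# 5 false negatives, 40 true negatives.
sens, spec = sensitivity_specificity(tp=45, fp=10, fn=5, tn=40)
print(sens, spec)  # → 0.9 0.8
```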

a. Defining Measures Measures are of three types. 1. Baseline measures: Characteristics measured at the start of the study. They are required to study their influence on outcome and also to ensure comparability. 2. Process measures: The way in which the treatment or measures are carried out in the study (e.g. number of visits, dosage of drugs, types of specimens, at what intervals). They give an idea about the rigours of the methods.

Outcome measures

3. Outcome measures: Those occurrences which the study aims to investigate (death, recovery, return to work, disability, reduction in blood pressure etc.). Important outcome measures in direct relation with the objectives of the study are called 'Primary Outcomes'. Those which are important but not the main focus of the study are called Secondary Outcomes.

BLUEPRINT Experimental Design

IV. The Experimental Design This should be described in detail, so that the reader is able to replicate the study. The recruitment, orientation and assignment of subjects, followed by a record of drop-outs, missing subjects and loss of compliance, should be noted down. Statistical methods have been developed for doing drop-out analysis and also for adjusting for the loss.

V. Statistical Analytical Procedures The proposed strategy for quantifying, evaluating and analysing the results should be presented along with the actual statistical procedures employed. The significance levels and confidence limits set for the study need an explicit mention in this part. The specific statistical tests selected should be clearly mentioned.

VI. Ethics One should report that experiments on human subjects were done in accordance with ethical standards of the responsible committee on human experimentation (institutional or regional)

VII. STATISTICS a) Statistical methods should be described with enough detail b) The confidence limits set, and the 'P' value should be mentioned. c) Details about randomization, blinding procedures, complications of treatment should be given.

d) Losses to observation, such as drop-outs and non-compliance, should be mentioned. e) Unless required, complex statistical procedures should be avoided. f) If computer programmes like SPSS or Epi Info are used for analysis, mention has to be made in this section.

VANCOUVER GROUP ON METHODS a) Describe the selection of subjects (patients, laboratory animals or controls). b) Identify the age, sex and other important characteristics of the subjects. c) Identify the methods, apparatus (manufacturer's name and address in parentheses) and procedures in sufficient detail to allow replicability.

d) Give references to established methods, describe new or substantially modified methods and give reasons for using them. e) Reports on randomized controlled trials should present information on all study elements including the protocol (study population, intervention, outcomes etc.), assignment of interventions (randomization, concealment of allocation) and the methods of masking (blinding).

f) Authors of review articles should include a section on methods used for locating, selecting, extracting and synthesizing data.

Critical appraisal Questions related to methodology: I. What and how of the study? 1. Is the methodology presented in a logical, clear and meaningful manner? Is it replicable? 2. Is it the appropriate design for the aim and objectives set by the study? 3. Has the author mentioned the design of the study explicitly? 4. On whom was the study done? 5. How were the subjects selected?

6. Has the study setting been mentioned by the authors? 7. Target population, sampling frame and study population: are they clearly defined? 8. Were the subjects studied in "real life" circumstances? II. Was the design of the study sensible? 1. What specific intervention or other manoeuvre was being considered, and what was it being compared with?

2. What were the variables measured and what were the base-line values? 3. What were the outcomes measured and how were they measured? III. Was systematic bias avoided or minimized? • In clinical trials Was randomization done for allocation into study and control groups?

• In cohort study Was complex statistical adjustment made for baseline differences in key variables between exposure cohort and control cohort? • In case control studies. Was the diagnosis of "caseness" made based on clear criteria? Was misclassification avoided?

IV. Was the assessment "blind"? What type of blinding was done? V. Were preliminary statistical questions dealt with? 1. Was sample size and its derivation mentioned? 2. What was the power of the study? 3. Was the duration of follow-up justified? 4. What was the "drop-out rate" or "drop-out proportion"? 5. What were the statistical measures taken to control the "dropout" effect? Have the authors made a mention of it?

VI. Advanced statistical questions 1. What sort of data have the authors obtained? (Quantitative or qualitative) 2. What is the nature of observed data distribution? (Normal distribution or otherwise) 3. Have they used appropriate statistical tests? (parametric or nonparametric) 4. If the authors have used complex and obscure statistical analytical tests, have they mentioned why they were used? Have they referenced them?

5. Were the "outliers" mentioned and analysed with "common sense" and suitable statistical adjustments? 6. Has correlation been distinguished from regression, and has the correlation coefficient (r-value) been calculated and interpreted correctly? 7. Have "p" values been calculated correctly and interpreted appropriately? 8. Have confidence intervals been calculated, and do the authors' conclusions reflect them?
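A 95% confidence interval for a mean can be checked by hand with the normal approximation (a sketch; z = 1.96 is assumed, and for small samples a t-multiplier would be more appropriate):

```python
import math

def mean_ci95(values):
    """95% confidence interval for a mean, normal approximation:
    mean +/- 1.96 * (sample SD / sqrt(n))."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                               # standard error of the mean
    return mean - 1.96 * se, mean + 1.96 * se

# Three invented measurements with mean 10:
lo, hi = mean_ci95([8, 10, 12])
```

If an author's reported interval cannot be reproduced from the reported mean, SD and n, that is a warning sign for the appraiser.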

9. Have the authors expressed the effects of an intervention in terms of the likely benefit or harm which an individual patient or community can expect? (Effect size) VII. Standardization or calibration 1. Were the examiners calibrated? 2. Were the instruments used calibrated? 3. Were the methods standardized?

VIII. For descriptive surveys 1. Whether a pilot study was conducted to check the feasibility, consistency and validity of the measuring instrument and the methods 2. How were the subjects selected? a) Was it deliberate or opportunistic? b) Did subjects volunteer? 3. What steps were taken to ensure that the data were reliable and repeatable?

4. What steps were taken to eliminate the effects of the researcher or the research procedure on the responses of the subjects? 5. How were the data gathered? 6. If questionnaires were used, what was the layout? How many questions were there? What were the types of questions? Were they tailored to reach common people?

How was it administered? Was it translated to local language? Was it checked for validity, reliability, consistency? What statistical tests were done to check them? How were the ethical implications managed?

RESULTS

The first few tables should provide subject characteristics (descriptive statistics) and the later tables should describe outcomes, as measurements of the dependent variable reported by the study, and analytic results (inferential statistics).

The results may thus provide clues for future studies and expand into newer areas. Every table, chart or graph should be numbered. In tables, the columns should have appropriate headings. Every table should be titled. Tables should be simple, because complex tables are difficult to read, as most of us may have an inherent fear of numbers. Graphs and charts also should be presented in a legible form with particular details.

There are no specific rules about summarizing results except that we need to identify the important and relevant ones. The description of the main results should be based on aim and objectives of the study.

Tables may be used to summarize information, usually in a numerical format, and it indicates the relationships between the different variables under consideration. Diagrams are also useful for indicating relationships and structure: they can convey ideas much more effectively than lengthier textual explanations.

When to use these illustrations? • Where the illustration replaces a substantial piece of text (i.e. a paragraph or more), use it but do not keep the text as well. • Where the illustration serves to make a point which would be difficult to make otherwise. • It should not be used if it is copyrighted and appropriate permission has not been obtained. • Don't use the illustration unless it is clear, unambiguous and well reproduced.

THE VANCOUVER GROUP ON THE RESULTS Present your results in logical sequence in the text, tables and illustrations. Do not repeat in the text all the data in the tables or illustrations; emphasize or summarize only important observations.

On statistics: Put a general description of statistical methods in the methods section. When data are summarized in the results section, specify the statistical methods used to analyse them. Restrict tables and graphs to those needed to explain the argument of the paper and to assess its support; use graphs as an alternative to tables when there are many entries.

On tables: Number the tables consecutively in the order of their first citation in the text and supply a brief title for each. Give each column a short or abbreviated heading. Place explanatory matter in footnotes, not in the heading. Identify statistical measures of variation such as the standard deviation and the standard error of the mean. Be sure that each table is cited in the text.

Questions related to results: 1. Are the results presented in a logical and comprehensible manner? 2. Are the important results presented in both tables and text? 3. Are the tables, charts and graphs numbered and titled properly? 4. Are there tables showing descriptive as well as inferential data?

5. Wherever required, are there notes below the tables? 6. Are the tables simple, and is the alignment of information properly done? 7. Do the data given in the text and the tables match or tally with each other? 8. Are the diagrams, graphs and charts judiciously used? 9. Are the results based on the aim and objectives of the study?

DISCUSSION

Discussion is majorly about interpreting and explaining the results obtained. The researcher attempts to make sense of the findings. Inferences are drawn with respect to the population, product and test materials which were used in the study.

Structure of a discussion A discussion should be constructed in a linear manner with logical transitions and should have the following structure. 1. A summary of the principal results of the study. 2. A comparison of the study findings with those of previous research. The reasons for discrepancies should be explained and properly accounted for.

3. An explanation of the problems and limitations of the study. 4. Suggestions for future research, to remedy the limitations and extend the generality of its findings.

The first sentence is relatively straightforward and summarizes the main findings of the research: "in this study we found that...". This is followed by a brief essay about their implications. The discussion should emphasize how the study has unravelled some important aspects related to the topic, and how it is different from or similar to other studies.

If the study has broken new ground in the area, this has to be highlighted and explained in detail. The discussion is often the place in an article where authors can give full rein to their imagination. The language of speculation can be used with a degree of control; hence it is the most creative part of an article.

The limitations of the study, and the extent to which the study conclusions can be generalized, need a clear description in this section. The prospects for conducting new studies should be discussed.

THE VANCOUVER GROUP ON THE DISCUSSION: Include in the discussion section the implications of the findings and their limitations, including implications for future research. Relate the observations to other relevant studies.

Critical appraisal questions related to the discussion: 1. Is the discussion meaningful? 2. Does it highlight the important findings of the study? 3. Is there enough explanation of all significant findings? 4. Have the authors compared the current findings with those already reported in the literature?

5. Is the comparison logical and properly reasoned? 6. Are the implications of the study with respect to the research field, practice, other populations and the general population discussed? 7. What are the limitations of the study as presented in the discussion? 8. Does the discussion include an element of imagination or speculation?

CONCLUSIONS

At the end of the article the researcher provides a summary and interpretation of the study findings and attempts to draw conclusions related to the original theory and research question. The summary should provide a gist of what the study was about, what was done and what was found.

It is important to arrive at our own conclusions after critically reading the paper, irrespective of the author's conclusions. The reasoning proposed by the author in reaching the conclusions should be rigorously analysed and assessed for its strength.

Any conclusion which extends beyond the framework of the aim and objectives of the study, or beyond the results drawn by the study, is called an "extended conclusion". Such a conclusion is invalid: it lacks evidence and is unsupported by the data.

Critical appraisal questions related to conclusions: 1. Are the conclusions meaningful? 2. Are they supported by the data collected and the results drawn (evidence-based rather than anecdotal)? 3. Are they based on the aim and objectives of the study? 4. Has the research question been answered? 5. Have the authors generated and presented some new hypotheses as conclusions? 6. Have they made appropriate suggestions or recommendations?

REFERENCES

References provide an opportunity for the reader to pursue further reading and enable more learning. Primary and secondary references: Primary references are direct sources of the material cited in the article. Secondary references are indirect sources, because their authors would themselves have imported and cited that material from others' work.

References or bibliography? A 'reference list' contains references to material directly used in the research and preparation of the scientific article, that is, the material cited in the text. A 'bibliography' contains, in addition to all references, works for further information or background reading.

VANCOUVER GROUP ON REFERENCES: References should be numbered consecutively in the order in which they are first mentioned in the text; references should be identified in the text and tables. Avoid using abstracts as references; references to papers accepted but not yet published should be designated as 'in press' or 'forthcoming'.
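The consecutive-numbering rule above can be checked mechanically. Below is a minimal sketch, assuming bracketed numeric in-text citations such as "[1]" (only one of several Vancouver in-text conventions), that verifies citations first appear in the order 1, 2, 3, and so on:

```python
import re

def first_mention_order(text):
    # Collect citation numbers in order of FIRST appearance,
    # ignoring repeat mentions of the same reference.
    seen, order = set(), []
    for m in re.finditer(r"\[(\d+)\]", text):
        n = int(m.group(1))
        if n not in seen:
            seen.add(n)
            order.append(n)
    return order

def is_consecutive(text):
    # Vancouver rule: first mentions must run 1, 2, 3, ... with no gaps.
    order = first_mention_order(text)
    return order == list(range(1, len(order) + 1))
```

For example, `is_consecutive("Smith [1] showed X [2]; see also [1] and [3].")` holds, while a text whose first citation is "[2]" violates the rule.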

Critical appraisal questions related to references: 1. Are there references for every citation made in the text, tables, legends, etc. of the article? 2. How many secondary references are present? 3. Are the references accurate? 4. Are there enough references to recent publications? 5. Have they been presented according to the specific scientific conventions?

CRITICAL REFLECTION OF A SCIENTIFIC PAPER: Critical reflection is an integral step of critical evaluation and should follow soon after critical appraisal. It refers to the external validity of the study. On critical appraisal, a study may be found to be highly internally valid.

The very next question of the evaluation is: so what? (Is it externally valid?) 1. Can the results and conclusions be extrapolated to other populations and other contexts? (Generalizability) 2. Can the conclusions be applied to routine practice? (Clinical practice significance)

3. What is its utility for the population at large? (Public health significance) 4. How significantly can it change our concepts, ideas and nature of practice? (Concept tilting) 5. What are its implications at the economic level? (Cost benefit, economic viability, cost effectiveness)

Referencing Styles in Research

There are various standard methods for citing the sources of work. These methods are called referencing styles or citation styles. Some common and widely used citation styles are: Harvard, Vancouver, APA (American Psychological Association), MLA (Modern Language Association), and Chicago/Turabian.

There are other styles that are less common but still required in some fields: ACS (American Chemical Society), AGLC (Australian Guide to Legal Citation), AMA (American Medical Association), CSE/CBE (Council of Science Editors/Council of Biology Editors), and IEEE (Institute of Electrical and Electronics Engineers).
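To illustrate how the styles above differ, here is a minimal sketch formatting the same journal article (all details invented for illustration) in roughly Vancouver-like and APA-like layouts; the real styles' rules for author names and punctuation are more detailed than shown here:

```python
# Hypothetical example: one source rendered in two simplified styles.

def vancouver(authors, title, journal, year, volume, issue, pages):
    # Rough Vancouver pattern: Author AB. Title. Journal. Year;Vol(Issue):Pages.
    names = ", ".join(authors)
    return f"{names}. {title}. {journal}. {year};{volume}({issue}):{pages}."

def apa(authors, title, journal, year, volume, issue, pages):
    # Rough APA pattern: Authors (Year). Title. Journal, Vol(Issue), Pages.
    names = ", ".join(authors)
    return f"{names} ({year}). {title}. {journal}, {volume}({issue}), {pages}."

# Invented source details, used for both styles.
src = dict(authors=["Smith J", "Doe A"], title="A study of X",
           journal="J Res", year=2020, volume=12, issue=3, pages="45-52")

print(vancouver(**src))  # year and volume run together after the journal
print(apa(**src))        # year follows the authors, in parentheses
```

The contrast in where the year appears (after the journal in Vancouver, after the authors in APA) is the kind of detail these styles standardize.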

Checklists for Epidemiological Studies: Reporting guidelines are potent tools that help improve the accuracy, transparency and completeness of health research and increase the value of published research.

STROBE checklist

CONSORT checklist

COREQ checklist

MOOSE checklist

PRISMA checklist

References
A Handbook on Journal Club and Critical Evaluation, by Dr. L. Nagesh.
von Elm E, Altman DG. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for Reporting Observational Studies. UIJ. 2009 Apr;2(2). doi:10.1371/journal.pmed.0040296
Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 6(7): e1000097. doi:10.1371/journal.pmed.1000097

THANK YOU