Impact Factor Journals as per JCR, SNIP, SJR, IPP, CiteScore
About This Presentation
Journal-level metrics
Metrics have become a fact of life in many - if not all - fields of research and scholarship. In an age of information abundance (often termed ‘information overload’), having a shorthand for the signals for where in the ocean of published literature to focus our limited attention has become increasingly important.
Research metrics are sometimes controversial, especially when in popular usage they become proxies for multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis based on its underlying data source, method of calculation, or context of use. For this reason, Elsevier promotes the responsible use of research metrics encapsulated in two “golden rules”. Those are: always use both qualitative and quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more than one research metric as the quantitative input. This second rule acknowledges that performance cannot be expressed by any single metric, as well as the fact that all metrics have specific strengths and weaknesses. Therefore, using multiple complementary metrics can help to provide a more complete picture and reflect different aspects of research productivity and impact in the final assessment. (Elsevier)
Size: 27.01 MB
Language: en
Added: Dec 06, 2020
Slides: 43 pages
Slide Content
Impact Factor Journals as per Journal Citation Reports, SNIP, SJR, IPP, CiteScore. Dr. S. Ghosh, Associate Professor, Department of Library & Information Science, University of North Bengal, West Bengal 734013
Publish or Perish? “Publish or perish” is an aphorism describing the pressure to publish academic work in order to succeed in an academic career. … The pressure to publish has been cited as a cause of poor work being submitted to academic journals. 12/6/2020 @sghoshnbu 2
The Harsh Consequences of “Publish or Perish” The culture of “publish or perish” is clearly pervasive and appears to be here to stay. Calls for instant distribution and transparency of both authorship and peer review may help to address problems with research quality, but as long as researchers are threatened by the publication venue of their research, the system will remain fundamentally broken.
Perspectives of impact. Academic impact (traditional metrics): Journal Impact Factor, citation counts. Societal impact (alternative metrics, “altmetrics”): download counts, page views, mentions in news reports, mentions in social media, mentions in blogs, reference manager readers, etc. Altmetrics are more article-centric, as opposed to journal-centric.
What are metrics?
Why metrics?
What are the different metrics? Scholars have combined standard research metrics, like scholarly output and citation counts, into formulas to measure and assess author and journal impact in new ways. Some of these metrics include: Journal Impact Factor, h-index, g-index, Eigenfactor score, altmetrics.
Ways of Measuring Impact
Journal-level metrics
Journal Citation Reports™ (JCR) Journal Citation Reports™ (JCR) provides you with the transparent, publisher-neutral data and statistics you need to make confident decisions in today’s evolving scholarly publishing landscape, whether you’re submitting your first manuscript or managing a portfolio of thousands of publications. Quickly understand a journal’s role within and influence upon the global research community by exploring a rich array of citation metrics, including the Journal Impact Factor™ (JIF), alongside descriptive data about a journal’s open access content and contributing authors. Web of Science does not depend on the Journal Impact Factor alone in assessing the usefulness of a journal, and neither should anyone else. The Journal Impact Factor should not be used without careful attention to the many phenomena that influence citation rates – for example, the average number of references cited in the average article. The Journal Impact Factor should be used with informed peer review. In the case of academic evaluation for tenure, it is sometimes inappropriate to use the impact of the source journal to estimate the expected frequency of a recently published article. Again, the Journal Impact Factor should be used with informed peer review. Citation frequencies for individual articles are quite varied. Journal Citation Reports now includes more article-level data to provide a clearer understanding of the reciprocal relationship between the article and the journal. This level of transparency allows you to not only see the data, but also see through the data to a more nuanced consideration of journal value.
Journal Impact Factor (JIF) Journal Impact Factor (JIF) is calculated by Clarivate Analytics as the average of the sum of the citations received each year to a journal’s previous two years of publications (linked to the journal, but not necessarily to specific publications) divided by the sum of “citable” publications in the previous two years. Owing to the way in which citations are counted in the numerator and the subjectivity of what constitutes a “citable item” in the denominator, JIF has received sustained criticism for many years for its lack of transparency and reproducibility and the potential for manipulation of the metric. Available for only 11,785 journals (Science Citation Index Expanded plus Social Sciences Citation Index, as of December 2019), JIF is based on an extract of Clarivate’s Web of Science database and includes citations that could not be linked to specific articles in the journal, so-called unlinked citations.
Metrics in a nutshell (Impact Factor). The Impact Factor, from Journal Citation Reports, uses a two-year period: divide the number of times articles were cited by the number of articles that were published. Example: 200 = the number of times articles published in 2008 and 2009 were cited by indexed journals during 2010; 73 = the total number of “citable items” published in 2008 and 2009; 200/73 ≈ 2.74 = the 2010 impact factor. The impact factor reflects only how often a specific journal’s articles are cited on average: a journal with a high impact factor has articles that are cited often.
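The two-year calculation on this slide can be sketched as a tiny Python helper. This is an illustrative toy only: official JIF values come from Clarivate's Web of Science data, and the function name here is my own.

```python
def impact_factor(citations, citable_items):
    """Journal Impact Factor for year Y: citations received in Y to items
    the journal published in Y-1 and Y-2, divided by the number of
    "citable" items it published in Y-1 and Y-2."""
    return citations / citable_items

# Slide example: 200 citations in 2010 to 2008-2009 articles,
# 73 citable items published in 2008-2009.
jif_2010 = impact_factor(200, 73)  # ≈ 2.74
```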
Traditional metrics for journals: Impact Factor and citation counts, created to measure journals and journal articles (scholarly impact). Initially created for librarians, then largely adopted by STEM fields. Image from Journal Citation Reports (library database).
Source Normalized Impact per Paper (SNIP) Source Normalized Impact per Paper (SNIP) is a sophisticated metric that intrinsically accounts for field-specific differences in citation practices. It does so by comparing each journal’s citations per publication with the citation potential of its field, defined as the set of publications citing that journal. SNIP therefore measures contextual citation impact and enables direct comparison of journals in different subject fields, since the value of a single citation is greater for journals in fields where citations are less likely, and vice versa. SNIP is calculated annually from Scopus data and is freely available alongside CiteScore and SJR at www.scopus.com/sources. Unlike the well-known journal impact factor, SNIP corrects for differences in citation practices between scientific fields, thereby allowing for more accurate between-field comparisons of citation impact. Centre for Science and Technology Studies (CWTS) Journal Indicators also provides stability intervals that indicate the reliability of the SNIP value of a journal. SNIP was created by Professor Henk F. Moed at the Centre for Science and Technology Studies (CWTS), University of Leiden.
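SNIP's exact method is defined by CWTS, but the core idea of field normalization can be illustrated with a toy calculation. The helper and the numbers below are hypothetical, not the CWTS algorithm: raw citations per paper are divided by the field's citation potential, so a citation in a low-citation field counts for more.

```python
def snip_like(citations_per_paper, field_citation_potential):
    """Toy field normalization in the spirit of SNIP (NOT CWTS's actual
    algorithm): a journal's raw citations per paper divided by the
    citation potential of its field."""
    return citations_per_paper / field_citation_potential

# A journal in a high-citation field vs. one in a low-citation field
# (illustrative numbers only): the second journal cites fewer raw
# citations per paper, yet its normalized impact is higher.
high_citation_field = snip_like(10.0, 5.0)  # 2.0
low_citation_field = snip_like(4.0, 1.5)    # ≈ 2.67
```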
CiteScore metrics CiteScore metrics are a suite of indicators calculated from data in Scopus, the world’s leading abstract and citation database of peer-reviewed literature. CiteScore itself is an average: the sum of the citations received in a given year to publications published in the previous three years, divided by the number of publications in those same three years. CiteScore is calculated for the current year on a monthly basis until it is fixed as a permanent value in May the following year, permitting a real-time view of how the metric builds as citations accrue. Once fixed, the other CiteScore metrics are also computed and contextualise this score with rankings and other indicators to allow comparison. CiteScore metrics are: Current: a monthly CiteScore Tracker keeps you up to date about the latest progression towards the next annual value, which makes the next CiteScore more predictable. Comprehensive: based on Scopus, the leading scientific citation database. Clear: values are transparent and reproducible down to individual articles in Scopus. The scores and underlying data for more than 25,000 active journals, book series and conference proceedings are freely available at www.scopus.com/sources or via a widget (available on each source page on Scopus.com) or the Scopus API.
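The three-year window described above can be sketched as a small Python function. The data layout and function name are my own assumptions for illustration; real CiteScore values come from Scopus.

```python
def citescore(citations_received, publications, year, window=3):
    """CiteScore as described on the slide: citations received in `year`
    to items published in the preceding `window` years, divided by the
    number of items published in those same years.

    citations_received[y]: citations received in `year` to items from year y.
    publications[y]: number of items published in year y.
    """
    prev_years = range(year - window, year)
    cites = sum(citations_received.get(y, 0) for y in prev_years)
    items = sum(publications.get(y, 0) for y in prev_years)
    return cites / items if items else 0.0

# Hypothetical journal: 120 citations in 2020 to 60 items from 2017-2019.
cites = {2017: 30, 2018: 40, 2019: 50}
pubs = {2017: 20, 2018: 25, 2019: 15}
cs_2020 = citescore(cites, pubs, 2020)  # 120 / 60 = 2.0
```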
SCImago Journal Rank (SJR) SCImago Journal Rank (SJR) is based on the concept of a transfer of prestige between journals via their citation links. Drawing on a similar approach to the Google PageRank algorithm – which assumes that important websites are linked to from other important websites – SJR weights each incoming citation to a journal by the SJR of the citing journal, with a citation from a high-SJR source counting for more than a citation from a low-SJR source. Like CiteScore, SJR accounts for journal size by averaging across recent publications and is calculated annually. SJR is also powered by Scopus data and is freely available alongside CiteScore at www.scopus.com/sources.
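The prestige-transfer idea can be illustrated with a simplified PageRank-style iteration. This is a toy sketch, not SCImago's actual algorithm, and the three-journal network below is invented: each journal's prestige flows to the journals it cites, in proportion to citation counts, so a citation from a high-prestige journal is worth more.

```python
def prestige_scores(citations, journals, damping=0.85, iterations=100):
    """Iterative prestige transfer in the spirit of SJR / PageRank
    (simplified sketch). citations[(src, dst)] is the number of
    citations from journal src to journal dst."""
    n = len(journals)
    score = {j: 1.0 / n for j in journals}
    for _ in range(iterations):
        new = {j: (1.0 - damping) / n for j in journals}
        for src in journals:
            outgoing = sum(citations.get((src, dst), 0) for dst in journals)
            for dst in journals:
                if outgoing:
                    share = citations.get((src, dst), 0) / outgoing
                else:
                    share = 1.0 / n  # dangling journal: spread prestige evenly
                new[dst] += damping * score[src] * share
        score = new
    return score

# Toy network: A and B cite C heavily, so C accumulates the most prestige.
net = {("A", "C"): 10, ("B", "C"): 10, ("C", "A"): 1, ("C", "B"): 1}
scores = prestige_scores(net, ["A", "B", "C"])
```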
Impact per Publication (IPP) The impact per publication is calculated as the number of citations given in the present year to publications from the past three years, divided by the total number of publications in those past three years. IPP is fairly similar to the well-known journal impact factor and, like the journal impact factor, does not correct for differences in citation practices between scientific fields. IPP was previously known as RIP (raw impact per publication).
Immediacy Index The Immediacy Index measures how frequently the average article from a journal is cited within the same year as publication. This number is useful for evaluating journals that publish cutting-edge research. Immediacy Index numerator – cites to recent items: the numerator looks at citations in a particular JCR year to a journal's content from the same year. For example, the 2015 Immediacy Index for a journal would take into account 2015 citations to the journal's 2015 papers. The numerator includes citations to anything published by the journal in that year. Immediacy Index denominator – number of recent items: the denominator takes into account the number of citable items published in the journal in 2015. Citable items include articles and reviews.
HistCite (by Eugene Garfield). See in Action
H-index and variant H5-index. The h-index is reported by Web of Science, Google Scholar and Scopus. To calculate it manually: 1) Create a list of all your publications in descending order of the number of times each was cited (the first article should have the most citations) and number the entries. 2) Go down the list to find the last point at which the number of citations is equal to or larger than the publication's rank; that rank is the h-index. Example: paper 1 has 13 citations, paper 2 has 7, and paper 3 has 4, so the h-index is 3. (Many databases report this number automatically, and calculators are available online; manual calculation is rarely needed.) The h-index focuses on the impact of a single scholar rather than an entire journal: the higher the h-index, the greater a researcher's combined productivity and citation impact. Jorge E. Hirsch, an Argentine-American professor of physics at the University of California, San Diego, invented the h-index in 2005.
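The manual procedure on this slide translates directly into code (a sketch for illustration; the databases named above compute this for you):

```python
def h_index(citation_counts):
    """h-index: the largest h such that h of the papers have at least
    h citations each. Mirrors the manual procedure on the slide:
    sort papers by citations (descending), then find the last rank at
    which citations >= rank."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Slide example: three papers with 13, 7 and 4 citations.
h = h_index([13, 7, 4])  # 3
```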
g-index. Computed by Harzing's Publish or Perish. Given a list of articles ranked in decreasing order of the number of citations they received, the g-index is the largest number g such that the top g articles together received at least g² citations. The g-index can be thought of as an extension of the h-index, with more weight given to highly cited articles: it was created because scholars noticed that the h-index ignores citations to an individual article beyond what is needed to reach a given h-index. The g-index usually complements the h-index rather than replacing it. Leo Egghe of Hasselt University, Belgium, proposed the g-index in 2006.
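Under this definition the g-index can be computed as follows (a sketch of the basic form, capped at the number of papers; some formulations allow g to exceed the paper count by padding with zero-cited papers):

```python
def g_index(citation_counts):
    """g-index: the largest g such that the top g papers together
    received at least g**2 citations (basic form, capped at the
    number of papers)."""
    ranked = sorted(citation_counts, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites  # cumulative citations of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

# One very highly cited paper lifts the g-index above the h-index:
# for [30, 4, 2, 1, 1] the h-index is 2, but the g-index is 5,
# since all 5 papers together have 38 >= 25 citations.
g = g_index([30, 4, 2, 1, 1])  # 5
```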
Publish or Perish by Anne-Wil Harzing
Eigenfactor score. The Eigenfactor score is calculated by eigenfactor.org; the process is similar to calculating the impact factor, and the data are likewise drawn from the JCR. The major difference is that the Eigenfactor score excludes references from one article in a journal to another article in the same journal, which eliminates the problem of journal self-citation. The Eigenfactor score also uses a five-year calculation window. More information can be found through Journal Citation Reports. A high Eigenfactor score signals that a journal, with self-citations excluded, carries substantial influence in its discipline's citation network. It is useful to look at a scholar's h-index as well as the Eigenfactor scores of the journals they publish in to get a broad sense of their impact as a researcher. Developed by Jevin West, Carl T. Bergstrom, Ted C. Bergstrom and Ben Althouse.
i10-index The i10-index is used by Google Scholar and indicates the number of publications that have been cited at least 10 times.
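This is the simplest of the author-level metrics to compute (an illustrative sketch; Google Scholar reports it on each profile):

```python
def i10_index(citation_counts, threshold=10):
    """i10-index (Google Scholar): the number of publications that
    have been cited at least 10 times."""
    return sum(1 for c in citation_counts if c >= threshold)

# Five papers, three of which have 10+ citations.
i10 = i10_index([15, 12, 10, 9, 3])  # 3
```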
Altmetrics. The tweet by Jason Priem that coined the term “altmetrics”.
“The Umbrella Classification of Non-Citation based Metrics”
An article-centric approach Measure online attention surrounding journal articles (and datasets). Collect and deliver article-level metrics to journal publishers.
Categories of altmetrics
Where do altmetrics come from?
How do we collect data for altmetrics? Directly from the individual tools (e.g. SlideShare views); from publishers (views and download data, e.g. PLOS article metrics); from (some) library databases (e.g. Web of Science usage); from scholarly networks (e.g. ResearchGate metrics); and through aggregating tools (e.g. Altmetric).
Strategies to Maximize Your Impact
Take Steps to Broaden Your Impact
Identity Exploration Google Scholar Profile A Google Scholar Profile tracks your publications listed in Google Scholar, provides the number of citations and links to the items citing your work, and calculates your h-index. (Note: You need to have a Gmail account to track your profile. Once you are logged in to your Gmail account, click on "My citations" to view and edit your profile.) Impactstory This web-based service collects metrics and displays them with a link that can be added to CVs. Join free with an ORCID account. Share Your Research Online The process of writing for publication often creates several outputs in addition to the final journal article, book, or book chapter. Consider posting slides from presentations, brief videos of presentations, data sets, or other materials online with a link to the official publication. Postprints/White Papers/Drafts of work: DigitalCommons@EMU or subject/disciplinary repositories. Presentation Slides: SlideShare or Speaker Deck. Videos: Vimeo or YouTube. Data Sets: Dryad or figshare (figshare can handle other outputs as well). Code & Software: GitHub.
Single Platform
References
Ayris, P., López de San Román, A., Maes, K., & Labastida, I. (2018). Open science and its role in universities: A roadmap for cultural change. League of European Research Universities.
Bose, R. (2004). Knowledge management metrics. Industrial Management and Data Systems. https://doi.org/10.1108/02635570410543771
European Commission. (2017). Next generation metrics: Responsible metrics and evaluation for open science (European Commission report). Brussels.
Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature. https://doi.org/10.1038/520429a
Lăzăroiu, G. (2017). What do altmetrics measure? Maybe the broader impact of research on society. Educational Philosophy and Theory. https://doi.org/10.1080/00131857.2016.1237735
LibGuides: Introduction to Impact Factor and Other Research Metrics: Home. (n.d.). Retrieved from https://guides.library.illinois.edu/impact
SAGE Publishing. (2019). The latest thinking about metrics for research impact in the social sciences (White paper). Thousand Oaks, CA: Author. https://doi.org/10.4135/wp190522
Understanding research metrics. (n.d.). Retrieved May 17, 2020, from https://editorresources.taylorandfrancis.com/understanding-research-metrics/
Questions & Answers & More