Heading Off 'Fakes' in the Academic Publishing Space
ShalinHaiJew
37 slides
Oct 20, 2025
About This Presentation
Academic research is supposed to be valuable based on novel research methods, techniques, problem-solving, technologies, data, modeling, and / or insights.
Academic publishers, editors, reviewers, and researchers / writers are all supposed to be acting as “gatekeepers” in dealing with the scientific body of knowledge. They are supposed to be able to identify the “fakes” (digital and analog) from the real, the actual experts from the amateurs. A number of factors in the present age militate against that oversight: the democratization of knowledge, the Dunning-Kruger cognitive bias, the push for open-access and open-source research, and other practices. This work explores how book publishers and editors evaluate works for “fakeness,” defined here as a failure to achieve real and relevance standings.
Slide Content
Heading Off “Fakes” in the Academic Publishing Space
Overview
Academic research is supposed to be valuable based on novel research methods,
techniques, problem-solving, technologies, data, modeling, and / or insights.
Academic publishers, editors, reviewers, and researchers / writers are all supposed to be
acting as “gatekeepers” in dealing with the scientific body of knowledge. They are supposed
to be able to identify the “fakes” (digital and analog) from the real, the actual experts from the
amateurs. A number of factors in the present age militate against that oversight: the
democratization of knowledge, the Dunning-Kruger cognitive bias, the push for open-access
and open-source research, and other practices. This work explores how book publishers and
editors evaluate works for “fakeness,” defined here as a failure to achieve real and relevance
standings.
Real World…
What would you think if…?
Manuscript 1
A manuscript is submitted by a team…and reads generally (abstractly). The topic is one that
the editor put out into the world, not an original study from the team. Its logic is fairly
straightforward. There is no primary research, though. The source citations are sparse. A
few DOIs in the References list do not exist.
Go or no go? Continue with peer review or not?
Manuscript 2
A manuscript is submitted…as a survey study. The research instrument is not shared as part
of the manuscript. The instrument does not have a name. It was not pilot-tested per se. It
was not tested for various types of construct validity…or reliability… The work reports an
even 50 respondents but does not offer much in the way of the demographics of the survey
participants. The results are described generally. No quotes are added for “color.”
Go or no go? Continue with peer review or not?
Manuscript 3
A manuscript is submitted…as a meta-analysis. The research team has described some
sources used to identify potentially relevant articles. There are categories of articles that are
accepted. Articles are accepted, but chapters are not. Reviews are accepted, but op-eds and
letters are not. Computational means have been applied to the findings to come out with
some observations of the uses of theories in the particular space. The theories are a hodge-
podge from various parts of the domain.
Go or no go? Continue with peer review or not?
Reading for Substance
Substance in the Research Space
Does the article or chapter have an original title and abstract?
Do the research questions align with the interests of the discipline?
Does the work have a clear research methodology aligned with the best practices in the
research literature?
Does the work use techniques and technologies appropriately?
Does the work report data that is trustworthy? Does the work report data based on the
conventions of the field / discipline?
Are there necessary qualifiers that better explain the work?
What is the quality of the research insights?
Substance in the Research Space (cont.)
Is the work logical? Inductively? Deductively?
Informational Sourcing
Pressure Testing Submitted Articles / Chapters / Manuscripts
Most draft manuscripts are pressure-tested by the publisher, editors, reviewers, and peer
contributors.
Most are run through computational assessments for plagiarism.
Most are run through various integrity checks, such as for originality, source citations (do the
sources exist or not?), and other dimensions.
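The computational originality assessments mentioned above are typically commercial services, but one common underlying idea can be illustrated with a minimal sketch: break each text into word n-grams (“shingles”) and measure their set overlap. This is a hypothetical illustration under simplifying assumptions (the function names and threshold are invented here), not any publisher’s actual pipeline.

```python
# Minimal sketch of n-gram (shingle) overlap, one basis for plagiarism
# screening. Illustrative only -- production tools (e.g., iThenticate)
# are far more sophisticated.

def shingles(text: str, n: int = 3) -> set:
    """Lowercase word n-grams of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_overlap(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two texts' n-gram sets (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

submitted = "the quick brown fox jumps over the lazy dog"
source = "the quick brown fox jumps over a sleeping cat"
# 4 shared trigrams of 10 distinct total -> score == 0.4
score = jaccard_overlap(submitted, source)
```

In practice, a screen like this would compare a submission against a large corpus and flag passages above some tuned threshold; a high overlap score is a red flag to investigate, not proof of plagiarism.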
Relevance of Research Questions
Is the hypothesis in the manuscript an assertion that is not understood as settled research?
Are the research questions relevant? Is there research depth to those questions? Are they
asked methodologically in ways that are relevant and that promise to offer some insights?
(Or are these superficial? Formulaic?)
Are the research setups for the research questions relevant?
Are the research questions plugged into the discipline or not? Do they have potential
relevance in the world?
Is there possible relevance of the research findings for transferable models to other contexts?
Other disciplines?
Relevance of Research Questions (cont.)
Is an established research instrument used? Is it used in whole or in part? Is it used correctly?
Are the results reported accurately?
If a research instrument is created for the work, what is it built off of? Is it pilot-tested? Is it
assessed for construct validity? Is it assessed for reliability?
Following Information Conventions
Is there sufficient and relevant information for readers to understand the work fully?
Does the work follow data conventions in terms of collection, handling, and
communication? The privacy protections? Controls against re-identification?
Are the visuals legible? Accessible? Are the font types (families) and sizes consistent? Is the
language in visuals (figures, data visualizations, diagrams, maps, data tables, and others)
grammatically consistent?
Are there proper labels in terms of captions?
Are the bibliographic citation methods correct in-text? In the References list?
Informational Sourcing
Where does the research information come from?
What primary research? Based on what methods? Are there controls against biased
results? Incomplete results? Skewed results? Results affected by unintended factors?
What secondary research? Tertiary? Are these sources trustworthy? Respectable?
Are the sources accurately represented? Are the truth claims supported by the cited sources?
Are there appropriate sources for all relevant truth claims? Or are there truth claims that are
made without any “ground truth” support? Is there evidence of plagiarism? Are there gaps in
the logic? Do the sources bear out (or are there faked or “hallucinated” sources)? Are all the
sources trackable?
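One concrete trackability check is whether cited DOIs are even plausible. A full check would resolve each DOI against doi.org or the Crossref REST API to confirm it exists; the sketch below does only the offline, syntactic part, using the regular expression Crossref recommends for modern DOIs. The function name and the sample list are invented for illustration.

```python
import re

# Crossref's recommended pattern for modern DOIs; a syntactic check only.
# A real verification step would also resolve each DOI (e.g., via
# https://doi.org/ or the Crossref REST API) to confirm it exists.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:a-z0-9]+$", re.IGNORECASE)

def plausible_doi(doi: str) -> bool:
    """True if the string is at least syntactically a modern DOI."""
    return bool(DOI_PATTERN.match(doi.strip()))

refs = ["10.1000/xyz123", "not-a-doi", "10.5555/12345678"]
flagged = [d for d in refs if not plausible_doi(d)]  # ["not-a-doi"]
```

A syntactically valid DOI can still be hallucinated, so passing this filter is necessary but not sufficient; anything flagged here, though, warrants an immediate query to the authors.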
Professional Research Standards
Does the work show evidence of training in fair research practices? Non-biasing in the
seating of participants for a study?
Are there signs of professional oversight by regulatory agencies? Institutional review boards?
Human subjects research? Animal research?
Are the standards spelled out?
Is there informed consent for the human participants to the research?
Are important details available? Contact information? Are there ways to check for the
research propriety?
Contributor Expertise
A Lifetime to Build a Relevant Skillset
It takes a lifetime to begin to build a subject matter expert (SME) skillset in terms of
knowledge, skills, and abilities (KSAs).
From that body of knowledge, researchers acquire skills to conduct research.
They are better able to assess others’ research and the credibility of that work.
They are better able to understand the academic research space.
They have a sense of the published academic literature, the main paradigms, the seminal
research, the research methodologies, the main movers and shakers in the field, the extant
research questions, the challenges in related industries, and so on.
A Lifetime to Build a Relevant Skillset (cont.)
They are able to originate novel ideas and inventions.
They have the discipline to do the hard work.
They know the evolving rules.
They have the social networks to engage in collaborative work.
They can build a track record of achievements.
They have a basis for sensemaking, perspective, and fact-based groundedness.
They can wield the power of logic.
They may have something of an entrepreneurial ability in the world.
Eliciting the Underlying Realities
Systemic Methods for Assessing a Work
It helps to apply systemic heuristics to assess a work for its originality.
It is reckless to just form impressions without any basis in fact.
Manual and Technical Tools for Assessing “Integrity”
There are manual and technical tools for assessing manuscript integrity. These are used in
combination.
There can be assessments of the manuscript, the team (and their track record), and other
assessments.
Evaluators of manuscripts have every right to ask for and receive information to validate or
invalidate a manuscript.
They can ask for and receive information until the author / authoring team reaches a bridge
that cannot be crossed.
Generative AI Tells
There are generative AI tells. Some genAI tools have particular structures in their writing.
Some have “watermarks” in the text.
There are computational methods that can be used to detect genAI contents (whether
textual, visual, animation, audio, video, or multimedia).
GenAI often hallucinates sources. Some hallucinate “facts.”
Assessing for Fakes
An Adversarial Process?
The idea of “assessing for fakes” may make it seem like an adversarial process.
In some senses, it is. An evaluator has an interest in conducting due diligence on the work,
the contributors’ professional biographies, and other relevant aspects. This is done in the spirit
of professionalism and mutual development.
This process may be litigious, too, especially if careers and money and reputations are on
the line.
In the publishing space, it is sufficient to highlight the limits and decline the work.
In the following section, red flags are issues that trigger warnings. “Fatal errors” are those that
result in the declination of the work. Red flags may turn into fatal errors…with some further
research.
Red Flags vs. Fatal Errors
Red Flags (warning signs)
- Inappropriate use of terminology, phrases, models, theories, historical details, and others
- Queries to the editor about off-topic questions (that may read as manipulations)
- Showboating
- Machine-generated translations
- Some logical snafus
Fatal Errors
- Irrelevant research
- Inappropriate research methodologies, inappropriate applications of research methodologies
- Incorrect research
- Insufficient review of the literature, insufficient contextualization
Red Flags vs. Fatal Errors (cont.)
Red Flags (warning signs)
- Hallucinated data (such as from generative AI tools)
- Student standing (vs. those with professional positions and earned doctorates), a lack of formal and accredited education
- A lack of affiliation with a professional institution of higher education
Fatal Errors
- Generative AI-created contents (fraud, misrepresented authorship)
- Plagiarism, derivations from others’ work without credit, self-plagiarizing (by copying one’s own work)
- Misrepresentations (misunderstandings of core research)
- Misuse of research instruments, tools, and technologies
Red Flags vs. Fatal Errors (cont.)
Red Flags (warning signs)
- A readiness to engage in small deceptions
- A lack of professional self-respect, professional pride
Fatal Errors
- Illogics
- Insufficient completeness in describing context, missing information
- Misrepresentations of professional affiliations
- Double publishing the same work, versioning work
- The release of personally identifiable information (PII), sensitive data
Red Flags vs. Fatal Errors (cont.)
Red Flags (warning signs)
- Poor paraphrasing
Fatal Errors
- The contravening of copyright, intellectual property
- Defamation
A Lack of Researcher / Writer Ego
Some SMEs are egotistical about themselves and their work.
Ironically, this can be a net positive. It can mean that the individual is willing to invest a lot of
time and effort to develop themselves and their skills.
It means that they will not copy someone else’s work but try to forge their own path. It often
means that they will not just take the ideas offered in a call for proposals and swipe their title
and focus from that alone.
It means that they are willing to work hard even when there is no money in it for them.
About Newbies and Amateurs
Newbies and Amateurs
All are welcome!
However, there are core KSAs necessary to engage in the academic research space.
Some expertise may be acquired through apprenticeship to an expert in the field.
The core has to be professional. The core has to be ethical.
Even if amateurs have gaps in their expertise…and the limitations of naivete, they do need to
engage in trustworthy work.
Deceptiveness is a deal breaker. Fraud is a deal breaker.
Dearth of Training at the Doctoral Level
Some may think that doctoral studies in the world cover the research and academic
publishing piece in depth.
In most cases, this is not covered much (maybe one course in research statistics, if that)…
Learners have to make it a priority to learn how to conduct research, to acquire the KSAs, to
acquire access to the technologies, to acquire regulatory permissions, and so on.
Undergraduate programs increasingly include research as part of students’ learning, so the
pipeline for research starts younger and younger…but there are many complexities at play.
Ecosystem for Quality Work
Authorizing Environment
Ambition, self-importance, and ego are par for the course. Many in academia stand to advance if
they can publish research.
Some in academia think that editors are easy to dupe and that the structure of the publishing
market focuses on publishing everything.
Drifting into lies is also not uncommon. People know it is unprofessional and unethical to be
deceptive, but sometimes, convenience for them is more compelling.
Publishers have to be aware of unethical actions in the publishing space. They have to enforce
professional ethics for it to work. If they leave such issues unaddressed, the deceptions will
continue.
Administrators (and leaders) should stand up for ethical and professional work as well.
Professional colleagues should support ethical work as well.