You’ve undoubtedly seen reports that fraud and fakery in academic-level research papers are increasing. There are several likely reasons for this: the ease with which such a paper can be created (AI, of course, is a big help here); the pressure to publish or perish; the need to show success to win more grants; and personal ego and brand. In addition to outright fraud, there are overly optimistic claims and extrapolations of test results, what I’ll call an academic form of marketing hype.
How much fraud is out there? It’s almost impossible to say with certainty, but the consensus and “gut feel” is that it is more than you think. The challenge is aggravated by the reality that many legitimate papers in deep physics, optics, or similar disciplines are inherently so sophisticated that they are close to magic and require some suspension of disbelief, akin to Figure 1.
A recent article in The Wall Street Journal, “Scientific Journals Can’t Keep Up With Flood of Fake Papers,” made the situation dramatically clear. That article was based on a detailed, formal paper, “The entities enabling scientific fraud at scale are large, resilient, and growing rapidly,” published in the prestigious Proceedings of the National Academy of Sciences (PNAS), which tried to put some numbers on the problem.
Unfortunately, the PNAS paper was limited to using what I will call second-order metrics, such as the number of retractions, as it is simply impossible to examine each published paper and try to verify or reproduce the results. My sense is that retracted papers are only the tip of the iceberg, to use a cliché.
The source of the trouble is largely but not exclusively linked to “paper mills,” which are businesses or individuals that charge fees to publish fake studies in legitimate journals under the names of desperate scientists whose careers depend on their publishing record. Some prestigious journals have actually had to shut down after being inundated by works from these paper mills.
I’ve long been skeptical of papers in behavioral, social, or participant-based medical research, whether they involve just a few subjects or hundreds. Who has the time or resources to verify that those participants actually existed or did what was reported? Or that the subjects didn’t give incomplete answers about their lifestyle, conditions, or whatever, whether through negligence or simply not knowing? The opportunities for inadvertent errors, as well as deliberate fakery and fraud, are obvious.
As a quick one-off example, consider a recent nutrition study from Lebanon that attracted considerable attention but has now been retracted due to what are politely called “inconsistencies” in the data (see “Viral Apple Cider Vinegar Weight Loss Breakthrough Turns Out To Be Flawed Science”). I suspect that this is just one of many studies about which we should be very skeptical.
What about “harder” sciences?
At the same time, I have always had high levels of confidence in papers based on “hard” sciences such as physics, chemistry, materials science, and similar. The attribute that they all have in common is that the test subjects are inanimate physical objects that can be independently reproduced if desired. (Obviously, topics such as astronomy and astrophysics fall outside this definition.)
But now, I am starting to wonder about these papers as well. Many of today’s experiments are so complicated and sophisticated that no one has the time or money to reproduce them. Fraud is especially tempting when a project has just one or a few authors, since malfeasance becomes harder to keep quiet as the number of participants increases; think of a typical CERN/LHC paper with its hundreds of authors (although, as financial scandals have shown, a fraud often proceeds with many innocent participants around it).
Even the IEEE — formally the Institute of Electrical and Electronics Engineers — has seen an increase in retracted papers, as seen in Figure 2.

What really worries me is not the number of retractions, as that figure only counts suspect papers that were caught. The real worry is how many papers that are false or faked to some extent go undetected. It’s like trying to assess crime trends from statistics that count only reported crimes.
The problem is aggravated by the marketing hype associated with some topics. In my experience, it’s unusual to find a battery-related paper that does not have words such as “revolutionary” or “breakthrough” in its headline or subheading, with additional claims of making a major contribution to solving the climate-change problem. For example, one “breakthrough” battery report, which raised my concerns, boasted of extending battery life by a factor of ten by adding some sort of salt to the electrolyte.
If even a small percentage of these findings were really as dramatic as their authors claim, battery technology would have advanced along a trajectory similar to that of semiconductors and Moore’s Law. Ironically, the Wall Street Journal article notes that “the rate of fake papers generated by these operators roughly doubled every 1.5 years between 2016 and 2020,” according to the PNAS study — that’s somewhat of a “Moore’s law” of its own!
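To put that doubling-time quote in perspective, here is a minimal sketch of the implied growth. The 1.5-year doubling time and the 2016–2020 window come from the article; the assumption that growth was a steady exponential over the whole window is mine, for illustration only.

```python
# Implied growth of fake-paper output, assuming constant exponential
# growth with a 1.5-year doubling time over 2016-2020 (4 years).
# The doubling time is the figure quoted from the PNAS study; the
# steady-exponential model is an illustrative assumption.

def growth_factor(years: float, doubling_time: float) -> float:
    """Total multiplication over `years` given a fixed doubling time."""
    return 2 ** (years / doubling_time)

factor = growth_factor(years=2020 - 2016, doubling_time=1.5)
print(f"~{factor:.1f}x growth over the period")  # roughly 6.3x
```

In other words, a steady 1.5-year doubling compounds to better than a sixfold increase in only four years, which is why the “Moore’s law of fraud” comparison is not much of an exaggeration.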
Adding to the worries, researchers can use AI to create “new” papers with relatively little effort. Why fake just the results when you can fake the whole story?
I don’t know where this all goes or how it ends. I do know that skepticism and diligence are now more important than ever, especially as there is so much riding on these papers in terms of getting future funding, building academic credentials, or both.
For now, I am reasonably confident that reproducible and demonstrable projects from credible sources will have the fewest fakes compared to “studies,” especially those using groups of anonymous subjects. But you can’t be too careful or even slightly gullible (as the joke goes: “Do you know that the word ‘gullible’ isn’t in the dictionary?”).
References
Scientific Journals Can’t Keep Up With Flood of Fake Papers, The Wall Street Journal
The entities enabling scientific fraud at scale are large, resilient, and growing rapidly, PNAS
Apple cider vinegar for weight management in Lebanese adolescents and young adults with overweight and obesity: a randomised, double-blind, placebo-controlled study, BMJ Nutrition, Prevention & Health
Viral Apple Cider Vinegar Weight Loss Breakthrough Turns Out To Be Flawed Science, SciTechDaily