How the “Publish or Perish” reward system in academia may unwittingly lead to pseudoscience

Michel Gokan Khan
4 min read · Dec 14, 2020


The “Publish or Perish” reward system in academia demands that researchers rapidly publish their scientific results to retain or advance their careers. From one perspective, such a culture may encourage and motivate early-stage researchers to focus on scientific advancement and therefore produce bleeding-edge research in the field. At the same time, it may lead researchers to hastily publish inaccurate results (scrambling to submit whatever they can to journals and venues) and force them to spend much less time on the actual scientific process (i.e., discussion, planning, etc.).

“Junk-science” vs. “pseudoscience”

Such a highly criticized culture may by itself lead to the production of “junk science”, which refers to research or publications that are fraudulent or spurious: in other words, research with inadequate evidence presented as scientific fact.

Even though “junk science” is different from “pseudoscience”, in several cases it may lead to it.

Pseudoscience refers to situations where a scientific claim has been shown to be false, falsified several times, and yet its practitioners still tout it as valid, repeatedly ignoring counter-evidence and better-supported theories. However, there are several “grey areas” in science where the distinction is debatable.

The phenomenon of “micro-pseudosciences”

Examples arise when we meet a publication that misuses a hyped theory by combining it with an overly complex problem (though not in every case). For example, a publication on forecasting stock prices using deep learning might be suspected of containing pseudoscientific content: many critics argue that a financial system is too chaotic to be predicted through technical analysis, yet this is very hard for a non-specialist to detect. In my field (network optimization), there are cases where a set of suspicious results and conclusions in a single scientific paper leads to a series of nonsense publications with many cross-citations and self-citations. In my opinion, this makes it more than just “junk science”: because such claims are not easily testable or reproducible, it ends up generating a “micro-pseudoscience” and creates a “grey area” in the field.

In such cases, if authors don’t publish all their datasets and code, there is no real way of measuring the accuracy of their results. Even if they do, and for some reason their publication finds its way into good venues, other researchers within the “publish or perish” culture don’t have the time and resources to verify the results (because of the pressure that this culture puts on them).

This is what is happening right now in several fields and countries, and it generates pseudoscientific communities that cite each other’s publications over and over.

The trustworthiness of scientific publications

In 2018, Grimes, Bauch, and Ioannidis published an open-access research paper [1] in Royal Society Open Science in which they modeled the trustworthiness of scientific publications under publish-or-perish pressure.

“We presented a simple but instructive model of scientific publishing trustworthiness under the assumption that researchers are rewarded for their published output, taking account of field-specific differences and the proportion of resources allocated with funding cycle”.

Interestingly, in the discussion section they concluded:

“In our simulations, best outcome was obtained by simply paying no heed to whether a result was significant or not.”

“This is akin to the model used by many emerging open access peer-reviewed journals such as PLoS ONE, who have a policy of accepting any work provided it is scientifically rigorous. Our simulation suggests this model of publishing should improve science trustworthiness, and it is encouraging that many other publishers are taking this approach too, including Royal Society Open Science and Nature Scientific Reports.”
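To build some intuition for that conclusion, here is a toy Monte Carlo sketch of my own (the parameters and setup are my illustrative assumptions, not the actual model from [1]). Labs test hypotheses of which only a small fraction are truly real; a study on a real effect comes out “significant” with some statistical power, and a study on a null effect comes out “significant” anyway with the false-positive rate alpha. We then compare how often the published record is wrong under two policies: publish only significant results versus publish everything rigorous.

```python
import random

random.seed(1)

TRUE_RATE = 0.10  # assumed fraction of tested hypotheses that are actually true
POWER = 0.80      # chance a real effect comes out "significant"
ALPHA = 0.05      # chance a null effect comes out "significant" anyway

def wrong_share(n, publish_all):
    """Share of published papers whose headline conclusion is wrong."""
    wrong = published = 0
    for _ in range(n):
        is_true = random.random() < TRUE_RATE
        significant = random.random() < (POWER if is_true else ALPHA)
        if publish_all or significant:
            published += 1
            # A paper is wrong if it reports an effect that isn't there
            # (false positive) or, when null results are published too,
            # reports "no effect" for a real one (false negative).
            if is_true != significant:
                wrong += 1
    return wrong / published

selective = wrong_share(200_000, publish_all=False)
everything = wrong_share(200_000, publish_all=True)
print(f"publish only significant results: {selective:.2f} of papers wrong")
print(f"publish everything:               {everything:.2f} of papers wrong")
```

With these (hypothetical) numbers, the selective policy fills the literature with false positives: analytically, 0.05 × 0.9 / (0.05 × 0.9 + 0.8 × 0.1) = 0.36 of “significant-only” papers are wrong, versus roughly 0.045 + 0.02 = 0.065 when null results are published too. The point matches the quote above: filtering on significance concentrates the errors in what gets published.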

At the end of their discussion, they suggested that scientific institutions need to find new ways to assess researchers and to encourage “judicious diligence over dubious publishing”. This is a direct counterpoint to the “Publish or Perish” culture.


Some researchers believe this “Game of Publications” has industrialized scientific research and become a cancer to academia, while others believe this culture is an effective way to move science forward. Meanwhile, the culture is supported by a large and intricate business: conferences, journals, workshops, and even universities that want to show off scientific achievement through the mere “number of publications” in order to gain more resources, whether through funding, subscriptions, or submission fees. Without a doubt, the “number of citations” is the unquestionable currency of this possessed industry (needless to say, the argument runs only one way: having a high number of citations doesn’t necessarily mean being a player of this game).

I would love to finish this note with a nice cartoon by Brendan Boughen, but since I don’t have a proper license to put his work on Medium, I refer you to his page instead.

Sometimes, it’s better to “Perish” rather than “Publish”.

P.S.: Here is an interesting (and short) list of some well-known pseudoscientific topics:


[1] Grimes DR, Bauch CT, Ioannidis JPA. 2018. Modelling science trustworthiness under publish or perish pressure. R. Soc. Open Sci. 5: 171511.
