Author: Una Pale
Reviewing scientific publications, analyzing colleagues’ work and results, and giving constructive feedback that clarifies the strengths and potential weaknesses of a piece of work are basic elements of the scientific publication process. Today, over 2 million papers are reviewed annually, without any compensation and with minimal recognition of the time invested. Such a system is a testament to the community of scientists and their collective work towards creating new knowledge and discoveries.
This system was created in the pre-internet era
The traditional publishing system is illustrated in Figure 1. Nevertheless, we can ask ourselves what role journals play in today’s Internet age. Journals publish papers but actually contribute minimally to the system; at the same time, they control the review process and the publishing costs, and ultimately decide what scientific information reaches the public. Looking back at how this system of publishing scientific papers came into being, it quickly becomes obvious that it is quite outdated.
Namely, the publication system in which journals review papers and select only a few of them was developed in the pre-internet era, when the number of published papers was dictated by the cost and feasibility of physically printing papers, binding them into journals, and mailing them to subscribers several times a year. With the advent of the Internet, which was in fact primarily developed so that scientists could communicate more easily, the publishing system has changed only minimally: submission of papers is online, communication is online, and papers are published as PDFs instead of in physical form, but nothing fundamental has changed. The possibilities offered by the Internet, search technologies, and machine learning have remained unused.

Missed opportunities
This kind of system has many disadvantages, and it misses many opportunities in an age of incredible technologies.
Today, such a system in fact withholds information and scientific knowledge. First of all, reviews often contain very important information and contextualize the work: what are its advantages and disadvantages, what are the innovative ideas, which issues remain unresolved, what are the future steps, how does the work fit into the larger picture, and what are its potential impacts on the field and on society? By not making reviews public, all this information is withheld from other scientists.
Access to reviews can directly help readers, who can take the reviewers’ comments into account while reading the paper. Those who do not feel expert enough in a particular field can profit immensely from the insights and comments of experts in that field. Likewise, high-quality reviews serve as examples to young researchers of what a good review should look like.
In a way, journals have become the ‘currency’ of an academic career and have institutionalized the practice of evaluating scientists only according to where they publish, not what they publish, how much they contribute to the scientific community in other ways, how good they are as mentors and professors, and so on. Because of the enormous influence of journals, the evaluation of scientists is almost one-dimensional. At the same time, this has also transformed the journals themselves: instead of focusing primarily on communicating science (as in the pre-internet era), today they have become ‘gatekeepers’ whose evaluations, often influenced by bias, superficiality, status, and power, determine which research gets a chance to shine and which scientists will be successful.
What could the system look like?
A system that primarily serves researchers and science, rather than journals, should be based on several basic ideas:
- Authors should be able to publish their work and results freely, publicly and without restrictions, when they think they are ready.
- Systematic peer review should be moved from before publication to after it (with, of course, the possibility of subsequent refinement of the work, which would remain visible). This reduces the waiting time for reviews, reduces the pressure on reviewers, and allows the scientific community to interact with and validate the work in a more natural way, by involving those who will actually use its results.
- Reviews of papers and results should be publicly available, transparent, thoroughly explained, and segmented, not just a binary pass (the article is published) or fail. Equally, the authors’ answers and the corrected versions of the work should be public. Reviews should be ‘journal-agnostic’ and take more the form of advice and friendly consultation, so that reviewers are more willing to sign their reviews publicly rather than remain anonymous.
- The work of reviewing should be valued and considered for career advancement in science, in order to raise the quality of the reviews themselves.
A recent paper that deals with this topic and proposes such changes is Stern and O’Shea (2019; see Sources below).
Many might ask: if all kinds of works are allowed to be published, how can the best ones be found in that sea of papers? Realistically, even today the published literature contains every level of quality, but without more detailed information about the value of each work. Reading only papers published in top journals also makes little sense today, because many good ideas, experiments, and especially negative results are not published there. Navigating the literature could be greatly helped by ratings that are segmented and detailed.
Here, too, new technologies are just waiting for their opportunity. Linking, searching, and contextualizing works by topic, data, and authors, visualized in the form of networks and connected graphs, can greatly help in finding important works. Factors such as the number of reads/downloads of a paper, references, mentions on social media, and so on are becoming much more valuable and informative than the journal in which the paper was published. The field of ‘altmetrics’ deals precisely with developing alternative metrics, complementary to traditional citation-based metrics, that aim to measure public interest in a publication and its wider impact.
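To make the graph idea concrete, here is a minimal sketch in Python, using the networkx library, of how a citation network could be ranked with PageRank, so that a paper’s importance comes from who cites it rather than from the venue it appeared in. The paper IDs and citation links are invented purely for illustration.

```python
# Minimal sketch: papers as nodes, citations as directed edges.
# All paper IDs and citation links below are invented for illustration.
import networkx as nx

citations = [
    ("paper_A", "paper_B"),  # paper_A cites paper_B
    ("paper_A", "paper_C"),
    ("paper_B", "paper_C"),
    ("paper_D", "paper_C"),
    ("paper_D", "paper_A"),
]

graph = nx.DiGraph(citations)

# PageRank treats each citation as a vote whose weight depends on the
# importance of the citing paper, not on where it was published.
scores = nx.pagerank(graph, alpha=0.85)

for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")
```

On real data, the same structure could also carry altmetric signals (downloads, social-media mentions) as node attributes, which is exactly the kind of navigation the journal label alone cannot provide.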
Changes are coming, but slowly
Such ideas were proposed back in 1998 by Harold Varmus, then director of the National Institutes of Health (NIH) in the United States, but they were met with almost complete incomprehension.
Changes are slow because the decision-makers are those whose careers the current system has helped, and they are not ready to change it. But still, changes are happening.

The first step is the growing number of preprint servers, such as arXiv, bioRxiv, and medRxiv, and the growing number of papers published on them [Figure 2]. Papers published on such servers increasingly reach the same citation levels as those published in journals. Besides accelerating scientific research, which proved very useful during COVID, publishing on preprint servers also promotes the idea of open science. Open science advocates that scientific research, usually financed with public funds, should not sit behind a ‘paywall’; scientists who publish their preprints in this way send a message and slowly build pressure on journals to switch to open-access publication models.
It also subtly sends the message that works have value because of their content, not just because of the journal in which their authors had the opportunity or influence to publish. This is especially important for research from smaller countries and universities, with less influential professors, less money, and fewer connections around the world.
Advocating for recruitment and promotion criteria that include many more aspects is extremely important today, and such advocacy is slowly spreading. Considering a broad range of criteria, such as the number of collaborations, the number of open-source contributions (code repositories or databases), the quality and number of reviews, and the quality of teaching and mentoring, rather than only the h-index and the journals in which someone has published, is extremely important for the entire scientific community, and especially for future generations.
Further, journals and platforms have appeared that publish reviews publicly (e.g. eLife, Review Commons, F1000Research, Peer Community In, PubPeer, etc.), changing the way scientists submit, review, and publish scientific articles. An example of the process for Review Commons is shown in Figure 3.

The journal eLife has a similar process. eLife recently published an article about the changes it has introduced to its review process. Although it already published reviews publicly, it has now decided to completely remove the binary decision of accepting or rejecting submitted works. Reviews are published publicly, and authors may choose whether or not to respond to the comments and to revise the paper, which may then go through review again. In addition to the reviews, eLife publicly publishes its own assessment of the work according to several criteria related to publication quality, such as publicly available data and code, clearly presented methods, ethical approvals, and text quality. Each work, of course, receives a DOI through which its versions can be tracked. Finally, eLife takes no ownership of or control over the work, and the authors can publish it elsewhere if they wish.
F1000Research has extended its scope to publishing posters and slide presentations, which also have value but are difficult to cite properly without a DOI. The platform offers options for exactly this [Figure 4].

Changes are also happening around the growing need and expectation to publish code and data publicly, and repositories like Zenodo are thus gaining more and more popularity (more on that in another post).
Conferences have also begun to introduce changes. More and more conferences introduce ‘open reviews’ through the platform https://openreview.net/, which means that reviews of accepted papers become public. Many conferences have introduced special calls for public datasets and ‘benchmarks’, such as the famous NeurIPS conference in the field of machine learning. The idea that scientists should think about the potential positive and negative impacts of their work is also becoming more prevalent, with many conferences asking that a section of the article be dedicated to the potential applications and implications of the proposed research.
Many other new ideas for using technology to facilitate the evaluation of papers and research are continuously emerging, and we should give them a chance 😊.
This is a huge topic for discussion and for new ideas, and I am definitely looking forward to the changes that are happening.
---
Sources:
- Kim et al. (2019) — Scientific journals still matter in the era of academic search engines and preprint archives
- Ferrer-Sapena et al. (2018) — Citations to arXiv Preprints by Indexed Journals and Their Impact on Research Evaluation
- Eisen et al. (2022) — Peer review without gatekeeping
- Stern and O’Shea (2019) — A proposal for the future of scientific publishing in the life sciences