This blog post was first written in French and translated to English with ChatGPT. The English version was reviewed before posting.
One issue I have encountered during the peer review process, and I am not alone, is the request by reviewers to add references to the manuscript.
It is common for a reviewer to request justification for certain statements or to ask for additional explanations that may seem clear to the author but not to a less specialized reader. In such cases, the reviewer may suggest adding sentences or paragraphs and, consequently, citing relevant articles (and thus their authors) to support a claim. This is a normal part of the review process and contributes to improving the quality of scientific papers that are published. It also fosters scientific debate and, in some cases, allows for multiple viewpoints to be expressed within the same article, thereby more accurately reflecting current knowledge and ongoing controversies.
However, the peer review process can be misused by a reviewer who requests the inclusion of numerous citations to their own work without scientific justification.
But why would they do this?
It is important to understand that researchers are assessed by their institutions, funding agencies, and potential employers. These evaluations often rely on various metrics, such as the total number of citations to a researcher's work or their H-index.
When writing a scientific article, we must base our arguments on existing knowledge to ensure that we are not making unsupported claims. We cite articles that have previously addressed a topic or demonstrated relevant findings. If I publish a paper that contributes meaningfully to my field, it is likely that other researchers will cite it later to support methodological or theoretical decisions. In theory, the more useful an article is to other researchers, the more it will be cited.
It follows, then, that the more citations a paper receives, the more “important” it appears to be, at least to those who rely on such metrics.
The H-index, for example, is a commonly used indicator that reflects both productivity and citation impact. A researcher has an H-index of 5 if they have 5 publications that have each been cited at least 5 times.
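To make the definition above concrete, here is a minimal sketch of how the H-index can be computed from a list of citation counts (the function name and sample data are my own, for illustration only):

```python
def h_index(citations):
    """Return the H-index: the largest h such that the researcher
    has h publications each cited at least h times."""
    # Sort citation counts from highest to lowest
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(sorted_counts, start=1):
        if c >= i:
            h = i  # the i-th most-cited paper has at least i citations
        else:
            break
    return h

# A researcher with 5 papers cited at least 5 times each has an H-index of 5
print(h_index([10, 8, 6, 5, 5, 2, 1]))  # → 5
```

Note how a single additional citation to the right paper can raise the index, which is exactly what makes targeted self-citation requests attractive.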
Both the total number of citations and the H-index are good illustrations of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” (https://pmc.ncbi.nlm.nih.gov/articles/PMC7901608/)
Thus, what was originally intended as an indicator of an article's relevance, or a researcher's quality, can become an end in itself, leading to metric-driven distortions.
From the above paragraphs, it becomes clear why increasing citation counts may be important for individual researchers. Some reviewers take advantage of their position in the peer review process by pressuring authors to cite their own work, even when such citations are not relevant. This misuse can result in the publication of articles that include irrelevant or tangential information, inserted solely because a reviewer insisted on their inclusion to boost citations of their own papers.
An example comes from an article that has since been retracted (i.e., no longer considered valid). On the second page of the article, one can read that reviewers asked the authors to add a dozen references: https://www.sciencedirect.com/science/article/pii/S0360319924043957
It is also worth noting that in a recent preprint (link to the preprint), Adrian Barnett highlights that the acceptance of a manuscript following peer review is strongly influenced by whether the authors cite the reviewers. If the reviewers are cited in the initial submission, the manuscript is more likely to be accepted for publication (1.16 times more likely). However, the effect goes even further: when a reviewer requests to be cited during the first round of review, they are significantly less likely to recommend acceptance of the manuscript (odds ratio = 0.15, meaning the manuscript is 6.7 times less likely to be accepted [1/0.15] than if no such request is made). During the second round of review, if the authors comply with the request to include self-citations, the likelihood of acceptance increases substantially (3.5 times more likely).
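The "6.7 times less likely" figure above is simply the reciprocal of the odds ratio; as a quick check of that arithmetic (values taken from the text):

```python
# An odds ratio below 1 means acceptance is less likely.
# The text converts OR = 0.15 into a "times less likely" figure: 1 / 0.15
odds_ratio = 0.15
times_less_likely = 1 / odds_ratio
print(round(times_less_likely, 1))  # → 6.7
```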
This situation highlights the need to curb such practices. Some publishers have already acknowledged the problem and begun implementing potential solutions. For example, journals in the ‘Frontiers in’ series have developed an algorithm that detects whether a particular author is being cited disproportionately during the peer review process. In response, they may propose that the authors remove some or all of those citations once the article is accepted. While this is a promising initiative, it only partially addresses the issue. Nothing prevents the same reviewer from repeating this behavior in future reviews. Moreover, this practice raises questions about the objectivity of the review itself, especially if the reviewer allowed a subpar article to be published solely to gain more citations.
Upon reflection, here are some thoughts on how this phenomenon might be mitigated. (Edited on August 8: after reading Adrian Barnett’s preprint and discussing it with him on Bluesky (link to the post), I have added a section at the end, before the conclusion, outlining the suggestions he presents in his article as well as those that emerged during our conversation.)
I) Adding a Dedicated Section for Post-Submission Citations
1. A “Citations Added During Peer Review” Section in Published Articles
One of the most transparent approaches would be to include a dedicated section at the end of each article listing all citations that were added at the request of reviewers after the initial submission. This would allow readers to assess whether the added references meaningfully contribute to the final article or were inserted under pressure. It would also help identify whether one author is cited disproportionately, either because of genuine relevance or because they were a reviewer.
The main limitation of this proposal is that journals would need to adapt their publication formats. Currently, this section is not standard and may take time to be adopted.
2. Including the Section in the Final Preprint or Postprint Version
If adding such a section directly to the published article is not feasible, authors could instead include it in the final version of a preprint or a postprint.
The drawback here is that readers accessing the article through the journal may never see this section. Nevertheless, it would still be accessible to those who are interested, and authors could promote it as part of their dissemination efforts. Since preprints offer greater flexibility, the added references could also be highlighted in red within the reference list. However, a dedicated section is preferable to color coding, as it centralizes all added references for easier identification and review.
This method does require that the archived preprint differs from the published version, which may raise copyright or licensing concerns depending on the publisher's policy.
3. Publishing the Added Citations on an External Website
If modifying the preprint is not an option, an alternative approach would be to list all reviewer-requested citations on an external platform.
a. Using PubPeer: A Centralized Solution
PubPeer (https://pubpeer.com) is an existing platform where researchers can comment on published articles. It serves as a space to flag potential flaws or issues that were not caught during peer review. Browser extensions are also available to display PubPeer comments directly when viewing academic papers online.
Authors could use PubPeer to publicly list all references added at the request of reviewers. If adopted widely, this practice would allow researchers to regularly consult a single site to verify how peer review influenced the final content of an article.
b. Personal Website
Alternatively, authors could publish these references on their personal websites, where they could also describe the peer review process in more detail. This would allow for a comprehensive account of unusual or questionable reviewer demands.
However, the primary limitation of this option is visibility. Few readers would visit individual websites, meaning that there would be little pressure on reviewers to moderate their behavior or improve their reviewing standards.
II) Making Review Reports and Author Responses Public
Another proposed solution is to publish the full set of reviewer comments along with the authors' replies. The main challenge here is practicality. It would significantly increase the length of articles, making it feasible only for online formats. For print journals, the added pages would be a constraint in terms of space and cost.
If made available exclusively online, in a supplementary materials section, these details may still go unnoticed. As a result, reviewers may not feel sufficiently exposed for making inappropriate citation requests. This approach, while potentially useful, may not be optimal on its own. However, it could be combined with one of the other strategies described above.
III) Proposed Solutions by Adrian Barnett
1. Clearer Guidelines
The first solution proposed in the preprint is that journals should clearly define what constitutes a self-citation request, specify the contexts in which such a request is justified, and clarify when it is not. However, reviewer guidelines are rarely read in detail, particularly because researchers often have limited time available to conduct peer reviews.
2. Declaration of Self-Citations
Reviewers should be required to declare to the editor when they request self-citations. This would allow the journal editor to assess whether the request is justified. This proposal aligns with another suggestion: that reviewers should clearly explain the rationale behind any request for a self-citation. An additional measure could involve automatically flagging a review if it contains more than three self-citation requests, with editorial verification required before the review is accepted.
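The automatic-flagging measure described above could be sketched as follows; the data format, the function name, and the threshold of three are all assumptions made here for illustration, not an existing journal system:

```python
def review_needs_editorial_check(requested_references, reviewer_name,
                                 max_self_citations=3):
    """Return True if the review should be held for editorial verification
    because it asks for more than `max_self_citations` references
    (co-)authored by the reviewer. Each requested reference is
    represented as a list of author names."""
    self_requests = sum(1 for authors in requested_references
                        if reviewer_name in authors)
    return self_requests > max_self_citations

# Hypothetical review requesting four references co-authored by the reviewer
requests = [["Reviewer X", "A"], ["Reviewer X"], ["B"],
            ["Reviewer X", "C"], ["Reviewer X"]]
print(review_needs_editorial_check(requests, "Reviewer X"))  # → True
```

In practice, reliably matching a reviewer to reference authors is harder than this sketch suggests (name variants, shared names), which is one reason an editor's judgment would still be needed.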
3. Initial Masking of the Article’s References
This proposal involves conducting an initial review of a submitted manuscript without access to the reference list. In this way, reviewers theoretically do not know whether they have been cited. This allows for a more authentic first reading and the preparation of an initial review report. In a second stage, reviewers would then have access to the full manuscript, including references, and could amend their initial report if any clear errors in the citations are identified.
The drawback of this approach is that it is significantly more demanding, as reviewers would need to read the manuscript twice: once to produce an initial report and then again to complete it. It is important to remember that peer review is a time-consuming process, carried out voluntarily and without compensation by researchers.
4. Allow Added References but Do Not Index Them
This proposal was not included in the article but emerged during our discussion on Bluesky (link here). The idea is to allow all citations requested by reviewers to remain in the manuscript, but not to index them. In this way, the references remain visible to readers who may be interested in those articles, without contributing to the reviewers’ citation counts or H-index. This could help reduce excessive or unjustified citation requests.
The first issue with this approach concerns its feasibility. It is unclear whether journals have the capacity to implement selective indexing of references, especially since indexing is likely handled by automated systems. The second issue is that a reviewer’s article may be cited appropriately, sometimes even before the peer review begins. Automatically blocking the indexing of reviewers' articles could therefore “penalize” them for participating in the review process. To avoid this unintended consequence, the editor would need to flag each reference that was self-cited with undue insistence or without sufficient justification. While this would add complexity to the review process, it nevertheless appears to be a potentially valuable solution.
Conclusion
The misuse of peer review to artificially inflate citation metrics is a real problem. While some journals have begun addressing the issue, further transparency measures are needed to protect the integrity of the scientific publication process. Whether through journal policy changes, community-driven platforms like PubPeer, or author-led initiatives, increasing visibility around reviewer-suggested citations is a key step toward curbing this unethical practice.
“Add this reference… and this one too”: How the Peer Review System Can Be Misused and Some Ways to Address It