Peer Review Week 2025

Rethinking Peer Review in the AI Era

Peer review is facing a profound transformation as we enter the age of artificial intelligence (AI), which offers new opportunities and poses fresh challenges for research integrity. In acknowledgement of this transformation, the theme for Peer Review Week this year is “Rethinking Peer Review in the AI Era.” To celebrate Peer Review Week, we are highlighting iThenticate and Proofig, two powerful tools available through Lane Library that can help maintain trust and authenticity in scholarly publishing.

The Peer Review Revolution

AI is rapidly changing how manuscripts and grant proposals are written and reviewed. Large language models (LLMs) are increasingly used by publishers and reviewers to flag errors, double-check reviewer feedback, and even, sadly, generate full reviews. In a recent survey, 19% of researchers reported some use of LLMs to speed up and ease the review process, though most journals currently restrict AI-assisted reviews or require their disclosure (Naddaf, 2025). While some publishers encourage AI as an assistant for improving grammar or double-checking statistics, others warn that over-reliance risks undermining the social contract between researchers and peer reviewers (Naddaf, 2025). AI-generated reviews may be less helpful than thoughtful feedback from one’s peers, but they sometimes outperform the rushed or poorly written reports that appear in less vigilant journals (Naddaf, 2025).

Crucially, the shrinking pool of available peer reviewers, combined with the enormous growth of scientific fraud, threatens the system’s credibility. Systematic misconduct, especially through organized “paper mills” and brokers, has led to a surge in fraudulent publications, which are now growing faster than legitimate output and overwhelming current fraud detection and punishment mechanisms (Richardson et al., 2025). New approaches, including AI-led assistance and robust pre-submission integrity screening, are needed to help researchers proactively protect trust in their work (Richardson et al., 2025).

In light of the changing landscape of peer review, we provide access to two tools that you can use to check your own work for possible plagiarism or misuse of AI.

iThenticate: Safeguarding Text Integrity

iThenticate is a web-based service available to Stanford researchers that compares research manuscripts, grant proposals, and other scholarly works against an extensive database of published content. The tool flags uncited sources and word patterns characteristic of AI-generated text, helping authors identify and correct issues before submitting their work. Its similarity report highlights overlapping text so that authors can judge whether matches reflect plagiarism or appropriate citation. iThenticate’s ability to flag AI-generated content gives researchers early warning as standards for disclosure and authenticity continue to shift. If you need help interpreting the results of your iThenticate report, Lane librarians can help with that too.

Proofig AI: Detecting Image Manipulation and Plagiarism

Proofig leverages AI to analyze images embedded in manuscripts and detect duplications or manipulations, a growing need as research fraud increasingly involves fabricated or duplicated images. The platform extracts and compares images throughout a paper for screening. Its algorithms specialize in catching both deliberate fraud and “innocent mistakes,” tackling problems that traditional peer review and editorial oversight often miss (Richardson et al., 2025). All analyses are confidential, supporting a culture of trust and transparency.

Why These Tools Matter in the AI Era

The explosive growth of systematic scientific fraud, especially in the biomedical sciences, has outpaced existing detection and punishment measures, undermining trust in the literature and threatening the effectiveness of peer-reviewed publishing (Richardson et al., 2025). Tools like iThenticate and Proofig let you put AI to work reviewing and validating your own research before it reaches reviewers.


References for further information

Naddaf, M. (2025, March 27). Will AI take over peer review? Nature, 639, 852–854.

Richardson, R. A. K., Hong, S. S., Byrne, J. A., Stoeger, T., & Amaral, L. A. N. (2025). The entities enabling scientific fraud at scale are large, resilient, and growing rapidly. Proceedings of the National Academy of Sciences, 122(32), e2420092122. https://doi.org/10.1073/pnas.2420092122