
Nature News · Feb 23, 2026 · Collected from RSS
A newly described artificial-intelligence coach provides feedback to make peer reviews more specific and useful. Credit: Mohd Izzuan Roslan/Alamy

An artificial-intelligence coach can help peer reviewers to provide more constructive and less toxic feedback, according to a new study1. Whether that improves the quality of research papers, however, remains to be seen.

Scientists doing peer reviews are increasingly turning to AI for a variety of tasks, including searching for relevant literature and sharpening prose.

James Zou, a computer scientist at Stanford University in California, and his colleagues set out to assess whether large language models (LLMs) could help to address a common complaint about peer review: feedback often lacks thoroughness or strikes the wrong tone. At the 2023 Association for Computational Linguistics annual meeting in Toronto, Canada, for example, authors of conference papers flagged 12.9% of reviews as being poor quality.

That's mainly because the reviews were vague, says Zou, with broad, simple comments such as "not novel". Reviews can also, rarely, be unprofessional or include personal attacks, with comments such as "these authors don't know what they're talking about", says Zou. Others make factual errors, for example criticizing work for omitting an analysis that is, in fact, there.

Tone checker

Zou and his colleagues gathered about a dozen reviews that were vague, unprofessional or incorrect, along with what they considered to be appropriate feedback about those reviews. They fed that curated data to an LLM to refine its responses and used this to develop a Review Feedback Agent, which uses a total of five LLMs that collaborate and check each other's work. The team put their AI tool to work in the lead-up to the 2025 International Conference on Learning Representations in Singapore. This major AI conference has attracted more than 10,000 submissions in each of the past few years.
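The study does not spell out the agent's internals beyond the fact that five LLMs collaborate and check each other's work. As a rough illustration only, a drafter-plus-checkers pipeline of that shape might look like the sketch below; the function names, prompts and control flow here are invented stand-ins, not the authors' actual system, and the LLM calls are replaced by stub logic.

```python
# Hypothetical sketch of a "collaborate and check" multi-LLM pipeline,
# loosely inspired by the Review Feedback Agent described in the article.
# llm_draft / llm_check are stand-ins for real LLM API calls; the actual
# models, prompts and orchestration used in the study are not shown here.

from dataclasses import dataclass

@dataclass
class Feedback:
    suggestion: str
    approved: bool

def llm_draft(review_text: str) -> str:
    # Stand-in for an LLM that rewrites vague criticism into actionable advice.
    if "not novel" in review_text.lower():
        return ("To make this feedback more actionable, cite the prior work "
                "you believe overlaps with this paper and explain the overlap.")
    return ("To make this feedback more actionable, point to the specific "
            "sections or results your concern applies to.")

def llm_check(suggestion: str) -> bool:
    # Stand-in for a checker LLM that vetoes unhelpful or off-tone output.
    return suggestion.startswith("To make this feedback more actionable")

def feedback_agent(review_text: str, n_checkers: int = 4) -> Feedback:
    """One drafter plus n_checkers verifiers (five models in total)."""
    suggestion = llm_draft(review_text)
    # A draft reaches the reviewer only if every checker signs off.
    approved = all(llm_check(suggestion) for _ in range(n_checkers))
    return Feedback(suggestion, approved)

result = feedback_agent("The method is not novel.")
print(result.approved)
```

The design choice the sketch tries to capture is redundancy: having several models verify one model's draft is a common way to catch factual errors or off-tone phrasing before feedback is sent on.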
Each paper is reviewed by 3–4 people and around 30% are accepted.

The group randomly selected around 20,000 already-written reviews, used the Review Feedback Agent to evaluate them, and sent the reviewers the AI tool's feedback. Most of the time, the AI system suggested ways that reviewers could be more specific and constructive, frequently using the phrase "to make this feedback more actionable …".