A recent study scrutinized peer reviews in the field of computational science, finding a spike in specific adjectives with the advent of ChatGPT.
Think of the last time you read a scientific paper and noticed words like "laudable," "innovative," or "meticulous." Perhaps you didn't give them a second thought. But what if I told you that these adjectives could be the unobtrusive signature of an AI assistant?
The team led by Stanford University's Weixin Liang analyzed more than 146,000 peer reviews, looking for language patterns that could indicate AI involvement. The results are surprising: up to 17% of the reviews analyzed could have been substantially modified by chatbots.
But what makes these adjectives so revealing? It's about their unusual frequency. "Commendable," "innovative," "meticulous," "intricate," "notable," and "versatile" suddenly appeared much more often after the launch of ChatGPT. It's as if the AI has a limited palette of colors with which it tries to paint a complex picture.
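The idea behind this kind of detection is simple to sketch: count how often the flagged adjectives appear per thousand words in texts written before and after ChatGPT's launch, and look for a jump. Here is a minimal illustration in Python — the word list matches the adjectives named above, but the two toy corpora are invented for demonstration, not data from the study.

```python
import re

# Adjectives reported to spike in frequency after ChatGPT's launch.
FLAGGED = {"commendable", "innovative", "meticulous",
           "intricate", "notable", "versatile"}

def flagged_rate(texts):
    """Occurrences of flagged adjectives per 1,000 words across a corpus."""
    total_words = 0
    hits = 0
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        total_words += len(words)
        hits += sum(1 for w in words if w in FLAGGED)
    return 1000 * hits / total_words if total_words else 0.0

# Hypothetical toy corpora, for illustration only.
pre_launch = ["The method is sound but the evaluation is limited.",
              "Results are clearly presented and well argued."]
post_launch = ["This commendable and innovative work is meticulous.",
               "A notable, versatile and intricate contribution."]

print(f"before: {flagged_rate(pre_launch):.1f} per 1,000 words")
print(f"after:  {flagged_rate(post_launch):.1f} per 1,000 words")
```

A real analysis would of course control for corpus size, topic drift, and natural vocabulary trends — the studies compared frequency distributions across hundreds of thousands of documents, not raw counts.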

This trend isn't just limited to reviews. Andrew Gray of University College London extended the analysis to peer-reviewed studies published between 2015 and 2023. The result? A significant increase in the same suspicious adjectives. Gray estimates that the authors of at least 60,000 papers published in 2023 – just over 1% of all scientific studies – used chatbots to some extent.
But what does this mean for you as a researcher or science communicator? First, it is a wake-up call about transparency in the scientific process. If AI is used in peer review or in writing papers, it should be declared openly – not to penalize anyone, but to better understand how these tools influence the scientific process.
Second, it challenges us to rethink what authenticity means in science communication. If a chatbot can produce apparently competent text, what is the added value of the human perspective? Perhaps the answer lies in nuance, in our ability to make unexpected connections or ask challenging questions.
Artificial intelligence can be used ethically in research and academia. If you want to know how, I'm waiting for you at my workshops dedicated to this topic.
As a researcher or science communicator, you can start by paying more attention to your own language. Avoid over-reliance on general or laudatory adjectives. Instead, focus on specific and detailed descriptions of your research methodology, results, and implications. Use concrete examples and original analogies to illustrate your ideas.
At the same time, be open about using AI tools in your work. If you use ChatGPT or other similar systems to refine your ideas or check grammar, mention it. Transparency not only builds trust in your work, but also helps the scientific community better understand the role and limits of AI in research.
For peer reviewers, the challenge is even greater. If you choose to use AI to assist you in the review process, make sure your human input remains central. Use AI as an assistant, not a replacement. Verify and challenge AI-generated claims, add your unique perspective, and ensure your feedback reflects a deep understanding of the domain.