Are researchers gaming the AI peer reviews?

What if I told you some scientists are quietly “whispering” to AI, asking for only positive feedback on their research papers? Sounds like science fiction, but it’s happening right now.

Recently, both The Guardian and Nature reported a new trend: researchers are hiding secret instructions—using invisible white text—in their academic papers. These hidden prompts are designed to influence AI tools that some reviewers use, nudging them to give glowing reviews and ignore negatives. While this trick will not be seen by humans, AI models like ChatGPT or Google Gemini may pick up these cues and change their review accordingly.

This practice, called “prompt injection,” raises serious questions about academic integrity and the future of peer review. As AI becomes more common in research, we need to stay alert to such manipulations. For now, the best advice: don’t blindly trust AI-generated reviews, and always check the source.

Watch this short talk for a quick explainer and my take on what this means for researchers, reviewers, and the future of academic publishing.


Discover more from The Founder Catalyst

Subscribe to get the latest posts sent to your email.

Leave a Reply

About Venkatarangan

Venkatarangan Thirumalai is a Technology Visionary, Author, and Keynote Speaker on Generative AI with 30+ years in software. An Honorary Microsoft Regional Director since 1999, he advises CXOs on tech-driven growth.

Founder of Vishwak Solutions and co-founder of a US AI fintech startup, he predicted mobile computing in 2003 and built an ML news app long before GenAI. He mentors startups and promotes responsible AI through his book The Founder Catalyst.

Guiding Founders & Enterprises to Lead the Change with AI

From Gen-AI to digital transformation, my talks give your leadership team the frameworks to work smarter and make things happen.

Discover more from The Founder Catalyst

Subscribe now to keep reading and get access to the full archive.

Continue reading