Researchers Claim AI Startups are Exploiting Peer Review for Self-Promotion

A controversy has emerged over “AI-generated” studies submitted to this year’s ICLR, a well-established AI research conference.
Three AI labs—Sakana, Intology, and Autoscience—claim their AI-generated studies were accepted into ICLR workshops, where submissions are typically reviewed by workshop organizers before publication.
Sakana informed ICLR leadership before submitting its AI-generated papers and obtained consent from peer reviewers. However, Intology and Autoscience did not, according to an ICLR spokesperson who spoke to TechCrunch.
Several AI researchers criticized Intology and Autoscience on social media, arguing that they exploited the peer review system.
“These AI-generated papers are using peer-reviewed venues as human evaluation tools, but no one consented to this unpaid labor,” wrote Prithviraj Ammanabrolu, an assistant computer science professor at UC San Diego, in an X post. “It diminishes my respect for everyone involved, no matter how impressive the system is. Please disclose this to the editors.”
The Growing Burden of Peer Review
Critics highlighted that peer review is a demanding, time-intensive process largely carried out by volunteers. A recent Nature survey found that 40% of academics spend between two and four hours reviewing a single study. The workload has been increasing, with submissions to NeurIPS, the largest AI conference, reaching 17,491 last year—a 41% jump from 12,345 in 2023.
Academia was already grappling with AI-generated content. One analysis estimated that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 likely contained synthetic text. However, the use of peer review by AI companies to validate and promote their technology is a more recent development.
Intology boasted about its ICLR results in an X post, stating that its papers received “unanimously positive reviews” and that workshop reviewers praised one AI-generated study for its “clever ideas.”
However, many academics were unimpressed.
Concerns Over Respect for Peer Reviewers
Ashwinee Panda, a postdoctoral fellow at the University of Maryland, criticized the submission of AI-generated papers without giving workshop organizers the chance to reject them, calling it a “lack of respect for human reviewers’ time.” He noted that Sakana had sought permission for its experiment in a workshop he was organizing at ICLR, but he and his team declined. “I think submitting AI papers to a venue without contacting the [reviewers] is bad,” he added.
Many researchers also questioned whether AI-generated papers were worth the peer review effort. Even Sakana admitted that its AI introduced “embarrassing” citation errors and that only one of its three AI-generated papers met the standard for conference acceptance. In the interest of transparency and respect for ICLR’s norms, Sakana withdrew its paper before publication.
Alexander Doria, co-founder of AI startup Pleias, argued that the surge of AI-generated submissions underscored the need for a regulated body to conduct rigorous evaluations—at a cost.
“Evaluations should be done by researchers fully compensated for their time,” Doria wrote in a series of X posts. “Academia is not here to provide free AI evaluations.”
Read the original article on: TechCrunch