A Revealing Paradox
The anecdote is making the rounds on academic social networks: a researcher recounts having their paper rejected from ICML 2026 for reasons they consider questionable, while simultaneously being invited by the same conference to review their peers' work. This absurd situation crystallizes the frustrations of an AI research community watching its publication system crack under pressure.
The Explosion in Submission Volume
Major artificial intelligence conferences — ICML, NeurIPS, ICLR — face exponential growth in submissions. ICML 2026 reportedly received over 12,000 papers, a volume impossible to properly process with traditional peer review methods.
Several factors explain this explosion: the democratization of AI research, the academic pressure of "publish or perish," and the influx of industry researchers who use publications as a showcase for their work.
Overwhelmed and Sometimes Incompetent Reviewers
Faced with this deluge, conferences struggle to recruit enough qualified reviewers. The result: rushed, inconsistent evaluations, sometimes performed by people with no real expertise in the subject of the paper they're evaluating.
The viral testimony perfectly illustrates this dysfunction: how can a researcher's work be deemed unfit for publication, yet that same researcher be considered expert enough to judge the work of others? This contradiction reveals a system functioning by default rather than by design.
New Policies: A Band-Aid on a Bullet Wound
ICML recently announced a new policy whereby reviewers will themselves be evaluated by meta-reviewers. The intention is laudable: holding evaluators accountable and improving review quality. But this approach doesn't address the fundamental imbalance between submission volume and evaluation capacity.
Some even see an additional risk: if reviewers fear being poorly rated, they might adopt more conservative positions, systematically rejecting innovative but risky work.
AI's Shadow Over Peer Review
Ironically, the AI community is beginning to use AI for peer review itself. LLM-based tools are being used to pre-screen papers, detect plagiarism, and even generate first-draft reviews.
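As a purely illustrative sketch of what the pre-screening step might look like (no conference has published its actual pipeline; the function names, n-gram size, and threshold below are all invented for this example), an automated triage tool could flag submissions whose text overlaps heavily with an existing corpus:

```python
def ngram_set(text: str, n: int = 5) -> set:
    # Lowercase word 5-grams serve as a crude fingerprint of the text.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission: str, prior_work: str, n: int = 5) -> float:
    # Fraction of the submission's n-grams that also appear in the prior work.
    sub = ngram_set(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngram_set(prior_work, n)) / len(sub)

def flag_for_human_review(submission: str, corpus: list[str],
                          threshold: float = 0.3) -> bool:
    # Hypothetical triage rule: escalate to a human reviewer if the
    # overlap with any prior document exceeds the threshold.
    return any(overlap_ratio(submission, doc) >= threshold for doc in corpus)
```

Real deployed systems are of course far more sophisticated (semantic embeddings rather than raw n-grams), but even this toy version shows why the ethical question arises: the tool decides which papers get a closer human look.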
This evolution raises profound ethical questions. Can we entrust the evaluation of artificial intelligence research to artificial intelligence itself? The potential conflicts of interest are dizzying.
Toward a New Publication Model?
More and more voices are calling for a fundamental rethinking of the AI publication system. Some propose abandoning the conference model in favor of journals with a continuous review process. Others suggest open publication systems where the community collectively evaluates work after publication.
Initiatives like arXiv and OpenReview have already begun transforming the landscape, enabling rapid dissemination of work without waiting for conference verdicts. But these platforms raise their own problems of quality control and signal-to-noise.
A System Running on Empty
The ICML 2026 case is merely the visible symptom of a systemic crisis. The world of AI research moves at a speed its institutions can no longer keep up with. Without deep reform, peer review risks becoming a mere bureaucratic formality, emptied of its original quality control function.
The question is no longer whether the system will change, but when and how. Researchers, institutions, and the companies funding research all have an interest in finding solutions quickly. When the publication system fails, science itself loses its bearings.
