
February 27, 2026

ICML Revolutionizes Its Process: Reviewers Will Now Be Reviewed

ICML introduces a revolutionary policy: reviewers will now be evaluated on review quality, a bold response to chronic peer review problems in ML.

Introduction

ICML (the International Conference on Machine Learning) has just announced a policy that is shaking the academic world: from now on, reviewers themselves will be evaluated on the quality of their reviews. This bold decision aims to address an endemic problem in AI research: the uneven quality of peer review.

A Systemic Problem

The Volume Challenge

Machine learning is a victim of its own success. Major conferences now receive thousands of submissions, requiring an army of reviewers. ICML received over 10,000 submissions this year, each requiring at least three independent reviews.

Finding enough competent reviewers has become a major challenge. Organizers often find themselves expanding the pool beyond established experts, with consequences for quality.

Symptoms of the Malaise

The ML community has accumulated accounts of problematic reviews:

  • Superficial reviews: Generic comments not demonstrating thorough reading
  • Inconsistencies: High scores paired with harshly negative comments, or vice versa
  • Missed deadlines: Rushed last-minute reviews
  • Apparent bias: Favoritism toward certain institutions or approaches

Renowned researchers have publicly shared examples of aberrant reviews, fueling debate on the need to reform the system.

The New ICML Policy

How It Works

The new system introduces systematic review evaluation. Here are the key mechanisms:

Area Chair Evaluation: ACs will rate each review on several criteria: technical depth, constructiveness, consistency with the score, and guideline compliance.

Reviewer Score: Each reviewer will accumulate a score based on the ratings their reviews receive. This score will be visible to future organizers.

Consequences: Reviewers with persistently low scores will be excluded from the pool. Conversely, excellent reviewers will receive official recognition.
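
To make the mechanics concrete, here is a minimal sketch in Python of how such a rolling reviewer score could be tracked. Everything here is hypothetical: ICML has not published an implementation, and the criteria names and thresholds are illustrative placeholders.

```python
from dataclasses import dataclass, field

# Hypothetical rubric dimensions, echoing the criteria announced for ACs.
CRITERIA = ("technical_depth", "constructiveness", "calibration", "guideline_compliance")

@dataclass
class ReviewerRecord:
    """Rolling record of the AC ratings a reviewer's reviews have received."""
    reviewer_id: str
    ratings: list[dict] = field(default_factory=list)  # one criterion -> 1..5 dict per review

    def add_ac_rating(self, rating: dict) -> None:
        missing = set(CRITERIA) - rating.keys()
        if missing:
            raise ValueError(f"rating is missing criteria: {missing}")
        self.ratings.append(rating)

    @property
    def score(self) -> float:
        """Mean rating across all criteria and all evaluated reviews."""
        if not self.ratings:
            return 0.0
        total = sum(r[c] for r in self.ratings for c in CRITERIA)
        return total / (len(self.ratings) * len(CRITERIA))

    def status(self, floor: float = 2.5, star: float = 4.5) -> str:
        """Placeholder policy: exclude persistently low scorers, flag excellent ones."""
        if len(self.ratings) >= 3 and self.score < floor:
            return "excluded from pool"
        if self.ratings and self.score >= star:
            return "officially recognized"
        return "in good standing"
```

In a real deployment the hard part would not be this bookkeeping but the policy choices around it: how many rated reviews count as "persistent," and where the exclusion floor sits.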

Evaluation Criteria

Reviews will be judged on:

  1. Demonstrated expertise: Does the reviewer truly understand the field?
  2. Actionable feedback: Do criticisms allow authors to improve?
  3. Fairness: Does the review avoid personal or institutional bias?
  4. Calibration: Is the score consistent with comments?
  5. Professionalism: Is the tone respectful and constructive?

Community Reactions

The Enthusiasts

Many researchers welcome this initiative enthusiastically. "Finally a concrete measure," comments a Stanford professor on Twitter. "I've lost count of inexplicable reviews that torpedoed good papers."

Young researchers, often most vulnerable to bad reviews, are particularly positive. For them, this system could rebalance the power dynamic.

The Skeptics

Other voices express legitimate reservations:

  • Who evaluates the evaluators? Area Chairs have their own biases
  • Risk of conformism: Reviewers might avoid sharp judgments for fear of being poorly rated
  • Additional burden: ACs are already overloaded

Alternative Proposals

  • Pay reviewers to incentivize quality work
  • Use AI to detect superficial reviews (a rough heuristic is sketched after this list)
  • Reduce the number of conferences to decrease the burden
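
As a toy illustration of the AI-detection proposal, a first-pass filter could flag reviews that are very short or dominated by stock phrases. This is a naive heuristic sketched for illustration, not any system ICML has announced; the phrase list and thresholds are invented:

```python
import re

# Illustrative stock phrases that often signal a low-effort review.
GENERIC_PHRASES = [
    "interesting paper",
    "well written",
    "needs more experiments",
    "limited novelty",
    "compare with more baselines",
]

def looks_superficial(review: str, min_words: int = 150, max_generic_ratio: float = 0.3) -> bool:
    """Flag a review as possibly superficial: too short, or mostly boilerplate.

    Thresholds are arbitrary placeholders; a real filter would be validated
    against Area Chair judgments before being trusted."""
    text = review.lower()
    words = re.findall(r"\w+", text)
    if len(words) < min_words:
        return True
    generic_words = sum(len(p.split()) for p in GENERIC_PHRASES if p in text)
    return generic_words / len(words) > max_generic_ratio

print(looks_superficial("Interesting paper, well written. Needs more experiments."))  # True (too short)
```

A production version would more plausibly use a trained classifier than keyword matching, but the evaluation problem is the same: someone has to label which reviews were in fact superficial.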

Broader Implications

Effect on Academic Culture

This initiative could transform peer review culture. Historically considered "free" academic service, reviewing could become a recognized and valued skill.

Young researchers could see their CVs enriched not only by publications but also by their reviewer reputation.

Precedent for Other Conferences

If ICML succeeds, other major conferences (NeurIPS, ICLR, AAAI) could follow. This could standardize quality expectations across the field.

Ethical Questions

  • How to handle legitimate disagreements between experts?
  • Will minority reviewers be penalized?
  • How to avoid systematically favoring "big names"?

Implementation Challenges

Evaluator Calibration

Ensuring Area Chairs evaluate consistently represents a major challenge. Calibration sessions and detailed guidelines will be necessary.
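
One standard statistical remedy is to normalize each AC's ratings against their own distribution, so that a 4/5 from a habitually harsh evaluator and a 4/5 from a lenient one are not treated identically. The sketch below applies per-rater z-scoring, a common calibration technique; ICML has not said whether it will do anything of the sort:

```python
from statistics import mean, stdev

def calibrate(ratings_by_ac: dict[str, list[float]]) -> dict[str, list[float]]:
    """Z-score each AC's ratings against that AC's own mean and spread."""
    calibrated = {}
    for ac, ratings in ratings_by_ac.items():
        # With fewer than two ratings (or no variance) there is no signal to calibrate.
        if len(ratings) < 2 or stdev(ratings) == 0:
            calibrated[ac] = [0.0] * len(ratings)
            continue
        mu, sigma = mean(ratings), stdev(ratings)
        calibrated[ac] = [(r - mu) / sigma for r in ratings]
    return calibrated

# A lenient AC and a harsh AC end up on the same scale: both map to [-1.0, 1.0, 0.0].
print(calibrate({"lenient_ac": [4.0, 5.0, 4.5], "harsh_ac": [1.0, 2.0, 1.5]}))
```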

Managing Appeals

An appeal system will allow reviewers to contest their evaluations. But managing these appeals will add administrative complexity.

Transparency vs Privacy

Should evaluations be made public? Transparency promotes accountability but could create tensions within the community.

Future Perspectives

Pilot Phase

ICML plans a pilot phase for the next edition. The data collected will be used to refine the system before full deployment.

Toward a Unified System?

In the longer term, some imagine a reviewer reputation system shared across conferences, creating a sort of academic "credit score" for peer review.
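
If such a shared score ever materialized, even the aggregation would be a design decision. A minimal sketch, assuming each venue reports a per-reviewer mean and a count of rated reviews (a purely speculative interface), might weight venues by how much evidence they contribute:

```python
def cross_venue_score(venue_reports: list[tuple[str, float, int]]) -> float:
    """Combine (venue, mean_score, n_rated_reviews) reports into one score,
    weighting each venue by the number of reviews behind its mean.

    Speculative: no such shared reporting system exists today."""
    total_reviews = sum(n for _, _, n in venue_reports)
    if total_reviews == 0:
        return 0.0
    return sum(score * n for _, score, n in venue_reports) / total_reviews

# A strong, well-evidenced ICML record outweighs a thin record elsewhere.
print(cross_venue_score([("ICML", 4.6, 12), ("VenueX", 3.1, 3)]))  # 4.3
```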

Conclusion

ICML's decision marks a pivotal moment for AI research. By holding reviewers accountable, the conference attempts to restore trust in a system under pressure.

The success of this initiative will depend on its implementation. If it succeeds, it could significantly improve peer review quality throughout machine learning. If it fails, it will at least provide valuable data for designing better solutions.

One thing is certain: the status quo was no longer tenable. ICML had the courage to act.

Tags: icml, peer review, machine learning, academic, research, conference, review process, ai research
