
Tech · January 26, 2026

“AI writes 100% of my code now”: real shift or just a punchline?

A Reddit post claims an OpenAI engineer now has AI writing 100% of his code. Behind the hype: recent numbers, what actually changes, and how to benefit without getting burned.

The headline spread fast: “OpenAI engineer confirms AI is writing 100% now.” A Reddit post, a screenshot, hundreds of upvotes, and the usual mix of awe and sarcasm.

But between “100%” and real-world software delivery, there’s a gap. And that gap is where the useful truth sits: the job is no longer “typing code.” The job is “shipping reliable systems.” AI can write most of the lines. It does not replace accountability.

In this article:

  • what “100%” actually means (and why it can be true for one person but not for an org)
  • recent numbers from OpenAI, Anthropic, and Big Tech
  • a pragmatic playbook to reach your own 80–95% without turning your repo into a landfill

The Reddit post: a punchline, not an audit

The source is a link post on r/OpenAI pointing to an image. No whitepaper. No internal memo. No official metric.

The top comments set the tone:

  • “Someone at an AI company says their AI is amazing… more news at 11.”
  • “He didn’t write much code to begin with anyway.”
  • “Maybe not 100%, but common code is easy for an LLM.”

Translation: it’s an individual anecdote, not industry-grade evidence.

And that’s the key: when someone says “AI writes 100% of my code,” they usually mean who types the lines, not who designs, who validates, who owns the outcome.

Recent numbers: we’re already high… but not “fully autonomous”

Let’s leave ideology aside and look at the ranges.

Anthropic: “90% of code is written by AI”

In October 2025, Anthropic CEO Dario Amodei said roughly 90% of code at Anthropic is now written by AI models, while stressing that humans remain essential for review, security, and architecture. Source: LiveMint (reporting a public statement).

That’s massive. But “written by” doesn’t mean “shipped without humans.”

OpenAI: “almost all” + PR metrics

Reports around OpenAI DevDay 2025 suggest “almost all” new code is produced with Codex, including:

  • a roughly 70% increase in weekly pull requests
  • an internal project (“Agent Builder”) built in under six weeks, with ~80% of PRs generated by Codex

Source: regulatingai.org (secondary source; treat as directional, but consistent with broader trends).

Individual OpenAI engineers: ~80% on some work

One OpenAI employee (Aidan McLaughlin) reportedly said 80% of his code is written by AI (Codex). Source: OfficeChai.

This is believable, precisely because it depends heavily on what you’re building.

Big Tech: more like 20–30%

At large companies, the numbers are often 20–30% (Google “well over 30%,” Microsoft in a similar range), typically via public comments and press coverage. The pattern is clear: the bigger and more constrained the org (compliance, legacy, process), the lower the percentage.

Why “100%” can be true (and still misleading)

“100%” becomes plausible when:

  • you’re doing CRUD, API integrations, standard front-end, scripts, straightforward migrations
  • you’re on mainstream frameworks (Next.js, FastAPI, Django, Spring, etc.)
  • your codebase is clean and tested
  • you can specify clearly

In other words: pattern-driven work.

A Reddit commenter nailed it: OOP, design patterns, data access are patterns. LLMs are pattern engines. So yes, they shine.

But the misleading part remains:

  • specs are rarely complete
  • real constraints (business rules, edge cases, tech debt) aren’t in your prompt
  • bugs don’t vanish; they shift—from typos to “plausible hallucinations”

What actually changes: from “coding” to running a production workshop

The useful shift:

Before: you wrote code.

Now: you orchestrate.

  • define expected behavior
  • generate
  • test
  • review (AI + you)
  • merge

You become closer to a workshop manager than a craftsperson carving every line.

That’s great news for founders: less time on boilerplate means more time on product, distribution, and customer success.

The constraints: security, reliability, and the hidden cost of review

The Financial Times and plenty of field reports highlight a reality: productivity gains are often overstated because people forget the time spent on:

  • debugging
  • tests
  • security review
  • refactoring

Generated code also routinely ships with:

  • outdated dependencies
  • vulnerabilities (injection, SSRF, broken access control)
  • unnecessary complexity

So yes, you can hit 80–95% generation—if you build guardrails.

A pragmatic playbook to reach 80–95% without wrecking your product

No fluff. This works for solo founders, SMBs, and startups.

1) Define “100%” properly: lines vs. accountability

A realistic goal:

  • AI writes 80–95% of the lines
  • you keep 100% of the accountability

Mix those up and you’ll get hurt.

2) Standardize architecture (otherwise the model improvises)

  • repo template
  • naming conventions
  • folder structure
  • lint + format

The more standard, the more effective AI becomes.
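As a minimal sketch of the “lint + format” piece, assuming a Python project and using ruff (one tool choice among many; every value here is illustrative, not prescriptive):

```toml
# pyproject.toml: an illustrative lint + format baseline (ruff; values are examples)
[tool.ruff]
line-length = 100
target-version = "py312"

[tool.ruff.lint]
# E = pycodestyle errors, F = pyflakes, I = import sorting
select = ["E", "F", "I"]

[tool.ruff.format]
quote-style = "double"
```

Commit this with the repo template, and generated code lands in a predictable shape instead of whatever style the model improvises.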

3) Tests as your safety net (otherwise it’s gambling)

Minimum viable:

  • unit tests for critical logic
  • integration tests for main flows
  • CI that blocks broken builds

Then you can have AI write tests too. But CI is the judge.
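A minimal sketch of what “unit tests for critical logic” can look like in Python; the pricing function and its business rule are made up for illustration:

```python
# pricing.py: a hypothetical "critical logic" unit and its tests (names are illustrative)

def apply_discount(total: float, rate: float) -> float:
    """Business rule: rate must be in [0, 1]; result is rounded to cents."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be in [0, 1]")
    return round(total * (1 - rate), 2)


def test_nominal_discount():
    # The happy path: a 20% discount on 100.0 yields 80.0.
    assert apply_discount(100.0, 0.2) == 80.0


def test_rejects_out_of_range_rate():
    # The guardrail path: invalid rates must fail loudly, not silently.
    try:
        apply_discount(100.0, 1.5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Run the suite in CI on every PR; a red build blocks the merge whether a human or a model wrote the patch.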

4) Use AI in “patch mode” (small PRs), not “big bang mode”

  • 50–200-line PRs: reviewable
  • 2,000-line PRs: unreviewable

A lot of “100%” claims are statistical tricks: you generated a huge blob, then spent two days fixing it.

5) Do an explicit security pass

A simple checklist:

  • authn/authz: who can do what?
  • input validation
  • secrets management
  • logging without sensitive data

In B2B, this is non-negotiable.
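Three of the checklist items can be sketched in a few lines of Python; all names here (the env var, the order-id format) are hypothetical:

```python
# security_pass.py: illustrative versions of the checklist items (all names hypothetical)
import os
import re


def get_api_key() -> str:
    """Secrets management: read from the environment, never hardcode."""
    key = os.environ.get("PAYMENT_API_KEY")
    if not key:
        raise RuntimeError("PAYMENT_API_KEY is not set")
    return key


def validate_order_id(raw: str) -> str:
    """Input validation: accept only the expected shape, reject everything else."""
    if not re.fullmatch(r"ORD-\d{6}", raw):
        raise ValueError("invalid order id")
    return raw


def audit_log_line(event: str, order_id: str) -> str:
    """Logging without sensitive data: identifiers yes, secrets and PII no."""
    return f"event={event} order={order_id}"
```

Generated code should pass through checks like these before merge, not after an incident.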

Founder use cases where AI is already a cheat code

Customer support → triage + draft replies

  • classify tickets
  • draft responses
  • extract key fields (order, contract, SLA)

Ops → scripts and integrations

  • sync Stripe ↔ Notion/HubSpot
  • Slack alerts on key events
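The “Slack alerts on key events” item can be as small as a POST to a Slack incoming webhook; a stdlib-only sketch (the webhook URL is a placeholder you create in your Slack workspace):

```python
# slack_alert.py: minimal Slack incoming-webhook alert, stdlib only (URL is a placeholder)
import json
import urllib.request


def build_alert(event: str, detail: str) -> bytes:
    """Build the JSON payload Slack incoming webhooks expect ({"text": ...})."""
    return json.dumps({"text": f":rotating_light: {event}: {detail}"}).encode("utf-8")


def send_alert(webhook_url: str, event: str, detail: str) -> None:
    """POST the alert to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=build_alert(event, detail),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()
```

Call `send_alert(...)` from, say, a failed-payment handler, and key events surface in Slack within seconds.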

Product → faster iteration

  • landing pages
  • A/B tests
  • analytics instrumentation

Common theme: lots of standard code, so it’s highly generatable.

The takeaway

  • The Reddit post is a cultural signal, not a scientific claim.
  • Recent numbers suggest reality is already strong: ~90% at Anthropic (CEO statement), ~80% on some OpenAI work, 20–30% in Big Tech.
  • “100%” can be true for an individual on standard tasks, but it doesn’t remove engineering. It shifts effort to specs, validation, and security.
  • If you want the upside, stop fantasizing about full autonomy: build a pipeline (templates, tests, CI, small PRs) and you’ll save time immediately.

Want to automate your operations with AI? Book a 15-min call to discuss.

