tech · February 24, 2026

How Discord, Twitch and Snapchat’s Age Checks Get Bypassed

Discord, Twitch and Snapchat are rolling out strict age checks. Yet a simple script can bypass them. Let’s break down the k‑ID flaw, the real risks, and what this means for your business and automation strategy.

Social platforms have finally “discovered” that there are minors on the internet. Cue: a wave of laws, face scans, ID uploads… and one big promise: “don’t worry, it’s secure.”

A motivated dev just proved the opposite with a simple script that raises a huge red flag for Discord, Twitch, Snapchat & friends.

We’ll break down the age‑verification flaw built on k‑ID, why these systems are structurally fragile, and what you should learn from this if you’re building a product or automating your business.

This is not about “evil hackers”. It’s about bad system design.

---

1. The context: Discord, Twitch and Snapchat under political pressure

Before talking about the exploit, you need to understand why these platforms deployed such over‑engineered systems.

  • Discord is rolling out global age verification from 2026: selfie video, government ID upload, or predictive models based on your behaviour. If you don’t pass, you’re stuck in “teen mode”.
  • Twitch requires a facial scan in the UK to access certain “mature” content.
  • Snapchat in Australia must comply with a law that bans under‑16s from social media unless robust age checks are in place.

On paper, this reassures regulators and parents. In reality:

  • there are false positives (adults locked in teen mode),
  • data leaks (70,000 ID photos exposed at a Discord vendor in 2025),
  • and now automated bypasses.

Governments get their symbolic laws, big tech ticks compliance boxes, and devs show how flimsy the whole thing really is.

---

2. How k‑ID works (the building block used by Discord & co.)

The exploit page (https://age-verifier.kibty.town/) highlights a key point: k‑ID does not store your face, but sends a bundle of metadata to a facial recognition partner (FaceAssure).

For example:

  • prediction vectors (numbers that represent an age estimate),
  • process information (state timeline, eye‑openness speed, etc.),
  • technical identifiers (device, camera, timestamp…).
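To make that concrete, here is a rough sketch of the kind of metadata bundle being described. Only the three categories above come from the exploit page; every concrete field name and value below is an assumption for illustration:

```python
import json
import time

# Illustrative only: the three categories come from the article;
# every concrete field name below is an assumption.
bundle = {
    "prediction_vector": [0.12, 0.87, 0.03],   # numbers encoding an age estimate
    "process": {
        "state_timeline": ["init", "scan", "done"],
        "eye_openness_speed": 0.31,
    },
    "device": {"camera": "front", "timestamp": int(time.time())},
}
payload = json.dumps(bundle)
```

The point is that this payload is just structured numbers and strings: nothing in it is inherently tied to a real face.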

Good news for privacy: no raw video leaves the device.

Bad news for security:

If you know what a valid metadata packet looks like, you can fabricate one.

And that’s exactly what happened.

---

3. The exploit: simulate an adult without ever showing your face

The devs behind the script explain the logic.

Step 1: Reproducing the encryption

The client sends an encrypted_payload with:

  • an AES‑GCM cipher,
  • a key derived via HKDF (SHA‑256) from nonce + timestamp + transaction_id,
  • fields like auth_tag, iv, timestamp.

Nothing magical: this is standard crypto. By observing a legitimate request, you can reproduce the exact same scheme client‑side.

Result: you can generate an encrypted payload that looks exactly like an “official” one.
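The core observation is reproducible with standard-library crypto. Here is a minimal HKDF‑SHA‑256 sketch, assuming (as the article describes) that the key is derived solely from fields visible in the request itself; the concatenation order, salt and info values are assumptions:

```python
import hmac
import hashlib
import os
import time

def hkdf_sha256(ikm: bytes, salt: bytes = b"", info: bytes = b"", length: int = 32) -> bytes:
    """HKDF (RFC 5869): extract-then-expand with SHA-256."""
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block = b"", b""
    counter = 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# The key material named in the article -- nonce, timestamp, transaction_id --
# all travels with the request, so anyone observing it can re-derive the key.
nonce = os.urandom(12)
timestamp = str(int(time.time())).encode()
transaction_id = b"txn-0001"  # hypothetical placeholder value

server_key = hkdf_sha256(nonce + timestamp + transaction_id)
attacker_key = hkdf_sha256(nonce + timestamp + transaction_id)
# Same observable inputs -> the exact same AES-GCM key on both sides.
```

When the secret is derived entirely from public inputs, "encrypted" only means "obfuscated": any client that knows the recipe holds the key.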

Step 2: Fabricating credible age predictions

The server doesn’t just check that the encryption is valid. It inspects the content:

  • raws: raw values from the facial recognition model,
  • primaryOutputs and outputs: derived from raws after z‑score outlier filtering (once or twice).

The devs figured out that:

  • primaryOutputs and outputs are derivable from raws,
  • values must follow a certain statistical distribution consistent with an adult face,
  • some fields like xScaledShiftAmt and yScaledShiftAmt are not random, but take only two possible values.

In short:

If you understand the statistical rules of the game, you can generate numbers that look like a real adult scan.
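As a sketch of the filtering relationship described above (the distribution parameters, field semantics and number of passes are illustrative assumptions, not the real model's):

```python
import random
import statistics

def zscore_filter(values, threshold=2.0):
    # Keep only values within `threshold` standard deviations of the mean
    mu = statistics.mean(values)
    sd = statistics.pstdev(values) or 1.0  # guard against zero variance
    return [v for v in values if abs(v - mu) / sd <= threshold]

random.seed(0)  # deterministic for the example

# Fabricated "raws": numbers clustered the way an adult face scan might score
raws = [random.gauss(27.0, 1.5) for _ in range(32)]

primary_outputs = zscore_filter(raws)     # first filtering pass
outputs = zscore_filter(primary_outputs)  # optional second pass
```

Because the derived fields are a deterministic function of the raws, a forger only has to get the raws statistically right; everything downstream follows for free.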

Step 3: Beating the patch

After an earlier bypass (another script by “amplitudes”), FaceAssure tried a quiet patch:

  • check the coherence of fields like recordedOpennessStreak, recordedSpeeds, failedOpennessReadings, etc.

The idea: ensure that how you open/close your eyes, failures, speeds, all of that matches a real human.

But:

  • these values are still predictable and correlated,
  • so with some reverse‑engineering, you can again fabricate data that looks coherent.
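A sketch of what "coherent" fabricated liveness telemetry might look like. The field names come from the article; the generation logic and correlations are purely illustrative assumptions:

```python
import random

random.seed(1)  # deterministic for the example

def fabricate_liveness(num_readings=20, fail_rate=0.1):
    """Generate internally consistent (but fake) blink/openness telemetry."""
    openness_streak, speeds, failed, streak = [], [], 0, 0
    for _ in range(num_readings):
        if random.random() < fail_rate:
            failed += 1   # an occasional failed reading, like a real human
            streak = 0
        else:
            streak += 1
            # blink speed loosely correlated with the current streak
            speeds.append(round(0.2 + 0.05 * random.random() + 0.01 * streak, 3))
        openness_streak.append(streak)
    return {
        "recordedOpennessStreak": openness_streak,
        "recordedSpeeds": speeds,
        "failedOpennessReadings": failed,
    }

fake = fabricate_liveness()
```

As long as the server only checks statistical coherence between fields, a generator that bakes in the same correlations passes.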

Result:

The script was patched… then bypassed again. The cat‑and‑mouse game continues.

Meanwhile, regulators keep selling this as a silver bullet.

---

4. Why these systems are structurally fragile

The problem isn’t k‑ID specifically. It’s the entire model of biometric age verification.

4.1. You want privacy? You lose security

You have two options:

  1. Send raw video to the server → huge GDPR/privacy risk, leaks, scandals.
  2. Send only derived metadata → better for privacy, but easier to simulate.

k‑ID chose (2). Technically reasonable. But that opens the door to scripts like this age‑verifier.

4.2. You want security? You build an identity honeypot

Discord already showed how fragile this is:

  • a third‑party vendor was hacked, exposing 70,000 government ID photos.

The more identity data you centralise, the more you create a treasure for attackers.

Governments love it, lawyers love it. Normal users never asked for it.

4.3. Age‑estimation models are biased and approximate

Trials of these technologies (like the Australian Age Assurance trial) are clear:

  • errors are inevitable, especially around the age boundary (16–18),
  • false rejections are more frequent for certain face types (darker skin, non‑standard features).

We already have real‑world examples:

  • a 14‑year‑old approved as 25 by Snapchat’s visual age estimation,
  • a 15‑year‑old passing the same check as an adult.

At the same time, real adults are being blocked as “too young” on Discord.

So these systems are both bypassable by savvy teens and punishing to rule‑abiding users.

---

5. What this means for you as a founder or indie hacker

You may not be building the next Discord, but you can learn a lot from this mess.

5.1. Never confuse compliance with security

Big companies optimise for:

  • pleasing regulators,
  • ticking boxes in compliance PDFs,
  • publishing reassuring PR statements.

You need to optimise for:

  • actually reducing risk,
  • minimising attack surface,
  • protecting users without wrecking UX.

If you blindly copy big‑tech patterns, you import their problems, not their power.

5.2. System design: collect the minimum data

Simple rule:

Never collect a piece of data you’re not ready to see leaked tomorrow.

For a B2C product or platform:

  • If you need age checks, aim for a binary signal (yes/no), not a full date of birth.
  • If you rely on third‑party vendors, put strict limits on what they can see, what they store, and for how long.

5.3. Automate without building a time bomb

You want to automate onboarding, moderation, KYC, billing? Good. That’s literally what we do at Deepthix.

But do it smart:

  • Split your flow: keep verification, decision logic and storage in separate components, so one breach can’t expose everything.
  • Log everything: decisions, thresholds, model versions.
  • Have a fallback plan: what if a third‑party API is hacked? Or goes down?

Big orgs love heavy processes and committees. You can be agile and paranoid (in a good way): test, measure, iterate fast.

---

6. Practical cases: applying these lessons in your product

Case 1: You run a platform with adult content

Bad idea:

  • forcing full facial scans on every user, storing data at some random vendor, and praying nothing goes wrong.

Smarter approach:

  1. Soft default filtering: treat accounts as restricted by default and only unlock mature content after a check.
  2. Layered verification: start with low‑friction signals and escalate to stronger checks only when the risk justifies it.
  3. AI automation: let models handle the obvious cases and route edge cases to humans.

Case 2: You handle KYC / client onboarding

Use AI to:

  • automatically extract data from ID documents,
  • check consistency (name, country, format),
  • score risk.

But:

  • don’t keep raw documents longer than necessary,
  • encrypt everything by default,
  • log decisions (so you can explain rejections if needed).
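The "log decisions" point is cheap to implement and pays for itself the first time a client disputes a rejection. A minimal sketch of a structured decision log (all names and fields here are illustrative, not from any specific KYC product):

```python
import json
import time

def log_kyc_decision(user_id, decision, risk_score, model_version, threshold):
    """Append-style structured log so any rejection can be explained later."""
    record = {
        "ts": time.time(),
        "user_id": user_id,
        "decision": decision,          # e.g. "approve" / "reject" / "review"
        "risk_score": risk_score,
        "model_version": model_version,
        "threshold": threshold,
    }
    return json.dumps(record, sort_keys=True)

line = log_kyc_decision("u-42", "review", 0.63, "kyc-model-v3", 0.5)
```

Recording the model version and threshold alongside the score is the key detail: six months later, you can still reconstruct why the system said no.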

The k‑ID exploit shows that overly opaque, centralised, “magic” systems become unmanageable at scale.

---

7. AI, security and freedom: where to stand when you build

You can be pro‑tech, pro‑AI, pro‑business without buying into total‑control fantasies.

Platforms that overreact to political pressure end up:

  • burning cash on brittle systems,
  • suffering massive breaches,
  • frustrating honest users,
  • while the most resourceful bad actors slip through anyway.

As an entrepreneur:

  • use AI to augment humans, not to surveil everyone,
  • automate everything repetitive, but keep human control over critical points,
  • favour systems that are simple, auditable, testable.

Real strength isn’t a fragile biometric wall. It’s a clear, modular, measurable system you can evolve fast.

---

8. What you can do this week

If you already have a product online or in progress, ask yourself:

  1. What sensitive data am I collecting for no good reason?
  2. Where is my potential “identity honeypot”? (customer DB, ID docs, selfies…)
  3. What can I intelligently automate with AI to reduce human error and over‑access while keeping control?
  4. Which third‑party vendors have access to what? Would you be okay seeing that on Pastebin tomorrow?

If that makes you uncomfortable, it’s the right time to rethink your architecture.

And if you want to go deeper, that’s literally what we do every day: help founders, freelancers and SMEs automate without shooting themselves in the foot.

---

Want to automate your operations with AI? Book a 15-min call to discuss.
