
tech · February 5, 2026

Why Microsoft Copilot Is Struggling (and What It Means for You)

Microsoft has poured tens of billions into Copilot… for a paid adoption rate of roughly 3%. Security incidents, clunky UX, unclear ROI: let’s unpack what’s going wrong and what you should learn from it for your own business.

Microsoft has marketed Copilot as the ultimate AI work assistant: your new AI colleague embedded everywhere in Microsoft 365.

But in 2026, after more than $72 billion invested in AI (source: Barron’s), reality is much less shiny:

  • only 3.3% of 450 million Microsoft 365 users pay for Copilot (around 15 million people);
  • critical security incidents like EchoLeak and Reprompt;
  • a user experience many describe as confusing, slow, and often not that helpful;
  • competitors (ChatGPT, Gemini, Claude) are often perceived as better… and easier to adopt.

This isn’t just a story about a big tech company stumbling. It’s a live masterclass in what not to do when you roll out AI in a business.

In this article, we’ll break down:

  1. What’s really going wrong with Copilot (beyond the easy bashing)
  2. Microsoft’s strategic mistakes
  3. Concrete lessons for your own business, even if you don’t have $72B to burn
  4. A pragmatic way to approach AI that actually delivers ROI

AI is not the problem. The way it’s sold and deployed is.

---

1. Copilot: a lot of hype, not much paid adoption

The latest numbers are clear:

  • 450M Microsoft 365 users
  • ≈15M paying Copilot subscribers
  • That’s about 3.3% paid adoption (sources: Windows Central, Barron’s)

For a product pitched as the “pivotal” piece of Microsoft’s AI strategy, that’s weak.

Why aren’t companies buying?

Talking to SMB owners and teams in large enterprises, you hear the same objections:

  1. Fuzzy ROI
  2. Pricing disconnected from perceived value
  3. Confusing UX and branding

End result: for many decision-makers, Copilot feels more like an expensive “nice to have” than a no-brainer business tool.

---

2. Technical problems: when AI becomes a liability

When you plug a chatbot into the heart of your systems (email, docs, CRM…), it’s not a toy anymore. That’s where Copilot’s recent problems get serious.

EchoLeak, Reprompt & CoPhish: CISO nightmare fuel

A few recent facts:

  • EchoLeak (CVE-2025-32711): a zero-click prompt-injection flaw in Microsoft 365 Copilot that could exfiltrate internal data without any user interaction;
  • Reprompt: another prompt-injection attack demonstrated against Copilot;
  • CoPhish: abuse of Copilot Studio agents to serve OAuth consent-phishing prompts from legitimate Microsoft domains.

Add to that the vulnerabilities reported by Aim Security affecting Microsoft 365 Copilot (potentially impacting every company using it – source: Le Monde), and you see why some CIOs are hitting the brakes hard.

Real-world case: New York’s “illegal advice” chatbot

New York City launched MyCity Chatbot, built on Microsoft infrastructure, to help small businesses navigate city regulations.

The result:

  • the bot frequently gave illegal advice;
  • it was deemed “functionally unusable”;
  • trust cratered.

(Source: TechRadar)

The moral: AI isn’t “evil”, but if you plug it into critical workflows without safety rails, you transfer risk to your users.

---

3. UX that frustrates: when employees prefer ChatGPT

Another big problem: end users aren’t sold.

Several reports and articles (TechRadar, Windows Central) point out that:

  • employees in large companies often prefer ChatGPT or Gemini over Copilot;
  • Copilot is perceived as confusing, slow, and often not that helpful;
  • some UI updates removed features or degraded the experience (mobile, new UI, voice, etc.).

Even Salesforce CEO Marc Benioff publicly mocked Copilot as “Clippy 2.0”, saying he prefers Slackbot for CRM use cases (source: Times of India).

Whatever you think of him, when another major SaaS CEO feels comfortable throwing that punch in public, you know market perception is rough.

---

4. Microsoft’s strategic mistakes (and what you should avoid)

Microsoft is not “bad at AI”. Far from it: massive OpenAI partnership, insane cloud capacity, deep integration across its stack.

The issue is product and go-to-market strategy. And there are golden lessons here for any founder.

Mistake 1: assuming distribution is enough

Microsoft starts with a ridiculous advantage:

  • 450M M365 users;
  • presence across most of the Fortune 500.

They clearly bet on something like:

“We ship Copilot everywhere, charge extra, and it will sell itself.”

Result: 3% paid adoption.

Your takeaway:

  • Having a user base, an email list, or a traffic firehose is not enough.
  • If perceived value isn’t sharp and immediate, your conversion rate will tank, even with massive reach.

Mistake 2: selling a “magic assistant” instead of workflows

Copilot is marketed as a general-purpose assistant that can do everything:

  • summarise emails;
  • draft documents;
  • build slide decks;
  • analyse data, etc.

In practice, in companies that actually deployed it, you mostly see:

  • very sporadic usage;
  • few truly end-to-end automated workflows;
  • lots of “toys” rather than mission-critical processes.

Your takeaway:

  • Don’t sell “an AI”; sell a specific workflow with a measurable outcome: time saved, invoices collected, tickets resolved.

At Deepthix, that’s exactly what we focus on: we don’t talk about “magical chatbots”, we talk about specific processes we automate.

Mistake 3: adding complexity instead of removing it

Between Copilot in Windows, Edge, 365, Copilot Studio, etc., you end up with:

  • a non-trivial learning curve;
  • different behaviours depending on context;
  • shifting limitations.

Your takeaway:

  • Every extra click, every extra screen is one more chance for your user to drop off.
  • Your AI automation should be invisible: embedded in the tools your team already uses, with predictable behaviour.

---

5. What Copilot teaches us about AI in business

Beyond the drama, Copilot reveals the conditions you need for AI to actually deliver value.

1. AI must be wired into your data — but with control

Incidents like EchoLeak/Reprompt show that:

  • plugging a model into your email, docs, and CRM without a solid security architecture is suicidal;
  • you need access controls, scoped permissions, audit logging, and human approval for sensitive actions.

Practically for you:

  • Start with a limited scope (e.g. support, back-office, invoicing), not “read everything in SharePoint”.
  • Implement explicit business rules: what the AI is allowed to do or not.
  • Keep a human in the loop for high-risk actions (legal, financial, regulatory).

2. AI must be measured like any other investment

Microsoft is now dealing with investors who are sceptical about the ROI of its $72B AI bet.

At your scale, the logic is the same:

  • Track simple metrics: time saved, errors avoided, cost per task handled, user satisfaction.
  • Define a payback horizon (3, 6, 12 months) and adjust if it’s not delivering.
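A payback horizon is just arithmetic. Here is a back-of-the-envelope sketch, with all figures (build cost, hourly rate, hours saved) as made-up assumptions for illustration:

```python
# Rough payback calculation for an AI automation: monthly hours saved
# valued at a loaded hourly rate, against build cost + subscription.

def payback_months(build_cost: float, monthly_cost: float,
                   hours_saved_per_month: float, hourly_rate: float) -> float:
    """Months until cumulative savings cover the investment (inf if never)."""
    monthly_gain = hours_saved_per_month * hourly_rate - monthly_cost
    if monthly_gain <= 0:
        return float("inf")
    return build_cost / monthly_gain

# Example: a $6,000 build, $300/month running cost, 40 h/month saved at $50/h
# pays back in 6000 / (40 * 50 - 300) ≈ 3.5 months.
```

If the number comes out past your horizon (3, 6, 12 months), that's the signal to adjust or kill the project.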

3. AI must be adopted by teams, not forced on them

The fact that many employees prefer ChatGPT over Copilot highlights a core truth:

  • People use the tool that works best for them, not the one IT signed a big contract for.

For a successful AI rollout:

  • involve end users in defining use cases;
  • test multiple models/tools (OpenAI, Anthropic, Gemini, open source…);
  • keep what’s effective, not what’s most “corporate-compliant”.

---

6. How you, as a founder, can do better than Microsoft

You don’t have Microsoft’s budget, but you have a huge advantage: you can move fast, test, and iterate without bureaucracy.

Here’s a pragmatic approach to leveraging AI where Microsoft is currently struggling.

Step 1: pick 1–2 high-leverage processes

Examples:

  • customer support (automated replies + agent assist);
  • outbound sales (personalised email generation, lead qualification);
  • operations (report generation, order tracking, invoice reminders);
  • HR (CV pre-screening, internal FAQ automation).

Criteria:

  • repetitive;
  • time-consuming;
  • easy to measure.

Step 2: prototype in 2–4 weeks

With the right stack (no-code, AI APIs, connectors), you can:

  • plug a model (OpenAI, Claude, etc.) into your data;
  • create a specialised agent for one job (e.g. “invoicing agent”);
  • roll it out to a small group of users.

You don’t need a general-purpose “Copilot that can do everything”. You need small, highly specialised copilots that do 1–2 things extremely well.
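A specialised copilot like the "invoicing agent" mentioned above can start as one function wrapping one job. In this sketch, `call_model` is a stub standing in for whichever LLM API you pick (OpenAI, Claude, etc.); the function names and prompt are illustrative assumptions, not a real product's interface.

```python
# Minimal "invoicing agent" sketch: one narrow job (drafting a payment
# reminder), one function, one guard clause. `call_model` is a placeholder
# for a real LLM API call.

def call_model(prompt: str) -> str:
    # Swap in your provider's API call here (OpenAI, Anthropic, ...).
    return f"[draft based on: {prompt}]"

def remind_overdue_invoice(customer: str, invoice_id: str, days_overdue: int) -> str:
    """Draft a polite payment reminder for one overdue invoice."""
    if days_overdue <= 0:
        raise ValueError("invoice is not overdue")
    prompt = (
        f"Write a short, polite payment reminder to {customer} "
        f"for invoice {invoice_id}, {days_overdue} days overdue."
    )
    return call_model(prompt)
```

Note how narrow the surface is: one verb, a validation rule, and a prompt built from structured data, which is exactly what makes it easy to test, measure, and secure.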

Step 3: measure, refine, secure

  • Measure: time saved, errors avoided, user satisfaction.
  • Refine: prompts, rules, integrations.
  • Secure: access rights, logging, human approvals.

This is literally what we do at Deepthix: we take your existing processes, identify what’s automatable, and deploy AI agents that do the work for you.

---

7. Should you avoid Copilot altogether?

No. But you should stop treating it as the AI solution.

Copilot can make sense if:

  • you’re already deeply invested in the Microsoft ecosystem;
  • you have a strong IT/security team to handle governance;
  • you use it as a complement to more targeted AI components.

But if you’re:

  • a startup founder;
  • an SMB owner;
  • a solopreneur / freelancer,

then you’ll probably get more value by:

  • combining specialised tools (ChatGPT, Claude, Gemini, vertical SaaS);
  • building your own custom AI automations;
  • keeping control over your data and workflows.

---

Conclusion: Copilot is struggling, AI is not

Copilot is clearly in trouble:

  • low paid adoption (~3% of M365 users);
  • serious security incidents (EchoLeak, Reprompt, CoPhish);
  • uneven UX, with users preferring alternatives.

That doesn’t prove “AI is useless”. It proves that generic, poorly packaged, poorly integrated AI is not very useful.

If you want AI to actually work for you, the recipe is straightforward:

  • start from your processes, not the hype;
  • aim for measurable gains;
  • start small, secure it, then scale;
  • pick tools that work, not the ones with the loudest press releases.

And if you want help doing this pragmatically, without corporate nonsense:

Want to automate your operations with AI? Book a 15-min call to discuss.

Tags: Microsoft Copilot problems · AI in business · AI automation · AI security (EchoLeak, Reprompt) · Microsoft 365 productivity
