A Chatbot That Encourages Breaking the Law
Under the Eric Adams administration, New York City launched an AI chatbot meant to help businesses navigate government programs and regulations. The result? A documented disaster exposed by The City and The Markup.
The chatbot was advising businesses to:
- Take a portion of employee tips (illegal)
- Refuse cash payments (illegal in certain contexts)
On top of that, it didn't even know the minimum wage.
City Comptroller Brad Lander has called the system "unusable" and plans to kill it off.
The Problem Isn't the AI, It's the Deployment
This fiasco perfectly illustrates what happens when you deploy AI without:
1. Source Validation
AI doesn't "understand" the law. It generates responses based on its training. Without access to verified, up-to-date legal sources, it hallucinates dangerous advice.
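To make the point concrete, here is a minimal sketch of source-grounded answering: the bot only responds when it can cite a verified, dated passage, and declines otherwise. The `VERIFIED_SOURCES` table, topics, and passages are illustrative assumptions, not the city's actual data or system.

```python
# Hypothetical sketch: answer only from a curated table of verified,
# dated source passages; decline instead of guessing when no verified
# source covers the topic. All entries are illustrative.

VERIFIED_SOURCES = {
    # topic -> (passage, last_reviewed)
    "tips": ("Employers may not keep any portion of employee tips.", "2024-01"),
    "cash": ("Most NYC businesses must accept cash payments.", "2024-01"),
}

def grounded_answer(topic: str) -> str:
    entry = VERIFIED_SOURCES.get(topic)
    if entry is None:
        # No verified source: refuse rather than hallucinate.
        return "I can't answer that reliably. Please consult an official source."
    passage, reviewed = entry
    return f"{passage} (source last reviewed {reviewed})"
```

The key design choice is the default: when the source table has no entry, the safe behavior is refusal, not a best-effort guess.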
2. Real-World Testing
A simple test with basic labor law questions would have revealed these flaws before launch.
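Such a pre-launch check can be as simple as an automated smoke test that runs known-answer questions through the bot and flags missing key phrases. The sketch below assumes a hypothetical `ask_bot` function, stubbed here with canned answers so the harness runs on its own.

```python
# Hypothetical pre-launch smoke test. `ask_bot` stands in for the real
# chatbot call; here it returns canned text so the harness is runnable.

def ask_bot(question: str) -> str:
    canned = {
        "Can I take a share of my employees' tips?": "No, employers may not take employee tips.",
        "Can my shop refuse cash?": "No, most NYC businesses must accept cash.",
    }
    return canned.get(question, "")

# Each case pairs a question with phrases the answer must contain.
TEST_CASES = [
    ("Can I take a share of my employees' tips?", ["no", "tips"]),
    ("Can my shop refuse cash?", ["no", "cash"]),
]

def run_smoke_tests() -> list[str]:
    """Return the questions whose answers failed the check."""
    failures = []
    for question, required in TEST_CASES:
        answer = ask_bot(question).lower()
        if not all(phrase in answer for phrase in required):
            failures.append(question)
    return failures
```

A non-empty failure list should block launch, which is exactly the gate the NYC bot apparently never had.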
3. Feedback Loops
Users had been reporting errors for months. Nobody was listening.
4. Human Oversight
For high-risk topics (legal, medical, financial), the AI should never respond without human validation.
The Real Cost
Beyond public embarrassment, the consequences are concrete:
- Legal risk: A business following this advice exposes itself to lawsuits
- Trust erosion: Citizens lose faith in the city's digital services
- Wasted resources: Money invested in this project is lost
The Right Way to Deploy AI in Business
This NYC case is a perfect anti-example. Here's what you should do:
Define the Scope
AI shouldn't do everything. Identify tasks where it excels (simple factual answers, routing) and where it should hand off (legal advice, complex decisions).
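One way to enforce that boundary is a hard allowlist: the bot handles an explicit set of low-risk tasks and hands off everything else. The task names below are made up for illustration.

```python
# Illustrative scope check: only an explicit allowlist of low-risk
# tasks goes to the bot; anything unrecognized goes to a human.

IN_SCOPE = {"office_hours", "permit_forms", "contact_info"}

def route(task: str) -> str:
    """Return 'bot' for allowlisted tasks, 'human' for everything else."""
    if task in IN_SCOPE:
        return "bot"
    return "human"
```

The allowlist (rather than a blocklist) is the point: an unanticipated question defaults to the human path instead of the model.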
Build Guardrails
- Filter sensitive questions
- Redirect to humans for high-risk topics
- Verified, current data sources
Test in Real Conditions
Not just technical tests: real users, real questions, real edge cases.
Monitor Continuously
AI evolves, regulations change. A monitoring system catches drift before it goes viral.
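As a sketch, drift monitoring can start with something as simple as a rolling error rate over user feedback flags. The `FeedbackMonitor` class, window size, and threshold below are illustrative assumptions.

```python
from collections import deque

# Hypothetical drift monitor: keep a rolling window of user feedback
# flags and raise an alert when the flagged rate crosses a threshold.

class FeedbackMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.flags = deque(maxlen=window)  # True = answer was flagged wrong
        self.threshold = threshold

    def record(self, was_flagged: bool) -> None:
        self.flags.append(was_flagged)

    def alert(self) -> bool:
        """True once the recent flagged rate exceeds the threshold."""
        if not self.flags:
            return False
        rate = sum(self.flags) / len(self.flags)
        return rate > self.threshold
```

Had something like this been wired to the NYC bot's feedback channel, months of user error reports would have tripped an alarm instead of going unread.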
Why SMBs Need Experts
Large enterprises have dedicated teams for these deployments. SMBs don't have that luxury, but they face the same risks.
That's where specialized AI deployment partners come in. Rather than cobbling together a chatbot with ChatGPT and hoping it works, partner with experts who ensure:
- Deployment compliant with regulations
- Systems tested and validated
- Ongoing maintenance and evolution
The Moral of the Story
AI is a powerful tool. But like any powerful tool, it causes damage when misused. The NYC chatbot isn't a technology failure; it's an implementation failure.
The question isn't "should we use AI?" but "how do we use it correctly?"
