January 25, 1979, Flat Rock, Michigan
Robert Williams was 25 years old. He worked at the Ford Flat Rock plant, in a section where a robotic arm, a one-ton machine, moved casting parts. That day, the robot had malfunctioned. Williams climbed up to retrieve a part manually. The arm restarted. It struck him in the head.
Williams died instantly. His family sued Litton Industries, the robot's manufacturer. In 1983, a jury awarded them $10 million, reduced on appeal to $6 million. It was the first wrongful-death verdict involving an industrial robot.
Before safety sensors
What strikes us today is the total absence of what we now consider elementary. No presence sensors. No delimited safety zone. No automatic shutdown when a human enters the robot's workspace. The arm didn't "know" Williams was there. It couldn't know.
In 1979, industrial robots were blind machines. They executed programmed sequences with no awareness of their environment. The idea that a robot should detect humans and stop didn't yet exist as a standard.
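To make the gap concrete, here is a minimal sketch of the interlock logic that was absent: motion gated on a presence sensor, with a stop triggered by entry into the zone rather than by contact. Every class and function name here is hypothetical, invented for illustration; real cells implement this in certified safety controllers, not in application code.

```python
# Hypothetical sketch of a presence-sensing safety interlock.
# All names are illustrative placeholders; real implementations
# live in certified safety PLCs, not application-level Python.

import time

class PresenceSensor:
    """Stand-in for a light curtain or area scanner guarding the cell."""
    def human_detected(self) -> bool:
        raise NotImplementedError  # hardware read in a real system

class RobotController:
    """Stand-in for the robot's motion controller."""
    def emergency_stop(self) -> None:
        raise NotImplementedError
    def step(self) -> None:
        raise NotImplementedError  # execute the next motion increment

def operator_reset_pressed() -> bool:
    raise NotImplementedError  # physical reset button outside the cell

def safety_loop(sensor: PresenceSensor, robot: RobotController) -> None:
    """Gate every motion increment on the presence sensor.

    This is the check the 1979 cell lacked: the arm ran its
    programmed sequence regardless of who was inside the workspace.
    """
    while True:
        if sensor.human_detected():
            robot.emergency_stop()               # stop on entry, not on contact
            while not operator_reset_pressed():  # restart must be deliberate
                time.sleep(0.01)
        robot.step()
```

The two design guarantees in this loop, stopping when a human enters the zone and restarting only on a deliberate human action, are exactly the guarantees Williams did not have.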
The birth of a safety industry
Williams' death triggered a regulatory cascade. OSHA (the Occupational Safety and Health Administration) began developing specific standards for robots. In 1986, the Robotic Industries Association published the first American robot safety standard, ANSI/RIA R15.06.
Today, collaborative robots ("cobots") are designed from the start to coexist with humans. Force sensors, 3D vision, instant stop on contact: everything missing in 1979 has become standard.
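As a rough illustration of "instant stop on contact": in simplified form it is a force-threshold check run on every control cycle. The threshold below is an arbitrary example, not a normative value; ISO/TS 15066 defines the actual per-body-region limits, and real controllers enforce them in hardware at kilohertz rates.

```python
# Simplified, hypothetical illustration of power-and-force-limited
# collaboration, the cobot mode described above. All names and the
# threshold are illustrative; ISO/TS 15066 specifies the real limits.

CONTACT_FORCE_LIMIT_N = 140.0  # example value only, not a normative limit

def within_limit(measured_force_n: float) -> bool:
    """Return True if motion may continue, False if the robot must stop."""
    return measured_force_n <= CONTACT_FORCE_LIMIT_N

def control_cycle(read_force_sensor, stop_motion, continue_motion) -> None:
    """One iteration of the force-monitoring loop."""
    force = read_force_sensor()  # joint torque or wrist force/torque sensor
    if within_limit(force):
        continue_motion()
    else:
        stop_motion()            # halt within the same control cycle
```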
The trial as turning point
Beyond the technical aspects, the Williams v. Litton trial set a crucial legal precedent: a robot's manufacturer can be held liable for a human death, even when the machine is operated by a third party (in this case, Ford). This application of product-liability doctrine has shaped the entire automation industry.
Manufacturers understood that investing in safety cost less than lawsuits. A cynical calculation, perhaps, but effective.
From Williams to AI
In 2026, we face a similar question but at a different scale. AI systems that drive cars, diagnose diseases, and make credit decisions: when they fail, who is responsible? The model's manufacturer? The company that deployed it? The end user?
Robert Williams died because a robot had no presence sensors. Tomorrow, someone might die because an AI failed to recognize an edge case. The technology changes; the question of responsibility remains.
What we remember
Robert Williams' story isn't a macabre curiosity. It's a reminder that every technology goes through a phase where its dangers are underestimated, its safeguards nonexistent, its victims involuntary pioneers.
We're probably in that phase for generative AI. The question isn't whether serious accidents will happen, but whether we'll learn fast enough to limit the damage.
