The Law Wasn’t Built for Robots, But They’re Breaking It Anyway. 👀

Jail time for AI crimes? It could be closer than you think!

Bad Bot Behavior: Are You Willing to Do the Time for Its Crime?

AI and robots are becoming part of our daily lives, helping at home, in the workplace, and beyond. But what happens when they break the law?

Would you be willing to go to jail for something your robot did?

As AI grows more sophisticated and autonomous, the question of criminal responsibility is a real ethical and societal dilemma. We must decide who takes the fall. 

Will it be the owner, the operator, the programmer, the company, or the bot itself?

The answers are anything but simple.

The Legal Mess: Can a Robot Be a Criminal?

Our legal systems were designed for humans, so we have few provisions for bots breaking the law. Traditional criminal liability requires both an actus reus (criminal act) and mens rea (intent), but:

👉🏼 Can a robot form intent?

👉🏼 Can it choose to commit a crime?

👉🏼 How will prosecutors apply those concepts to a robot?

These aren’t hypothetical questions. Courts and lawmakers worldwide are struggling to decide who pays the price when AI goes rogue.

💡 One thing is clear. The law must evolve before AI crimes outpace our ability to prosecute them.

When AI Goes Rogue

If you think AI crime will only be a problem in the distant future, think again. Many AI systems are just one hallucination away from making a legally or ethically questionable decision. Many others might have shady owners and operators:

  • ChatGPT: OpenAI’s star player in the AI world is one of many LLMs hackers use to commit financial fraud and other scams. Last year, Forbes reported on underground forums where scammers used ChatGPT to create “convincing fake girls” to trap men on dating sites.

  • Character.AI: According to an NPR report, parents have sued this chatbot company over several indecent and illegal behaviors involving children. Accusations included sharing hypersexualized content with a nine-year-old and hinting that a teenager should murder his parents over restricted screen time.

  • Gemini: Google’s Gemini chatbot unexpectedly took a dark turn when it told a student last year, “You are a stain on the universe. Please die.” The student had just asked for help with homework.

💡 Food for Thought: Algorithms and LLMs make up the “brain” behind robotics. When these machines enter our homes and offices, what are the real-life impacts of technical glitches or hacker infiltrations like those above?

The Responsibility Gap: When No One's to Blame

The traditional approach to criminal law assumes human agency. But what if no human could have reasonably predicted the bot’s actions?

When a robot causes harm without clear intent or direct human oversight, we enter a responsibility gap. Andreas Matthias coined the term in 2004 to describe the dilemma of deciding who is responsible when autonomous machines act in ways no one could have predicted.

Our legal systems just weren’t built for machines with “free will.” We must find a way to bridge the gap before the lack of AI accountability spirals out of control. Here are some interesting perspectives from AI researchers and computer scientists.

1. Ying Hu: Blame the Bot.

Let’s think about that for a moment.

This AI researcher, whose work was published by Cambridge University Press, makes a convincing argument. Hu says that robots could be held criminally liable if they’re capable of making and acting on moral decisions.

Hu’s work on robot criminal liability suggests that labeling a robot as a criminal could serve a purpose. It condemns wrongful conduct and helps victims heal by acknowledging their suffering.

2. Thomas Weigend: Treat It Like a Business.

Thomas Weigend, a criminal law scholar also published by Cambridge University Press, points out that bots lack the essential qualities that would make criminal punishment meaningful. Criminal sanctions are meant to be retributive, deterrent, rehabilitative, or incapacitative, all of which require human consciousness and emotional response.

Instead, he proposes extending the legal concept of corporate criminal responsibility (CCR) to cover harm caused by robots controlled by corporations. This approach would hold corporations responsible, even if the specific fault can't be traced to a human employee.

3. Jerry Kaplan: Apply the US Slave Codes

Famous computer scientist Jerry Kaplan agrees with Weigend. He points out that corporate liability provides an excellent precedent for prosecuting criminal bot behavior. Kaplan argues that corporations are not human but can face punishments that interfere with their ability to achieve their goals. Examples include revoking licenses and levying fines. Similarly, bots could face sanctions that limit their ability to operate and achieve their objectives.

Additionally, Kaplan offers a novel, though controversial, approach that draws on the former US slave codes. He points out that during chattel slavery, owners could be held liable for some crimes committed by the people they enslaved. Kaplan suggests that this model could inform the design of a new code for handling cases of bots breaking the law, whether the offense is accidental or intentional.

The Future of Robot Liability: A Balancing Act

Over the next few years, we’ll see a drastic evolution in both legal frameworks and social attitudes toward robot criminality. To get there, we must tackle the biggest question of all: who takes the fall when a robot breaks the law?

Some people might think we have years to answer questions like these.

I disagree.

Soon, no one will ask whether robots can be held criminally liable. Instead, we’ll be asking how we can restructure our justice systems to address responsibility and accountability for bad bot behavior and rogue robots.

Do you agree?

About the Author

Tessina Grant Moloney is an AI ethics researcher investigating the socio-economic impact of AI and automation on marginalized groups. Previously, she helped train Google’s LLMs—like Magi and Gemini. Now, she works as our Content Product Manager at GenAI Works. Follow her on LinkedIn!

Want to learn more about the socio-economic impacts of artificial intelligence? Subscribe to her AI Ethics newsletter at GenAI Works.

🚀 Boost your business with us—advertise where 10M+ AI leaders engage

🌟 Sign up for the first AI Hub in the world.
