What the Law is Really Saying About AI

Written by Unreal Digital Group | Oct 8, 2025

Right now, it feels like all the AI headlines are either apocalyptic (“robots will take your job”) or blissfully naive (“AI will save us all”). 

The truth — and the law — live somewhere in between.

As lawyer and legal tech founder Matti Neustadt put it in a recent episode of the Get Real podcast: “The law isn’t about banning AI. It’s about accountability.”

Which means businesses have a choice here — you can wait for the rules to land and scramble, or you can get ahead of the curve and start showing your work now.

The Scope of AI Regulation

From a legal standpoint, AI isn't a free-for-all; it's a moving target. The law isn't trying to stop innovation; it's trying to catch up to it.

That’s why, as Matti explains, what we’re seeing now is just the warm-up. Governments around the world are racing to define what “responsible AI” actually means, and every region is taking a slightly different approach.

  • Europe: Will overregulate. Expect long checklists and box-ticking before you can move.
  • U.S. federal level: Will underregulate. Things will move but at the pace of a law school syllabus.
  • California: Will split the difference. Leaning forward but still grounded in business reality.

Her call? Don’t wait for Washington. Watch Europe and California. They’ll set the tone — and the timeline — for what’s coming everywhere else.

Responsibility Beats Compliance

If regulation is the “what,” responsibility is the “how.”

As Matti puts it: “It’s not a check-the-box exercise… it’s about responsibility and transparency.”

In other words, compliance alone won’t save you. Regulators don’t just want to see that you filled out the right forms — they want to see thinking. They want proof that you made intentional, well-documented choices about how your business uses AI and manages risk.

That’s a big shift. Traditional compliance says, “follow the rules.” AI governance says, “show your reasoning.”

So instead of chasing “perfect” processes, start asking better questions:

  • What are the biggest risks for our business?
  • How are we evaluating accuracy, bias, and data privacy?
  • Who’s responsible for reviewing or approving AI outputs?

Because when (not if) scrutiny comes, it won’t be enough to say you followed a checklist. You’ll need to show that you understood the assignment — that you built a culture of accountability, not just compliance.
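
If it helps to make that concrete, here's one way those three questions could turn into a standing record instead of a one-off conversation. This is a minimal sketch in Python; the class and field names are our own illustration, not anything Matti or any regulator prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskReview:
    """One documented answer to the three questions above."""
    use_case: str                # where AI touches the business
    top_risks: list[str]         # the biggest risks for our business
    evaluations: dict[str, str]  # how we check accuracy, bias, privacy
    approver: str                # who reviews or approves AI outputs
    reviewed_on: date = field(default_factory=date.today)

# Example entry (illustrative values only)
review = AIRiskReview(
    use_case="AI-drafted customer support replies",
    top_risks=["wrong answers sent to customers", "private data in prompts"],
    evaluations={
        "accuracy": "human agent approves before sending",
        "privacy": "no account numbers pasted into prompts",
    },
    approver="Head of Support",
)
print(review)
```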

The Real Risk With AI Isn’t What You Think

AI isn’t going to take all our jobs tomorrow. And it’s not going to turn your contracts into instant lawsuits.

The real risk? Laziness.

Because, as Matti says: “If you just plug a question in and accept its answer, you’re getting lazy. And that’s when people get replaced.”

It’s not about AI replacing people — it’s about people replacing their own judgment with AI. That’s where things go sideways. When businesses stop verifying, stop questioning, and stop documenting how they use AI, that’s when mistakes turn into liability.

Lazy AI use doesn’t look dangerous at first. It looks efficient — until it produces biased outputs, unverified answers, or sloppy decisions that can’t be explained later. Those aren’t AI problems. They’re governance problems.

Regulators won’t care that the machine got it wrong. They’ll care that you didn’t catch it. Accountability still sits with the human — and that’s where smart governance beats blind automation every time.

The One Rule Every Business Can Follow

“Whatever you do, just write it the fuck down. So there’s evidence.” (We think Matti boiled it down perfectly.)

That’s it. Seriously.

Documentation is the quiet hero of AI governance — not glamorous, but powerful. Because when something goes wrong (and eventually, something will), you need to show not just what you did, but how you thought about it.

Treat AI documentation like financial reporting. It’s not paperwork for paperwork’s sake — it’s proof that your company acted responsibly. Start simple:

  • What tools are you using?
  • What risks did you consider?
  • What processes or safeguards did you follow?
  • Who made the final call?

You don’t need to overengineer it. A few clear notes in a shared doc can save you months of legal pain later.
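
And if your team lives in code more than in docs, the same four notes can be appended to a simple log file. Here's a minimal sketch, assuming a small internal Python helper; the file name and fields are hypothetical, not any legal standard.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical shared log: one JSON line per AI decision, so there's evidence.
LOG_FILE = Path("ai_decision_log.jsonl")

def write_it_down(tool: str, risks: list[str],
                  safeguards: list[str], final_call_by: str) -> None:
    """Append one timestamped record answering the four questions above."""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                    # what tool we're using
        "risks_considered": risks,       # what risks we considered
        "safeguards": safeguards,        # what processes we followed
        "final_call_by": final_call_by,  # who made the final call
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

write_it_down(
    tool="LLM contract summarizer",
    risks=["missed clauses", "hallucinated terms"],
    safeguards=["lawyer reviews every summary against the original"],
    final_call_by="General Counsel",
)
```

The format doesn't matter; a spreadsheet or shared doc does the same job. What matters is that every entry is timestamped, names the decision-maker, and exists before anyone comes asking.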

If regulators — or courts — come knocking, your notes become your shield. They’re the story of how you made informed, ethical decisions in real time. And in the evolving world of AI law, that might be the most powerful defense you’ve got.

So What?

The legal landscape may still be shifting, but the direction is clear — accountability is the new currency. Businesses that build governance now will be the ones ready when the rules finally catch up.

Matti’s advice isn’t about compliance theater or overreacting to headlines. It’s about leadership. The smart move isn’t to wait for sweeping legislation — it’s to get your systems in order before someone tells you to.

Start now:

  • Think responsibility first, compliance second.
  • Track Europe and California for early signals.
  • Document everything. Seriously, everything.

Because at the end of the day, AI won’t get you in trouble, but ignoring accountability will.

🎧 Check out the full conversation with Matti on the Get Real podcast: The Verdict on AI from a Lawyer’s POV.