🧠 How State AGs are Shaping AI Regulation
What firms need to know
Welcome to Attorney Intelligence, where we break down the biggest advancements in AI for legal professionals every week.
Only a handful of states like California, Colorado, and Utah have passed laws that directly govern AI. The rest are relying on older statutes, using privacy, consumer protection, and anti-discrimination laws to fill in the gaps.
These laws weren’t written with machine learning in mind, but they’re being applied anyway.
That leaves companies embedding AI into their products in a tough spot: they’re innovating in areas where the rules are still being written.
Law firms, for their part, are no longer just interpreting laws: they're tasked with guiding clients through a regulatory gray zone with clarity and caution.
In this week’s Attorney Intelligence, we’ll explore:
- Why state AGs are stepping in to regulate AI without waiting for federal laws
- How risk management frameworks like NIST and ISO are becoming legal safeguards
- Why oversight and verification matter more than technical performance
- What law firms should be doing today to help clients future-proof their AI practices
Let’s dive in.
Frameworks are your first line of defense
For law firms, the best place to start is with a recognized risk management framework like NIST AI RMF or ISO/IEC 42001. Although these frameworks started out as suggestions, they’re emerging as the standard lens through which regulators assess AI risk.
In some states, demonstrating alignment with these frameworks can even act as a defense in the face of enforcement.
Encouraging clients to adopt one of these systems is a way to future-proof their operations. It shows they’re acting in good faith and with structure, even if the laws remain unsettled.
Oversight and auditability matter more than ever
Once a framework is adopted, the next priority is building oversight into AI deployments.
It’s not enough to have principles on paper. There also needs to be a clear process for reviewing how the AI behaves and who’s accountable when it doesn’t go as planned.
Auditable systems with traceable decision paths are becoming essential, especially in regulated environments.
Regulators are already starting to ask: can this output be explained, verified, or challenged? If the answer is no, that’s a liability.
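Traceability is easier to demonstrate when every AI output is captured in a structured record with a clear review trail. Here is a minimal sketch of that idea in Python; the names (`AIOutputRecord`, `AuditLog`) and fields are illustrative assumptions, not drawn from NIST AI RMF, ISO/IEC 42001, or any specific product:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List, Optional

# Hypothetical minimal audit record: each AI output is stored with enough
# context that it can later be explained, verified, or challenged.
@dataclass
class AIOutputRecord:
    model_version: str            # which model produced the output
    prompt: str                   # the input that led to the output
    output: str                   # what the system actually produced
    reviewer: Optional[str] = None  # who signed off; None = not yet reviewed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    """Append-only log so decision paths stay traceable."""

    def __init__(self) -> None:
        self._records: List[AIOutputRecord] = []

    def record(self, rec: AIOutputRecord) -> int:
        """Store a record and return its id."""
        self._records.append(rec)
        return len(self._records) - 1

    def sign_off(self, record_id: int, reviewer: str) -> None:
        """Mark an output as human-verified."""
        self._records[record_id].reviewer = reviewer

    def unreviewed(self) -> List[int]:
        # Unverified outputs are exactly the liability regulators probe for.
        return [i for i, r in enumerate(self._records) if r.reviewer is None]

    def export(self) -> str:
        """Serialize the full trail, e.g. for discovery or an audit."""
        return json.dumps([asdict(r) for r in self._records], indent=2)
```

The point isn't this particular data structure; it's that "can the output be explained, verified, or challenged?" becomes answerable only if something like this record exists before the question is asked.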
Train teams to review and verify AI outputs
Even the best systems fail when humans don’t know how to catch mistakes.
Case in point: K&L Gates recently made headlines after a judge criticized lawyers for submitting hallucinated case law pulled from an AI tool. It wasn’t a failure of intent; it was a failure of verification.
Training teams to double-check outputs, recognize red flags, and ask the right questions can go a long way in reducing exposure.
It’s one of the simplest but most overlooked steps companies, and the firms advising them, can take.
What law firms should be advising right now
This isn’t just about risk mitigation; it’s about designing defensibility into your client’s AI lifecycle.
This includes understanding where data comes from, tracking model outputs, and documenting edge cases. Firms should partner with their clients to help them build with eyes open.
They can also lead by helping clients ask better questions:
- Is the model fine-tuned or off-the-shelf?
- Are outputs logged and reviewed?
- Does someone own the responsibility for final decisions?
These aren’t just technical details; they’re legal ones as well.
Regulation moves, but responsibility stays
While AI regulation is moving fast, enforcement is moving faster. In many cases, what matters most is not whether a company followed a clear law but whether it acted responsibly.
This is where lawyers come in.
If firms can help clients stay ahead of the law rather than just chase it, they’ll be shaping how AI evolves, not just reacting to it.
The smartest firms don’t just interpret regulation; they write the playbook for responsible AI use.
Legal Bytes
- Legora raises $80M Series B led by Iconiq Growth, signaling growing investor appetite for legal tech platforms that embed generative AI directly into legal workflows.
- The New Jersey Supreme Court ruled that lawyers can lawfully buy Google ads that show up when potential clients search for a competitor’s name, opening the door for more aggressive digital advertising in legal marketing.
- A Delaware judge has delayed the long-running case between legal publisher Thomson Reuters and Ross Intelligence to give Ross time to pursue appeals. The case could set a precedent for how copyright law applies to AI systems trained on proprietary data.
Looking to improve your firm's efficiency?
Book a demo to see how PointOne can help maximize billable time and cut out admin at your firm.
Thanks for reading and I'll see you next week,
Adrian
What did you think of this issue?