The U.S. White House just released a national AI policy framework. It’s got seven sections, four pages, and exactly zero sentences about what your compliance team should do on Monday morning.
Now, I’m not saying it’s irrelevant. It’s not. The framework covers children’s safety, intellectual property, workforce development, and federal preemption of state laws. Those are big, important things. And compliance professionals absolutely need to understand what’s being proposed at the national level, because it will shape the regulatory landscape we all operate in.
But here’s what struck me as I read through it: this framework is written entirely for legislators, regulators, and the AI industry. It’s about what government should do and what companies need (at least from the perspective of the government). It says nothing—truly nothing—about what organizations owe the people inside them when they deploy AI.
That’s the framework nobody wrote, and that’s the work that still needs to be done, ideally by us.
The gap between policy and practice
We’ve been here before. A big regulatory signal comes from Washington, and organizations immediately ask: What do we have to do? That’s the wrong first question. The better question is: What should we be doing regardless?
Because here’s the thing: your employees are already using AI. Today. They’re using it to draft reports, analyze data, summarize documents, and make decisions faster than they could on their own. Some of that use is company-sanctioned. Some of it isn’t. Most of it is happening in the exact space this national framework doesn’t touch: workflows, handoffs, daily decisions, and moments of individual judgment.
That’s the operational layer, and it’s where compliance either works or doesn’t.
National frameworks don’t operationalize themselves
The White House framework recommends regulatory sandboxes, sector-specific oversight, and industry-led standards. It explicitly says Congress should not create a new federal rulemaking body for AI. Instead, existing regulators should handle it within their domains.
That’s a reasonable position. But it also means that the practical governance work (AKA the how-does-this-actually-function-inside-our-organization piece) falls squarely on compliance teams. And not someday. Now.
If we’ve learned anything from watching organizations struggle with AI governance over the past couple of years, it’s this: the gap between “we have an AI policy” and “people know what to do with AI at work” is enormous. A policy sitting on your intranet is not governance. It’s a starting point. Governance happens when that policy gets translated into procedures, embedded into workflows, and made real at the task level because those are the places where people actually encounter risk.
What an internal framework should look like
So, what does the framework nobody wrote actually contain? The same things any effective compliance effort requires, applied specifically to AI:
Accountability. Not just “someone owns AI governance” but clarity about who makes decisions when AI surfaces a risk, who handles and funds remediation, and who is responsible when things go sideways. AI can flag anomalies, patterns, and control failures in seconds, but once the dashboard lights up, the hard questions begin. Those answers need to exist before you need them.
Procedures, not just policies. The national framework talks about policies. Your people operate in procedures. They need to know: When can I use generative AI for client work? What do I do if the output looks wrong? Who do I ask if I’m not sure? Procedures are where compliance becomes real. They’re cultural infrastructure, not administrative artifacts.
Clear expectations at the moment of decision. A training module about “AI ethics” and a policy manual are good starts. But what takes it to the next level is practical guidance that shows up when someone needs it. When they’re drafting a report, reviewing a vendor’s AI tool, or deciding whether to automate a process that historically required human judgment, give them actual examples and actionable tasks that help them make the right decision in the moment, or at least know where to turn for help.
Don’t wait for the regulators
The national policy conversation matters. But it’s moving at a legislative pace (read: slow), and AI is moving at… well, AI pace. If you’re waiting for regulatory clarity before building your internal governance, you’re already behind.
The good news? The foundational work doesn’t depend on which laws land. Anchor your values. Chart clear paths. Guide daily decisions. Adjust with confidence. Those anchors hold regardless of whether we end up with one federal standard or fifty state ones.
Washington wrote a framework for the U.S. Congress. Your organization needs a framework for your people. And that one’s yours to build.
Jennifer May, JD, CHC, Compliance Attorney, has spent 25+ years helping organizations replace compliance complexity with clarity.
Her work focuses on behavioral design and plain-language communication, turning policy into practical guidance that people actually use.