Beyond the AI Policy: Navigating AI Regulation
Also: The Unspoken Truth About the U.S. AI Action Plan

Welcome back to Balanced AI Insights — where we help finance leaders make sense of the AI landscape from a practical, implementation-focused perspective.
In previous issues, we covered how to create an internal AI policy and what the emerging U.S. vs. EU regulation split means for finance teams.
Now it’s time to get specific.
This week’s developments in the U.S., Europe, and China mark a turning point. The AI landscape is no longer just about innovation — it’s a geopolitical race. Governments are moving quickly to shape, control, and lead the global AI market through regulation and strategic policy.
This issue is your short, CFO-ready update on what’s changed and what you actually need to do next.
AI as Strategy: How U.S., EU, and China Policies Will Affect You
What’s New in AI Regulation (July 2025)

United States: Innovation with Strings Attached
The White House just released America’s AI Action Plan, outlining over 90 federal initiatives to accelerate AI development, infrastructure, and trade. Accompanying this are three executive orders:
Fast-tracking data center permitting to support AI infrastructure
A new AI export policy promoting U.S. technology standards globally
A directive banning “Woke AI” in federal procurement — requiring ideological neutrality in AI systems used by the government
Together, these moves shift the U.S. position: influence comes not through regulation, but through procurement incentives, global standards-setting, and funding flows.
There’s still no comprehensive federal AI law — but policy is being shaped through executive power and agency mandates.
European Union: Real Compliance Begins
The General-Purpose AI (GPAI) Code of Practice released in July is now in effect. It provides a voluntary framework for major AI providers — like OpenAI, Anthropic, or Mistral — to align with the EU AI Act ahead of full enforcement.
GPAI compliance begins August 2, 2025
High-risk applications — including forecasting, risk modeling, hiring, and reporting — face mandatory requirements starting in 2026:
Traceability
Explainability
Human oversight
Detailed documentation
This is the strictest AI legislation in force today, and it reaches beyond Europe's borders: if your tools are built or hosted in the EU, your data flows through it, or you sell to EU customers, this law affects you now.
China: A New AI Act, Passed This Week
China has just formally passed its national AI law, tightening an already strict environment:
Mandatory algorithm registration
Emphasis on ideological neutrality and state oversight
Local data storage and audit authority for regulators
While details are still emerging, China’s position is clear: AI must serve national interests — and businesses will be monitored accordingly.
If you operate or sell in China, you should already be planning for state audits and infrastructure localization.
What This Means for You as a Finance Leader
You don’t need to want AI regulation for it to become your responsibility.
Whether you’ve deployed a custom GPT, enabled AI in NetSuite, or used ChatGPT to draft reports, you're in scope. As a finance leader, you’re accountable for:
Financial reporting integrity
Risk management
Vendor governance
Data protection
AI touches all four.
So what should you do? That depends on where — and how — you operate.
If You Operate Internationally: Comply with the Strictest Rule
The EU AI Act is now the default standard for responsible AI use — even for companies based elsewhere. If you work with EU clients, tools, or employees:
Adopt EU-style governance: explainability, auditability, documentation
Treat the EU risk classification system as your internal playbook
Prepare for your vendors to enforce these requirements, even before regulators do
If you meet EU-grade compliance, you’re likely covered everywhere else.
If You’re Choosing Where to Build or Expand: Know the Tradeoffs
Europe offers regulatory certainty, but it’s now the least innovation-friendly jurisdiction for fast AI rollout.
The U.S. and parts of Asia still allow for more experimentation, with lower compliance costs (for now).
AI-intensive startups and global finance hubs may increasingly shift activity outside the EU for flexibility.
Compliance is now part of your location strategy — not just legal housekeeping.
If You’re U.S.-Based Only: Monitor State Laws and Govern Yourself
With no federal law, you’re operating in a patchwork of emerging state legislation and voluntary frameworks:
California, Colorado, and Connecticut are leading the charge with laws on bias audits, disclosures, and transparency
The SEC and FTC have warned they’ll use existing enforcement tools (fraud, misrepresentation, privacy) against AI misuse
And with no official rulebook, your internal governance is your only shield.
AI policy was just the beginning. What’s happening now is enforcement, expectation-setting, and geopolitical pressure — all of which CFOs are expected to navigate. You need to know where the risks are, how they affect financial operations, and what steps you’ve taken to stay ahead.
Deeper Dive: The Unspoken Truth About the U.S. AI Action Plan
The U.S. still doesn’t have a formal AI law — and that lack of structure is exactly what makes this moment more risky for finance leaders, not less.
While some organizations are waiting for “official” rules before acting, Washington is already influencing AI behavior through executive orders, federal procurement standards, and international policy. In July, the White House released the AI Action Plan, backed by more than 90 initiatives, including fast-tracked infrastructure, an export agenda, and new procurement criteria.
None of this is technically legislation — but it’s shaping what “responsible AI” means in practice.
No law doesn’t mean no liability
In the absence of a federal framework, regulators aren’t standing still. They’re applying existing laws and enforcement powers to AI-related issues.
Here’s how:
The FTC can act if AI outputs are deceptive, violate consumer privacy, or misuse personal data
The SEC may investigate disclosures if AI-driven forecasts or analytics misrepresent material information
State attorneys general are already pursuing AI bias cases under civil rights and employment laws
So even if your use of AI isn’t explicitly illegal under a federal statute, misusing it can still expose your company — and your leadership — to enforcement.
If you missed it: Sam Altman, OpenAI’s CEO, recently confirmed that ChatGPT conversations aren’t private and can be used in court. That means any sensitive data shared — even in a casual experiment — becomes part of a searchable, discoverable record.
In finance, where AI is increasingly embedded in forecasting, reporting, risk analysis, and hiring decisions, the margin for error is thin.
This is where governance becomes essential
The absence of clear federal regulation doesn’t remove your obligation — it shifts it inward.
It means you need to self-regulate, and the standard for “reasonable oversight” is rising fast.
If your team is:
Feeding internal data into public LLMs without safeguards
Using generative tools for analysis or reporting without clear review steps
Working with vendors who change their compliance stance without informing you
…then your governance gaps may not show up until there’s a mistake — and at that point, it’s too late to say “we were waiting for guidance.”
A working AI policy isn’t about writing documents no one reads. It’s about making decisions today that keep your team, your data, and your financials accountable tomorrow.
That includes:
Documenting where and how AI is used
Setting clear rules for when human oversight is required
Training your team to understand not just how to use AI, but when not to
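To make the first of those steps concrete, here is one possible shape for an AI-use register entry — a hypothetical sketch, not a prescribed format; the tool names, fields, and review cadence are illustrative and should be adapted to your own governance policy:

```yaml
# Hypothetical AI-use register entry (adapt fields to your own policy)
- tool: "ChatGPT (public, consumer tier)"
  used_by: "FP&A team"
  use_case: "Drafting commentary for monthly board reports"
  data_classification: "Internal only — no customer PII, no MNPI"
  human_review: "Required before any output leaves the team"
  vendor_terms_reviewed: "2025-07-15"
  risk_tier: "Medium (EU AI Act mapping: limited risk)"
  next_review: "2025-10-15"
```

Even a lightweight register like this gives you something to show a regulator, auditor, or board member when they ask how AI is being used — which is precisely the question the enforcement environment described above is starting to pose.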
If you're in the process of shaping your internal AI governance or pressure-testing your current approach, feel free to reach out. I’m always happy to compare notes or share what I’m seeing across other finance teams.
Closing Thoughts
The conversation around AI is shifting — governments are moving from broad policy statements to concrete enforcement, and finance leaders are now at the center of that transition.
As a finance leader, you’re not just navigating cost and efficiency; you’re shaping how your organization handles data, risk, and accountability in a world where AI touches everything.
The path forward isn’t about waiting for regulation to settle. It’s about acting with clarity now.
With this issue, we’re wrapping up our series on AI governance, but this won’t be the last word. As AI regulation continues to evolve, finance leaders will need to revisit these questions, and we’ll keep coming back to them here.
We Want Your Feedback!
This newsletter is for you, and we want to make it as valuable as possible. Please reply to this email with your questions, comments, or topics you'd like to see covered in future issues. Your input shapes our content!
Want to dive deeper into balanced AI adoption for your finance team? Or do you want to hire an AI-powered CFO? Book a consultation!
Did you find this newsletter helpful? Forward it to a colleague who might benefit!
Until next Tuesday, keep balancing!
Anna Tiomina
AI-Powered CFO