Navigating AI Regulation: What CFOs Need to Know

Also: A checklist to assess whether your AI use puts you in the high-risk zone

The last couple of months have been packed with major developments in AI regulation. If you haven’t had time to follow them closely, don’t worry—I have.

Europe is pushing ahead with enforcement. Despite industry calls for a delay, the EU AI Act is moving forward, with the first provisions already being rolled out. It’s the most comprehensive AI law on the books anywhere—and it won’t just affect European companies.

Meanwhile, in the U.S., the federal government removed a proposed ban on state-level AI laws. That means states are free to continue drafting their own rules. Companies operating across states will need to comply with a growing patchwork of AI legislation, some already in effect.

The rules are inconsistent, evolving fast, and often hard to interpret. But the liability is real. 

That’s why I’m dedicating the July newsletter series to one timely, high-stakes topic: AI governance and regulation. Each issue will break down a different piece of the puzzle, starting today with the big picture and what it means for finance and compliance leaders.

AI Regulation Is Here: Ignoring It Can Be Costly

The last two months have marked a turning point in global AI regulation. The landscape is no longer speculative—it's operational. And it’s not just Europe setting the tone. The U.S. is now moving forward with a fragmented, state-led model, and several other countries are racing to implement their own frameworks.

The result is a complex, overlapping patchwork of rules that makes AI compliance tricky and non-compliance strategically risky.

Europe: The First Fully Formed AI Law Is Rolling Out

The European Union’s AI Act, adopted in May 2024, is the first comprehensive legal framework for AI in the world. It uses a risk-tiered approach to regulate AI systems:

  • Unacceptable risk (e.g., social scoring) is banned outright.

  • High-risk systems (e.g., credit scoring, fraud detection, HR profiling) must meet strict documentation, transparency, auditability, and oversight standards.

  • General-purpose AI (like ChatGPT or Claude) will face model-level compliance rules.

Timeline:

  • August 2024: The Act enters into force.

  • August 2025: Requirements for general-purpose AI models go live.

  • August 2026: Obligations for high-risk use cases take effect.

What it means: Finance teams using AI in high-stakes workflows (forecasting, risk assessment, internal controls, employee scoring) are likely subject to high-risk obligations—including registration in a public EU database, required human oversight, model documentation, and regular audits.
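
To make that mapping concrete, here’s a minimal Python sketch of the tiering logic, illustrative only and not legal advice: the use-case names and tier assignments are my own simplification of the examples above, not the Act’s formal annexes.

```python
# Illustrative only -- not legal advice. Use-case names and tier
# assignments simplify the examples in this section; the Act's annexes
# and your deployment context determine the real classification.

RISK_TIERS = {
    "social_scoring": "unacceptable",   # banned outright
    "credit_scoring": "high_risk",
    "fraud_detection": "high_risk",
    "hr_profiling": "high_risk",
    "employee_scoring": "high_risk",
    "general_purpose_chat": "gpai",     # e.g., ChatGPT, Claude
}

HIGH_RISK_OBLIGATIONS = [
    "registration in a public EU database",
    "documented human oversight",
    "model documentation",
    "regular audits",
]

def obligations_for(use_case: str) -> list[str]:
    """Look up the example obligations attached to a use case's tier."""
    tier = RISK_TIERS.get(use_case, "minimal")
    if tier == "unacceptable":
        raise ValueError(f"'{use_case}' is prohibited outright under the Act")
    return HIGH_RISK_OBLIGATIONS if tier == "high_risk" else []

print(obligations_for("credit_scoring"))        # all four high-risk obligations
print(obligations_for("general_purpose_chat"))  # [] -- model-level rules apply instead
```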

Penalties: Fines reach up to €35 million or 7% of global turnover—whichever is higher.

United States: A Fragmented, Rapidly Shifting Compliance Landscape

The U.S. does not have a federal AI law, but that doesn't mean regulation isn't happening.

In July 2025, the U.S. Senate rejected a proposed 10-year ban on state-level AI regulation. This paves the way for an increasingly state-led model, where AI rules will differ by geography, adding enormous complexity for companies operating nationally.

Federal Activity:

  • Executive Order on AI (2023): Requires federal agencies to assess and govern their AI use.

  • NIST AI Risk Management Framework: Provides voluntary guidelines now being used in procurement and internal controls.

  • FTC, SEC, CFPB: Actively investigating AI use in financial services for transparency, bias, and consumer deception.

Finance teams in the U.S. face a scenario where federal guidance is soft but growing, while states are quickly enacting legally binding requirements.

State-by-State Regulation: 

Several states have moved aggressively on AI. Here’s a summary of where things stand:

  • California: regulates public sector AI, transparency, and data sourcing. Compliance requires disclosure of AI-generated content, risk assessments, and data documentation for public use cases.

  • New York: targets high-risk AI systems in employment, finance, and scoring. Compliance requires advance user notice, opt-outs, human review, and mandatory bias audits every 6–18 months; violations can expose firms to private lawsuits.

  • New York City: regulates automated employment decision tools (AEDTs). Compliance requires annual bias audits, public reporting, and employer accountability.

  • Utah: covers disclosure of AI-generated content and liability standards. The state has set up an AI office and mandates labeling of generative content.

  • Tennessee: protects against deepfakes (e.g., voice cloning, impersonation), with fines and criminal penalties for unauthorized AI use in likeness manipulation.

  • Colorado: covers consumer protection and algorithmic decisioning. Compliance requires AI impact disclosures and vendor audit obligations.

Consequences of non-compliance: Firms violating these regulations may face fines, regulatory enforcement, litigation by individuals, or loss of public contracts.

Other Markets: 

  • Australia: Confirmed in June that it will proceed with a national AI law; a regulatory framework is expected later in 2025.

  • Canada: The Artificial Intelligence and Data Act (AIDA) is progressing through Parliament and may pass this year. It mirrors many EU provisions, including enforcement for “high-impact” systems.

  • UK: Avoiding prescriptive laws, but regulators like the FCA are pressuring firms to implement audit frameworks and risk controls, especially in financial services.

Takeaway: Even companies headquartered in the U.S. or UK will need to comply with multiple jurisdictions if they operate internationally, use global vendors, or handle data tied to overseas users.

Conclusion

AI compliance is no longer optional. Regulatory frameworks are already in effect across Europe, gaining momentum at the U.S. state level, and taking shape around the world.

Finance leaders need to understand how these rules affect their tools, data, decisions, and ultimately, their accountability.

Is Your AI Use High-Risk? A Self-Assessment for Finance Teams

Use this short diagnostic to assess whether your company’s AI use falls under active or upcoming regulation. (A scripted version you can adapt follows question 6.)

1. Does your company operate in, serve clients in, or process data from any of the following jurisdictions?

  • European Union

  • New York State or New York City

  • California, Colorado, Utah, or Tennessee

  • Canada or Australia

Yes → Your AI use likely falls under at least one binding or soon-to-be-enforced AI law. You may need documentation, bias audits, registration, or risk disclosures depending on the use case.
No → You’re likely out of scope for now, but voluntary frameworks (like the NIST AI Risk Management Framework) may still be worth adopting to maintain investor and regulator trust.

2. Do any of your AI tools influence decisions in finance, compliance, or HR?

Examples: budgeting, forecasting, risk scoring, employee evaluation, credit, pricing

Yes → These are typically considered “high-risk use cases.” You may be subject to registration, explanation requirements, and auditability standards.
No → Lower regulatory exposure—but still monitor vendor usage and internal experimentation.

3. Do you use third-party AI tools embedded in finance, HR, or planning platforms (e.g., ERP systems, Excel Copilot, ChatGPT, reporting software)?

Yes → You must assess the specific version, subscription level, and terms of use—many tools vary in their compliance posture by license. Regulators increasingly hold companies accountable for how vendor tools are used internally, even if not built in-house.
No → If you rely only on internally built or human-driven tools, vendor risk is limited, but internal controls are still key.

4. Do you have formal documentation or an internal review process for the AI tools in use?

This includes an AI policy, risk assessments, approval workflows, and audit logs. (A sketch of a minimal inventory record follows this question.)

Yes → You are aligned with best practices and early regulatory expectations. Keep these materials updated and accessible.
No → You’re at risk for non-compliance or audit failures, even if the tools seem “low impact.”
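
If you’re starting from zero here, the sketch below shows what one record in an internal AI tool inventory might look like. The field names are my own suggestion, not any regulator’s schema.

```python
# One hypothetical entry in an internal AI tool inventory. Field names
# are suggestions, not a regulatory schema; adapt them to your audit needs.

ai_tool_record = {
    "tool": "Excel Copilot",                   # example tool from question 3
    "vendor": "Microsoft",
    "use_case": "budget variance summaries",
    "decision_impact": "advisory only",        # vs. "determinative"
    "data_shared": ["aggregated financials"],  # no PII, no customer data
    "approved_by": "CFO",
    "approval_date": "2025-07-01",
    "risk_rating": "low (human reviews all outputs)",
    "review_cadence": "quarterly",
}
```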

5. Can employees currently use generative AI tools (e.g., ChatGPT, Claude) without policy or review?

Yes → You have “shadow AI” exposure. If sensitive data is uploaded or decisions are influenced without oversight, you may face liability.
No → Good—you’ve likely implemented usage guardrails, which regulators and auditors increasingly expect.

6. Are any of your AI-generated outputs public-facing or used in investor/customer communications?

Examples: financial summaries, pricing explanations, risk assessments, marketing copy

Yes → These uses increase exposure under emerging disclosure and accuracy laws (e.g., New York, EU). You may need to prove the content’s origin and maintain an audit trail.
No → Lower immediate exposure—but internal documentation is still advised.
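
And here is the scripted version of the checklist promised above: a minimal Python sketch, assuming all you want is a quick triage of your six answers. Treat it as a conversation starter, not a compliance determination.

```python
# A minimal scripted version of the six-question diagnostic above.
# A starting point for discussion, not a compliance determination.

from dataclasses import dataclass

@dataclass
class AISelfAssessment:
    regulated_jurisdiction: bool  # Q1: EU, NY/NYC, CA, CO, UT, TN, Canada, Australia
    high_stakes_decisions: bool   # Q2: AI influences finance, compliance, or HR decisions
    third_party_tools: bool       # Q3: embedded vendor AI (ERP, copilots, chatbots)
    has_documentation: bool       # Q4: AI policy, risk assessments, audit logs
    ungoverned_gen_ai: bool       # Q5: employees use generative AI without review
    public_facing_outputs: bool   # Q6: AI content reaches investors or customers

    def flags(self) -> list[str]:
        out = []
        if self.regulated_jurisdiction and self.high_stakes_decisions:
            out.append("Likely high-risk: expect registration, audits, oversight")
        if self.third_party_tools:
            out.append("Check vendor versions, licenses, and terms of use")
        if not self.has_documentation:
            out.append("Documentation gap: policy, risk assessments, audit logs")
        if self.ungoverned_gen_ai:
            out.append("Shadow AI exposure: add usage guardrails")
        if self.public_facing_outputs:
            out.append("Disclosure and accuracy rules may apply: keep an audit trail")
        return out

# Example: a New York team using AI-assisted forecasting plus ad hoc ChatGPT use
team = AISelfAssessment(True, True, True, False, True, False)
print("\n".join(team.flags()) or "Lower exposure: keep monitoring")
```

If several flags fire at once, that is a strong signal to prioritize the policy and documentation work covered in the upcoming issues.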

Final Note

If this checklist surfaces more questions than answers, you’re not alone. One of the clearest paths to safer AI adoption, especially in finance, is developing a practical, fit-for-purpose AI usage policy.

Closing Thoughts

AI regulation is no longer a “watch and wait” situation. The laws are taking shape, and they’re starting to impact how finance teams operate, automate, and even report.

In the coming weeks, I’ll break down:

  • Which finance workflows are most exposed under the new AI rules

  • How to build a lightweight, effective AI governance process

  • What to include in your company’s first AI policy—and what to avoid

If you need help drafting or reviewing your company’s AI policy, feel free to reach out—I’m actively working on tools and frameworks that can help finance and compliance teams move quickly without overcomplicating the process.

See you next Tuesday, with a practical guide to identifying high-risk AI use cases inside your finance team.

We Want Your Feedback!

This newsletter is for you, and we want to make it as valuable as possible. Please reply to this email with your questions, comments, or topics you'd like to see covered in future issues. Your input shapes our content!

Want to dive deeper into balanced AI adoption for your finance team? Or do you want to hire an AI-powered CFO? Book a consultation! 

Did you find this newsletter helpful? Forward it to a colleague who might benefit!

Until next Tuesday, keep balancing!

Anna Tiomina 
AI-Powered CFO
