Welcome back to Balanced AI Insights!

If you’ve ever sat in a board meeting and heard the vague directive “Use AI,” you’re not alone. I’ve been in that spot myself as a CFO — getting pressure from the top without clear guidance on where to start. It can feel overwhelming, and honestly, I remember thinking back then: this is more noise than progress.

In this edition, I’ll share how I would respond to that same mandate today — with a practical roadmap you can use to move from pressure to progress.

📢 Upcoming Events for Finance Leaders – Save the Dates!

The AI Finance Club continues to be my favorite community for finance leaders exploring AI. If you’re not part of it yet, you’re missing out on some of the most practical, no-hype conversations about how finance teams can actually use AI.

I’m one of the experts in the Club: every month I host the Fractional CFO Corner, a dedicated workshop for fractional CFOs. It’s a great way to connect with me regularly, share challenges, and learn hands-on AI strategies together.

Nicolas Boucher leads the community. Don’t miss his live session on how AI is reshaping the CFO role and the tools top finance leaders are already adopting.

When: Friday, September 26, 2025 | 11:00 AM ET / 5:00 PM CEST

Seats are limited—grab yours now!

The Balanced View: Turning ‘Use AI’ Into a CFO Strategy

A recent Accordion survey showed that 98% of private equity sponsors are telling their portfolio CFOs to prioritize AI—but 68% of CFOs admit they don’t know where to start.

I know exactly how that feels. Two years ago, I was in that position myself. My CEO told the executive team: “Use AI.” Nobody knew what to do with that mandate. I started experimenting, and back then it often felt like wasted time. The tools were clunky, the results inconsistent, and it was hard to connect experiments to business value.

Fast forward: today’s AI tools are far more powerful, easier to use, and constantly evolving. But that also makes them more overwhelming.

So here’s what I would do now if I were that CFO facing the same mandate, and exactly how I’d frame the roadmap to the board:

Step 1: Get the Tools in Place

Buy a corporate LLM subscription. ChatGPT, Claude, Copilot, Gemini—any of them will work.

This is not optional. MIT’s State of AI in Business 2025 report found that while only 40% of companies have purchased an official LLM subscription, employees in over 90% of companies are already using AI tools for work—often through personal, unsanctioned accounts.

If you don’t provide a secure corporate option, your people will continue using free tools in ways you can’t monitor or control. That’s not just an efficiency issue—it’s a governance risk.

If budget is tight, you don’t have to buy licenses for every employee on day one. Start with shared accounts by department or prioritize access for high-volume finance and compliance teams. The key is ensuring safe, sanctioned access.

Step 2: Put Guardrails in Place

Once access is there, boundaries matter. The same MIT research shows that 95% of enterprise AI projects fail to reach production. One of the biggest reasons? Poor alignment with workflows, brittle governance, and a lack of clear policies.

Start simple with a lightweight AI policy that covers:

  • Approved vs. unapproved tools (so employees don’t mix the free version of ChatGPT with corporate data).

  • What data is strictly off-limits (e.g., customer information, financial reports).

  • Approval flow for high-risk use cases or exceptions (like client deliverables or investor decks).

  • Governance owner (legal, IT, or compliance).

And don’t forget the regulatory angle. AI regulation is no longer theoretical. Your policy should explicitly address:

  • Which laws and regulations your company must comply with (AI, data privacy, sector-specific).

  • Whether there are reporting or disclosure requirements tied to AI usage.

If you’re a single-state company, start by checking your state’s rules.

If you operate nationally or internationally, expect more complexity—rules may vary across states, countries, or regions like the EU.

The key is not just to protect internal workflows but also to ensure your AI use aligns with the external regulatory environment you’re subject to.

Step 3: Train the Employees

Tools and policies won’t move the needle if your people don’t know how to use AI effectively. Training is the multiplier.

If the budget is constrained, take advantage of free resources from Microsoft, Google, OpenAI, and Anthropic.

If time is more valuable than money, invest in training tailored to your workflows. I’ve run these sessions across finance teams, and no two are alike. Tailored training helps you skip hours of trial-and-error and go straight to use cases that matter.

And here’s a crucial point: train yourself first. Just like the oxygen mask rule on airplanes—you need to put your own mask on before assisting others. CFOs can’t effectively guide their teams on AI without first building their own fluency.

📌 That’s why I’m running the AI for CFOs: Definitive Leadership Masterclass — a six-week program designed specifically for finance leaders. It’s practical, finance-focused, and gives you the confidence to talk AI with your board, your team, and your peers. Once you’re trained, you’ll be in a much stronger position to lead your organization through adoption.

If Steps 1–3 are not in place, do not rush ahead. This is what boards should be demanding and where they should be investing: exposing employees to AI, giving them secure access, and providing education. Until these foundations are solid, moving on to automation or custom projects is premature—and almost guarantees failure.

Step 4: Appoint an AI Champion (Internal or External)

The biggest wins don’t come from the tools—they come from the intersection of business knowledge and AI knowledge. That’s your AI Champion.

  • Internal Champion: If you have capacity, identify a curious team member, give them training, and make AI exploration part of their official role. They become the bridge between AI tools and finance workflows.

  • External Champion: In many cases, this makes even more sense. If your team is lean or already overwhelmed, bringing in an external consultant can accelerate adoption, establish governance, and guide training.

MIT’s research shows that external partnerships are nearly twice as likely to succeed as internal builds. Many firms that tried to “DIY” stalled in pilots, while those that worked with experienced partners reached production.

Step 5: Don’t Rush the Big Stuff

Boards naturally want something bigger than “we bought ChatGPT.” But here’s the reality: MIT found that 95% of custom AI projects fail to scale.

Why? Because companies dive in without preparation:

  • Data audit: Is your data reliable, clean, and accessible?

  • Process audit: Which workflows are actually stable enough to automate?

  • People audit: Do you have the bandwidth and change-management capacity to absorb AI?

Without this groundwork, big builds collapse into “science projects” that burn money and credibility.

That doesn’t mean do nothing. Corporate LLMs already let you automate small but valuable workflows: draft variance explanations, first-pass reports, contract summarization. These aren’t moonshots, but they’re safe, low-cost ways to test what’s worth scaling later.

The lesson: use the small wins to inform the big bets. When the time comes for larger automation or custom projects, you’ll have real use cases, cleaner data, and a team that understands how to work alongside AI.

How to Frame It in the Boardroom

"Our approach to AI is intentionally phased. We’re not rushing into large, custom projects that 95% of companies fail at. Instead, we’re putting the fundamentals in place: secure corporate tools so employees stop using unsanctioned free versions, develop policies and guardrails to manage risk, and targeted training so employees know how to use AI effectively.

This lets us capture incremental efficiency gains now, while building the expertise and governance foundation for larger initiatives later. When we do invest in bigger projects, we’ll have the confidence that our data, processes, and people are ready—and that the investment will have a higher probability of success.”

Closing Thoughts

When your board says “Use AI,” you don’t need to panic—or rush into projects destined to fail. The most responsible move as CFO is to start with the basics: secure tools, clear guardrails, and real training for your team. That’s how you avoid the 95% failure trap and build a foundation strong enough for bigger initiatives later.

📩 I run tailored AI training programs for finance teams. If you think the time has come for your organization to move past trial-and-error and start building AI confidence the right way, just contact me. I’d be glad to help.

We Want Your Feedback!

This newsletter is for you, and we want to make it as valuable as possible. Please reply to this email with your questions, comments, or topics you'd like to see covered in future issues. Your input shapes our content!

Want to dive deeper into balanced AI adoption for your finance team? Or do you want to hire an AI-powered CFO? Book a consultation!

Did you find this newsletter helpful? Forward it to a colleague who might benefit!

Until next Tuesday, keep balancing!

Anna Tiomina
AI-Powered CFO
