Your team started to “use AI” and became 3x more productive. Your junior analyst, who used to deliver one piece of analysis per month, is now delivering three, plus commentary, plus scenarios. This should be a win.

Except now you're spending hours trying to figure out which parts are accurate and which parts are AI-generated guesses. 

There's a lot of advice out there about building elaborate review frameworks, standardizing prompts, and documenting AI processes. That's useful, but it misses the point. The real question is: when did we stop requiring our teams to verify their work before submitting it? 

In this edition, we discuss the accountability standard that needs to come first, how to communicate it to the team, and how to help them validate their deliverables.

The paid section also includes practical validation tools with step-by-step guides and ready-to-use prompts.

📅 Upcoming Events

The Finance & Accounting Technology Expo is happening in NYC next week. I’m not presenting this time, but I’ll be there — if you’re attending too, feel free to reach out via email or LinkedIn.

Later this month, the AI Finance Club is hosting two sessions.

Both take place just before Thanksgiving, and I hope to see many of you there.

Not a member yet? This is a great time to join the club and see how other CFOs are applying AI to real workflows.

The Standard No One Is Talking About

Remember when we were all calculating how much more productive our teams could be with AI? We ran the numbers on reports produced, hours saved, analysis delivered. We were aiming for efficiency gains.

Well, we got what we wanted. Our teams are now more productive with AI. The reports flow in faster than ever.

The problem is that this productivity is turning against us. We're overloaded with deliverables and struggling to validate the results.

Nicolas Boucher recently wrote about this problem and proposed the brilliant Crystal Method, a framework for standardizing how teams document and review AI usage.

But here's the question nobody's asking: when did we change our standards and start accepting unverified work?

The Accountability Gap

We've already seen the consequences when this standard disappears. Deloitte in Australia delivered an AI-generated report to a government client with factual errors that made it through their review process. Not typos or minor mistakes, but wrong information that ended up in the client's hands. Two US federal judges admitted their court rulings contained errors from AI-generated legal research: fake case citations and made-up precedents in published legal decisions.

These aren't AI failures; they're accountability failures. Someone at Deloitte hit send on a report they didn't verify. Two federal judges published rulings without checking their sources. The AI didn't force them to do that.

So when did this become acceptable? Pre-AI, if someone delivered you a variance analysis with commentary that contradicted the numbers, you sent it back. If they gave you a forecast model with broken formulas, you sent it back. If they cited sources they didn't actually read, you sent it back. The tool was irrelevant and the standard was clear: you deliver it, you own it.

AI didn't change that standard. But we somehow started treating AI output differently, accepting "the AI generated it" as an excuse for unverified work. We built elaborate frameworks for reviewing AI-generated content instead of asking why we're reviewing work that should have been checked before it hit our desk.

The Standard You Need

Here's what needs to be non-negotiable: you deliver it, you own it. No AI excuses.

To make it happen, you need to train your team on AI fundamentals and verification:

  • What AI is and how it works - Help them understand what AI can and cannot do reliably, where it excels and where it hallucinates.

  • Different models and tools produce outputs of different quality - ChatGPT's deep research mode is not the same as standard chat. Claude's thinking mode produces more accurate analysis than quick responses. Google's Gemini has different strengths. They need to know when to use what.

  • How to verify outputs properly - Teach them to check sources by clicking links, cross-reference claims against actual data, spot commentary that doesn't match the numbers, and recognize when AI is guessing versus reasoning.

  • How to adjust prompts for better results - Show them how giving more context, being more specific, and asking for sources upfront produces more reliable outputs than vague questions.

But continue emphasizing that verification is on them. Even if it means redoing everything manually, even if it takes longer, the productivity gain doesn't matter if the work is wrong. If people keep delivering unchecked work to you, don't tolerate it.

This isn't about being harsh, it's about professional expectations. You wouldn't accept a budget model with no formulas and a note saying "I used Excel's auto-calculate." You shouldn't accept variance commentary with a note saying "I used ChatGPT." The tool doesn't matter. The accountability does.

And we as managers carry the same accountability. We're responsible for making sure the reports we present are correct and consistent. We own everything signed off by us, sent on our behalf, or presented in our name. Our stakeholders shouldn't tolerate "AI made this mistake" as an excuse from us any more than we should tolerate it from our teams.

And as a little help (not a replacement for our attention to detail or judgment), we can use AI for double-checking and verification. Here's how:

Your Pre-Check System

Every manager has a way to spot problems before diving deep. When I was a cluster CFO, my market CFOs would send me budget presentations—often 70-page documents with full P&L builds, headcount plans, CapEx schedules, and narrative commentary.

I didn't start by reading the commentary. I did a pre-check: number consistency across pages, financial report accuracy, commentary matching the actual data. I scanned for signs that something was off, like a revenue growth assumption that contradicted the headcount plan or commentary about Q3 performance that didn't match the variance table.

If I spotted these signs, I sent the presentation back and asked my market CFOs to verify those details. Why would I spend two hours reading commentary built on potentially inaccurate data?

You have tricks like this, too. You know what doesn't add up in your domain and which questions expose sloppy work. 
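Some of those checks are purely mechanical, and the mechanical ones don't even need AI. As a small illustration (not a recipe from my actual workflow), here's a hedged Python sketch that flags a revenue mismatch between a summary tab and a detail tab of a budget workbook. The file name, sheet names, and column names are placeholders you'd swap for whatever your own templates use.

```python
# Hedged sketch: flag a numeric inconsistency between two tabs of a budget workbook.
# The file path, sheet names, and column names below are illustrative placeholders.
import pandas as pd

def precheck_revenue_consistency(path: str, tolerance: float = 1.0) -> list[str]:
    summary = pd.read_excel(path, sheet_name="Summary")
    detail = pd.read_excel(path, sheet_name="P&L Detail")

    issues = []
    # Compare the headline revenue on the summary tab to the sum of the detail rows.
    summary_revenue = summary["Revenue"].iloc[0]
    detail_revenue = detail["Revenue"].sum()
    if abs(summary_revenue - detail_revenue) > tolerance:
        issues.append(
            f"Revenue mismatch: Summary shows {summary_revenue:,.0f}, "
            f"detail rows add up to {detail_revenue:,.0f}"
        )
    return issues

if __name__ == "__main__":
    for issue in precheck_revenue_consistency("budget_presentation.xlsx"):
        print("FLAG:", issue)
```

Run something like this before you open the deck, and you already know whether the numbers deserve your attention.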

Use AI to Check AI

Here's what most managers don't realize: you can build a Custom GPT, a Claude Project, or a Copilot Agent that replicates your exact pre-check process.

The setup is simpler than you think. Describe how you verify reports: what you look for, what signals something is off, which inconsistencies matter in your domain. Ask ChatGPT, Claude, or MS Copilot to create custom instructions based on that description. Test it on a few reports you've already reviewed, refine the instructions, and you've built a validation tool that does your pre-check in minutes instead of hours.
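If you'd rather script this than click through the Custom GPT builder, the same idea works through an API. Below is a minimal sketch using the OpenAI Python SDK; the model name, the example checks in the system prompt, and the report file are illustrative assumptions, not the exact instructions I use.

```python
# Hedged sketch: a pre-check encoded as a system prompt and run via the OpenAI API.
# The model name and the checks listed below are illustrative assumptions; replace
# them with your own pre-check rules.
from openai import OpenAI

PRECHECK_INSTRUCTIONS = """
You are a finance pre-check reviewer. Before any deep review, scan the report for:
1. Numbers that are inconsistent across sections or pages.
2. Commentary that contradicts the figures it describes.
3. Growth or headcount assumptions that conflict with each other.
Return a short list of flags with the section where each issue appears.
If nothing looks off, say so explicitly.
"""

def run_precheck(report_text: str) -> str:
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": PRECHECK_INSTRUCTIONS},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("budget_presentation.txt") as f:
        print(run_precheck(f.read()))
```

Either way, the custom instructions are the real asset: they encode what you already look for when a report lands on your desk.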

Important caveat: this is a first pass, not a replacement for your judgment. Your validation AI can miss things or flag false positives. But it catches the obvious problems before you invest serious time in deep review.

Pro Tip: You can share this Custom GPT with your team so they can run the pre-check themselves!

In the paid section, I'll walk you through exactly how to build this validation process step by step, including the prompts to use and how to set up link verification with ChatGPT Agent mode.

Closing Thoughts

We're all figuring this out as we go. But one thing hasn't changed: good work requires ownership. AI can speed up the process, but it can't take responsibility for the output. Only people can do that. Our job as leaders is to make that crystal clear.

We Want Your Feedback!

This newsletter is for you, and we want to make it as valuable as possible. Please reply to this email with your questions, comments, or topics you'd like to see covered in future issues. Your input shapes our content!

Want to dive deeper into balanced AI adoption for your finance team? Or do you want to hire an AI-powered CFO? Book a consultation!

Did you find this newsletter helpful? Forward it to a colleague who might benefit!

Until next Tuesday, keep balancing!

Anna Tiomina
AI-Powered CFO
