I'm excited to kick off a three-part series with Lawrence (Larry) Maisel, co-creator of the Balanced Scorecard Approach and an expert on finance operating models at scale. Larry works with Fortune 500 CFOs, large enterprise finance organizations, and Private Equity firms and their portfolio companies. I work primarily with startups and mid-market companies. We're seeing the exact same problem at both ends of the spectrum: widespread AI activity, but minimal meaningful implementation.
This isn't a company size issue. It's a structural problem that requires a structural solution.
Over the next three weeks, we're breaking down what's happening and what to do about it:
Today: The capability gap—why finance is "using AI" but not "AI-enabled."
Feb 17: The solution—Finance AI Centers of Excellence.
Feb 24: Implementation—building your CoE with a 90-day roadmap.
The timing matters. The gap between experimentation and execution is becoming expensive. Just last week, KPMG negotiated a 14% audit fee reduction from Grant Thornton by arguing that AI should make the work cheaper. And Goldman Sachs recently announced that it is moving beyond simple AI tools (like drafting assistants or code helpers) to deploying autonomous AI agents that can perform complex, structured, rules-based business tasks, including core accounting and finance operations previously done manually. These precedents will ripple across every industry and company size.
CFOs need structure, not more pilots. The real task is converting scattered AI activity into a governed, scalable capability aligned with finance's core responsibilities: accuracy, control, insight, and decision support.
📅 Upcoming Events: Meet me at the AFP FP&A Forum

The AI Capability Gap
The Research Paradox
Recent research paints a revealing—and at times confusing—picture of where finance stands today. On the surface, adoption appears widespread. Dig deeper, and a more nuanced story emerges: AI is present in finance, but rarely institutionalized.
The Numbers:
Gartner reports that approximately 59% of finance functions are "using AI." At face value, that sounds like mainstream adoption. Yet Gartner's own breakdown shows that much of this usage falls into categories such as knowledge management, intelligent process automation, and anomaly detection—often embedded within existing tools rather than purposefully designed finance AI capabilities.
By contrast, L.E.K. Consulting's 2025 Office of the CFO survey takes a more stringent view of adoption. While roughly 60% of CFOs believe AI will be one of the most impactful technologies in the Office of the CFO, only about 11% report AI as being meaningfully implemented within finance operations. Another third remains in pilot or proof-of-concept mode.
These findings are not contradictory; they are complementary. Gartner is capturing breadth—any use of AI within the finance function. L.E.K. is capturing depth—whether AI has been embedded into core finance workflows with ownership, controls, and repeatable value. Together, the studies highlight the central challenge facing CFOs: AI activity exists, but AI capability does not exist at scale.
Anna's comment: I see this exact split with every client. Finance teams proudly show me their AI experiments—ChatGPT for email summaries, Claude for analysis, vendor tools for forecasting. Five different experiments, zero coordination, no governance. The experiments work. But they stay experiments.
Why Pilots Stall In The Office Of The CFO
Finance organizations are not short on ideas for AI use cases. In fact, early adopters are already applying AI to key processes such as:
Accounts Payable and Accounts Receivable automation
Financial close acceleration and reconciliations
Error, anomaly, and fraud detection
Cash flow forecasting and variance analysis
Management reporting and narrative generation
These are high-volume, rules-intensive, data-rich processes—ideal candidates for AI. So why do so many initiatives stall?
The answer is rarely the technology itself. Instead, CFOs consistently cite:
Data quality and integration challenges
Lack of finance-specific AI skills
Unclear ROI measurement
Governance, auditability, and explainability concerns
In other words, AI is running into the same barrier finance has faced in every major transformation: operating model maturity.
Anna's comment: The "lack of finance-specific AI skills" barrier is misunderstood. CFOs think they need data scientists. They don't. They need finance professionals who understand how to govern AI, measure its outputs, and explain models to auditors. Those are finance skills with AI literacy—not data science roles. But without a structure to develop those skills deliberately, teams default to hiring expensive consultants or just avoiding production deployment entirely.
Why The Gap Exists
The gap between "using AI" and "being AI-enabled" isn't random. It exists because of three structural problems:
1. Individual Experimentation vs. Enterprise Implementation
Small teams or individuals are using AI for departmental experiments—nibbling around the edges. But meaningful benefits require building AI into core processes, and that requires senior leadership ownership, cross-functional coordination, and governance. Without that structure, pilots proliferate but never scale.
Anna's take: I call this "shadow AI sprawl." It's the new version of shadow IT. FP&A has their favorite tools. Controllers have theirs. Treasury has theirs. Nobody's coordinating. Nobody's governing. IT discovers the chaos during an audit and tries to lock everything down, which just drives the tools further underground. You need a structure that enables innovation while maintaining control.
2. The "What Even Is AI?" Chaos
The market is a mess right now. LLMs, machine learning, embedded AI in software, autonomous agents, automation that looks like AI—CFOs are paralyzed by complexity. They don't know which bets to make, which vendors to trust, or what infrastructure to build.
Just last week, markets reacted to announcements about AI agents potentially replacing entire software categories. That's the kind of volatility that makes CFOs hesitant to commit. But hesitation costs money.
Anna's take: This complexity is exactly why you need a Center of Excellence. You need someone tracking technology developments, evaluating vendors, testing use cases, and making informed recommendations. Without that dedicated function, every finance leader is trying to become an AI expert on top of their day job. It doesn't work.
3. The Scale Problem
MIT research shows that 95% of enterprise GenAI pilots fail. The four barriers—data quality, skills, ROI measurement, governance—are real. Pilots work. Production doesn't. The capability gap is the gap between proof-of-concept and repeatable value.
From "Using AI" To "Being AI-Enabled"
The most important insight from both Gartner and L.E.K. is this: finance functions are using AI, but they are not yet AI-enabled.
Being AI-enabled requires that:
AI is embedded in core finance processes
Governance and controls are designed in, not bolted on
Value is tracked and optimized continuously
Information is trusted enough to act upon
That leap—from usage to enablement—does not happen organically. It requires intentional leadership from the Office of the CFO and a formal mechanism to scale responsibly.
Anna's comment: Every client I work with thinks they'll get there eventually. "We're just being prudent—testing before we commit." But testing without a path to production isn't prudence. It's expensive procrastination. And while you're procrastinating, the gap is costing you actual money and competitive position.
Larry and I have covered the 'what' and the 'why'—what the capability gap is, why it exists, and why pilots stall.
But here's what matters most to CFOs: what does this gap actually cost you? Not in abstract terms about "competitive disadvantage" or "missed opportunities." In dollars. In risk exposure. In revenue at risk.
Because staying in pilot mode isn't neutral. It's not prudent. It's expensive.
In the subscriber-only section below, I quantify the five hidden costs of pilot mode—the money you're spending right now without realizing it. I also share the assessment framework to calculate your specific exposure, using the same risk methodology we use in corporate finance: likelihood × impact = exposure.
If you're going to make the case for AI governance to your executive team, you need these numbers.
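To make that exposure arithmetic concrete, here is a minimal sketch of the likelihood × impact calculation. All category names and dollar figures below are hypothetical placeholders, not benchmarks from the assessment framework:

```python
# Sketch of the risk-exposure arithmetic: likelihood x impact = exposure.
# All categories and figures are hypothetical illustrations only.

def exposure(likelihood: float, impact: float) -> float:
    """Annualized exposure: probability (0-1) times dollar impact."""
    return likelihood * impact

# Hypothetical pilot-mode cost categories for a mid-market finance team
risks = {
    "duplicated tool spend":      (0.90,  60_000),   # likely, modest impact
    "ungoverned data exposure":   (0.15, 500_000),   # unlikely, severe impact
    "stalled close automation":   (0.60, 120_000),   # probable, moderate impact
}

total = sum(exposure(p, usd) for p, usd in risks.values())
for name, (p, usd) in risks.items():
    print(f"{name}: ${exposure(p, usd):,.0f}")
print(f"Total annual exposure: ${total:,.0f}")
```

Running the same arithmetic over your own categories and estimates gives you a single exposure number to put in front of the executive team.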
Closing Thoughts
That's a lot to digest, I know. Larry and I spent weeks working through this research and what it means for finance teams at every size.
The diagnosis is uncomfortable—most of us are spending money on AI without the structure to actually benefit from it.
But here's the good news: the solution exists. Next Tuesday, we'll walk through exactly what that solution looks like: the Finance AI Center of Excellence. Larry will break down what it is (and isn't), why finance needs its own model, and the four value propositions that make it worth building.
I'll add the practitioner layer—what works with small teams, common mistakes to avoid, and how to position this to your C-suite.
Until then, we'd love to hear what you're seeing. Are you stuck in pilot mode? Have you already started building a governance structure? Hit reply and let us know—Larry and I read every response.
Anna
We Want Your Feedback!
This newsletter is for you, and we want to make it as valuable as possible. Please reply to this email with your questions, comments, or topics you'd like to see covered in future issues. Your input shapes our content!
Did you find this newsletter helpful? Forward it to a colleague who might benefit!
Until next Tuesday, keep balancing!
Anna Tiomina
AI-Powered CFO