A lot of finance leaders try AI once, get mediocre results, and walk away convinced it can't handle real, complex finance work.
The gap isn't the tool. It's the workflow. Real results come from managing LLMs the way you'd manage a junior analyst: clear instructions, check their work, correct or approve, move to the next step. Not one giant prompt and hope for the best.
This edition walks through two real workflows I used this month: building a financial model for a manufacturing company and running year-end accruals review for a mid-size business. Both worked because I managed them in phases with checkpoints, not because I had a magic prompt. The free section covers why prompt libraries fail and what management approach actually works. The paid section shows what the checkpoints caught - the errors that would have broken both projects if I'd let them run without stopping - plus the complete prompt sequences you can adapt for your own work.
A huge thank you to my paid subscribers for supporting this work!
If you're reading this as a free subscriber and want access to the complete prompt sequences and all future playbooks, consider upgrading here.
TL;DR
LLMs need management like human assistants: clear steps, check progress, iterate.
Copy-paste prompts fail because context always differs.
Free section: The management framework and why sequences matter
Paid section: What the checkpoints revealed and how to apply these workflows to your work
Why Prompt Libraries Don't Work (And What Does)
Last week, I was running my regular monthly masterclass at the AI Finance Club (join here if you are not a member yet). We were discussing year-end activities and how AI can help with them today. I was walking through a workflow - a sequence of prompts with mandatory checks and corrections in between each step.
Someone in the group asked: "Why do you break the prompt into steps? Why not put all of it in one prompt and just run it?"
The answer is simple. In real life, that's not how you delegate to a human assistant, right? You show them what you want, maybe share examples of deliverables, and give instructions. Then you tell them: "When you have this piece done, show me the result, and we will move forward or make corrections." That's basic management. And it applies to LLMs too.
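If it helps to see that management loop as code: the whole approach is a loop with a human gate in the middle. Here's a minimal sketch in Python - `call_llm` is a stand-in for whatever model API or chat window you use, and the step prompts are illustrative placeholders, not my actual sequences:

```python
# A sketch of the manage-like-an-analyst loop. call_llm is a placeholder
# for your model API; the step prompts are illustrative, not real sequences.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

steps = [
    "Step 1: Propose the tab structure for the model. List each tab and its purpose.",
    "Step 2: Draft the assumptions tab based on the structure we agreed on.",
    "Step 3: Build the monthly cash flow logic on top of the validated assumptions.",
]

context = "You are helping me build a monthly cash flow model for a manufacturing client."

for step in steps:
    while True:
        output = call_llm(context + "\n\n" + step)
        print(output)
        decision = input("approve / type a correction> ").strip()
        if decision.lower() == "approve":
            # The checkpoint: only validated output becomes context for the next step
            context += "\n\nValidated output:\n" + output
            break
        # Otherwise, feed the correction back and rerun this same step
        step += "\nCorrection from reviewer: " + decision
```

The line that matters is the one appending only validated output to the context. That checkpoint is the whole difference between this and one giant prompt.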
Why Copy-Paste Prompts Don't Work
This is also why most of the prompts and workflows you'll find on the internet don't work when you just copy and paste them. Your context is different, and it changes as you go.
Take the financial model I mentioned in my LinkedIn post this week. I needed to build a model for a manufacturing company with a hybrid hardware and recurring revenue business model. The core question: "How much cash do we need to survive next year?" They needed monthly cash flow precision to identify the exact month they'd hit their funding need.
There was no single prompt. There were five phases:
Phase 1: Initial structure - build interconnected tabs for assumptions, order tracking, unit economics, recurring revenue, P&L, cash flow, and dashboard.
Phase 2: First revision - switch everything to thousands format, add retail pricing alongside existing pricing models.
Phase 3: Major restructuring - move payment terms to the order level so each deal could have unique terms, combine monthly and yearly tabs instead of keeping them separate, add balance sheet estimation.
Phase 4: Formula corrections - found 900+ broken references after renaming tabs, fixed sheet name syntax issues (a quick script can flag these - see the sketch below).
Phase 5: Format cleanup - simplified the thousands display because "$2,500K" was ambiguous. Changed to "$2,500 (all values in thousands)" for clarity.
Each phase built on validated output from the previous phase. I checked the structure before moving to revisions. I validated formulas before moving to format cleanup. Context shifted as the client saw the model and realized what else they needed. That's how real projects work.
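A side note on Phase 4, because it trips up every spreadsheet-heavy project: renaming tabs breaks references quietly. I worked through mine inside the prompt sequence, but if you want to audit a workbook yourself first, a short script can flag the damage. A minimal sketch using openpyxl - the file name and the old tab names are placeholders for your own:

```python
# Sketch: scan a workbook for formulas that still point at renamed tabs
# or that already errored out. File path and tab names are placeholders.
from openpyxl import load_workbook

OLD_TAB_NAMES = ["Orders", "Monthly"]  # tabs that were renamed

wb = load_workbook("model.xlsx")  # no data_only=True, so formulas are kept

broken = []
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            value = cell.value
            if not isinstance(value, str) or not value.startswith("="):
                continue  # not a formula
            if "#REF!" in value or any(
                f"'{name}'" in value or f"{name}!" in value for name in OLD_TAB_NAMES
            ):
                broken.append((ws.title, cell.coordinate, value))

print(f"{len(broken)} suspect formulas found")
for sheet, coord, formula in broken[:20]:
    print(sheet, coord, formula)
```

Run something like this before and after a restructuring phase, and you'll know whether the model is safe to hand back to the LLM for the next step.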
The same applies to year-end accruals. The workflow I use has 10 prompts with checkpoints between each step. Pattern analysis comes first. Then I validate those findings before moving on to identify missing accruals. Only after those accruals are validated do I generate journal entries. Then come consistency checks across accounts, then a cross-check against the balance sheet. Each step builds on the last, with my validation in between.
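"Pattern analysis" sounds fancy, but the underlying screen is simple: find expense lines that post every month and then go quiet at year-end. The LLM runs this kind of check on the uploaded P&L; if you want to sanity-check it yourself, here's a minimal pandas sketch of the same idea, assuming a file with one row per account and one column per month:

```python
# Sketch: flag recurring expense accounts that look under-accrued in the
# final month. Assumes a P&L CSV with an 'Account' column plus 12 month columns.
import pandas as pd

pl = pd.read_csv("pl_12_months.csv", index_col="Account")  # placeholder file
months = pl.columns  # e.g. Jan ... Dec

prior = pl[months[:-1]]  # first 11 months
last = pl[months[-1]]    # December

avg = prior.mean(axis=1)
recurring = (prior > 0).sum(axis=1) >= 9  # posted in at least 9 of 11 months

# Candidate missing accruals: recurring lines where December sits far below trend
suspects = pl[recurring & (last < 0.5 * avg)]
print(suspects.assign(avg_prior=avg[suspects.index], december=last[suspects.index]))
```

What the LLM adds on top of a raw screen like this is the judgment call: which flagged lines are genuinely missing accruals, which are timing quirks, and what the journal entry should look like. That part still gets validated by me before anything moves forward.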
No magic prompt. Just management.
The Management Approach That Works
When you delegate to a human assistant, you don't hand them 10 instructions and disappear for a week. You give context, provide one clear instruction, check their work, and either move forward or make corrections. Then you give the next instruction.
LLMs work the same way. They need you to manage the workflow.
How LLMs are different from human assistants:
Poor memory: they forget what you said three exchanges ago
Weak context sharing: be prepared to repeat the same information multiple times
No independent judgment: they won't flag problems or push back on bad assumptions
How LLMs are better than human assistants:
Speed: instant responses, no waiting
Scale: can process huge volumes of documents and information
No fatigue: once the workflow works, it runs consistently
Available 24/7: critical during year-end crunch when you're underwater
They DO learn your specific work - when you manage it right: once your workflow works, package your knowledge in a Project (Claude), a Custom GPT (ChatGPT), or a Copilot agent
The key is using the best of both: their speed and scale, combined with your judgment at every checkpoint.
What This Actually Looks Like
The financial model took about 3-4 hours total across five phases. Multiple revisions per phase. Constant validation. Each phase addressed what the client actually needed once they saw the previous output. Would that have been faster with a human analyst? No. It would have taken days, plus onboarding time, plus back-and-forth delays.
The year-end accruals workflow has 10 distinct steps. Upload the 12-month P&L. Analyze patterns. Identify accruals. Generate journal entries. Run consistency checks. Cross-check against the balance sheet. Build a close checklist. Each step waits for validation before moving forward.
The sequence matters. The checkpoints matter. The exact wording of each individual prompt? Less critical. You'll adjust based on your context anyway.
In the subscriber-only section, I'm sharing what the checkpoints actually revealed in both workflows, the principles that make them work, and how to adapt them for your own finance work. Plus, the complete prompt sequences in a downloadable format.
Closing Thoughts
I was seriously considering hiring someone to help with that financial model last week. I've taken on too many projects, and year-end is already overwhelming. But I have an “AI First” principle: test it with AI before hiring human help.
So far, I think it works. We'll see how it goes when the client runs their scenarios and stress-tests the cash flow projections. I'll keep you posted.
In the meantime, I want to hear from you. What finance tasks are you testing LLMs on? What's working? What's failing spectacularly? Reply to this newsletter and share your use cases and insights. I learn as much from your experiments as you learn from mine.
We Want Your Feedback!
This newsletter is for you, and we want to make it as valuable as possible. Please reply to this email with your questions, comments, or topics you'd like to see covered in future issues. Your input shapes our content!
Want to dive deeper into balanced AI adoption for your finance team? Or do you want to hire an AI-powered CFO? Book a consultation!
Did you find this newsletter helpful? Forward it to a colleague who might benefit!
Until next Tuesday, keep balancing!
Anna Tiomina
AI-Powered CFO
