Top-Down or Bottom-Up? Building AI Culture That Works
Also: 3 Common Mistakes Leaders Make When Rolling Out AI

Welcome to this week’s edition of Balanced AI Insights.
This edition is inspired by a true story:
During a leadership session last week, we uncovered a rare, high-ROI use case—easy to implement and critical to the business. The kind of breakthrough most teams spend weeks chasing through whiteboards and workshops.
Just minutes earlier, we’d been discussing a broader initiative: rolling out a corporate LLM and training the wider team. But once the use case surfaced, that plan was quietly set aside.
The win was clear, but so was the disappointment. Many had been eager to access an AI tool they could use in their daily work, and this use case touched only a narrow slice of the company.
This edition explores how to avoid that trap and move fast toward strategic wins without sidelining the rest of your team.
The Balanced View: How to Create an AI-Friendly Culture (Without Losing Control)
During that session, I shared what I see as a foundational step: give people access to public LLMs like ChatGPT or Claude, teach them to use the tools safely, and let them start experimenting.
My colleague had a different view: “You don’t need to learn how to prompt. That’s our job—we’ll build the use case for you.”
Both perspectives have merit. His ensures control and efficiency, while mine focuses on building long-term intuition across the organization.
But the real challenge isn’t choosing one—it’s creating a culture that supports both. Even a brilliant use case often benefits just a few teams.
So how do you create a culture where everyone has a path forward—not just the fortunate few? And how do you ensure those valuable use cases keep emerging?
Most organizations face a fork in the road when introducing AI:
Top-down: “We’ll build and deploy use cases. You just use them.”
Bottom-up: “Everyone should start learning AI now. Tools like ChatGPT are your new calculator.”
The top-down approach keeps things efficient and secure, while the bottom-up model encourages innovation and individual empowerment.
But both can fail if left unchecked.
Too top-down, and your teams never gain AI intuition. They wait to be told what’s allowed.
Too bottom-up, and you risk chaos, shadow AI, prompt sprawl, and inconsistent results.
The best approach is to cultivate a guided culture of experimentation.
That doesn’t mean pushing every employee to become a prompt expert. But it does mean making AI feel accessible, practical, and safe to try.
You don't need to flip a switch overnight. Culture changes best when it's built in waves.
Here’s a 3-phase model I use:
1. Make AI visible, approachable, and non-threatening.
Provide secure, sanctioned access to public LLMs like ChatGPT or Claude.
Offer short “AI 101” sessions with demos of real, relevant tasks.
Identify safe workflows—summarization, research, brainstorming—to start with.
Implement a clear policy so everyone understands what's allowed, what isn't, and what data is safe to share with AI.
2. Help people discover value for themselves and the company.
That golden use case I mentioned earlier? It was a lucky break—the win you hope for, but can’t count on. Most teams don’t uncover high-ROI opportunities in a one-hour meeting.
Make it clear to every team that you're actively looking for valuable AI use cases: ones worth a custom implementation or an investment in a new tool. Encourage submissions, evaluate them thoroughly, and chances are you'll uncover more profitable opportunities than expected.
3. Scale what works, safely.
Turn early experiments into playbooks.
Identify control points for sensitive workflows.
Embed AI tools into daily systems—but with visibility and verification built in.
You don’t need your controller to learn Python or your AP team to engineer workflows. But if they don’t understand how to use AI, they’re falling behind—and so is your organization.
The Leader’s Role: Architect, Not Engineer
Your role isn’t to prompt better than your team. It’s to build an environment where safe prompting can happen at scale. That means:
Providing access to trusted tools.
Championing both strategic use cases and personal productivity gains.
Building governance into experimentation, not against it.
Modeling a mindset of thoughtful exploration.
Yes, big use cases move the needle. But individual wins move the culture.
If you want an organization that evolves with AI, instead of reacting to it, don’t just fund a use case. Fund a culture. Create pathways for people to explore safely, learn quickly, and contribute meaningfully.
In the age of AI, confidence drives progress. And confidence is a cultural asset you can’t outsource.
3 Common Mistakes Leaders Make When Rolling Out AI—and How to Fix Them
In the main section, we explored how building an AI-friendly culture takes more than just choosing the right tools. It requires intentional leadership, broad access, and a structure that balances experimentation with control.
In this section, I’ll share three of the most common mistakes I see leaders make when trying to roll out AI and how to avoid them.
Mistake #1: Over-focusing on Use Cases Too Early
Why it happens:
Leaders are often expected to “bring a profitable AI use case to the table”—which makes sense. But this approach tends to spotlight only the highest-value, most visible processes, leaving the rest of the organization without access or direction.
What to do instead:
Use cases are critical, but they shouldn’t always be the first or only step. Begin with a phase of safe, low-stakes experimentation. Then, progress toward structured applications that align with strategic goals.
Mistake #2: Locking Down Public LLMs Without Providing Alternatives
Why it happens:
Some companies block access to tools like ChatGPT or Claude entirely to ensure security and compliance. While the intent is valid, this often backfires—employees simply turn to personal accounts or unsecured platforms.
What to do instead:
If full deployment isn't feasible, offer limited access through an enterprise LLM plan with admin controls and a clear policy. Without an approved option, people will find workarounds, often without your knowledge or safeguards in place.
Mistake #3: Assuming Curiosity = Competency
Why it happens:
You roll out tools like Claude or ChatGPT and expect teams to hit the ground running. But in reality, most people won’t move beyond a single prompt—especially if they don’t know what’s possible.
What to do instead:
Support curiosity with structure. Run short internal sessions, host “Prompt Clinics,” or assign AI champions to each team. Learning doesn’t need to be formal, but it must be intentional. Confidence grows when people feel guided, not judged.
The real lesson across all three mistakes is balance: knowing when to lead with strategy and when to open the floor to discovery.
Closing Thoughts
The top-down vs. bottom-up debate isn’t about choosing sides—it’s about creating balance. Strategic use cases drive results, but without broader access, most of your team is left behind.
Giving employees access to a secure public LLM like ChatGPT or Claude—plus basic training—is one of the simplest, highest-leverage moves you can make. It empowers individual experimentation while reinforcing your bigger AI roadmap.
Bottom-up builds awareness. Top-down drives scale. But it’s the combination that builds lasting culture.
Ready to create that balance in your org? Book a meeting with me to explore how to roll out AI the right way—without leaving your team behind.
We Want Your Feedback!
This newsletter is for you, and we want to make it as valuable as possible. Please reply to this email with your questions, comments, or topics you'd like to see covered in future issues. Your input shapes our content!
Want to dive deeper into balanced AI adoption for your finance team? Or do you want to hire an AI-powered CFO? Book a consultation!
Did you find this newsletter helpful? Forward it to a colleague who might benefit!
Until next Tuesday, keep balancing!
Anna Tiomina
AI-Powered CFO