Financial Automation

AI worksheet - experiments-to-results snapshot

Download now

AI doesn’t advance on its own; it advances when your team experiments, learns, and has a clear way to turn what works into the new normal. It’s a powerful catalyst, but without leadership, it can just as easily widen gaps in your firm as close them. 

This is your introduction to our “Experiments-to-Results Snapshot worksheet.” Created for firm leaders and AI champions, it helps turn scattered AI results into firm-backed standards your whole team can rely on.

Here are the three accomplished firm leaders who have weighed in on this topic:

  • Ambra Wellbeloved, Partner, Client Accounting Services Operations, Aprio
  • Dan Luthi, M.Acc., Partner, Ignite Spot and Adjunct Professor, Utah Valley University
  • Jason Blumer, CPA, CEO and founder of Blumer CPAs and Thriveal CPA Network

The pattern is the same across the three firms: people lead, AI follows.

For Dan Luthi, the real opportunity is not isolated wins. It’s building repeatable workflows teams can trust. 

“AI adoption should work as a loop: experiment, validate, standardize, and then revisit that standard as tools evolve,” says Luthi. “The goal is consistency, clear review, and enough flexibility to keep improving without reinventing the process every time.” 

Ambra Wellbeloved thinks integrating AI is not an initiative; it's a mindset. Within Client Accounting & Advisory Services, that mindset is grounded in an advisory‑led, technology‑enabled approach to building and delivering work.

“Teams start with focused pilots to standardize data and design AI‑enabled workflows, refining them until they are ready to scale across the practice,” says Ambra. “The result is capacity measured by greater throughput, broader client coverage, and more time for professionals to deliver high‑value advisory work and deepen relationships with our clients.”

Jason Blumer’s human-first, AI-forward firm sees adoption as a structured upskilling journey, asking people to leave default habits behind and learn a new way of working together. 

“AI is such a change component,” says Blumer. “We think of it as a 12- to 18-month journey where you’re not just teaching people to use AI. You’re teaching them to stop working in the default way they used to.”

The Experiments-to-Results Snapshot turns those ideas into a quick working view of your firm. It helps you:

  • Map where AI shows up across different job levels within the firm.  
  • Classify each workflow into one of three stages:
    • Stage 1: Experiments where people work with AI in their own way 
    • Stage 2: Reliable wins where teams build repeatable patterns  
    • Stage 3: Firm standards of prompts/checklists, clear human reviews, documented guardrails, and an explicit owner 
  • Choose one workflow per team to move up a stage in the next 60–90 days.

Most firms are no longer asking, “Should we be using AI?” The question that actually keeps leaders up at night is, “Are our people getting consistent, professional results from the AI they’re already touching every day?”

In practice, that means doing what these three firm leaders are already doing:

  • Focus: Start with a small set of workflows where AI is already present, like vendor onboarding, CAS reporting narratives, or client update emails.
  • Standardize: Turn what consistently works into firm-backed standards with named owners and simple review cadences.
  • De-risk: Make explicit input and review guardrails so humans know what to check and where AI output needs extra attention.  
  • Repeat: Treat AI improvement as a continuous cycle, revisiting regular tasks and standards as tools and team skills evolve.

By the end, you’ll have information to help create a focused 60–90-day plan to convert tasks into standards with clearer quality, lower risk, and stronger collaboration.

Continue learning with BILL