Subscribe: Apple Podcasts · Spotify · YouTube · Amazon Music · iHeartRadio · Pandora · RSS
Episode Summary
Your team has been “using AI” for six months. Someone tried ChatGPT, got a generic answer, decided it doesn’t work for your business. Someone else built something elaborate that doesn’t connect to a goal you’ve actually written down. And the gross margin chart, the three-year financial picture, the cash flow constraint, none of that is in the prompt. So every output is a rocket ship pointed at nowhere specific. I sat down with Tyler LaFleur, a fractional COO who came up through nursing, biohacking, and concierge medicine before going deep on AI implementation. We got into why most AI experiments fail not because the tech is bad but because the owner hasn’t done the financial work first. Why context, not prompts, is the entire game. Where the lowest-hanging fruit actually lives (boring administrative workflows, not strategic moonshots). And Tyler’s line that runs the whole episode: when there’s no goal, the default is always more. If you’re spinning up a GPT before you’ve built a Three-Statement Model, you’re accelerating in the wrong direction.
Top 10 Takeaways
- Without a clear three-year goal in the financials, AI just accelerates you in the wrong direction, faster.
- When there is no defined goal, your default is always “more,” and “more” is not a strategy.
- Build the financial picture first. Then AI has actual constraints to work inside.
- Lazy prompts produce lazy answers. Context is the entire job, and most owners skip it.
- Don’t write your own prompts. Have AI write the prompt, then let robots talk to robots.
- The lowest-hanging AI fruit is repetitive admin work, not your sexy strategic moonshot.
- Claude Projects ingest your documents into the model. ChatGPT Projects just hold them. Know the difference.
- When real numbers are on the line, run the answer through a second model and pit them against each other.
- Hope is not a method. If you’re not measuring KPIs weekly, you’re not course-correcting.
- AGI is months or years out. Stop waiting. Pick one repetitive workflow and start there this week.
Sound Bites
“When there is no goal, or definitive goal, the default is always more.” (@00:30:13) — Tyler LaFleur
“What can go wrong is you build your ladder on the wrong wall.” (@00:55:50) — Tyler LaFleur
“Hope isn’t a method, so what are we doing?” (@00:40:57) — Tyler LaFleur
“You can’t be lazy with AI. That’s where you’re gonna get bullshit answers.” (@00:57:49) — Tyler LaFleur
“All of that is a yes or no question, based on math.” (@01:00:43) — Ryan Tansom
About This Episode
Tyler LaFleur is a registered nurse turned fractional COO turned AI implementation consultant. He came up through toxicology lab work and concierge medicine, advising CEOs on health before that work evolved into operational consulting and then into building custom AI workflows for owner-operator businesses. Tyler describes himself as an integrator, not a visionary, with a default obsession for first principles, task management, and “boots on the ground practicality.” He was introduced to Ryan by Jeff West (Ep. 470). This is the first installment of an ongoing collaboration around how owners can apply AI without the hype.
Resources Mentioned
- Anthropic Claude (Projects, Skills) — Tyler’s go-to for context-heavy work because the model ingests uploaded files. — claude.ai
- ChatGPT (Projects, Teams) — Ryan’s daily driver, ~$60/month for the Teams plan.
- Google Gemini 3.0 — Released the Friday before recording; Ryan was about to dive in based on early reactions from Benioff and Andreessen.
- n8n — Open workflow automation platform Tyler uses to wire AI into business processes.
- Peter Diamandis — Moonshots podcast — Referenced for the “how do power users know when AI is right or wrong” framing.
- Peter Attia — Referenced for the gap in conventional medical training around diet, nutrition, and exercise.
- Tyler LaFleur — Email: tlafleur@hphi.life
Connections
Phase + Module:
- Module 4 — Sustainable Financials — The three-statement clarity AI needs to work inside
- Module 3 — Owner’s Playbook — Where the owner’s goals become the constraints AI tools point at
Milestones:
- Milestone 18 — Business Operating System — AI workflows as part of how the business actually runs
- Milestone 17 — Operational KPIs — The dashboards AI agents monitor and surface
- Milestone 10 — Three-Statement Model — The objective-truth document every AI prompt should sit on top of
- Milestone 12 — Five-Year Forecast — The constraint set that turns AI from “more” into “more of the right thing”
Concepts referenced:
- The iBD Ownership OS™ — The umbrella context Ryan wants to feed into models as a super-prompt
- Three-Statement Model — The constraint framework AI work has to live inside
- Owner’s Scorecard™ — Goal clarity that turns AI output from noise into signal
- The Owner-Operator Trap™ — Why owners gravitate toward what they like and avoid the financial work AI depends on
- Theory of Constraints — Tyler’s “Herbie” reference; the bottleneck logic underneath workflow automation
- Independence by Design™ — The thesis Ryan and Tyler are collaborating to encode into AI workflows
Related episodes:
- Ep. 470 — Jeff West — Practical Leadership — The mutual connection that introduced Ryan and Tyler