Picture this familiar scenario: your team confidently presents an AI project using the prioritisation frameworks and estimates that worked perfectly last year. This year, on a new AI initiative, the numbers are spiralling beyond expectations and nobody can explain how you got here.
Product leaders are doing their best work and still can't stay on top of delays and rising costs. The frustration makes sense, yet the outcome shouldn't surprise anyone who's lived through previous enterprise technology transformations. Traditional budgeting methods simply don't translate to AI development.
AI is more expensive, cheaper, and faster than you expect, all at once. The financial dynamics mirror cloud migration projects, where 75% ran over budget and 37% fell behind schedule, but AI's compressed adoption timelines amplify these challenges.
The bottom line? AI costs are an iceberg.
Training models and writing code represent the visible tip. Beneath the surface lie the real expenses—data operations, infrastructure scaling, cross-functional coordination, and capability building. None of which appear in early budgets.
What emerges is actually a fairly predictable pattern. Enterprise AI investments follow the same cycle we witnessed during the dot-com boom, cloud adoption, and mobile transformation. Technology promises everything. Consultants sell transformation. Most companies struggle to capture value while a small minority quietly builds sustainable competitive advantages.
The ROI Measurement Trap
Traditional ROI models aren't wrong—they're asking the wrong questions entirely. When executives complain that "AI ROI is impossible to calculate," they're trying to measure AI investments like software purchases: input costs, output benefits, payback periods.
Wrong framework. AI investments behave more like R&D spending or market expansion—they build capabilities that enable new business models rather than automating existing processes.
Cassie Kozyrkov, former Chief Decision Scientist at Google, captured this perfectly in a recent podcast: "Decision intelligence is the discipline of turning information into better action—any scale, any setting." After training over 20,000 Googlers in this framework, she emphasises that "we don't always realise when we're doing this. We can be completely convinced that we're integrating information from the real world, but all we're doing is using it so much more like a mood board and less as a recipe plan or blueprint for decision making."
This confirms what emerges from successful AI implementation studies: success depends on organisational discipline around decision-making processes, not technological sophistication.

How to Structure AI Budgets That Actually Work
The budget structure itself needs a fundamental rethink. Traditional software budgets front-load technology costs and then minimise ongoing expenses. AI budgets should invert this model completely.
Start with a portfolio strategy that allocates across three time horizons:
40% for proven use cases with immediate ROI
35% for promising experiments with 6-12 month payback potential
25% for exploratory capabilities that may not pay off for years
This mirrors how venture capital firms structure portfolios—expecting most investments to fail while a few generate outsized returns.
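For illustration, here's a minimal sketch of how that split could translate into numbers. The budget figure is hypothetical and purely illustrative; only the 40/35/25 weights come from the model above.

# Illustrative only: splitting a hypothetical annual AI budget across
# the three horizons using the 40/35/25 weights described above.
ANNUAL_AI_BUDGET = 2_000_000  # hypothetical figure, not a recommendation

HORIZON_WEIGHTS = {
    "proven use cases (immediate ROI)": 0.40,
    "promising experiments (6-12 month payback)": 0.35,
    "exploratory capabilities (multi-year)": 0.25,
}

for horizon, weight in HORIZON_WEIGHTS.items():
    print(f"{horizon}: {ANNUAL_AI_BUDGET * weight:,.0f}")

The point isn't the arithmetic; it's that each horizon gets an explicit, pre-agreed envelope, so a failed experiment draws down its own allocation rather than raiding the proven-use-case budget.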
Within each horizon, budget explicitly for failure. The most successful AI companies plan to kill 60% of their AI experiments within six months while extracting maximum learning from each failure.
Budget for learning, not just winning.
Traditional software sees 80% upfront implementation costs and 20% ongoing expenses. AI demands the opposite when you properly account for ongoing support and operational expenses that most organisations aren't budgeting for at all.
Vendor costs, model training, organisational change management, and continuous learning represent ongoing investments that often exceed initial deployment costs by 200-300%. Plan for operational expenditure to make up 40-60% of total spend over the first three years of any AI product or project.
Consider the "AI tax" on traditional budgeting: data preparation typically costs 3x more than anticipated, compliance adds 17% overhead for high-risk systems, and specialised talent commands 25-40% labour premiums over traditional software engineering roles.
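To see how quickly that "AI tax" compounds, here's a rough, purely illustrative sketch. The baseline planning figures are invented; the multipliers are the ones cited above, with the talent premium taken at the midpoint of the 25-40% range.

# Illustrative only: how the "AI tax" inflates a naive first-year estimate.
# Baseline figures are hypothetical; multipliers come from the text above.
DATA_PREP_MULTIPLIER = 3.0   # data preparation typically ~3x what was planned
COMPLIANCE_OVERHEAD = 0.17   # ~17% overhead for high-risk systems
TALENT_PREMIUM = 0.30        # midpoint of the 25-40% specialist labour premium

naive_data_prep = 150_000    # hypothetical planning figures
naive_labour = 400_000
naive_other = 100_000        # tooling, vendor fees, integration, etc.

adjusted_data_prep = naive_data_prep * DATA_PREP_MULTIPLIER
adjusted_labour = naive_labour * (1 + TALENT_PREMIUM)
adjusted_total = (adjusted_data_prep + adjusted_labour + naive_other) * (1 + COMPLIANCE_OVERHEAD)

naive_total = naive_data_prep + naive_labour + naive_other
print(f"Naive estimate:    {naive_total:,.0f}")
print(f"Adjusted estimate: {adjusted_total:,.0f}  ({adjusted_total / naive_total:.1f}x)")

Even with these fairly conservative assumptions, the adjusted figure lands at roughly twice the naive estimate, which is why the 40-60% operational allowance above isn't padding.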
What's practical about this approach is that it acknowledges AI's fundamental uncertainty while maintaining fiscal discipline. Instead of betting everything on single implementations, you're building a portfolio of learning investments that compound over time.
Think venture portfolio, not software purchase.
The Organisational Readiness Reality Check
Are your systems, budgets and processes ready for the staggering level of uncertainty that AI investments demand? Probably not—but few are. Companies want both higher returns and faster adoption of higher-risk technologies.
Seventy-four percent of adopters fail to scale AI because they start with technology and work backwards into problems, instead of starting with business problems and working forwards into technology solutions.
This is where most enterprises stumble with AI budgeting. They allocate like a software purchase (heavy upfront technology costs) when they should allocate like a capability development programme (heavy upfront organisational investment).
The successful minority starts with business capabilities they want to build and uses AI as enabling technology. Accenture's analysis shows successful AI companies spend 70% on people and process changes, 20% on data infrastructure, and only 10% on algorithms.
The parallel is instructive: early cloud adopters who tried to "lift and shift" existing applications struggled. The ones who built cloud-native capabilities from scratch captured disproportionate value.
Same pattern emerging with AI.
Vendor Negotiation in an Unequal Power Dynamic
The AI vendor market has created a power dynamic that most enterprises haven't fully grasped. Unlike traditional enterprise software, where buyers could play vendors against each other, AI capabilities are concentrated among a handful of providers with genuine technological differentiation.
This changes negotiation fundamentals entirely. Traditional procurement—RFPs, competitive bidding, detailed SLAs—assumes vendor technologies are interchangeable. Not yet in AI. OpenAI, Anthropic, and Google aren't competing primarily on price; they're competing on capability development speed.
New reality, new rules.
Smart enterprises are adapting by treating AI vendors as technology partners rather than suppliers. They structure relationships around shared value creation rather than service delivery. Consider how Walmart leverages AI-powered negotiation with its tail-end suppliers—using chatbots to conduct focused negotiations at scale that benefit both parties. The AI automates the process while improving terms and supply chain flexibility.
What emerges from successful implementations is a shift toward outcome-based agreements. One financial institution restructured its AI contract to include performance incentives tied to customer satisfaction improvements rather than traditional technical metrics. By aligning vendor compensation with business outcomes, they achieved 24% cost reductions while strengthening the vendor relationship.
The practical shift involves three adaptations:
Value-based contracting. Structure agreements around business outcomes rather than technical deliverables. Instead of paying for compute hours or API calls, negotiate based on productivity improvements, customer satisfaction gains, or revenue impact. This aligns incentives and creates partnership rather than procurement dynamics.
Shared risk models. Establish contracts where vendors participate in both upside and downside. Leading AI implementations show that 57% of vendor negotiations yield positive results when buyers use data-driven approaches to demonstrate market positioning and shared opportunity.
Capability evolution clauses. Build agreements that adapt as AI technology advances. Include provisions for model upgrades, integration with new tools, and performance improvement benchmarks that recognise AI's rapid development cycle.
This approach works because it recognises AI's current market reality: most valuable AI capabilities are still being invented, not just implemented. Vendors that help you discover new applications create more value than vendors that efficiently deliver predictable services.
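To make the value-based and shared-risk ideas concrete, here's one possible shape for an outcome-based fee schedule. Everything here is hypothetical: the metric, thresholds, floor, and cap would all be negotiated per agreement, so treat this as a sketch of the mechanic rather than a template.

# Illustrative only: a hypothetical outcome-based vendor fee with shared
# downside (fee floor) and shared upside (capped bonus).
def vendor_fee(base_fee: float, measured_uplift: float, target_uplift: float) -> float:
    """Scale the fee around an agreed business-outcome target.

    Uplifts are fractions, e.g. 0.10 for a contracted 10% improvement in
    customer satisfaction or productivity.
    """
    attainment = max(0.0, min(measured_uplift / target_uplift, 1.5))  # agreed band
    if attainment <= 1.0:
        # Below target: fee scales from 60% of base (no impact) up to 100% at target.
        return base_fee * (0.6 + 0.4 * attainment)
    # Above target: shared upside, capped at 120% of base.
    return base_fee * (1.0 + 0.4 * (attainment - 1.0))

# Example: contracted 10% improvement, vendor delivers 8% -> 92% of the base fee.
print(vendor_fee(base_fee=250_000, measured_uplift=0.08, target_uplift=0.10))

The floor protects the vendor's fixed delivery costs; the cap keeps the buyer's exposure bounded. Both sides then argue about the metric and the measurement window, which is exactly the conversation you want to be having.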
Here's the challenge: This is uncomfortable for procurement teams trained on adversarial negotiation, but it reflects current AI market dynamics. Organisations getting real value from AI vendor relationships treat them like joint ventures—shared risk, shared upside, shared learning.
The Board Governance Gap
Board governance around AI reveals a knowledge gap that's affecting budget approval processes across industries. Deloitte's latest board survey found that 31% of directors report AI isn't even on their board agenda, while 66% acknowledge having limited to no AI knowledge or experience.
Classic innovation bottleneck.
This creates the predictable challenge we've seen with every platform technology adoption. Boards trained on traditional business evaluation criteria struggle to assess investments in rapidly evolving technologies they don't understand. Only 13% of S&P 500 companies have directors with AI expertise, yet these same boards are approving AI budgets and evaluating AI strategies.
What becomes apparent is a pattern of misaligned responses:
Over-caution. Rejecting valuable opportunities because risk frameworks designed for traditional technology don't account for AI's learning capabilities and iterative improvement models.
Over-enthusiasm. Approving poorly conceived initiatives because the technology sounds transformational without understanding implementation requirements or realistic timelines.
Delegation without oversight. Pushing AI decisions down to management while maintaining ultimate accountability—a recipe for misalignment between strategic vision and tactical execution.
The solution isn't better board education—it's better communication frameworks.
Here's what works in practice:
Translate AI investments into familiar business language. Frame AI budgets as strategic capability investments, not technology purchases. Instead of explaining machine learning algorithms, position AI alongside other platform investments that took years to show full value—ERP implementations, cloud migrations, digital transformation initiatives.
What's effective: Present AI investments using the same portfolio language boards use for other strategic initiatives. Show how the 40/35/25 allocation model (proven use cases, promising experiments, exploratory capabilities) mirrors successful venture capital approaches boards already understand.
Provide concrete benchmarking context. Boards need reference points for AI investment levels and timelines. Share industry-specific data: "Companies in our sector investing 70% in people and processes, 20% in data infrastructure, 10% in algorithms achieve 2x higher success rates than those with traditional software allocation ratios."
Create structured decision-making frameworks. Rather than asking boards to evaluate technical capabilities, provide frameworks that assess AI investments against familiar criteria: competitive positioning, market expansion opportunity, operational resilience, customer experience enhancement.
Forward-looking boards are restructuring their approach to AI oversight. Some are expanding risk committee mandates to include AI governance alongside cybersecurity and compliance. Others are adding external advisors with AI expertise rather than trying to educate entire boards on technical details.
Most effective approach: Establish clear reporting structures where management presents AI initiatives using business impact language while technical teams provide detailed implementation oversight. This creates accountability without requiring boards to become AI experts.
As Jean-Dominique Senard, chairman at Renault, puts it: "There should be a close link between the board and management—one that is transparent, candid, and open." The key is making AI strategy discussions as clear and actionable as traditional business strategy conversations.
Success Metrics That Actually Matter
The measurement challenge in AI reflects a deeper confusion about what AI actually does for businesses. Most companies are trying to measure AI like software implementations—efficiency gains, cost reductions, process improvements.
But AI's primary value proposition isn't operational efficiency; it's decision-making enhancement.
Only 19% of companies report revenue increases greater than 5% from enterprise AI investments because they're measuring the wrong outcomes. Teams capturing real value from AI focus on capability development metrics: learning velocity, decision quality improvement, and business model expansion possibilities.
The insurance company that reduced underwriting time by 50% didn't just automate existing processes—they enabled entirely new risk assessment capabilities. The measurement question isn't "how much did we save?" but "what decisions can we now make that were impossible before?"
Measure capability building, not cost cutting.
The current AI investment cycle parallels previous technology adoption patterns with remarkable consistency. The 4% of companies creating substantial AI value aren't innovating new business practices—they're applying proven technology adoption principles to AI's specific characteristics.
The practical implication for enterprise leaders is surprisingly straightforward: treat AI investment decisions like you would treat any other platform technology adoption. Focus on capability building over feature acquisition. Prioritise organisational learning over technical implementation. Structure vendor relationships around partnership rather than procurement.
The technology feels new, but the playbook isn't.
Worth Your Time: What I'm Reading This Week:
The conversation around AI budgets and vendor decisions is evolving rapidly. Smart product teams are moving beyond the hype to focus on practical implementation challenges, evaluation frameworks, and learning from early failures. Here's what caught my attention this week.
The Essentials
Teresa Torres' Step-by-Step Guide to AI Product Discovery | Aakash Gupta
Teresa Torres breaks down how continuous discovery habits apply in the AI age, covering both traditional discovery enhanced by AI tools and discovery for AI features themselves. She emphasises testing assumptions early and cheaply, using AI to augment workflow without replacing human empathy, and her "excavate the story" interview technique for getting chronological customer accounts. What's compelling: her practical warning that AI summaries can miss 20-40% of important detail, making human oversight crucial.
AI Ethics Case Studies: Real-World Failures | Avi Perera
Analysis of major AI failures including IBM Watson for Oncology, Amazon's biased hiring algorithm, and other high-profile cases. The piece examines what went wrong, legal implications, and lessons for responsible AI development. Essential reading for understanding implementation pitfalls and building better governance frameworks around AI product decisions.
Into the Abyss: Examining AI Failures | Harvard Ethics Center
Academic analysis of AI misalignment cases that provides practical insights for product teams. The piece examines instances where AI initiatives went off course, from healthcare to hiring, and extracts lessons about data quality, validation protocols, and the importance of human oversight. What makes this valuable is its focus on organisational learning rather than just technical fixes, emphasising how these failures should influence other AI initiatives.
Industry Intel
The AI Vendor Evaluation Checklist Every Leader Needs | VKTR
A comprehensive guide examining red flags, green lights, and must-haves when evaluating AI vendors. The piece covers data governance practices, ethical AI policies, and long-term partnership potential beyond technical specifications. Particularly valuable are the sections on identifying vendors who treat AI deployment as partnerships rather than traditional procurement relationships, aligning with themes from successful AI implementations.
For Our Consideration
These pieces reflect a maturing conversation about AI in product development—moving from "can we do this?" to "should we do this, and how do we do it responsibly?" The emphasis on evaluation frameworks, failure analysis, and human-centred approaches suggests the industry is entering a more pragmatic phase of AI adoption.
Outside the Terminal:
Sometimes, inspiration comes from places far from dashboards and workflows. This is just a little personal flavour on who I am, where I’m going to be, and what I’m thinking about right now.
Events
The speaking circuit calls again, and I'm excited to be out sharing thoughts and ideas in person. If you're around Oxford or interested in enterprise search and data fundamentals, here's where you'll find me:
Product Tank Oxford - September 23rd, 6PM
NexGen Enterprise Search Summit – September 24th, 9AM
Film

Jurassic Park (1993)
Still one of the greatest films ever made. Beyond the dinosaurs, it’s a cautionary tale: complex systems, poorly managed, will always outpace your control. There’s a lot here that applies to generative AI—security, scale, humility—but the real takeaway is simple: execution matters. Get it right, or get eaten.
Thanks for making it all the way to the end. I’d love to know what’s sparking your curiosity this week—hit reply or share your own “Outside the Terminal” pick. Until next time, keep learning fast and building well.