Here’s the uncomfortable truth: the first generation of enterprise AI agents won’t fail because the models aren’t good enough; they’ll fail because enterprises don’t understand the value hidden in their own workflows. We saw the same blind spot sink the first wave of cloud adoption. Companies rushed to “lift and shift” legacy systems into the cloud, only to discover that without rethinking workflows, costs ballooned and agility disappeared. The lesson? Software is hard work, and skipping the unglamorous foundations always catches up with you.
Right now, I’m seeing three recurring anti-patterns:
1. Agent washing instead of agent building
Vendors are rebranding chatbots, RPA tools, and assistants as “agents” when they don’t demonstrate genuine agentic behaviours: autonomous decision-making within defined bounds, self-correction, multi-step reasoning, and contextual adaptation. This isn’t just semantics—it creates false confidence inside enterprises. Leaders think they’re buying the future, but they’re really inheriting old tech with new stickers. The organisations that succeed will be the ones asking harder questions: Can this system adapt when context changes? Can it sustain performance at scale? If the answer is no, you’re not building an agent—you’re bolting a new label onto an old workflow.
2. Technology-first thinking instead of workflow-first design
Marina Danilevsky puts it bluntly: humans are bad communicators, so why are we expecting AI agents to decode messy, unstructured behaviour on their own? The successful path starts with mapping workflows—where decisions are made, what context matters, and where autonomy creates value instead of risk. Ignore this, and you’ll spend millions on “intelligent” systems that collapse under the weight of human ambiguity. Product teams that succeed will think like anthropologists first and technologists second.
3. Weak data foundations
Data is the oxygen of agents, yet most enterprises still operate with stale, fragmented systems. Gartner reports that more than half of organisations admit their data isn’t AI-ready. Poor quality and poor integration turn every AI initiative into a tax rather than a multiplier. It’s tempting to race ahead because agents look exciting, but without strong data pipelines, real-time context, and auditability, you’re just accelerating bad decisions.
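The “AI-ready” gap is concrete enough to check mechanically. As a minimal sketch (the field names, thresholds, and checks here are invented for illustration, not from any specific framework), a data-readiness audit can start as automated checks for freshness, completeness, and traceability:

```python
from datetime import datetime, timedelta, timezone

def audit_records(records, max_age_hours=24,
                  required_fields=("id", "updated_at", "source")):
    """Flag records that are stale, incomplete, or missing an audit trail."""
    now = datetime.now(timezone.utc)
    issues = []
    for r in records:
        missing = [f for f in required_fields if f not in r]
        if missing:
            # Incomplete records can't support contextual decisions.
            issues.append((r.get("id"), f"missing fields: {missing}"))
            continue
        age = now - r["updated_at"]
        if age > timedelta(hours=max_age_hours):
            # Stale data means the agent acts on yesterday's reality.
            issues.append((r["id"], "stale: last updated %.0f hours ago"
                           % (age.total_seconds() / 3600)))
    return issues

# Toy records standing in for a real enterprise data source.
records = [
    {"id": "a1", "updated_at": datetime.now(timezone.utc), "source": "crm"},
    {"id": "a2", "updated_at": datetime.now(timezone.utc) - timedelta(days=3),
     "source": "erp"},
    {"id": "a3", "source": "csv_upload"},  # no timestamp: untraceable
]
print(audit_records(records))
```

Checks like these are trivial individually; the point is running them continuously against the systems an agent actually reads from, rather than auditing once and assuming the data stays healthy.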
The pattern here is simple but often ignored: agents don’t fail because the models can’t think—they fail because organisations haven’t done the hard work of preparing their own systems and workflows. The ones who take that seriously will be the ones still standing in three years, while the rest quietly wind down their “transformative” pilots.
Why European Regulation Creates Strategic Advantage
While US companies rush headlong into autonomous agents, European regulators are insisting on human oversight through the AI Act, whose obligations for general-purpose AI models began applying in August this year, with fines of up to €35 million or 7% of global turnover.
At first glance, regulation might feel like friction, but it’s really a strategic lever if you approach it with the right mindset. Organisations that bake governance into their agent architecture from day one gain a hard-to-copy moat: decision limits define exactly what an agent can do, action logs provide accountability, and automated escalation ensures safety. Those are hard to replicate, and if you execute them well, your governance processes may be among your most valuable IP within a decade.
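To make “governance baked into the architecture” less abstract, here is a minimal sketch of the three mechanisms just described: a decision limit, an append-only action log, and automatic escalation. Everything here (class names, the spending-limit framing) is a hypothetical illustration, not a reference to any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActionLog:
    """Append-only record of every decision the agent attempts."""
    entries: list = field(default_factory=list)

    def record(self, action: str, amount: float, outcome: str) -> None:
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "amount": amount,
            "outcome": outcome,
        })

@dataclass
class BoundedAgent:
    """Agent that may act autonomously only within a defined decision limit."""
    autonomy_limit: float  # max amount it may approve without a human
    log: ActionLog = field(default_factory=ActionLog)

    def decide(self, action: str, amount: float) -> str:
        if amount <= self.autonomy_limit:
            self.log.record(action, amount, "executed")
            return "executed"
        # Outside bounded authority: escalate to a human, never act silently.
        self.log.record(action, amount, "escalated")
        return "escalated"

agent = BoundedAgent(autonomy_limit=500.0)
print(agent.decide("issue_refund", 120.0))   # within the limit
print(agent.decide("issue_refund", 2500.0))  # over the limit, escalates
```

The design choice worth noting: the limit and the log live in the wrapper around the agent, not in the model prompt, so compliance does not depend on the model behaving itself.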
Those who see compliance as a box to tick will struggle; those who see it as a design principle will create a foundation for durable, trustworthy AI.
The Cloud Migration Parallel: Learning from the Past
The current AI agent adoption wave echoes the cloud migration patterns of the 2010s. Back then, enterprises ran into familiar obstacles: legacy systems that resisted change, manual processes slowing velocity, and a failure to optimise for new paradigms.
Take Netflix’s 2008 hardware failure: a two-day outage forced a hard rethink of infrastructure, which eventually enabled one of the most successful cloud migrations in enterprise history. The lesson? Failures in infrastructure are opportunities for transformation—but only if organisations are prepared to act strategically.
We’re seeing a similar evolution in AI: early deployments will shift from chatbot rebranding to genuine agentic capability. MIT research shows buying tools from specialised vendors succeeds 67% of the time, whereas internal builds succeed only one-third as often. The pattern is clear: infrastructure, governance, and workflow alignment—not just the technology itself—determine whether organisations capture the value of agents or watch their pilots quietly fade.
Three Infrastructure-First Patterns Differentiating Successful Agent Deployments
Across industries, the difference between agents that fizzle and agents that create lasting impact isn’t the model—it’s how organisations structure infrastructure, workflows, and decision-making around them. Here are three patterns that demonstrate this principle in practice:
1. Retail Supply Chain Intelligence
Walmart’s Trend-to-Product system tracks social media trends, generates mood boards, and feeds insights directly into prototyping processes. Success comes from building agents around decision points—turning trend signals into actionable product choices—rather than automating isolated tasks like inventory tracking.
2. Healthcare Revenue Cycle Automation
UT Southwestern’s autonomous voice-driven workflow transcribes, summarises, and writes patient interactions back into Epic systems. Staff time is freed while eligibility and authorisation processes are verified automatically. The pattern here is workflow-centric design: agents enhance human decision-making rather than replacing it.
3. Financial Services Compliance Intelligence
Leading financial institutions deploy multi-agent systems that monitor transactions in real time and adjust compliance rules dynamically. This demonstrates contextual adaptation—agents react to changing regulatory requirements without requiring constant manual updates.
Across these examples, the shared lesson is clear: durable value comes from infrastructure-first thinking, not chasing the latest AI model. Agents succeed when organisations intentionally design around decision-making patterns, integrate robust data, and embed governance at every layer.
Product Management's Critical Role in Agent Success
Product management uniquely positions organisations for agent success through strategic technology evaluation, workflow integration expertise, and stakeholder alignment capabilities. Unlike IT leaders focused on technical implementation or business leaders focused on ROI, product managers understand the intersection of user needs, technical feasibility, and business value.
As my friend Martin Eriksson has written, “We’re about to witness the Cambrian Explosion of software. 99% of it will go extinct.” That observation captures exactly why product managers are critical in this era: they are the ones who connect exciting new capabilities to outcome strategy and business value.
Product managers live at the intersection of workflow, technology, and business value. They know software is hard work—not just building models, but integrating them into messy human systems so they actually deliver results. Modern product managers are emerging as orchestrators of agentic workflows, designing holistic systems that support entire product lifecycles.
IT leaders often focus on infrastructure, and executives on ROI. Product managers, by contrast, translate real business needs and technical capabilities into valuable products. They bring those threads together, and they are best placed to help businesses treat the operational processes underlying their software as key IP for agentic adoption.
In short: effective product management is the orchestration layer that will determine which companies survive the Cambrian explosion—and which will fade away.
What You Can Do Right Now
Sceptics are right to question agent reliability and organisational readiness. IBM’s Marina Danilevsky points out that we haven’t yet figured out ROI on LLM technology more broadly, while MIT Technology Review warns against hype overtaking reality.
But inaction comes at a cost. MIT research shows that 95% of generative AI initiatives fail not because the models are weak, but because organisations lack the capability to deploy them effectively. The solution isn’t avoiding agents—it’s building the foundation to use them successfully.
Here are three core priorities to focus on right now:
1. Data Infrastructure
Audit whether your systems provide real-time business context, ensure data quality, and establish audit trails for all decisions. Agents can only be as good as the data that feeds them.
2. Governance Framework
Map where agents create versus destroy value, define bounded authority, implement escalation mechanisms, and clarify stakeholder roles. Governance is not a compliance checkbox—it’s a competitive advantage.
3. Workflow Pilots
Start small with rule-based, high-frequency processes. Measure outcomes carefully, iterate, and expand gradually. This ensures agents enhance decision-making without introducing systemic risk.
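To illustrate the third priority, here is a hedged sketch of a workflow pilot harness (the case types, risk threshold, and routing rule are all invented for the example): only low-risk, high-frequency cases are routed to the agent, everything else stays with humans, and outcomes are counted so the expand-or-hold decision is made on evidence rather than enthusiasm:

```python
import random

def is_pilot_eligible(case: dict) -> bool:
    """Rule-based gate: only high-frequency, low-risk cases enter the pilot."""
    return case["type"] == "address_change" and case["risk_score"] < 0.2

def run_pilot(cases, handle_with_agent):
    """Route eligible cases to the agent, the rest to humans; count outcomes."""
    stats = {"agent_handled": 0, "human_handled": 0, "agent_errors": 0}
    for case in cases:
        if is_pilot_eligible(case):
            stats["agent_handled"] += 1
            if not handle_with_agent(case):
                stats["agent_errors"] += 1  # feeds the expand/hold decision
        else:
            stats["human_handled"] += 1
    return stats

# Toy data standing in for a real case queue.
random.seed(0)
cases = [
    {"type": random.choice(["address_change", "refund"]),
     "risk_score": random.random()}
    for _ in range(100)
]
stats = run_pilot(cases, handle_with_agent=lambda c: True)
print(stats)
```

Because the gate is an explicit rule rather than a model judgement, widening the pilot is a one-line, reviewable change, which is exactly the kind of controlled expansion the priority above describes.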
Organisations that take this approach consistently outperform peers who rush to production—often by 6–12 months of measurable business impact.
The Bottom Line: We have roughly 36 months before 15% of everyday work decisions are made autonomously. The real race isn’t about being first—it’s about building infrastructure, governance, and workflows that make every future agent deployment faster, safer, and higher quality.
Play the long game.
Worth Your Time: What I'm Reading This Week
The AI landscape is shifting faster than most product teams can adapt. Here's what's actually worth your attention—and why it matters for the decisions you're making right now.
The Essentials
Matt LeMay breaks down the “low-impact PM death spiral” and the single question that predicts team survival. It’s a reminder that impact is measured in outcomes, not activity, and that even teams following all the “best practices” can stumble if they lose sight of what really moves the needle. You can follow Matt’s Newsletter here.
A refreshingly practical lens on the AI landscape: “2025 isn’t about AI versus humans—it’s about organisations that harness AI effectively versus those that don’t.” The framework highlights capability over hype, a recurring theme I see across successful teams.
Industry Intel:
How CIOs Are Building and Buying Gen AI in 2025 | Andreessen Horowitz
Survey data shows enterprise AI budgets have matured: 37% of companies now run 5+ models in production. The insight? Successful adoption isn’t about having the flashiest model—it’s about building multi-model operational frameworks and decision workflows.
EU Rules on General-Purpose AI Models | European Commission
The EU AI Act is now live. Any product team operating in the EU must account for transparency, copyright, and risk assessment requirements. Compliance is no longer a checkbox—it’s a strategic lever.
2025 AI Safety Index | Future of Life Institute
An independent evaluation of leading AI companies on safety and governance. The takeaway: organisational capability and process maturity often matter more than the sophistication of the AI itself.
What I'm Thinking About:
Across all these pieces, the pattern is clear: success comes from building capability, not chasing technology. The companies pulling ahead aren’t necessarily using the most advanced models—they’re the ones who have invested in sustainable AI operating models, reliable data pipelines, and feedback loops that actually improve outcomes over time.
Outside the Terminal: What I'm Into This Week
Sometimes, inspiration comes from places far from dashboards and workflows. These aren't AI-related, but they're things I’ve engaged with that I thought were worth mentioning. Here’s a snapshot of what’s shaping my thinking and keeping me curious this week:
Music
“GeiSha” by Uki captures the same magic that makes the best Gorillaz tracks irresistible: ancient-sounding chants layered over crisp hip-hop production that feels both timeless and completely modern. The drumline hits with precision while subtle arrangement shifts and instrumental breaks keep you locked in without ever feeling repetitive. It's the kind of track that makes you move without thinking, where old soul meets new beats in a way that just feels right.
Film
Watching films often gets me to open up and think bigger, which I appreciate. Our work can pull us deep into complex details, and films are a good reminder of the wideness of the world. I like thinking about the production, the scriptwriting process, and the complexity that goes into making something people read as a single narrative.
I recently watched Materialists (2025): a young, ambitious New York City matchmaker is caught between the perfect match and her imperfect ex. The film is gorgeously shot and styled, though it’s not a blockbuster or a transformative cinematic experience. I walked out of the theatre a little disappointed in the way it was positioned, but a few days later I appreciated some of the brighter moments.
It’s not quite as affecting as Past Lives (2023), but it’s a reminder that even humans don’t always know what they want. More than that, it highlights human nuance and imperfection—things no AI agent can fully anticipate or replicate. You can read my full Letterboxd review if you want the details.
Human complexity is messy, subtle, and sometimes unpredictable—just like the systems we try to build. Software is hard work, but it’s work worth doing carefully, thoughtfully, and with an eye toward how real humans experience it, whether they’re making it, using it, or some mix of both.
– Saielle