Sustainable AI success depends on building adaptive data stewardship relationships and organisational capabilities. In this week's piece, we'll explore how to build both effectively.
Every organisation talks about becoming "data-driven." Most struggle to move beyond reporting dashboards to systems that actually change decisions.
Add AI into the mix? The complexity multiplies.
AI systems don't just need data—they need data that flows differently, updates more frequently, and maintains quality standards traditional business intelligence never required. Leaders find themselves designing relationships between systems that never anticipated these demands.
Here's the catch. Better infrastructure doesn't automatically enable better decisions.
Why Data Relationships Trump Data Access
Traditional data transformation focuses on technical integration: APIs, pipelines, warehouses, lakes. Here's what most miss—infrastructure alone doesn't drive decision-making.
At Cazoo, Nicola Sedgwick developed a framework that shifted our thinking entirely. Her Quality Radar put relationships front and centre, recognising that code exists within systems generating products for real users.
This insight applies directly to AI implementation. Success depends on sustainable relationships between AI systems and data stewards who understand how these systems consume information differently than traditional analytics.
The hidden reality: What barely affected quarterly summaries can break operational systems. Latency that didn't matter for dashboards becomes critical for customer experience.
Cloudian's research confirms this as the most overlooked success factor: "Effective data management practices are necessary to maintain quality over time—including robust governance policies, version control for datasets, and mechanisms for tracking data lineage."
Here's what this means practically: You need data stewards who understand that AI systems surface inconsistencies traditional reporting masked. They must recognise that AI models can fail silently when data distributions shift in ways that don't affect human analysis.
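The "fail silently" point is concrete enough to sketch. One common way data stewards catch distribution shift before it degrades a model is the Population Stability Index (PSI), which compares what the model sees in production against its training distribution. This is a minimal illustrative sketch, not any specific vendor's tooling; the 0.25 threshold is a common rule of thumb, not a universal standard.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0] = float("-inf")   # catch production values below the training min...
    edges[-1] = float("inf")   # ...and above the training max

    def frac(sample, i):
        count = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(count / len(sample), 1e-6)  # clamp to avoid log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

training = [10, 12, 11, 13, 12, 11, 10, 12, 13, 11]
production = [15, 17, 16, 18, 17, 16, 15, 17, 18, 16]  # shifted upward

score = psi(training, production)
if score > 0.25:  # rule of thumb: >0.25 signals significant drift
    print(f"ALERT: data drift detected (PSI={score:.2f}), escalate to data steward")
```

The human analysis point holds here: a dashboard averaging this feed might barely change, while the model quietly sees inputs far outside anything it was trained on.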
Most organisations still treat this as a technical challenge.
The Relationship Building That Actually Works
Smart organisations get the relationship dynamics right first. Three patterns emerge consistently:
Data Quality Partnerships, Not Access Requests
Maersk's supply chain demonstrates how data relationships enable AI success. Their logistics coordinators didn't just provide data—they became partners in understanding how AI systems interpret global supply patterns differently than regional human analysis.
The shift? Instead of quarterly data access reviews, they established weekly partnership meetings. Logistics experts surfaced operational nuances that data might miss. Data engineers explained how AI systems needed information structured differently for real-time optimisation.
Key insight: Create shared incentives showing data stewards how their improvements directly impact AI performance. Make the connection explicit between data quality and business outcomes.
Collaborative Discovery Instead of Top-Down Requirements
AI transformation works better with continuous discovery where business experts and technical teams learn together. Siemens' predictive maintenance illustrates this beautifully.
Rather than dictating data requirements for AI diagnostic tools, they created collaborative workflows where technicians and data scientists discovered optimal patterns together. Technicians contributed contextual knowledge about equipment behaviour. AI surfaced patterns across thousands of similar machines.
What emerged: AI needed different data granularity and timing than traditional maintenance reporting—insights that top-down requirements gathering would have missed entirely.
What could we learn from that? Robust discovery processes reveal how AI systems change decision-making workflows in ways that affect data requirements. Plan for emerging data sources and evolving needs.
Systems Integration Thinking, Not Feature Addition
AI doesn't just automate existing analytics—it changes how information flows through organisations and affects decision patterns across departments.
Consider customer service getting AI-powered sentiment analysis. This changes how marketing interprets campaign feedback, how product teams prioritise features, how sales approaches relationship management. The data transformation extends far beyond the original AI implementation.
ING Bank's approach succeeded because they mapped workflow impacts three degrees out from direct users. They redesigned approval processes, meeting structures, and information sharing protocols across departments to accommodate new decision-making rhythms that AI-powered customer insights created.
Bottom line: Map how AI changes data consumption patterns across the entire organisation, not just direct AI users.
Most data transformation approaches miss a critical factor: AI systems evolve continuously, which means data requirements evolve continuously. Traditional data governance assumes relatively stable business intelligence needs.
AI governance requires adaptive data stewardship.
Stanford's research reveals the "capability evolution effect"—AI systems don't just consume data differently than traditional analytics, they change how organisations understand their own information needs over time.
The practical implication: Your data stewards need escalation processes when data problems affect AI system reliability. They must understand that AI systems surface quality issues that traditional reporting never revealed.
Amy Edmondson's learning anxiety research applies directly here: data stewardship teams simultaneously learn new AI requirements while questioning whether traditional practices remain relevant.
What this looks like operationally:
Regular data quality reviews with source system owners, not just access requests. Shared monitoring showing how data improvements directly impact AI performance. Clear communication channels when data issues affect AI system reliability.
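The "clear communication channels" practice can be made concrete: rather than letting a model silently consume a degraded feed, a steward-owned check turns data issues into explicit escalations. This is an illustrative sketch; the field names and the 5% null threshold are hypothetical examples, not a standard.

```python
def check_feed(rows, required_fields, max_null_rate=0.05):
    """Return human-readable issues found in an incoming data feed."""
    issues = []
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows) if rows else 1.0
        if rate > max_null_rate:
            issues.append(
                f"{field}: {rate:.0%} null values exceeds {max_null_rate:.0%} threshold"
            )
    return issues

# Hypothetical customer-sentiment feed with a degraded field
feed = [
    {"customer_id": 1, "sentiment": 0.8},
    {"customer_id": 2, "sentiment": None},
    {"customer_id": 3, "sentiment": None},
]

problems = check_feed(feed, ["customer_id", "sentiment"])
for p in problems:
    # Route to the steward instead of feeding bad data to the model
    print(f"ESCALATE to data steward: {p}")
```

The design choice matters more than the code: the check's output is phrased for a human steward, not a log file, because the goal is a conversation between data producers and AI consumers.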
The strategic shift is pretty clear: From data management to data relationships.

Building Change Capability for Continuous Evolution
Organisations succeeding at AI-powered data transformation treat it as capability building rather than system implementation. They build organisational capacity for continuous adaptation to evolving AI requirements.
This requires different metrics than traditional data projects. Track data relationship quality, not just availability. Measure collaborative problem-solving between business experts and data engineers, not access requests. Monitor innovation contributions from cross-functional partnerships, not compliance rates.
Create organisational capacity for ongoing adaptation rather than managing people through specific AI implementations.
Where Leadership Makes the Difference
Leading data transformation in the AI era requires balancing experimentation with new capabilities while maintaining quality standards that traditional analytics never required.
Copenhagen Business School research across 200 European organisations found that transformation success correlated with leaders' ability to maintain "productive ambiguity"—providing clarity about data governance principles while accepting uncertainty about specific AI implementation outcomes.
This differs from traditional data projects, where leaders typically provide detailed requirements and clear milestone definitions. AI's rapid evolution makes such specificity counterproductive.
The approach that works: Provide clear data governance principles while embracing tactical flexibility about how AI systems consume that data.
INSEAD research reveals that leaders who treat data governance challenges as collaborative problem-solving opportunities achieve 40% higher long-term adoption rates for AI-powered systems.
Worth noting here: the pattern extends beyond technology adoption to organisational learning capability.
Practical Starting Points for Data Leaders
Three approaches prove consistently effective:
First month: Relationship mapping, not technical architecture
Before designing data pipelines, understand how AI will change decision-making patterns across teams. Who needs different information? How do existing data stewards understand AI requirements?
Ongoing: Collaborative data discovery
Establish regular forums where business experts share what they're learning about AI data needs while data engineers explain how AI systems interpret information differently than traditional analytics.
Always: Connect data quality to business outcomes
Help data stewards understand how their work enables AI systems that deliver measurable business value. Make the connection explicit between data improvements and customer impact.
The Strategic Opportunity
Most data transformation approaches focus on building better technical infrastructure—faster pipelines, more storage, cleaner architectures.
Here's what organisations achieving sustainable AI success recognise: the constraint isn't technical capacity—it's human capability to adapt data practices to AI requirements. Treat data infrastructure investments as enablers for better human-data relationships, not replacements for collaborative data stewardship.
AI-powered data transformation isn't about technology deployment—it's about building organisational capabilities that evolve alongside technological capabilities. Technical solutions are increasingly commoditised. Human systems for data stewardship and cross-functional collaboration determine whether organisations capture AI's strategic potential.
Leaders who understand this distinction build competitive advantages that extend beyond specific AI implementations. They create organisational capabilities for continuous data evolution, collaborative problem-solving, and value creation that serve them regardless of how AI technology develops.
AI systems will keep evolving their data requirements. Organisational capability for adaptive data stewardship determines whether that evolution creates business value or technical debt.
The choice isn't whether to invest in data infrastructure—it's whether to build the human systems that turn data investments into sustainable competitive advantages.
Worth Your Time: What I'm Reading This Week
The reality of building AI products is messier than the hype suggests. While everyone talks about AI transformation, the practical work happens in the details—measuring what matters, designing interfaces that actually work, and figuring out how teams adapt to new tools and workflows.
This week's selection focuses on the implementation reality rather than the theoretical promise. These are the pieces helping product professionals navigate the actual challenges of building, measuring, and scaling AI products.
The U.S.I.D.O. Framework for AI Product Managers | Curtis Savage
A practical five-step framework (Understand, Strategize, Ideate, Define, Optimize) for integrating AI into every stage of product management work. Savage breaks down how AI can enhance traditional PM tasks rather than replace them, with specific examples for user research, competitive analysis, and feature prioritization. The framework addresses the common PM challenge of knowing where to start with AI tools while maintaining strategic thinking and human judgment.
KPIs for Gen AI: Measuring Your AI Success | Google Cloud Blog
Moving beyond accuracy metrics to business-relevant measurements for AI initiatives. This comprehensive guide covers model quality, operational efficiency, user engagement, and financial impact metrics specific to generative AI. Particularly valuable for its breakdown of how traditional KPIs fall short for AI systems and what operational metrics actually predict long-term success. Essential reading for anyone struggling to demonstrate AI ROI to stakeholders.
The Shape of AI
A living collection of interface patterns specifically designed for AI interactions. Unlike generic design pattern libraries, this focuses on the unique challenges of AI UX—handling uncertainty, showing AI reasoning, managing user expectations during processing. The patterns are crowd-sourced from real AI products, making it immediately applicable for teams designing conversational interfaces, AI-powered features, or entirely AI-native products.
Enhancing KPIs With AI | MIT Sloan Management Review
Research-backed analysis of how organizations use AI to discover hidden performance drivers and create more predictive metrics. Goes beyond measuring AI itself to show how AI transforms measurement systems across business functions. The Wayfair case study demonstrates practical application—using AI to identify better customer satisfaction predictors that improved recommendation engines and logistics decisions. Essential for leaders rethinking performance measurement in AI-augmented organisations.
For Our Consideration:
These pieces reinforce this week's central theme: successful AI transformation happens through better relationships, not just better technology. Whether it's measuring what matters (MIT Sloan), designing interfaces that build trust (Shape of AI), or frameworks that augment rather than replace human judgment (U.S.I.D.O.), the pattern is clear—the organisations winning with AI are those that strengthen the human systems alongside the technical ones.
Worth Your Time curates the highest-value content for product professionals working with AI, filtering signal from noise so you can focus on building rather than browsing.
Outside the Terminal:
Events
This month, I'm returning to the speaking circuit and looking forward to sharing insights on AI implementation and data fundamentals. If you're working on anything in this space, I'd welcome the chance to connect. Here's where you can find me:
Product Tank Oxford - September 23rd, 6PM
NexGen Enterprise Search Summit – September 24th, 9AM
Thank you for reading and for the work you're doing to build more thoughtful AI products. The future depends on practitioners like you who think deeply about implementation. These conversations matter.
– Saielle