89% of leaders in IBM's global study of over 3,000 CEOs say they cannot have full confidence in AI insights until the underlying data is trusted and verified. That is not a technology problem. It is a data problem that sits upstream of every AI decision your organization makes. C-suite executives who have invested heavily in AI models but lightly in data quality, data governance, and data observability are building on an unstable foundation. And in 2026, the stakes of that instability have gone up. Agentic AI systems are no longer just generating text. They are making independent decisions about portfolios, loan underwriting, and operational workflows. The data those systems rely on is now load-bearing.
89%
Of leaders say they cannot fully trust AI insights until the underlying data is verified, per IBM's global CEO study of over 3,000 executives.
95%
Of executives believe transparency in how AI uses data will determine the success of their products in 2026, per IBM's CEO study.
70%
Of organizations are expected to have deployed autonomous agents managing complex workflows without human intervention by end of 2026, per IBM IBV.
80%
Of multinational firms are expected to adopt AI sovereignty strategies by 2027, keeping data within specific geographic or corporate boundaries for compliance and reliability.
Trust is now a measurable business asset, not a feeling
95% of executives in IBM's study say transparency in how AI uses data will determine product success in 2026. That is not a soft metric. It is a commercial one. Customers, regulators, and partners are all asking the same question: how do you know your AI is working from accurate, unbiased, and traceable data? Organizations that can answer that question clearly have a real advantage. Those that cannot are carrying reputational and regulatory exposure they may not have formally priced into their risk frameworks. C-suite executives need a concrete answer to that question, not a general assurance that their data team is on top of it.
Generic infrastructure produces generic intelligence
IBM's infrastructure COO Barry Baker has stated that the era of using identical universal servers for AI is over. The shift in 2026 is toward specialized, co-created infrastructure where hardware and software are designed together for specific use cases. What this means for data and intelligence specifically is that the quality of your outputs depends on the quality of the pipeline they run through. Latency and reliability have overtaken raw compute power as the primary performance metrics. A fraud detection system built on generic infrastructure and generic data pipelines will perform worse than one built with the specific data flows and latency requirements of fraud detection in mind. The same is true across use cases.
AI sovereignty is moving from policy conversation to operational requirement
80% of multinational firms are expected to adopt AI sovereignty strategies by 2027. This means keeping data and processing within specific geographic or corporate boundaries. For C-suite executives, this is no longer just a compliance question for the legal team. It is a data architecture decision that affects which AI use cases you can deploy, where you can deploy them, and which partners and cloud providers you can work with. Organizations that have not mapped their data flows against their sovereignty requirements are likely to discover conflicts when they try to scale specific AI applications. It is better to find those conflicts in planning than in production.
If you cannot see inside your AI, you cannot manage your risk
IBM's Think 2026 briefings introduce a concept called Observability as Code. The core problem it addresses is that generative AI tools are often black boxes. When they fail, it is hard to see why. The solution being adopted by leading enterprises is to manage AI monitoring through the same CI/CD pipelines used for software development. This enables what IBM calls Agentic Mesh architectures, where multiple AI agents monitor each other's health and reliability in real time. For C-suite executives, the practical implication is this: if your AI systems do not have monitoring built into their deployment pipeline, you are finding out about failures after they have already affected customers or operations.
At Marchcroft: Innovating Today, Shaping Tomorrow
7 in 10
Leading operations using predictive analytics as a data discipline
Seven in ten pioneering utilities use predictive analytics to manage energy supply and demand. The data discipline behind that capability (clean inputs, reliable pipelines, and real-time access) is the same discipline that makes AI intelligence trustworthy in any sector. Self-optimization at scale starts with data you can act on without second-guessing it.
67%
Managing distributed data as a coordinated system
67% of optimizing utilities manage microgrids as both local services and grid-wide assets. For data and intelligence, the equivalent is treating distributed data sources as a coordinated system rather than isolated repositories. Organizations that have unified their data access across functions make faster, better decisions than those managing separate data lakes for each team.
~65%
Using forecasting to guide where data investment is needed
Nearly two-thirds of utilities create asset failure forecasts to evaluate network impact before problems occur. Applied to data strategy, this means using simulation and scenario planning to identify where data gaps will become decision-making failures before they do. Planning capabilities tell you where your data investment is actually needed, not where it was last requested.
01. Establish what data your AI is actually working from
This sounds basic. But 89% of leaders say they cannot fully trust their AI insights because they are not confident in the underlying data. So the first step is not a technology decision. It is an audit. Which data sources are feeding your AI systems? How fresh is that data? Who owns data quality for each source? Is the data being used for autonomous decisions the same data that has been validated for that use case? C-suite executives who can answer these questions specifically are in a fundamentally different position from those who assume their data team has it covered. The assumption is where the risk lives.
02. Build transparency into how AI uses data, not as a feature but as a requirement
95% of executives say transparency in AI data use will determine product success in 2026. That means external stakeholders (customers, regulators, and partners) are going to ask how your AI makes decisions and what data it relies on. Organizations that have built explainability and auditability into their AI data pipelines from the start can answer those questions. Organizations that treated transparency as a future consideration are going to find it expensive to retrofit. C-suite executives should be asking their data and technology teams right now: if a regulator or a major customer asked us to demonstrate how a specific AI decision was made and what data drove it, how long would that take us?
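Answering that regulator's question in minutes rather than months requires decision-level lineage: every AI decision recorded alongside a fingerprint of the exact inputs that drove it. The sketch below is a hedged illustration, not a standard API; the record fields are assumptions about what an auditor would ask for.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_decision(decision_id, model_version, inputs, output):
    """Build an audit record tying an AI decision to the data that drove it.

    The SHA-256 hash fingerprints the inputs, so the record can later prove
    which data the model saw, even if the source tables have since changed.
    """
    payload = json.dumps(inputs, sort_keys=True).encode()
    return {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "inputs": inputs,   # or a pointer to an immutable snapshot
        "output": output,
    }
```

Stored in an append-only log, records like this let a team reconstruct any individual decision on demand instead of retrofitting traceability after the question arrives.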
03. Treat AI observability as operational infrastructure, not a monitoring afterthought
IBM's Observability as Code framework addresses a specific failure mode: AI systems that work fine in testing and fail in production in ways nobody saw coming. The organizations managing this well have monitoring built into their deployment pipelines from day one. Multiple agents checking each other's outputs. Automated alerts when data quality degrades. Version control on the data inputs as well as the models. C-suite executives who have signed off on AI deployments without asking how failures will be detected and traced are carrying operational risk that is difficult to quantify until something goes wrong. And at that point, it is already a customer problem, not just a technical one.
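One concrete way to put monitoring into the deployment pipeline itself is a data-quality gate: a CI/CD step that blocks a release when the input data degrades past agreed thresholds. The sketch below is a minimal illustration under assumed thresholds and field names, not IBM's Observability as Code implementation.

```python
def quality_gate(rows, max_null_rate=0.02, required_fields=("id", "amount")):
    """Return (passed, failures) for a batch of input records.

    Intended to run as a CI/CD pipeline step: a nonempty failure list
    should fail the build and stop the AI deployment from shipping.
    """
    if not rows:
        return False, ["dataset is empty"]
    failures = []
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            failures.append(
                f"{field}: null rate {rate:.1%} exceeds {max_null_rate:.1%}"
            )
    return (not failures), failures
```

The design point is that the gate is versioned code living next to the model, so quality thresholds are reviewed, changed, and rolled back the same way the software is.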
Get Access to the Audit Sheet
Unlock valuable insights with our complimentary audit sheet. Streamline your processes, identify areas for improvement, and boost efficiency, all at no cost.
Q: Our AI outputs look reasonable. How do we know if the underlying data is actually reliable?
Q: What does AI sovereignty actually mean for how we manage our data?
Q: We are moving toward agentic AI. What does that change about how we need to think about data?
Latest Blogs
Marchcroft Editorial - 2026-03-22
The Agentic Pivot: What Autonomous AI Decisions Mean for Your Data Strategy
Agentic AI
Data Strategy
Intelligence
Marchcroft Editorial - 2026-03-10
AI Trust Is a Data Problem. Here Is Where to Start.
Data Quality
AI Trust
Governance
Marchcroft Editorial - 2026-02-26
Observability as Code: Why AI Monitoring Needs to Live in Your Deployment Pipeline
Observability
AI Infrastructure
Data
Want to understand where your data strategy is actually holding your AI back?
Here is what working with us on data and intelligence looks like.