Data & Intelligence
7 MINUTE READ
APR 2026
Most organizations have data. Far fewer have data they actually trust.

89% of leaders in IBM's global study of over 3,000 CEOs say they cannot have full confidence in AI insights until the underlying data is trusted and verified. That is not a technology problem. It is a data problem that sits upstream of every AI decision your organization makes. C-suite executives who have invested heavily in AI models but lightly in data quality, data governance, and data observability are building on an unstable foundation. And in 2026, the stakes of that instability have gone up. Agentic AI systems are no longer just generating text. They are making independent decisions about portfolios, loan underwriting, and operational workflows. The data those systems rely on is now load-bearing.

89%

Of leaders say they cannot fully trust AI insights until the underlying data is verified, per IBM's global CEO study of over 3,000 executives.

95%

Of executives believe transparency in how AI uses data will determine the success of their products in 2026, per IBM's CEO study.

70%

Of organizations are expected to have deployed autonomous agents managing complex workflows without human intervention by end of 2026, per IBM IBV.

80%

Of multinational firms are expected to adopt AI sovereignty strategies by 2027, keeping data within specific geographic or corporate boundaries for compliance and reliability.

2026 is the year AI stopped asking for permission and started making decisions. IBM's Institute for Business Value calls this the Agentic Pivot. Organizations are moving away from AI that responds to prompts toward AI that acts independently. High-frequency portfolio adjustments. Automated loan underwriting. Real-time fraud decisions. These are not future scenarios. They are live deployments happening now. And here's the problem: when an autonomous agent makes a bad decision because it was working from bad data, the consequences are not a wrong answer in a chat window. They are a financial loss, a compliance breach, or a customer who does not come back. C-suite executives who treat data quality as a back-office concern are taking on risk that now sits at the operational level.
Four things the data tells us about where organizations are getting this wrong
These findings come from IBM's 2026 research across CEO studies, infrastructure reports, and Think 2026 briefings. Each one points to a specific gap between what C-suite executives assume about their data and what is actually true.

Trust is now a measurable business asset, not a feeling

95% of executives in IBM's study say transparency in how AI uses data will determine product success in 2026. That is not a soft metric. It is a commercial one. Customers, regulators, and partners are all asking the same question: how do you know your AI is working from accurate, unbiased, and traceable data? Organizations that can answer that question clearly have a real advantage. Those that cannot are carrying reputational and regulatory exposure they may not have formally priced into their risk frameworks. C-suite executives need a concrete answer to that question, not a general assurance that their data team is on top of it.

Generic infrastructure produces generic intelligence

IBM's infrastructure COO Barry Baker has stated that the era of using identical universal servers for AI is over. The shift in 2026 is toward specialized, co-created infrastructure where hardware and software are designed together for specific use cases. What this means for data and intelligence specifically is that the quality of your outputs depends on the quality of the pipeline they run through. Latency and reliability have overtaken raw compute power as the primary performance metrics. A fraud detection system built on generic infrastructure and generic data pipelines will perform worse than one built with the specific data flows and latency requirements of fraud detection in mind. The same is true across use cases.

AI sovereignty is moving from policy conversation to operational requirement

80% of multinational firms are expected to adopt AI sovereignty strategies by 2027. This means keeping data and processing within specific geographic or corporate boundaries. For C-suite executives, this is no longer just a compliance question for the legal team. It is a data architecture decision that affects which AI use cases you can deploy, where you can deploy them, and which partners and cloud providers you can work with. Organizations that have not mapped their data flows against their sovereignty requirements are likely to discover conflicts when they try to scale specific AI applications. It is better to find those conflicts in planning than in production.

If you cannot see inside your AI, you cannot manage your risk

IBM's Think 2026 briefings introduce a concept called Observability as Code. The core problem it addresses is that generative AI tools are often black boxes. When they fail, it is hard to see why. The solution being adopted by leading enterprises is to manage AI monitoring through the same CI/CD pipelines used for software development. This enables what IBM calls Agentic Mesh architectures, where multiple AI agents monitor each other's health and reliability in real time. For C-suite executives, the practical implication is this: if your AI systems do not have monitoring built into their deployment pipeline, you are finding out about failures after they have already affected customers or operations.

At Marchcroft, we don't just meet expectations; we exceed them.

7 in 10

Leading operations using predictive analytics as a data discipline

Seven in ten pioneering utilities use predictive analytics to manage energy supply and demand. The data discipline behind that capability (clean inputs, reliable pipelines, and real-time access) is the same discipline that makes AI intelligence trustworthy in any sector. Self-optimization at scale starts with data you can act on without second-guessing it.

67%

Managing distributed data as a coordinated system

67% of optimizing utilities manage microgrids as both local services and grid-wide assets. For data and intelligence, the equivalent is treating distributed data sources as a coordinated system rather than isolated repositories. Organizations that have unified their data access across functions make faster, better decisions than those managing separate data lakes for each team.

~65%

Using forecasting to guide where data investment is needed

Nearly two-thirds of utilities create asset failure forecasts to evaluate network impact before problems occur. Applied to data strategy, this means using simulation and scenario planning to identify where data gaps will become decision-making failures before they do. Planning capabilities tell you where your data investment is actually needed, not where it was last requested.

01. Establish what data your AI is actually working from

This sounds basic. But 89% of leaders say they cannot fully trust their AI insights because they are not confident in the underlying data. So the first step is not a technology decision. It is an audit. Which data sources are feeding your AI systems? How fresh is that data? Who owns data quality for each source? Is the data being used for autonomous decisions the same data that has been validated for that use case? C-suite executives who can answer these questions specifically are in a fundamentally different position from those who assume their data team has it covered. The assumption is where the risk lives.

02. Build transparency into how AI uses data, not as a feature but as a requirement

95% of executives say transparency in AI data use will determine product success in 2026. That means external stakeholders (customers, regulators, and partners) are going to ask how your AI makes decisions and what data it relies on. Organizations that have built explainability and auditability into their AI data pipelines from the start can answer those questions. Organizations that treated transparency as a future consideration are going to find it expensive to retrofit. C-suite executives should be asking their data and technology teams right now: if a regulator or a major customer asked us to demonstrate how a specific AI decision was made and what data drove it, how long would that take us?
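One way to make "how long would that take" a short answer is to write the audit record at decision time instead of reconstructing it later. A minimal sketch, with illustrative field names rather than any regulatory schema:

```python
import json
from datetime import datetime, timezone

def record_decision(decision_id: str, model_version: str,
                    inputs: dict, output: str, data_sources: list[str]) -> str:
    """Serialize everything needed to explain this decision later."""
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model made the call
        "inputs": inputs,                # the exact features the model saw
        "output": output,                # what the system decided
        "data_sources": data_sources,    # provenance of each input feed
    }
    return json.dumps(record, sort_keys=True)
```

If a record like this is written for every autonomous decision, answering a regulator's question becomes a lookup rather than a forensic project.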

03. Treat AI observability as operational infrastructure, not a monitoring afterthought

IBM's Observability as Code framework addresses a specific failure mode: AI systems that work fine in testing and fail in production in ways nobody saw coming. The organizations managing this well have monitoring built into their deployment pipelines from day one. Multiple agents checking each other's outputs. Automated alerts when data quality degrades. Version control on the data inputs as well as the models. C-suite executives who have signed off on AI deployments without asking how failures will be detected and traced are carrying operational risk that is difficult to quantify until something goes wrong. And at that point, it is already a customer problem, not just a technical one.
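The "monitoring built into the deployment pipeline" idea can be as simple as a quality gate that fails the build when input data degrades, run in CI/CD alongside the model's own tests. A sketch under assumed thresholds (the 1% null-rate limit is a made-up example, not an IBM figure):

```python
def quality_gate(rows: list[dict], required_fields: list[str],
                 max_null_rate: float = 0.01) -> bool:
    """Fail the deployment if a sample of input data shows quality degradation.
    Intended to run in the CI/CD pipeline before a model version ships."""
    if not rows:
        raise ValueError("empty sample: cannot assess data quality")
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            raise AssertionError(
                f"null rate for '{field}' is {rate:.1%}, "
                f"above the {max_null_rate:.1%} threshold")
    return True
```

The point is not this specific check but where it lives: version-controlled next to the deployment code, so a data regression blocks a release the same way a failing unit test does.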

Questions C-suite executives ask us about data and intelligence
These are the conversations that come up most often when we work with leadership teams moving from AI pilots toward production systems that make real decisions.

Q: Our AI outputs look reasonable. How do we know if the underlying data is actually reliable?

Reasonable-looking outputs are not the same as reliable outputs. The most dangerous data problems are the ones that produce plausible but wrong answers rather than obvious errors. Here's how to start checking: trace one significant AI decision back through its data pipeline and document every source, every transformation, and every assumption made along the way. Most organizations that do this find at least one point in the chain where data quality is assumed rather than verified. IBM's CEO research found that 89% of leaders lack full confidence in their AI insights for exactly this reason. The fix is not a new tool. It is a data audit discipline applied to the pipelines feeding your most consequential AI systems.
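Tracing a decision back through its pipeline is far easier when each transformation appends to a lineage log as it runs. A toy sketch of that discipline; the source name and transformation steps are hypothetical:

```python
class Lineage:
    """Carries a value through a pipeline while recording every
    source and transformation, so a decision can be traced back."""
    def __init__(self, value, source: str):
        self.value = value
        self.steps = [("source", source)]

    def apply(self, fn, description: str) -> "Lineage":
        self.value = fn(self.value)
        self.steps.append(("transform", description))
        return self

# usage: trace a toy risk score from raw input to final form
score = (Lineage(720, source="credit_bureau_feed")
         .apply(lambda v: v / 850, "normalize to 0-1")
         .apply(lambda v: round(v, 2), "round for scoring"))
```

Walking `score.steps` afterwards is exactly the exercise described above: every source, every transformation, and every assumption made explicit instead of assumed.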

Q: What does AI sovereignty actually mean for how we manage our data?

In practical terms it means knowing where your data lives, where it gets processed, and whether those locations are consistent with your regulatory and contractual obligations. 80% of multinational firms are expected to have formal AI sovereignty strategies by 2027. For most organizations, the work starts with mapping. Which AI applications process personal data? Which ones cross jurisdictions? Which cloud providers or partners are involved in those processing flows? C-suite executives often discover that their data architecture evolved organically and does not map cleanly onto their regulatory obligations. That gap is manageable when you find it in planning. It is a serious compliance problem when you find it after a regulatory inquiry.
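The mapping exercise can start as nothing more than a table of applications against the regions their data actually touches, checked against the regions each obligation permits. A minimal sketch with made-up application names and regions:

```python
def sovereignty_conflicts(app_data_flows: dict[str, set[str]],
                          allowed_regions: dict[str, set[str]]) -> list[str]:
    """Flag applications whose data touches regions outside their allowed set."""
    conflicts = []
    for app, regions in app_data_flows.items():
        allowed = allowed_regions.get(app, set())
        out_of_bounds = regions - allowed
        if out_of_bounds:
            conflicts.append(f"{app}: data flows to {sorted(out_of_bounds)} "
                             f"outside permitted regions {sorted(allowed)}")
    return conflicts
```

Running a check like this in planning surfaces the architecture-versus-obligation gaps the article warns about before a regulator does.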

Q: We are moving toward agentic AI. What does that change about how we need to think about data?

Everything. When AI moves from generating recommendations to making autonomous decisions, the quality, freshness, and traceability of the data it works from becomes a direct operational risk, not just a quality concern. IBM IBV estimates that 70% of organizations will have autonomous agents managing complex workflows by end of 2026. High-frequency portfolio adjustments, automated underwriting, and real-time operational decisions all depend on data that is not just accurate in general but accurate right now and traceable to a specific source. The confidence gap IBM identifies, where only 33% of leaders are optimistic about the global economy but 84% are bullish on their own firms, is largely because those firms believe their autonomous systems can handle volatility. But that confidence is only justified if the data feeding those systems is genuinely trustworthy. That is the question C-suite executives need to be able to answer.

