December 29, 2025

How Businesses Evaluate Their Data Across Operations

Have you ever wondered how some businesses seem to make confident decisions while others stumble through uncertainty?

Dorian Trevisan

The difference often comes down to one critical practice: data evaluation. In 2024, JPMorgan Chase learned this lesson the hard way, paying approximately USD 350 million in fines for providing incomplete trading data to surveillance platforms (Reuters, February 2024). Meanwhile, organisations that systematically evaluate their data are achieving remarkable results—over 90% of them report measurable value from their data and analytics investments (Coherent Solutions, 2024).

Yet despite data being recognised as the world's most valuable resource, over 80% of companies still rely on stale data for decision-making (Agility PR Solutions, May 2022). The gap between those who evaluate data effectively and those who don't is widening, particularly as artificial intelligence transforms both the evaluation process and its strategic importance.

So how do successful businesses actually evaluate their data? Let's explore the methodologies, metrics, and modern practices that separate reactive organisations from strategic ones.

How Businesses Evaluate Data: The Framework Approach

Data evaluation isn't a one-time audit—it's a continuous cycle built on structured frameworks that ensure quality across multiple dimensions.

Research consistently identifies four core evaluation dimensions that appear across all successful frameworks: accuracy, completeness, consistency, and timeliness (Mohammed et al., 2024). However, comprehensive evaluation extends far beyond these basics. Current research recognises 29 representative data quality dimensions encompassing accessibility, credibility, relevance, and security (Data Quality Assessment: Challenges and Opportunities, 2024).

The typical evaluation lifecycle follows five key stages (Data Science Central, 2022):

Assessment involves defining data quality standards and measuring current performance against these benchmarks. Organisations identify their data sources, determine which attributes are necessary, and establish acceptability criteria. For instance, a business might decide that customer contact information must be 100% accurate and complete, while preference data can tolerate 90% accuracy.
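To make that concrete, here's a minimal sketch of how acceptability criteria like these might be encoded and checked, using completeness as the measured attribute. The table, field names, and thresholds are invented for illustration, not a prescribed standard:

```python
# A minimal assessment-stage sketch: measure completeness per field against
# declared acceptability criteria. All data and thresholds are illustrative.
import pandas as pd

customers = pd.DataFrame({
    "email":      ["amy@example.com", None, "raj@example.com"],
    "phone":      ["0412 000 111", "0413 222 333", "0414 555 666"],
    "preference": ["email", None, "sms"],
})

# Acceptability criteria: contact fields must be fully populated,
# preference data tolerates 10% missing values.
criteria = {"email": 1.00, "phone": 1.00, "preference": 0.90}

for field, required in criteria.items():
    completeness = customers[field].notna().mean()
    status = "PASS" if completeness >= required else "FAIL"
    print(f"{field}: {completeness:.0%} complete (requires {required:.0%}) -> {status}")
```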

Design focuses on architecting data pipelines that ensure quality standards are met. This includes selecting the right data quality processes—parsing, cleansing, standardisation, matching, and deduplication—and determining how they'll be applied.
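As a rough illustration of what one such pipeline step might look like, the sketch below standardises two fields and then deduplicates on a matching key. The normalisation rules are deliberately simplistic assumptions, not a complete cleansing process:

```python
# An illustrative design-stage step: standardisation, matching, deduplication.
import pandas as pd

raw = pd.DataFrame({
    "name":  ["Acme Pty Ltd", "ACME PTY LTD ", "Beta Corp"],
    "email": ["hello@acme.com", "hello@acme.com", "info@beta.com"],
})

# Standardisation: trim whitespace and normalise case before matching.
raw["name_std"] = raw["name"].str.strip().str.lower()
raw["email_std"] = raw["email"].str.strip().str.lower()

# Matching and deduplication: treat rows sharing a standardised email
# as one entity, keeping the first occurrence.
deduped = raw.drop_duplicates(subset="email_std", keep="first")

print(f"{len(raw) - len(deduped)} duplicate record(s) removed")
```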

Implementation executes these quality processes, transforming incoming data into the state defined during assessment. This is where the rubber meets the road.

Monitoring provides continuous observation of data quality metrics. Leading organisations use data observability tools that assess data across ecosystems and manage incidents through single dashboards (IBM, 2024).
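Under the hood, the core of such monitoring is simple: compare live metrics against thresholds and raise incidents on breaches. Here's a stripped-down sketch, assuming the metrics are computed upstream; real observability platforms add dashboards, lineage, and incident workflows:

```python
# A simplified monitoring check. Metric values, thresholds, and the alert
# mechanism are placeholders for illustration.
metrics = {"completeness": 0.97, "accuracy": 0.92, "timeliness": 0.88}
thresholds = {"completeness": 0.95, "accuracy": 0.95, "timeliness": 0.90}

incidents = [
    f"{name} at {value:.0%}, below target {thresholds[name]:.0%}"
    for name, value in metrics.items()
    if value < thresholds[name]
]

for incident in incidents:
    print("INCIDENT:", incident)  # in practice, route to a dashboard or pager
```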

Improvement closes the loop through iterative refinement based on findings. The Australian Government Data Governance Framework (September 2025) emphasises this cyclical approach, requiring agencies to review and evaluate data strategies regularly to ensure they remain fit-for-purpose.

The first Australian Public Service data maturity assessment, conducted in 2024, revealed that many agencies still struggle with governance basics (ANAO, 2025), a reminder that even large organisations are still building these capabilities.

What Gets Measured: Beyond the Obvious Metrics

Understanding what to evaluate is as critical as understanding how to evaluate it. While every business is unique, certain dimensions prove consistently valuable.

The Essential Four form the foundation:

Completeness measures whether all required fields and records are present. Missing data creates gaps that undermine analytics and can result in costly penalties—as JPMorgan discovered.

Timeliness assesses currency and availability when needed. In an era where information delays yield increasingly negative consequences, this dimension has become critical. The sobering reality? Over 80% of companies currently rely on stale data for decision-making (Agility PR Solutions, May 2022).

Accuracy evaluates correctness against known reference values. This matters more than you might think: research shows manual data entry error rates range from 0.55% to 26.9% (IBM, 2024). Even small percentages translate to significant problems at scale. At just a 1% error rate, a database of one million records contains roughly 10,000 faulty entries.

Consistency ensures uniformity across systems and over time. When different departments interpret data differently, confusion and errors multiply.
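To ground these definitions, here's a rough sketch of scoring all four dimensions on a single table. It assumes a reference list of valid values for accuracy and a second system's copy of the data for consistency; everything shown is invented for illustration:

```python
# A rough sketch of scoring the essential four dimensions on one table.
from datetime import datetime, timezone, timedelta
import pandas as pd

now = datetime.now(timezone.utc)
crm = pd.DataFrame({
    "country":    ["AU", "NZ", None, "XX"],
    "updated_at": [now, now - timedelta(days=2), now - timedelta(days=40), now],
})
billing_countries = pd.Series(["AU", "NZ", None, "US"])  # same customers, other system
valid_countries = {"AU", "NZ", "US"}                     # reference values

completeness = crm["country"].notna().mean()                       # fields present
timeliness   = (now - crm["updated_at"] <= timedelta(days=30)).mean()  # fresh enough
accuracy     = crm["country"].dropna().isin(valid_countries).mean()    # valid values
consistency  = (crm["country"] == billing_countries).mean()        # systems agree

for name, score in [("completeness", completeness), ("timeliness", timeliness),
                    ("accuracy", accuracy), ("consistency", consistency)]:
    print(f"{name}: {score:.0%}")
```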

But comprehensive evaluation doesn't stop there. Advanced frameworks incorporate additional dimensions (MDPI, 2022): integrity (preservation of relationships between data elements), accessibility (ease of obtaining data when needed), security (protection against unauthorised access), ease of manipulation (usability for intended purposes), and relevancy (alignment with business objectives).

Organisations also evaluate beyond the data itself—assessing data lineage and transformations, governance maturity, infrastructure capability, and how stakeholders actually interact with and utilise data.

Here's the challenge: while researchers have defined over 50 data quality dimensions, practical measurement typically remains limited to approximately 11 dimensions (MDPI, 2022). This gap between theoretical frameworks and practical implementation represents both a challenge and an opportunity for businesses willing to go deeper.

Why Evaluation Matters: The Business Case

If you're wondering whether systematic data evaluation is worth the investment, the evidence is compelling.

The Performance Impact is Measurable

Companies employing data-driven decision-making improve operational productivity by as much as 63% (MicroStrategy, 2024). Organisations transitioning from basic to advanced business analytics experience an 81% boost in profitability (Kearney, 2024). These aren't marginal improvements—they're transformative advantages.

Risk Management and Compliance Are Non-Negotiable

Regulatory frameworks increasingly demand quality assessment. The EU's AI Act explicitly requires measures to ensure data quality (European Parliament, 2024), while GDPR mandates documented compliance with data quality standards. In Australia, government AI usage grew from 27 entities in 2022-23 to 56 entities in 2023-24, requiring enhanced governance frameworks (ANAO, 2025).

The consequences of poor evaluation extend beyond fines. Incomplete or inconsistent data leads to flawed decisions, missed opportunities, and operational inefficiencies that compound over time.

AI Success Depends on Data Quality

We're witnessing a paradigm shift from model-centric to data-centric AI, where data quality's impact on model performance takes centre stage (Neutatz et al., 2021, 2022). Poor-quality data leads to inaccurate predictions and flawed AI insights, no matter how sophisticated your algorithms.

The correlation is clear: companies with AI-led processes achieve 2.5 times higher revenue growth and 2.4 times greater productivity compared to their peers (Accenture, 2024). But this advantage only materialises when the underlying data is properly evaluated and maintained.

Common Problems: What Trips Businesses Up

Even with the best intentions, organisations encounter predictable obstacles when evaluating data.

Data quality issues create the most immediate problems. Data silos and fragmentation scatter information across systems and departments, making consistent evaluation difficult (Acceldata, 2024). Stale information plagues the majority of organisations, with over 80% relying on outdated data. Incomplete and invalid entries persist, with error rates reaching as high as 26.9% in some contexts (IBM, 2024).

Implementation challenges compound these technical issues. Governance gaps—the lack of clear structures and processes—hinder effective quality management (Acceldata, 2024). The complexity of evaluating big data at scale, dealing with its volume, variety, and velocity, creates assessment difficulties that overwhelm traditional approaches (Cai and Zhu, 2015).

Cultural resistance remains a stubborn barrier. Many organisations spread their efforts thin with small, sporadic bets rather than implementing systematic evaluation programs (PwC, 2026). Without executive commitment and cross-functional buy-in, even well-designed frameworks fail to deliver results.

The measurement gap persists: despite 50+ defined dimensions, practical measurement remains limited to approximately 11 (MDPI, 2022). This disconnect between theory and implementation highlights how difficult it is to translate frameworks into daily practice.

How AI Is Transforming Data Evaluation

The explosion of AI into business contexts represents the most significant transformation in data evaluation practices we've seen in decades.

The Paradigm Has Shifted

We've moved from model-centric to data-centric AI, where emphasis on data quality and its impact on underlying models takes precedence over algorithm optimisation (Data Quality Assessment, 2024). This shift makes evaluation central rather than peripheral to AI initiatives.

The acceleration is remarkable. Technology reached median human performance in natural language understanding in 2023—four years earlier than the 2027 estimate made only a few years prior (McKinsey, 2023). This rapid advancement means AI-enabled evaluation capabilities are arriving faster than anticipated.

AI Enables New Capabilities

No-code AI platforms have revolutionised data cleaning and matching processes since 2020, making evaluation more user-friendly and efficient (BARC, 2024). These tools liberate data engineers from tedious tasks, allowing them to focus on strategic roles.

Modern AI enables automated anomaly detection that identifies quality issues escaping manual review, predictive quality forecasting that anticipates where problems will emerge, intelligent profiling at previously impossible scales, and real-time evaluation through data observability dashboards (IBM, 2024).
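Even without an AI platform, the statistical core of anomaly detection is easy to illustrate. The sketch below flags outliers in a daily completeness metric using a simple z-score; production observability tools use far richer models, and the series here is invented:

```python
# A minimal statistical sketch of automated anomaly detection on a
# daily data-completeness metric.
import statistics

daily_completeness = [0.97, 0.96, 0.98, 0.97, 0.96, 0.97, 0.74, 0.97]

mean = statistics.mean(daily_completeness)
stdev = statistics.stdev(daily_completeness)

for day, value in enumerate(daily_completeness):
    z = (value - mean) / stdev
    if abs(z) > 2:  # flag values more than two standard deviations out
        print(f"Day {day}: completeness {value:.0%} looks anomalous (z = {z:.1f})")
```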

Agentic AI Represents the Next Wave

By 2028, an estimated 33% of enterprise software applications will incorporate agentic AI, up dramatically from less than 1% in 2024 (Coherent Solutions, 2024). These autonomous systems set evaluation goals, plan assessment tasks, execute quality checks, and adapt based on feedback—all without continuous human oversight.
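The closed loop itself can be illustrated in a few lines: detect failures, remediate, re-evaluate. Everything in this toy sketch is a hypothetical placeholder, including the optimistic assumption that remediation lifts the metric:

```python
# A toy sketch of the agentic pattern: execute checks, adapt on feedback.
metrics = {"completeness": 0.99, "accuracy": 0.91, "timeliness": 0.95}
THRESHOLD = 0.95

def remediate(check):
    # Placeholder for a corrective action, e.g. re-running a cleansing job;
    # we assume the action lifts the metric above target.
    print(f"  remediating {check}")
    metrics[check] = 0.97

for cycle in range(3):
    failures = [name for name, value in metrics.items() if value < THRESHOLD]
    if not failures:
        print(f"Cycle {cycle}: all checks pass, goal met")
        break
    print(f"Cycle {cycle}: failures {failures}")
    for check in failures:
        remediate(check)
```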

The Performance Impact Is Proven

In manufacturing specifically, AI-driven evaluation contributes to median improvements of 30% in forecast accuracy, a 25% reduction in product defects, and a 20% reduction in excess inventory (IBM, 2024). These aren't hypothetical benefits—they're results organisations are achieving today.

But Success Requires More Than Technology

AI introduces new evaluation requirements. Explainability assessment ensures that AI-driven evaluations remain transparent and auditable. Bias detection identifies algorithmic bias in evaluation processes. Human validation protocols determine when AI outputs need human review—a practice more common among high-performing AI adopters (McKinsey, 2025).
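A human validation protocol can be as simple as a confidence gate: act automatically on high-confidence AI evaluations and queue the rest for review. The threshold and record structure below are assumptions for the sketch:

```python
# An illustrative human-validation gate for AI-generated evaluations.
REVIEW_THRESHOLD = 0.80  # assumed cut-off; tune to your risk tolerance

ai_evaluations = [
    {"record": "invoice-1041", "verdict": "valid",   "confidence": 0.98},
    {"record": "invoice-1042", "verdict": "invalid", "confidence": 0.55},
]

for item in ai_evaluations:
    if item["confidence"] >= REVIEW_THRESHOLD:
        print(f"{item['record']}: auto-accepted as {item['verdict']}")
    else:
        print(f"{item['record']}: queued for human review")
```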

Here's the reality check: while 88% of survey respondents report regular AI use, most organisations haven't embedded AI deeply enough into their workflows to realise material enterprise-level benefits (McKinsey, 2025). Only 51% of companies have mature AI and analytics capabilities (Kearney, 2024).

In Australia, the Australian Taxation Office adapted its existing data governance arrangements to support AI adoption, recognising that AI introduces specific risks requiring additional evaluation frameworks (ANAO, 2025). This practical approach—building on existing strengths while addressing new requirements—offers a model for organisations navigating AI implementation.

Building Your Evaluation Capability

Data evaluation has evolved from a compliance checkbox to a strategic imperative that directly impacts competitive advantage.

The fundamentals remain constant: accuracy, completeness, consistency, and timeliness form the foundation of quality data. But the scope and sophistication of evaluation continue expanding, particularly as AI transforms both the processes and the importance of getting evaluation right.

Organisations face persistent challenges—data silos, governance gaps, cultural resistance, and the complexity of evaluating data at scale. Yet emerging trends in AI-driven automation, real-time evaluation, and comprehensive frameworks offer pathways to more effective, scalable assessment practices.

At Via, we view AI as an enabler, not the ultimate decision-maker. Technology is at its best when it empowers humans to make informed decisions. The key is starting systematically: identify what matters most to your business, implement consistent measurement practices, and build evaluation into your operational rhythm rather than treating it as an occasional audit.

The gap between leaders and laggards is widening. Companies with AI-led processes achieve 2.5 times higher revenue growth compared to their peers (Accenture, 2024). For NDIS and aged care providers specifically, strong data evaluation enables better care delivery, regulatory compliance, and operational efficiency—outcomes that directly impact the people you serve.

The question isn't whether to invest in data evaluation. It's whether you can afford not to.

Ready to build evaluation capabilities that drive better decisions? Via specialises in identifying the signals that matter most to your business and optimising systems to capture, interpret, and act on high-quality data. Get in touch to discover how we can help you move from reactive to strategic.


About the Author

Dorian is an expert software advisor with a development background that provides a detailed and comprehensive understanding of systems and processes.
