
Revisiting Data Quality and Data Observability: What’s (E)merging and Why That Matters

In 2023, I wrote a blog post, Myth-Busting Straight Talk About Data Quality Monitoring and Data Observability, highlighting the similarities and differences between Data Quality and Data Observability technology platforms. Back then, there was still a fair amount of confusion about the terminology and categorization of products in the Data Quality realm. “Data Quality” had been the accepted category until new vendors disrupted it by carving out a new space: “Data Observability.”

But, as anticipated, as more companies learned about modern Data Quality Monitoring and new Data Observability platforms, we started seeing customers’ perceptions and expectations converge: they began requiring the features and capabilities of both technologies in one solution.

Gartner solidified this convergence in 2024 by unveiling a joint category, “Data Quality and Data Observability,” supported by five pillars of capabilities. The first pillar in this blended category is observing data content, emphasizing the importance of the data itself.

Figure: Data Quality and Observability Pillar 1: Observing Data Content

Gartner’s validation re-energized me: Lightup was born to support the number one pillar, data content.

Data Content Is King

It’s settled: data content is king. I shared that observation on LinkedIn, along with a few key takeaways from the 2024 Gartner Data and Analytics event.

What’s (E)merging?

So what’s emerging, and merging, at the same time? We’ve seen vendors that originally offered pure-play Data Observability solutions start to expand into Data Quality as defined in my previous blog post, adding deep data scans to their platforms rather than just observing and analyzing metadata. On the flip side, vendors that started out with Data Quality as their core offering have begun including metadata metrics.
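To make that distinction concrete, here’s a minimal sketch in Python, assuming a generic DB-API connection and hypothetical table and column names, of a metadata-level observation versus a deep scan of the data content:

```python
# Minimal sketch: metadata observation vs. a deep data scan.
# `conn` is any DB-API 2.0 connection; the table and column names
# ("orders", "amount", "updated_at") are hypothetical.

def observe_metadata(conn, table="orders"):
    """Observability-style signals: volume and freshness, which many
    warehouses can answer cheaply without inspecting the row values."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*), MAX(updated_at) FROM {table}")
    row_count, last_update = cur.fetchone()
    return {"row_count": row_count, "last_update": last_update}

def scan_content(conn, table="orders", column="amount"):
    """Data-quality-style deep scan: read the actual values to measure
    the share that violates an expectation (here, NULL or negative)."""
    cur = conn.cursor()
    cur.execute(
        f"SELECT AVG(CASE WHEN {column} IS NULL OR {column} < 0 "
        f"THEN 1.0 ELSE 0.0 END) FROM {table}"
    )
    return cur.fetchone()[0]
```

A table that looks healthy at the metadata level (fresh, expected row count) can still fail the content scan, which is why converged platforms need both.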

That all makes sense because, ultimately, data teams care about whether the data is healthy. Is it fit for purpose or not? That’s their primary concern.

The next set of concerns includes being able to explain the problem in order to triage and remediate issues faster. Once a data issue exists, the concern shifts to finding the root cause of that problem. And to understand the root cause, it helps to have signals coming from a broad set of sources across the data stack. That’s where metadata metrics, logs, and lineage become very useful in explaining and localizing the problem.

Mirroring how industry analysts have explained this hierarchy of concerns, vendors have applied that framework of requirements to the way feature sets are built and prioritized, captured in three capabilities (sketched in code after the list):

  1. Monitoring: Detect bad data or data that fails to conform to expectations.
  2. Root Cause Analysis: Enable users to analyze and pinpoint where the issue originated and why.
  3. Data Remediation: Fix the problem so it doesn’t recur.
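
As a rough illustration of how those three capabilities chain together, here’s a hypothetical sketch; the function names and schema are illustrative, not any particular vendor’s API:

```python
# Hypothetical sketch of monitoring -> root cause analysis -> remediation.
# `conn` is any DB-API 2.0 connection; the schema is made up.

def monitor(conn, sql, threshold):
    """1. Monitoring: run a check query that returns one number and
    detect when it fails to conform to expectations."""
    cur = conn.cursor()
    cur.execute(sql)
    value = cur.fetchone()[0]
    return value, value <= threshold

def root_cause_context(conn, table):
    """2. Root cause analysis: collect surrounding signals (volume,
    freshness) so the failure can be localized, e.g. to a late load."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*), MAX(loaded_at) FROM {table}")
    rows, last_load = cur.fetchone()
    return {"rows": rows, "last_load": last_load}

def remediate(conn, table):
    """3. Remediation: here, move offending rows to a quarantine table;
    a durable fix would also correct the upstream cause."""
    cur = conn.cursor()
    cur.execute(
        f"INSERT INTO {table}_quarantine "
        f"SELECT * FROM {table} WHERE amount IS NULL"
    )
    cur.execute(f"DELETE FROM {table} WHERE amount IS NULL")
```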

Supporting all three capabilities in one platform is both sensible and practical for Data Quality initiatives, and it paves the way for a convergence of technologies, putting an end to the ongoing Data Quality versus Data Observability debate.

At the end of the day, we’re all trying to solve the same problem and deliver trusted data that’s fit for purpose for data products and data consumers.

But What About Users?

While this convergence of Data Quality and Data Observability continues to mature, there’s still a vast difference in user focus. Generally speaking, we’ve noticed a distinct separation between the target user groups for Data Quality and Data Observability platforms:

  • Data Observability tools, especially open source, tend to be code-heavy, requiring extensive time and engineering resources.
  • Traditional Data Quality tools can be code-heavy or require specialized skills to learn a proprietary rule engine.
  • Modern Data Quality Monitoring tools offer no-code, low-code, and custom SQL options for creating checks, aimed at supporting non-technical business users, developers, and engineers.

So the question is, what happens to the target user groups now that these two categories are merging?

At Lightup, we strongly believe that the best approach is to remove traditional technical barriers and democratize the process to include business users. That’s non-negotiable now. It’s the only sustainable way to scale for enterprise success and create the most value.

That’s why we’ve eliminated the technical barriers of writing Data Quality checks by providing no-code, low-code, and custom SQL options. With our approach, Data Quality isn’t relegated to the data team anymore. Now a spectrum of users can participate in defining good and bad data, from data engineers and junior support engineers to data stewards, analysts, and business users.
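To illustrate that spectrum (with hypothetical syntax, not Lightup’s actual configuration format), a no-code check and a custom SQL check can express the same intent at different levels of control:

```python
# Hypothetical check definitions; illustrative only, not Lightup's API.

# "No-code" style: pick a table, column, metric, and threshold.
no_code_check = {
    "table": "orders",
    "column": "email",
    "metric": "null_percent",
    "max": 1.0,  # alert if more than 1% of emails are NULL
}

# "Custom SQL" style: an analyst writes the query, but the pass/fail
# condition stays simple enough for a business user to review.
custom_sql_check = {
    "sql": """
        SELECT 100.0 * SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END)
               / COUNT(*)
        FROM orders
        WHERE created_at >= CURRENT_DATE - 7
    """,
    "max": 1.0,
}
```

The point is that both definitions stay readable to the same mixed audience, which is what lets Data Quality scale beyond the data team.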

Lightup Data Quality and Observability

As we look ahead to 2025, we’re proud to be part of the Data Quality and Observability category, and we’ve been expanding Lightup to support Observability features and capabilities, such as metadata metrics and lineage mapping import (coming in early 2025).

The market has spoken, and since “data content is king,” Data Quality and Observability is the newly combined category shaping the evolution of Lightup. And we couldn’t be more excited about it!

Cheers to a bright future as Lightup rings in the new year with a major “glow-up” in 2025!
