Can Data Governance Lead to Data Transparency and Understanding?

33% of C-Level Execs Don’t Trust Their Data. Can Data Governance Help?

Jeff Shortis | July 13, 2017


Data is a critical part of any organization’s DNA: it moves from countless sources and flows through multiple systems in support of numerous mission-critical business processes. Yet it’s rare to find anyone who will identify themselves as a data owner, let alone take on that responsibility for the data within their business line. The fact is, most people don’t know where the data they rely on to make decisions originates, nor how much trust they can put behind it, and that uncertainty is one of the main reasons they don’t take ownership. The same ambiguity makes it increasingly difficult for data users to understand how much they can trust the data and who to go to when they have questions. If you can’t trust the quality of your data, and have no objective metrics to back up any quality claims, then you also can’t trust the insights you’re trying to gain from it.

All it takes to discredit an analytical insight is for someone to surface an issue with the data behind it. When that happens, morale drops and the culture of innovation is inhibited. Good data leads to better insights and strengthens the entrepreneurial spirit. That alone should answer any questions about the ROI of data governance.

Why It’s Crucial to Understand the Quality of Your Data

Business users need transparency into the definitions and quality of their data to assess its fitness for purpose in solving business problems. Without transparency, data users are left in the dark and will likely make incorrect assumptions about what data should be used and how. For data consumers, understanding the level of data quality is a big deal!

The reality is that measuring and communicating meaningful conditions around data quality is no simple task unless you have the right approach. Whether data quality is “good” or “bad” is not a binary determination; it depends on the expectations of each business process consuming the data. Let’s take a look at an example:

Let’s assume you work at a bank and part of your job is producing monthly bank statements. It would be critical to have the correct address for each statement, including the correct name, street, street number, ZIP code, and so on. To ensure addresses are correct, business rules around address information are defined and implemented. Because these statements are mailed to clients, the tolerance for errors would be extremely low.

But what if we were repurposing the data for something that had nothing to do with producing bank statements? What if we were reporting on interest in certain types of products within certain areas or regions? In this scenario, we could tolerate far less accuracy in the address information: only the ZIP code would need to be correct. Inaccurate address details, apart from the ZIP code, would not affect the outcome of this initiative.
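To make the contrast concrete, here is a minimal sketch in Python of the same records being validated against each consumer’s own rules and tolerance. The record fields, rule functions, and pass-rate thresholds are all illustrative assumptions, not part of any particular product:

```python
import re

# Hypothetical customer records; the field names are illustrative only.
records = [
    {"name": "Ada Lovelace", "street": "10 Main St", "zip": "10001"},
    {"name": "Alan Turing",  "street": "",           "zip": "94105"},
    {"name": "Grace Hopper", "street": "1 Navy Way", "zip": "9410"},  # malformed ZIP
]

def valid_zip(record):
    # Regional reporting only needs a well-formed 5-digit ZIP code.
    return bool(re.fullmatch(r"\d{5}", record["zip"]))

def valid_mailing_address(record):
    # Statement mailing needs every address component populated and a valid ZIP.
    return bool(record["name"]) and bool(record["street"]) and valid_zip(record)

# Each consumer pairs its own rule with its own tolerance for failures.
consumers = {
    "statement_mailing":  (valid_mailing_address, 0.999),
    "regional_reporting": (valid_zip,             0.66),
}

for name, (rule, min_pass_rate) in consumers.items():
    pass_rate = sum(rule(r) for r in records) / len(records)
    verdict = "fit" if pass_rate >= min_pass_rate else "NOT fit"
    print(f"{name}: pass rate {pass_rate:.0%} -> {verdict} for purpose")
```

Running this, the same three records are not fit for statement mailing (only one record passes the strict rule) yet perfectly fit for regional reporting, which is exactly the point: fitness depends on the consumer, not the data alone.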

We have established that the thresholds separating good data from bad are completely dependent on the expectations of the business function using the data. This isn’t how we typically think about data quality, and it raises the bar with regard to the measurement, articulation, dashboarding, and communication capabilities required to empower data consumers across the various business functions. So how do we dive deeper into the quality needs for organizational data and figure out the thresholds of quality required?

Measuring Data Quality 

To provide a comprehensive measurement of data quality, an organization needs capabilities in place around data governance, quality, and analytics. As information becomes available, these capabilities help detect, measure and report quality issues, and provide an easy-to-reference business glossary that models the business concepts and thresholds which give data quality its context, impact and business meaning. To achieve tangible metrics, we need the ability to calculate data quality scores. Once scores are defined and measured, they can be leveraged to notify data consumers and data owners when thresholds are breached. Once notified, data consumers can use the glossary to get rich definitional information, ensuring a proper understanding of the data, including its lineage. Full visibility allows your team to gain valuable insights into not only the details of your data assets, but also the risks associated with their use across various business applications.
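As a rough illustration of the score-and-notify idea, the sketch below checks per-dimension quality scores against a consumer’s thresholds and raises an alert on any breach. The dimension names, scores, and the alerting hook are hypothetical stand-ins, assuming scores are produced elsewhere by profiling the data:

```python
from dataclasses import dataclass

@dataclass
class QualityScore:
    dimension: str    # e.g. "completeness" or "validity"
    score: float      # fraction of records passing this dimension's rules
    threshold: float  # minimum acceptable score for this consumer

def check_and_notify(consumer, scores):
    # Stand-in for a real alerting hook (email, ticket, dashboard flag).
    for s in scores:
        if s.score < s.threshold:
            print(f"ALERT [{consumer}] {s.dimension}: "
                  f"{s.score:.1%} is below the required {s.threshold:.1%}")

# Illustrative values; in practice these would come from profiling runs.
check_and_notify("statement_mailing", [
    QualityScore("completeness", 0.997, 0.999),  # breached -> alert fires
    QualityScore("validity",     0.999, 0.999),  # met -> no alert
])
```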

A fit-for-purpose practice and solution around data gives users the ability to choose the right data at the right time and gain a full understanding of each data set, including its definition, ownership, and associated quality, knowing that their quality requirements may differ from those of other consumers of the same data. This inventory of knowledge can provide straightforward answers to the fundamental questions asked by all data users: “Who owns the data?” “Can we trust the data for our particular business function?” “What’s the definition of the data?” “Are the definitions the same across all systems?”
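One way to picture such an inventory is as a lookup keyed by data asset, where each entry carries the definition, owner, lineage, and per-consumer quality levels. Every field name and value below is a hypothetical example, not a prescribed schema:

```python
# A hypothetical business-glossary entry; all fields are illustrative.
glossary = {
    "customer_address": {
        "definition": "Primary mailing address on file for a customer",
        "owner": "Retail Banking Operations",
        "lineage": "core banking system -> nightly ETL -> reporting warehouse",
        "quality_by_consumer": {
            "statement_mailing":  0.997,  # each consumer sees its own score
            "regional_reporting": 0.999,
        },
    },
}

entry = glossary["customer_address"]
print("Who owns the data?", entry["owner"])
print("What does it mean?", entry["definition"])
print("Where does it come from?", entry["lineage"])
```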

By providing data governance, quality, and analytics capabilities, organizations can gain a broad and comprehensive understanding of their data, enabling every data consumer to extract maximum value, knowing both the current quality levels and the minimum quality required to meet their own particular business needs.

To learn more about ensuring transparency in your data, download the data sheet below.

Get Insights

For a deeper dive into this topic, visit our resource center. Here you will find a broad selection of content that represents the compiled wisdom, experience, and advice of our seasoned data experts and thought leaders.

Download Data Sheet