
On a joint journey: the evolution of Corporate Data Quality

CTO & Co-Founder CDQ

It all started in the early 2000s: the Competence Center Corporate Data Quality (CC CDQ) was established at the University of St. Gallen as a research program in 2006, with the goal of exploring, developing, and testing solutions that advance data and analytics management in enterprises.

While pursuing his PhD in the Competence Center Business Networking, Dimitrios Gizanis realized what a crucial success factor master data quality is for business networking. At the time, I was doing my PhD in the Competence Center Corporate Data Quality, so when Dimi reached out, we quickly found common ground to explore the topic further.

Our formula was quite simple – expert know-how combined with a research framework in a community-based approach. We adjusted our research agenda to the highly relevant topic of master data management and brought together professionals from different industries to tackle these challenges collaboratively, building solid relationships as we kept the conversation going. What started as knowledge-exchange activities soon grew into a sharing mindset that consistently accelerated the results of our collaboration.

The CC CDQ objective – to transfer theoretical groundwork and scientific research results in the domain of data management into everyday business practice – hit the bull’s eye. A growing number of partners joined the discussion, and our efforts brought even more ideas to the table. We started thinking boldly about data sharing as a way to improve data quality, but it wasn’t until late 2010 that the idea was officially voiced and picked up by the members of the CC CDQ.

A December to remember

Surrounded by picturesque Lake Lucerne and mountains rising steeply out of the water, we were discussing approaches and best practices for improving data quality with data management practitioners from large European companies. It was yet another workshop organized by the CC CDQ to bring data experts together – over the past four years, the discussion on data management had gained momentum.

After an intense workshop day, the data buzz did not go to rest. Participants gathered for informal discussions and kept exchanging pains and concerns about their daily data struggles.

Suddenly, one of the experts said: 'Why is it that we all tackle data maintenance alone? We have so many of the same suppliers! Why am I updating a supplier record with a new tax number while my colleague here at Novartis does the same, and the guys at Syngenta do it as well? That’s so painfully inefficient! If only we could share the burden!' The others agreed, and in the hours that followed they shared many war stories of painful manual data correction and inconsistent executive reports.

From shared pains to shared results

A collaborative approach shared by so many peers did not come out of nowhere. It had taken four years of joint projects and critical know-how exchange in regular workshops, and trust was the natural consequence of going hand in hand with business to advance data management activities for the shared benefit of all.

Thinking back to that evening: one of our big challenges at Nestlé in those days was to change the mindset from 'My Data' to 'Our Data' as we were implementing a global master data environment for Customer, Material, and Vendor records. The idea of sharing maintenance between different partners sounded really attractive. It also supported our idea of harmonized processes – why would we need more complex rules than Novartis, Roche, or Syngenta? This was the starting point for re-designing our maintenance processes, and it helped shift our mindset to 'Our Data'.

Fifteen member companies quickly agreed that this obvious inefficiency was a great topic to explore, so we started researching how companies keep data up to date for business partners such as suppliers and B2B customers.

The situation was indeed ripe for innovation: not only was the supplier and customer overlap between companies much higher than initially anticipated (we now typically see an overlap of 20–60%), the inefficiency was also very costly – deciding which data to trust requires a highly skilled employee. I’ve seen people with master's degrees in Engineering poring over Google Maps, trying to verify a supplier location.
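To make the overlap claim concrete, here is a minimal sketch of how such an overlap could be estimated by intersecting normalized business partner identifiers (e.g. tax numbers). The function name and the identifier strings are invented for illustration; they are not part of any CDQ product.

```python
# Hypothetical sketch: estimate how much of company A's supplier base
# also appears in company B's records, using shared identifiers.
# All names and identifiers below are made up for illustration.

def overlap_ratio(suppliers_a: set[str], suppliers_b: set[str]) -> float:
    """Share of company A's suppliers that also appear in company B's data."""
    if not suppliers_a:
        return 0.0
    shared = suppliers_a & suppliers_b  # set intersection on identifiers
    return len(shared) / len(suppliers_a)

company_a = {"TAX-1001", "TAX-1002", "TAX-1003"}
company_b = {"TAX-1002", "TAX-1003", "TAX-2004"}

print(f"{overlap_ratio(company_a, company_b):.0%}")  # 2 of 3 shared -> 67%
```

In practice the hard part is the normalization step before the intersection – the same supplier often carries differently formatted identifiers in each company's system.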

Jointly excelling at corporate data quality

So, the idea was born to pool and better leverage the most expensive data quality "tool" known: human judgment. With a shared data pool where data experts from different companies collaborate, one person could make an update, a second person could validate it, and the entire community would benefit in the end. We developed an initial prototype and it showed promise, but soon the vast complexities of business partner data emerged: imagine an American and a European trying to decide which legal entity codes are recognized in Japanese business law.
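The "one updates, another validates" workflow described above can be sketched as a simple four-eyes rule across companies. This is an illustrative model only, assuming an invented `SharedRecord` structure – it does not reflect the actual CDQ platform internals.

```python
# Illustrative sketch of cross-company validation in a shared data pool:
# a proposed update becomes trusted only after a *different* company
# confirms it. All class and field names are invented.
from dataclasses import dataclass, field

@dataclass
class SharedRecord:
    supplier_id: str
    tax_number: str
    proposed_by: str                       # company that made the update
    validated_by: set = field(default_factory=set)

    def validate(self, company: str) -> None:
        # Four-eyes rule: the proposer cannot validate its own update.
        if company != self.proposed_by:
            self.validated_by.add(company)

    @property
    def trusted(self) -> bool:
        # Trusted once at least one other community member confirmed it.
        return len(self.validated_by) >= 1

record = SharedRecord("SUP-42", "TAX-9001", proposed_by="CompanyA")
record.validate("CompanyA")  # ignored: self-validation
record.validate("CompanyB")  # cross-company validation
print(record.trusted)  # True
```

Once a record is trusted, every community member benefits from the correction without repeating the work.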

It became clear why nobody had built this before…

But we were not discouraged – quite the contrary, actually. We kept expanding the prototype and moved toward our first software-based service. It was not yet the fully automated cloud service we offer today, but it was an important step in building strong synergies with the community. A growing base of data quality rules could soon check legal entity identifiers from every country on the globe. To further increase the size and quality of the data pool, we started integrating public and official sources, known as reference data.
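A data quality rule of the kind mentioned above can be pictured as a country-specific lookup. The rule table below is a tiny invented excerpt – real legal-form catalogs are far larger and maintained per jurisdiction.

```python
# Hypothetical data quality rule: does a company name end with a legal
# form that is actually known in its country? The table is an invented,
# heavily simplified excerpt for illustration only.
LEGAL_FORMS = {
    "DE": {"GmbH", "AG", "KG"},
    "CH": {"AG", "GmbH", "SA"},
    "JP": {"K.K.", "G.K."},
}

def check_legal_form(country: str, name: str) -> bool:
    """Return True if the name carries a legal form valid in that country."""
    forms = LEGAL_FORMS.get(country, set())
    return any(name.endswith(suffix) for suffix in forms)

print(check_legal_form("DE", "Musterfirma GmbH"))  # True
print(check_legal_form("JP", "Example GmbH"))      # False: GmbH is not a Japanese form
```

Encoding such checks as rules is what lets a shared platform apply one expert's knowledge of, say, Japanese legal forms to every member's records.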

In 2015, we officially started our software business and formed the first data sharing community of companies to jointly maintain business partner data. Back then, the community was called the Corporate Data League. Many of today’s Data Sharing Community members have been with us from day one. From one time-critical cleansing project to the next, the Data Sharing platform proved its value, with more and more customers from the original "pioneers' club" asking for a subscription model. So, in 2017, CDQ officially launched the platform as a cloud-based software product.

We could build on the trust between our data management pioneers from different companies – sharing company-internal data sounds scary to anyone who hears about it for the first time, but our experts trusted us and each other. We preserved that trust by baking it into our governance rules: who can see what on the platform? Plus, the pain of manual maintenance was simply too great.

We started with Data Sharing as a vision. Today it is reality – our Cloud Platform is used by leading companies such as Bayer, Schaeffler, and Bosch, and they enjoy receiving 'data quality as a service'. We deliver and measure real business value from Data Sharing: our customers save up to 60% of their data maintenance costs and prevent risks such as falling for invoice fraud.

Some call the approach 'Interenterprise MDM', some call it 'Sharing of Commercially Identifiable Information' – we simply call it Data Sharing. Our multinational customers already benefit very successfully from the Data Sharing Community as the best way to better business data.

What seemed like a crazy idea born on an early winter evening in the beautiful Swiss Alps is now a recognized and successful approach to improving corporate master data. When companies accept that other enterprises also have good data management and understand the value of joining forces, they can share the effort of cleaning and updating client and vendor data while each retains data sovereignty.

