It all started in the early 2000s: the Competence Center Corporate Data Quality (CC CDQ) was established at the University of St. Gallen as a research program in 2006 with the goal of exploring, developing, and testing solutions that advance data and analytics management in enterprises.
While pursuing his PhD in the Competence Center Business Networking, Dimitrios Gizanis realized that master data quality is a crucial success factor for business networking. At the time, I was doing my PhD in the Competence Center Corporate Data Quality, so when Dimi reached out, we quickly found common ground to explore the topic further.
The CC CDQ objective of transferring theoretical groundwork and scientific research results in data management into everyday business practice hit the bull’s eye. A growing number of partners joined the discussion, and our efforts kept bringing new ideas to the table. We started thinking boldly about data sharing as a way to improve data quality, but it wasn’t until late 2010 that the idea was voiced officially and picked up by the members of the CC CDQ.
A December to remember
Surrounded by picturesque Lake Lucerne and mountains rising out of steep waters, we were discussing approaches and best practices for improving data quality with data management practitioners from large European companies. Yet another workshop organized by the CC CDQ had brought the data experts together, and over the previous four years the discussion on data management had gained momentum.
After an intense workshop day, the data discussions did not rest. Participants gathered informally and kept exchanging their pains and concerns around daily data struggles.
Suddenly, one of the experts said: 'Why is it that we all tackle data maintenance alone? We have so many of the same suppliers! Why am I updating a supplier record with a new tax number, while my colleague here from Novartis does the same, and the guys at Syngenta do it as well? That’s so painfully inefficient! If only we could share the burden!' The others agreed and, over the following hours, shared many war stories of painful manual data correction and inconsistent executive reports.
From shared pains to shared results
The collaborative spirit shared by so many peers did not come out of nowhere. It had been four years of joint projects and critical know-how exchange in regular workshops, and trust was the natural consequence of going hand in hand with business to advance data management activities for the shared benefit of all.
As 15 member companies quickly agreed that this obvious inefficiency was a great topic to explore, we started researching how companies keep data up to date for business partners such as suppliers and B2B customers.
The situation was indeed ripe for innovation. Not only was the supplier or customer overlap between companies much higher than initially anticipated (we now typically see an overlap of 20-60%), the inefficiency was also very costly: deciding which data to trust requires a highly skilled employee. I’ve seen people with master's degrees in engineering poring over Google Maps, trying to verify a supplier location.
Jointly excelling at corporate data quality
So the idea was born to pool and better leverage the most expensive data quality "tool" known: human judgment. With a shared data pool where data experts from different companies collaborate, one person could make an update, a second person could validate it, and the entire community would benefit. We developed an initial prototype, and it showed promise, but the vast complexities of business partner data soon emerged: imagine an American and a European trying to decide which legal entity forms are valid under Japanese business law.
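The update-then-validate workflow at the heart of the idea can be sketched in a few lines. This is a minimal illustration only; the class, field names, and one-peer validation threshold are assumptions for the sake of example, not the actual platform design:

```python
from dataclasses import dataclass, field

@dataclass
class SharedRecord:
    """A business partner record in a hypothetical shared data pool."""
    name: str
    tax_number: str
    proposed_by: str = ""
    validated_by: list = field(default_factory=list)

    def propose_update(self, company: str, tax_number: str) -> None:
        # One community member proposes a correction...
        self.proposed_by = company
        self.tax_number = tax_number
        self.validated_by = []

    def validate(self, company: str) -> None:
        # ...and a peer from another company confirms it.
        if company != self.proposed_by and company not in self.validated_by:
            self.validated_by.append(company)

    @property
    def trusted(self) -> bool:
        # The whole community benefits once at least one peer agrees.
        return bool(self.proposed_by) and len(self.validated_by) >= 1

record = SharedRecord(name="ACME Supplies GmbH", tax_number="DE000000000")
record.propose_update("Novartis", "DE123456789")
record.validate("Syngenta")
print(record.trusted)  # True: one update, one validation, all members benefit
```

The point of the sketch is the division of labor: the cost of one correction is paid once, while every member company sharing that supplier reaps the benefit.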
It became clear why nobody had built this before…
But we were not discouraged, quite the contrary actually. We kept expanding the prototype and moved toward our first software-based service. It was not yet the fully automated cloud service we offer today, but it was an important step in building great synergies with the community. Soon, a growing base of data quality rules could check legal entity identifiers from every country on the globe. To further increase the size and quality of the data pool, we started integrating public and official sources, known as reference data.
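To give a feel for what such rules look like, here is a toy sketch of country-specific checks on VAT number formats and legal forms. The two patterns shown (German VAT, Swiss UID) and the legal form lists are illustrative assumptions; the actual CDQ rule base is far larger and richer:

```python
import re

# Toy rule base: per-country VAT patterns and legal entity forms.
# These entries are illustrative assumptions, not the real rule set.
VAT_PATTERNS = {
    "DE": re.compile(r"^DE\d{9}$"),                   # German VAT: DE + 9 digits
    "CH": re.compile(r"^CHE-\d{3}\.\d{3}\.\d{3}$"),   # Swiss UID, e.g. CHE-123.456.789
}
LEGAL_FORMS = {
    "DE": {"GmbH", "AG", "KG"},
    "JP": {"K.K.", "G.K."},
}

def check_record(country: str, vat: str, name: str) -> list:
    """Return a list of rule violations for a business partner record."""
    issues = []
    pattern = VAT_PATTERNS.get(country)
    if pattern and not pattern.match(vat):
        issues.append(f"VAT '{vat}' does not match the {country} format")
    forms = LEGAL_FORMS.get(country)
    if forms and not any(name.endswith(f) for f in forms):
        issues.append(f"name '{name}' lacks a known {country} legal form")
    return issues

print(check_record("DE", "DE123456789", "ACME GmbH"))  # [] -- record passes
print(check_record("DE", "DE12345", "ACME Ltd"))       # two violations
```

Encoding such checks as data-driven rules is what made it feasible to cover identifier conventions from many countries without hand-reviewing every record.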
In 2015, we officially initiated our software business and formed the first data sharing community of companies to jointly maintain business partner data. Back in the day, the name of the community was Corporate Data League. Many of today’s Data Sharing Community members have been with us from day one. From one time-critical cleansing project to the next, the Data Sharing platform proved its value, with more and more customers from the original "pioneers' club" asking for a subscription model. So, in 2017, CDQ officially launched the platform as a cloud-based software product.
We started with Data Sharing as a vision. Today it is reality: our Cloud Platform is used by leading companies such as Bayer, Schaeffler, and Bosch, and they enjoy receiving 'data quality as a service'. We deliver and measure real business value from Data Sharing, and our customers save up to 60% of data maintenance costs while preventing risks such as invoice fraud.
What seemed a crazy idea born on an early winter evening in the beautiful Swiss Alps is now a recognized and successful approach to improving corporate master data. When companies accept that other enterprises also have good data management and understand the value of joining forces, they can share the effort of cleaning and updating client and vendor data while each retains data sovereignty.