
Apr 17, 2013 • DTCC Connection

Data Age

Mark Davies, General Manager and Head of Avox

Data has become, for many captive insurers, all-consuming. The task of managing their own data, as well as that of their parent group or groups, has become increasingly time-intensive and costly. At the same time, insurers are under pressure to collect and prepare data faster than they do today as a result of forthcoming regulations such as Solvency II. One of the biggest yet relatively unreported issues around data management relates to the vast amount of inaccurate data that exists in the insurance industry. This article explores the growing importance of data accuracy for captive insurers in the context of business pressures and regulatory drivers.

When it comes to data, anyone operating outside of the back-office function within captive insurance firms would be forgiven for thinking that the single biggest issue is big data, a term which means different things to different people but ultimately relates to data sets that grow too large for commonly used software tools to capture, manage and process in a timely manner. Despite the hype surrounding big data, however, a more pressing issue is data accuracy. In short, a serious problem throughout the insurance industry is that the data firms use to describe themselves and their parent company or companies – known as business entity reference data – is often inaccurate, meaningless and therefore untrustworthy. Business entity reference data includes, as best practice, corporate hierarchies, registered address information, industry sector codes and company identifiers.
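To make that definition concrete, the sketch below models a single entity record carrying these best-practice fields. The field names and values are hypothetical, invented for illustration; they are not a published schema, and real vendor data models will differ.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class EntityRecord:
    """Illustrative business entity reference data record.

    Field names are hypothetical; real schemas vary by data vendor."""
    legal_name: str
    registered_address: str
    industry_sector_code: str                        # e.g. a SIC or NACE code
    identifiers: Dict[str, str] = field(default_factory=dict)  # e.g. {"LEI": ...}
    parent_entity: Optional[str] = None              # immediate parent in the hierarchy

# A captive insurer linked to its parent group through the hierarchy field:
parent = EntityRecord("Example Group plc", "1 Example Street, London", "6512")
captive = EntityRecord(
    legal_name="Example Group Captive Ltd",
    registered_address="PO Box 1, St Peter Port, Guernsey",
    industry_sector_code="6512",
    identifiers={"LEI": "EXAMPLE0000000000000"},     # dummy identifier
    parent_entity=parent.legal_name,
)
```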

“Unclean” entity reference data exists in almost every commercial industry. It was highlighted as an issue in the financial markets in 2008 when Lehman Brothers collapsed, and has subsequently been examined by regulators in the context of improving risk management. Policymakers have become aware that all types of risk – including counterparty, operational and credit risk – can only be properly managed if they are based on sound information, derived, in part, from accurate business entity reference data. It is a telling sign of the growing importance of data vis-à-vis new regulations that, as the implementation deadlines for key regulatory initiatives (think the Dodd-Frank Act in the US and the European Market Infrastructure Regulation in Europe) draw near, we at Avox have seen a 25 percent year-on-year increase* in the number of entities whose reference data we have validated – a process whereby we check, correct and maintain information.

From a regulatory, client and counterparty perspective, attention in this area is not confined to the financial markets. Solvency II is focusing insurers’ minds on data quality, especially around Pillar 3 of the directive, which relates to disclosure and transparency. Pillar 3 requires the provision of high-quality information and the ability to report and publish this data according to regulatory requirements. In relation to some of the other pillars of Solvency II, data quality has, to date, received less attention, but there is a growing realisation that it is a key component in the context of the entire regulation.

It is no great surprise that data inaccuracy exists: unclean data accumulates over time. Organisations tend to use a variety of methods to gather, organise and manage data, which inevitably leads to duplicates, inconsistencies and erroneous mappings. Over time, records become stale and the processing of corrective actions becomes increasingly complex. This is compounded by the fact that entities and their subsidiaries may have undergone multiple changes in their legal structures which have not been updated in the information corporates currently hold, leading to further data irregularities and heightened business risk.
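By way of illustration only, the sketch below shows how the same legal entity, captured slightly differently by two internal systems, becomes a hidden duplicate – and how even a crude normalisation step can surface the clash. The records and the normalisation rules are invented for the example and are far simpler than any production matching logic.

```python
def normalise(name: str) -> str:
    """Crude entity name normalisation for duplicate detection (illustrative only)."""
    suffixes = {"ltd", "limited", "plc", "inc"}
    cleaned = name.lower().replace(".", "").replace(",", "")
    tokens = [t for t in cleaned.split() if t not in suffixes]
    return " ".join(tokens)

# The same legal entity, captured differently by two internal systems:
crm_record    = "Example Group Captive Ltd."
claims_record = "EXAMPLE GROUP CAPTIVE LIMITED"

# Raw strings differ, so a naive comparison treats them as two entities...
assert crm_record != claims_record
# ...while normalisation reveals the hidden duplicate:
assert normalise(crm_record) == normalise(claims_record)
```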

These issues mean that data management has become costly and time-consuming and, too often, has failed to keep up with the new demands and challenges of the insurance industry. And this needs to be addressed.

To begin with, firms need to undertake a data cleansing exercise, which involves assessing their legal entity structures and checking these against the entity reference data they hold – a process which can be managed by external providers, such as Avox. Once the data is checked, repaired and updated, it then needs to be properly maintained on an ongoing basis. In this way, on a practical level, firms can improve their data quality and facilitate better access to this data across the business. By doing so they can demonstrate a commitment to implementing robust risk management processes and best practice in data management, bringing reputational as well as business benefits.
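The checking step can be pictured as a field-by-field comparison of a firm’s held record against an externally validated one. The sketch below is a generic illustration under that assumption; it does not describe Avox’s actual process, which covers far more fields, sources and rules.

```python
def find_discrepancies(held: dict, validated: dict) -> dict:
    """Compare a firm's held record against an externally validated record.

    Returns {field: (held_value, validated_value)} for every mismatch.
    Purely illustrative; real validation applies many more fields and rules."""
    return {
        f: (held.get(f), validated.get(f))
        for f in validated
        if held.get(f) != validated.get(f)
    }

held = {"legal_name": "Example Group Captive Ltd",
        "registered_address": "PO Box 1, St Peter Port, Guernsey",
        "parent_entity": "Example Holdings"}            # stale after a restructure
validated = {"legal_name": "Example Group Captive Ltd",
             "registered_address": "PO Box 1, St Peter Port, Guernsey",
             "parent_entity": "Example Group plc"}

for field_name, (old, new) in find_discrepancies(held, validated).items():
    print(f"{field_name}: held '{old}' -> validated '{new}'")
```

In practice, the flagged discrepancies would feed the repair-and-maintain workflow described above rather than a one-off fix.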

Many firms have already begun the process of analysing their current data architectures, determining where entity data is present and identifying quality issues within that data. Those who move most quickly to ensure the accuracy of their data will emerge as the winners.

*As of March 2013

Article first appeared in Captive Insurance Times, Issue 20, 17 April 2013
