Management of business entity reference data is a crucial but often complex task for financial
institutions. With attention on risk and compliance continuing to mount, Mark Davies, head of Avox,
discusses why the industry is finally waking up to the importance of data quality, and how firms can
take the pain out of the data validation process through collaboration.
This year, further progress will be made towards the implementation of new and revised regulations,
including the European Market Infrastructure Regulation (EMIR) and Solvency II in Europe, the Dodd-Frank Act in the US, and the Foreign Account Tax Compliance Act (FATCA) – each of which has key
dependencies surrounding data quality, governance and management.
The contribution that good data management makes to an institution's efficiency – and even its compliance – should not be underestimated. It is a critical part of the new risk dynamics, whereby
institutions must prepare, report and manage data pertaining to their exposure to clients and
counterparties, as well as their disclosures to regulators. Poor-quality legal entity data will directly impair a firm's ability to track risk concentrations across geographies, sectors and corporate hierarchies.
Data management and governance should be a strategic priority for financial institutions, not least
because it helps them to manage their operational, counterparty, regulatory and even reputational
risk. But, importantly, data management also affects processing efficiency, particularly how quickly firms can extract, collate and communicate data and, in turn, perform their core functions.
However, data management in financial institutions is severely hamstrung by inaccurate legal entity
reference data (which includes, for example, registered address information, industry sector codes,
company identifiers and corporate hierarchies). It stands to reason that if the data used is out of date or incorrect then, at best, it is meaningless and, at worst, it produces an erroneous measurement of an institution's risk exposures.
Regulators and investors are increasingly cognisant of this fact which, in turn, is providing institutions
with the incentive to improve data quality and data governance. Indeed, as an illustration of this trend,
Avox has seen a 25 percent year-on-year increase in the number of entities whose reference data we have validated – a process that involves checking, enriching and maintaining data – on behalf of our clients.
The new regulatory reporting rules aimed at improving the oversight of risk in the market are a clear
driver of the need for data validation as part of the overall data management process. So too is the
desire to improve client onboarding and the performance of know-your-customer (KYC) and due diligence checks, as well as internal controls for risk management.
The concept behind data validation is simple: the data should be checked, repaired and constantly
maintained to avoid records becoming stagnant. Over time, entities are extremely likely to have
undergone multiple changes to their legal structures and company details that have not subsequently been updated in the information held by institutions. At best, this results in significant operational clean-up effort; at worst, it leads to heightened business risk.
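The check, repair and maintain cycle described above can be sketched in outline. This is a minimal illustration only, not a description of Avox's actual process: the record fields, the `registry` lookup and the one-year staleness threshold are all assumptions made for the example.

```python
from dataclasses import dataclass, replace
from datetime import date, timedelta

@dataclass(frozen=True)
class EntityRecord:
    # Hypothetical subset of legal entity reference data fields.
    legal_name: str
    registered_address: str
    industry_code: str
    last_validated: date

def validate(record: EntityRecord, registry: dict, today: date) -> EntityRecord:
    """Check each field against an authoritative source ('registry'),
    repair any field that has drifted, and stamp the validation date."""
    return replace(
        record,
        legal_name=registry.get("legal_name", record.legal_name),
        registered_address=registry.get("registered_address", record.registered_address),
        industry_code=registry.get("industry_code", record.industry_code),
        last_validated=today,
    )

def is_stale(record: EntityRecord, today: date,
             max_age: timedelta = timedelta(days=365)) -> bool:
    """Maintenance: records not re-validated within max_age are flagged."""
    return today - record.last_validated > max_age
```

The point of the `is_stale` check is the "constantly maintained" part of the cycle: validation is not a one-off repair but a recurring sweep that flags records drifting out of date.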
In practice, therefore, the task of data validation is becoming increasingly complex, primarily as a
result of the burgeoning volume of data that needs to be validated. Institutions must manage their own
business entity reference data, as well as data relating to potentially thousands of clients located
across various geographies and jurisdictions, some of which – for example the Cayman Islands – do
not provide open access to all business entity data needed for certain reporting requirements and other business processes. Moreover, as mentioned, since business entity data changes frequently,
data validation is a continuous process.
Data management, despite perceptions, cannot survive on automated processing alone. It requires
dedicated teams and expertise, knowledge of where data can be sourced and checked, the ability to adapt to hundreds of languages and, ultimately, human oversight to find the definitive answer. Given
budgetary constraints and squeezed margins, institutions' ability to resource and manage this process properly in-house is increasingly limited.
As a result, more and more institutions are outsourcing data cleansing, validation and maintenance. And, in equal measure, they are turning to each other and
embracing a collaborative approach to data management. Indeed, a recent report on wholesale
banking by Oliver Wyman and Morgan Stanley highlighted how banks faced with a drastic need to cut their cost bases should focus on more collaborative approaches.
The collaborative approach makes sense for legal entity data management because the data itself is not sensitive or confidential. In the collaborative model, data that is checked
and corrected on behalf of one client is then shared among the other members of the pool, resulting in
increased efficiency, and reduced costs for its users.
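The pooled model described above can be illustrated with a short sketch. The class and function names here are invented for the example; it simply shows the economics of the approach: a record validated on behalf of one member is served from the shared pool to every other member, so the checking effort is incurred once rather than once per firm.

```python
class SharedPool:
    """Hypothetical shared store of validated entity records."""

    def __init__(self):
        self._validated = {}      # entity id -> validated record
        self.validations_run = 0  # how many times real checking occurred

    def get(self, entity_id, fetch_and_check):
        # Validate only on the first request; later members reuse the result.
        if entity_id not in self._validated:
            self._validated[entity_id] = fetch_and_check(entity_id)
            self.validations_run += 1
        return self._validated[entity_id]
```

Two member firms requesting the same entity trigger a single validation; the second request is simply a lookup, which is where the efficiency and cost reduction come from.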
Since the global financial crisis and the collapse of Lehman Brothers, there has, undoubtedly, been a
shift in attitude towards data functions, data accuracy and data management. However, to realise the efficiency and risk-mitigation benefits of data validation as part of the data management process, and to convince internal audiences that data quality is genuinely improving, firms need to start reviewing their internal data standards and governance sooner rather than later.
Article first appeared in ISS-Mag on 28 May 2013