The global financial crisis of 2008 highlighted a lack of understanding on the part of banks, investment
managers and corporate treasury departments about the legal structures of their counterparties. In
particular, the collapse of Lehman Brothers demonstrated that if legal entity information had been
more readily available and trusted, the failure of the entire supply chain across financial institutions
could have been lessened, or avoided.
This experience has resulted in a regulatory initiative, endorsed by the G20, known as the Legal
Entity Identifier (LEI). The LEI – a unique identification system for parties to financial transactions – is
intended to help regulators and market participants monitor and mitigate the build-up of systemic risk
in the financial system. But it will have business benefits too. Post-crisis, the lack of standardised identifiers for
entities, coupled with the absence of an authoritative source of legal hierarchy structures, is driving
up the cost of doing business at a time when institutions can ill afford further demands on their bottom line.
Implementing the LEI must start with accurate and compliant data, which is fast becoming the holy
grail of the financial markets. The sheer volume and questionable quality of the data that exists in the
financial markets are causing both reputational and operational harm. Indeed, according to a study by
the Bank for International Settlements, four of the top seven reasons for failed settlements are linked
to flawed counterparty data.
Currently, organisations use a variety of methods to gather, organise, and manage entity identifier
information for the huge number of de facto standards that exist today. This inevitably leads to
duplicates, inconsistencies, and erroneous mappings seeping into the data. Over time, records
become stale and processing actions against them becomes increasingly complex. This is exacerbated
by the fact that entities, and their relationships to their subsidiaries, may have undergone multiple
changes in legal structure that are not reflected in the information currently held by
corporates, leading to further data irregularities and heightened business risk. Together, these issues
mean that data management has become costly and time-consuming and, too often, has failed to
keep up with the new demands and challenges of the financial markets.
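To make the problem concrete, the sketch below shows one rough way such inconsistencies surface: counterparty records are grouped by a crudely normalised entity name, and any name that maps to more than one internal identifier is flagged as a likely duplicate. The record layout and field names (name, internal_id) are illustrative assumptions, not a prescribed schema.

```python
from collections import defaultdict

def normalise(name: str) -> str:
    """Crude normalisation: lower-case, drop punctuation and common legal suffixes."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    for suffix in (" ltd", " limited", " plc", " inc", " llc"):
        if cleaned.endswith(suffix):
            cleaned = cleaned[: -len(suffix)]
    return " ".join(cleaned.split())

def find_inconsistent_entities(records):
    """Group records by normalised name and report names that map to several internal IDs."""
    ids_by_name = defaultdict(set)
    for rec in records:
        ids_by_name[normalise(rec["name"])].add(rec["internal_id"])
    return {name: ids for name, ids in ids_by_name.items() if len(ids) > 1}

records = [
    {"name": "Example Bank Ltd", "internal_id": "CPTY-001"},
    {"name": "EXAMPLE BANK LIMITED", "internal_id": "CPTY-017"},  # likely the same entity
    {"name": "Other Holdings plc", "internal_id": "CPTY-002"},
]
print(find_inconsistent_entities(records))
# e.g. {'example bank': {'CPTY-001', 'CPTY-017'}}
```

In practice, name matching is far messier than this and usually relies on multiple attributes (registered address, national identifiers and so on), but the shape of the exercise is the same.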
However, the LEI is now encouraging market participants to review their legal entity structures and is
driving forward an emphasis on the need for clean, accurate data. For many corporates, preparing for
the LEI will involve a data cleansing process that will undoubtedly throw up fundamental issues that
have to be dealt with. For example, if duplications are found and exposures consolidated, this may
lead to larger or smaller exposures than expected, and a firm may need to trade in order to bring
the exposure back to an acceptable level.
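To illustrate the point, here is a minimal sketch, building on the hypothetical record layout above, that re-aggregates exposures once duplicate counterparties have been resolved to a single entity and compares the totals against an assumed internal limit. The figures, mappings and limit are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical exposures keyed by internal counterparty ID (amounts in millions).
exposures = {"CPTY-001": 40.0, "CPTY-017": 35.0, "CPTY-002": 20.0}

# Mapping from internal IDs to a single resolved entity, e.g. the output of a
# de-duplication exercise such as the one sketched above.
resolved_entity = {
    "CPTY-001": "example bank",
    "CPTY-017": "example bank",
    "CPTY-002": "other holdings",
}

limit_per_entity = 50.0  # assumed internal exposure limit, for illustration only

# Consolidate exposures per resolved entity and flag any that breach the limit.
consolidated = defaultdict(float)
for internal_id, amount in exposures.items():
    consolidated[resolved_entity[internal_id]] += amount

for entity, total in consolidated.items():
    if total > limit_per_entity:
        print(f"{entity}: consolidated exposure {total} exceeds limit {limit_per_entity}")
```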
The anticipated result of the LEI initiative is that, eventually, all firms will require and register for an
LEI. Obviously, those with clean data will spend less time and fewer resources on implementation.
One thing organisations should therefore focus on now is examining the quality of
their data in order to build a comprehensive view of their databases, their product usage and their
requirements. To ensure compliance with the post-crisis regulatory measures that will require an LEI
or equivalent, such as the trade reporting requirement in the European Market Infrastructure
Regulation (EMIR), corporates will need to know the LEI of the parent company of the counterparties with which they trade.
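For firms preparing systems to capture LEIs, a simple format check can catch many data-entry errors before they reach reference data stores. The sketch below assumes the 20-character ISO 17442 LEI format, whose final two characters are check digits computed under ISO 7064 MOD 97-10; it is a minimal validation sketch, not a substitute for verifying an LEI against the issuing registry.

```python
import re

def is_valid_lei(lei: str) -> bool:
    """Validate the 20-character LEI format: 18 alphanumerics followed by two
    numeric check digits computed under ISO 7064 MOD 97-10 (ISO 17442)."""
    lei = lei.strip().upper()
    if not re.fullmatch(r"[A-Z0-9]{18}[0-9]{2}", lei):
        return False
    # Map letters to 10..35 (A=10 ... Z=35); the resulting number modulo 97 must equal 1.
    numeric = "".join(str(int(ch, 36)) for ch in lei)
    return int(numeric) % 97 == 1
```

A corporate could run such a check over any LEIs captured from counterparties or registration confirmations before loading them into downstream systems.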
As the global LEI becomes more widely used, organisations should see cost reductions and improved
risk management, at the firm level and across their data systems. These savings would come
primarily from operational efficiencies, such as reducing the volume of transaction failures and lowering
data reconciliation and regulatory reporting costs, while simultaneously providing better tools
for enterprise risk analysis, the importance of which cannot be overstated.
Many institutions have already begun the process of analysing their current data architectures,
determining where entity data is present, matching their internal records to LEIs, and identifying
quality issues within that data. Those who move most quickly to ensure the accuracy of their data
will emerge as the winners.
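As a rough sketch of that matching exercise, the snippet below reconciles internal counterparty records against a hypothetical reference table of LEIs keyed by entity name, reporting records with no reference match and records whose stored LEI disagrees with the reference. The field names, placeholder LEI values and the reference source are assumptions for illustration, not a standard feed or API.

```python
def reconcile_with_lei_reference(records, lei_by_name):
    """Match internal counterparty records to a reference map of name -> LEI and flag gaps.

    `records` is a list of dicts with 'internal_id', 'name' and optionally 'lei';
    `lei_by_name` maps lower-cased entity names to reference LEIs. Both layouts are
    illustrative assumptions.
    """
    unmatched, mismatched = [], []
    for rec in records:
        reference_lei = lei_by_name.get(rec["name"].lower().strip())
        if reference_lei is None:
            unmatched.append(rec["internal_id"])      # entity not found in the reference data
        elif rec.get("lei") and rec["lei"] != reference_lei:
            mismatched.append(rec["internal_id"])     # stored LEI disagrees with the reference
    return unmatched, mismatched

records = [
    {"internal_id": "CPTY-001", "name": "Example Bank Ltd", "lei": None},
    {"internal_id": "CPTY-002", "name": "Other Holdings plc", "lei": "XXXXXXXXXXXXXXXXXXXX"},
]
reference = {"example bank ltd": "YYYYYYYYYYYYYYYYYYYY"}
print(reconcile_with_lei_reference(records, reference))
# (['CPTY-002'], [])  -- CPTY-002 is missing from the reference; CPTY-001 matches but has no stored LEI
```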
This article first appeared on gtnews, 5 March 2012.