Financial markets never sleep.
The trade in financial instruments has long been a global marketplace, operating round the clock. But increasingly, market continuity is maintained not only by human intervention, but also by algorithmic tools. This has brought with it huge client benefits: executing trades at an unprecedented pace and volume, lowering costs, increasing accuracy, and removing human fallibility and emotional biases.
In short, algorithms are helping to deliver financial services better and faster, providing a huge boost to the wider economy.
This shift brings with it a new risk landscape, requiring a new taxonomy. While prudential management of the financial system has traditionally focused on the strength of banks' balance sheets, regulators are increasingly looking at banks' operational resilience and the adequacy of their systems.
As the Bank of England pointed out in its latest Financial Stability Report, the speed of algorithmic trading (“algo trading”) and the liquidity dynamics of these automated markets amplify the impact of any mis-programming or technology glitch. This poses threats to the stability of both individual banks and the wider financial system.
The investigation by the Prudential Regulation Authority (PRA) into the sterling flash crash of October 2016, during which the pound depreciated by nine per cent against the dollar in a few minutes, underlined the importance of managing both excessive volatility and the potential risks to the marketplace.
The PRA has since issued a supervisory statement on the risk management and governance of algo trading, and called for accountability among banks' management bodies, making “algo risk” a matter for the boardroom as well as the trading floor. This follows the FCA's review of algo trading compliance in February, published just weeks after the implementation of MiFID II, under which algo trading became a regulated activity for the first time in Europe.
Regulators' expectations for mitigating the risks associated with algorithmic trading are high. But making algorithms behave is no easier than restraining the behaviour of rogue traders.
There are myriad risks associated with algorithmic trading. Threats could arise from mis-specifications of models (due to flawed assumptions, for example), coding errors, or the misuse of such models.
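To make the "coding errors" category concrete, here is a hypothetical illustration (not drawn from any real trading system) of how a subtle off-by-one bug in a moving-average signal can quietly distort a model's view of the market:

```python
# Hypothetical illustration: a subtle coding error in a simple price model.
# A 3-period moving average is a common ingredient in trading signals.

def moving_average(prices, window):
    """Correct: average of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def moving_average_buggy(prices, window):
    """Off-by-one bug: silently drops the newest price from the window."""
    return sum(prices[-window - 1:-1]) / window

# A sharp move arrives in the latest tick.
prices = [100.0, 101.0, 102.0, 110.0]

correct = moving_average(prices, 3)        # sees the spike: (101+102+110)/3
buggy = moving_average_buggy(prices, 3)    # misses it entirely: (100+101+102)/3
```

The buggy version never sees the latest price, so a strategy built on it would react a tick late to every market move. Nothing crashes and no exception is raised, which is precisely why such mis-specifications can survive testing and surface only under stress.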
Equally, such risks could stem from a failure in the development, testing, or deployment of the IT systems used for algorithmic trading, or from a risk management team that does not fully understand complex algorithmic models and so fails to identify potential conduct or concentration risks.
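One common line of defence against both coding errors and runaway systems is a pre-trade risk gate that vets every order before it reaches the market. The sketch below is purely illustrative; the thresholds and names (`MAX_ORDER_QTY`, `PRICE_COLLAR_PCT`, `check_order`) are assumptions, not any firm's actual controls:

```python
# Illustrative sketch of a pre-trade risk gate. All limits here are
# hypothetical placeholders, not regulatory or real-world values.

MAX_ORDER_QTY = 10_000     # hard cap on a single order's size
PRICE_COLLAR_PCT = 0.05    # reject orders more than 5% from the reference price

def check_order(qty, limit_price, reference_price):
    """Return (accepted, reason) for a proposed order."""
    if qty <= 0 or qty > MAX_ORDER_QTY:
        return False, "order size outside permitted range"
    deviation = abs(limit_price - reference_price) / reference_price
    if deviation > PRICE_COLLAR_PCT:
        return False, "price outside collar around reference"
    return True, "accepted"
```

A sane order such as `check_order(500, 1.22, 1.24)` passes, while an order priced far from the reference, or absurdly large, is rejected before it can do damage. Controls of this kind only help, of course, if the reference price feeding them is itself sound.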
The management of these sources of “algo risk” rarely sits within one team at a bank. But anything short of holistic oversight could mean that connections between risks fall through the cracks, potentially threatening a bank's stability should an unexpected market jitter occur. Knight Capital's near-bankruptcy, when a glitch in its trading systems cost the firm $440m, should be a powerful reminder.
Containing algo risk is key to meeting regulators' expectations and to maintaining the operational resilience of financial firms and systems. Doing so requires different departments at financial institutions to work together more effectively.
Identifying and closing gaps in risk and control frameworks will be key, and so will a rethink of the cultural dynamics at financial institutions.
As algorithms and machine learning drive more decisions, traders need to assume a more active custodial role over algo-driven activities. Equally, risk managers need to step up their challenge to the design and development of the algorithms used in front offices.
As with voice trading, firms that embed a risk-based culture in the construction, deployment, and ongoing use of their algorithms will be the clear long-term winners as competitors struggle to catch up with regulators' supervisory requirements.