THE REGTECH TAKEOVER
Digitisation is constantly presenting the financial industry with new challenges.
When we try to rank the Corona outbreak and its impact on financial markets in a retrospective crisis hit list, we might be tempted to say: “whenever we believe that things can’t get any worse than this…”. Interestingly, as human beings we are programmed to do the opposite of what that phrase suggests. You may be familiar with the rapid rebound (the “yo-yo effect”) after the weight loss from a diet: our brain tries to trim our body to better resist the next unexpected food deprivation by making us eat even more than before.
That is, nature tries to keep us prepared. While nobody would deny that for some extreme anomalies it will be very hard, if not impossible, to take suitable preventative measures, that won’t stop us from trying. Nor should it imply that we have to accept disasters of this kind as fate. However, the analysis (in a market context) is often much more complex than it appears on the surface, and the answers are not always satisfying. In other words, what is a relatively easy exercise for our brain to tell our body (“eat more and you’ll weather the next hunger”) is rather more complex in the case of market mechanisms. Here, I refer to mechanisms that are attributable to statistical effects and that, under specific conditions, amplify the oscillations a certain behaviour would produce anyway.
As you know only too well, global equity markets, in terms of trading volumes, are on average just half the size they were before the financial crisis of 2008/2009. A significant part of this pull-back can be attributed to unnaturally active trading before the crisis, but there were paradigm shifts, too: the fragmentation of liquidity (lit and dark) as a result of regulation-driven competition, the massive increase in passive index-tracking and the rise of high-frequency traders. While we think fragmentation must be differentiated (we consider on-venue fragmentation liquidity-neutral overall), passive index-tracking funds – being less price-sensitive in their trading – trade less actively than active managers. High-frequency traders have been welcomed as replacing part of the liquidity drained over the past ten years or so, but can this view be sustained once we discount their momentum-driven disappearances in times of severe stress?
Here’s what we think deserves much more attention going forward: how will liquidity patterns develop, and how should they be interpreted against the background of a changed market structure in stress scenarios? This change in market structure is not just marked by the emergence of new participants within that structure. There is behavioural change, too, that needs to be considered. Unlike during earlier crises, uncertainty no longer seems to lead investors into what they used to call safe havens. In March 2020, prices for investment-grade bonds fell sharply despite every effort of central banks to slow down the fixed-income sell-off that had run rampant across all ranks of credit. At the same time, equity markets – whose poor liquidity we bemoaned just one paragraph above – were sold off only very temporarily, seeing heavy inflows and a non-crisis mode of differentiation a mere three weeks after we thought we had seen Horseman No. 1 of the Apocalypse.
On a more objective and analytical note, this underpins the strong correlation between the (now dominant) investment types of index-tracking and high-frequency trading and – based on their exploitation of cutting-edge technology – their amplifying impact on the distribution of liquidity.
It has become evident that the changes to market structure described above also increasingly change liquidity patterns, and that they have contributed to the more frequent occurrence of outliers in the provision of liquidity (which has always been a statistics-influenced business anyway).
Prior to the flash crash of 2010, such an event was statistically calculated to occur with a probability of once in 400 billion years. We then witnessed similar shocks in 2015, 2018 and, finally, March 2020. The number of standard deviations from the mean – above ten in March ’20 for the S&P 500 – illustrates how common a phenomenon statistical outliers have become. During such periods of extreme market volatility it was extremely difficult for providers of liquidity to make a market, at some points in time even impossible, predominantly – as we believe – due to statistical effects.
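To get a feel for how extraordinary a ten-sigma move is under a normality assumption, here is a minimal sketch (the figures are illustrative and not the source of the 400-billion-year estimate above):

```python
# Illustrative sketch: how rare a 10-sigma daily move would be if daily
# returns were truly normally distributed (an assumption for illustration).
from math import erfc, sqrt

def normal_tail_prob(k: float) -> float:
    """One-sided tail probability P(Z > k) for a standard normal variable."""
    # erfc is used instead of 1 - erf to avoid underflow for large k
    return 0.5 * erfc(k / sqrt(2.0))

p10 = normal_tail_prob(10.0)       # roughly 7.6e-24
# Expected waiting time, assuming ~252 trading days per year:
wait_years = 1.0 / (p10 * 252)     # on the order of 1e20 years
```

That four such shocks occurred within a single decade is the point of the paragraph above: the normal distribution is simply the wrong model for these tails.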
Though a full discussion of the pillars of a market maker’s business model would go beyond the scope of this note, it helps to keep that model in mind, as it makes clearer what is best described by a rather vintage term: statistical arbitrage. By this, I refer to the strategy of detecting and exploiting market anomalies for the purpose of generating immediate revenue in a straight-line manner. By adding the stochastic process of jump-diffusion to mean reversion, volatility clusters and drifts, it became possible to account for overnight price gaps and to exploit market anomalies that appear for just minutes at the beginning of the trading session.
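As a toy illustration of what such a strategy exploits – all parameters and the trading rule are hypothetical, not anyone’s production model – consider a mean-reverting price with occasional jumps, traded with a naive band rule:

```python
# Hypothetical sketch: mean reversion plus jump-diffusion, and a naive
# stat-arb rule that buys below / sells above a band around the mean.
import random

random.seed(42)

def simulate_jump_ou(n=1000, mu=100.0, kappa=0.05, sigma=0.3,
                     jump_prob=0.01, jump_size=2.0):
    """Ornstein-Uhlenbeck-style mean reversion with occasional jumps."""
    p, path = mu, []
    for _ in range(n):
        diffusion = random.gauss(0.0, sigma)
        jump = random.choice([-1, 1]) * jump_size if random.random() < jump_prob else 0.0
        p += kappa * (mu - p) + diffusion + jump   # pull toward mu, plus noise
        path.append(p)
    return path

def stat_arb_pnl(path, mu=100.0, band=1.0):
    """Enter outside the band, exit at the mean; earn the reversion."""
    pnl, pos, entry = 0.0, 0, 0.0
    for p in path:
        if pos == 0:
            if p < mu - band:
                pos, entry = 1, p       # buy the dip
            elif p > mu + band:
                pos, entry = -1, p      # sell the spike
        elif (pos == 1 and p >= mu) or (pos == -1 and p <= mu):
            pnl += pos * (p - entry)    # close at the mean
            pos = 0
    return pnl
```

Each closed round trip earns at least the band width by construction; the real-world difficulty, as the following paragraphs argue, lies in the jumps and in everyone running a variant of the same logic.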
While the model of statistical arbitrage was developed around 35 years ago, the incorporation of discontinuities (“jumps”) into the statistical analysis of financial price changes is a younger concept – which makes sense, given that the relevance of jumps has grown in proportion to market efficiency, which in turn has increased in line with technological progress.
Barndorff-Nielsen and Shephard (2004)* are credited as the first to have developed a method to distinguish jumps from the diffusive component of volatility. Their jump test compares two variance measures to assess whether there have been statistically significant jumps in a sample period: Realized Variance, which as the sampling frequency increases converges to the integrated variance including a jump component, and Bipower Variation, which converges to the integrated variance without a jump component.
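The core of the idea fits in a few lines. The following is a minimal sketch on simulated data – not their full test statistic, which also requires an estimate of the asymptotic variance – showing why the difference of the two measures isolates the jump contribution:

```python
# Minimal sketch of the Barndorff-Nielsen/Shephard idea (simulated data):
# realized variance (RV) picks up jumps, bipower variation (BV) is robust
# to them, so RV - BV estimates the jump contribution to total variance.
import math
import random

def realized_variance(returns):
    return sum(r * r for r in returns)

def bipower_variation(returns):
    # scaling (pi/2) = mu_1^{-2}, where mu_1 = E|Z| = sqrt(2/pi), Z ~ N(0,1)
    return (math.pi / 2.0) * sum(abs(returns[i]) * abs(returns[i - 1])
                                 for i in range(1, len(returns)))

random.seed(7)
diffusive = [random.gauss(0.0, 0.01) for _ in range(1000)]  # pure diffusion
jumpy = diffusive[:]
jumpy[500] += 0.10                                          # one injected jump

jump_part_diffusive = realized_variance(diffusive) - bipower_variation(diffusive)
jump_part_jumpy = realized_variance(jumpy) - bipower_variation(jumpy)
# jump_part_diffusive hovers near zero; jump_part_jumpy is clearly positive,
# because the single jump enters RV squared but only linearly (damped) in BV
```

BV is robust because a jump appears in only one return and is multiplied by an adjacent, small diffusive return, whereas RV squares it.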
Although they provided statistical evidence for the existence of jumps in the data sets analysed, the early approaches returned incoherent results. While the models have developed significantly since, and admittedly are quite complex, it is not difficult to see that their application in trading algorithms produces a lot of the very microstructural noise they were meant to eliminate in the first place. That is, while designed to discount deviations from the fundamental-value element of the price – induced by aspects as plain vanilla as the bid-ask bounce, or by latency effects and other asymmetries – they add further deviations from fundamental value (‘microstructural noise’), which are erroneously identified as momentum by another programme, and so on.
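The bid-ask bounce mentioned above is easy to demonstrate in isolation (all numbers hypothetical): even when the fundamental value barely moves, alternating fills at bid and ask manufacture measured volatility out of thin air.

```python
# Sketch of one source of microstructural noise: the bid-ask bounce.
# Observed trade prices = fundamental value plus/minus half the spread,
# depending on which side of the book the trade hits.
import random

random.seed(1)
half_spread = 0.05
fundamental = 100.0
observed = []
for _ in range(1000):
    fundamental += random.gauss(0.0, 0.01)   # small fundamental moves
    side = random.choice([-1, 1])            # trade at bid (-1) or ask (+1)
    observed.append(fundamental + side * half_spread)

returns = [observed[i] - observed[i - 1] for i in range(1, len(observed))]
rv_observed = sum(r * r for r in returns)
# Fundamental variance over the sample is ~ 1000 * 0.01**2 = 0.1, yet the
# bounce alone contributes ~ 999 * 2 * half_spread**2 = 5.0 of spurious
# variance - noise dominating signal by a factor of about fifty.
```

An algorithm that treats this measured variance as information will trade on it, and in doing so leaves its own footprint for the next algorithm to misread.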
Now, why did I refer to the market maker business model above? In theory, we all consider the lower income from smaller bid/offer spreads to be offset by larger volumes at those lower spreads. However true this may prove in a particular case in practice, when low-latency trading automatisms come into play a third side to this equation gains momentum: as a market maker, your losses are set to increase the tighter your bid/offer spreads become. When you get hit at your best bid and your follow market is just tick sizes away, you will certainly get hit there, too, with volumes increasing.
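A back-of-the-envelope sketch of that third side (all numbers hypothetical): halving the spread while doubling the volume looks P&L-neutral on paper, until the higher share of adverse fills at the tighter quote is priced in.

```python
# Toy market-maker P&L arithmetic with hypothetical numbers: earn the
# half-spread on each unit traded, lose on the fraction of fills that
# turn out to be adversely selected (the market trades through you).
def mm_pnl(spread, volume, adverse_fill_rate, adverse_loss_per_fill):
    """Expected P&L: half-spread earned per unit, minus toxic-fill losses."""
    return volume * spread / 2.0 - volume * adverse_fill_rate * adverse_loss_per_fill

wide  = mm_pnl(spread=0.10, volume=1_000, adverse_fill_rate=0.02,
               adverse_loss_per_fill=0.50)   # 50 earned - 10 lost = 40.0
tight = mm_pnl(spread=0.05, volume=2_000, adverse_fill_rate=0.05,
               adverse_loss_per_fill=0.50)   # 50 earned - 50 lost = 0.0
```

Doubling the volume exactly offsets the halved spread on the revenue side, but the assumed rise in toxic flow at the tighter quote wipes the book out – which is the mechanism the paragraph above describes.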
Curtailing the trade size in an attempt to diminish the impact of this pattern – if I may jump back to the diet analogy cited at the beginning of this note – will only perpetuate the problem, as the seller could not execute their full size in the first place.
If you combine these dynamics, they may explain many of the “air pocket” effects which occur more and more frequently: statistical in nature, they feign perfect, normally distributed, deep liquidity at one moment – only to evaporate suddenly and re-appear at a different level, where the same mechanism starts over.
* Ole E. Barndorff-Nielsen and Neil Shephard, “Measuring the impact of jumps in multivariate price processes using bipower covariation”, 2004, Oxford University Press.