TEST YOUR TRADING INFRASTRUCTURE

Thibault Gobert interviews Steven Townsend

Welcome to a new edition of Thibault’s Tech Blog. This time, I have the pleasure of discussing the relevance of testing in the context of trading venue infrastructures with our Operations Manager, Steven Townsend.


Steven, the importance of testing technology stacks should be considered a no-brainer, be it in manufacturing or in any private or public services sector. Why is it nevertheless a special topic when it comes to trading infrastructures?

Well, I think a particularity of testing within trading architectures is that the technology has evolved at a pace that processes couldn’t cope with.

Can you please elaborate on that?

If you take a car, for example, you could rightly argue that today’s vehicles have little in common with those of 30 or 40 years ago. However, you would also not dispute that car manufacturers have been accustomed to testing since the invention of the Benz Patent Motorwagen¹, and that testing requirements have developed alongside technological progress, while the relevance of all external factors – from individual safety to environmental issues – has increased in a linear manner. It is true that development cycles have shortened significantly here, too. But with so many requirements standardised throughout the industry, there is a natural linearity between what is technically possible and what goes into production, when and how. Testing is much more firmly rooted in the overall process design here, not least due to the immense liability risks emanating from components or features that ultimately fail to meet all relevant requirements. Everybody can easily grasp the intensity of automotive testing when thinking of autonomous driving.

In trading, the development was much less linear and incentives to test were rather rare; a failed trade caused by a technical bug was less expensive, both in monetary terms and from a reputational perspective. Put simply, the ultimate test method was production. Regulation wasn’t as strict either. Then, within a comparatively short period of time, an entirely new world emerged. Tech has revolutionised trading in terms of volume, frequency, complexity and automation. Heterogeneous systems landscapes, more complex end-to-end process designs, various taxonomies and a no less complex regulatory regime have shaken up the industry within little more than a decade. Firms that could once afford to outsource the design of any single solution, or that built extremely complex and non-standardised proprietary systems, struggle today to ever get “ahead of the curve” in the oversight of their operation’s backbone.

Can you explain the regulatory challenge in more detail, please?

Regulation had identified the problems we’ve discussed as systemic to the trading industry, with a lack of process transparency being one of the most crucial issues, creating and perpetuating many others. In a base-case scenario, this is harmful to competition and discriminatory to investors; in a worst-case scenario, market-wide dysfunction could be exacerbated. MiFID II² imposed a pre-trade transparency and risk management regime requiring that trading venues be able, at any time, to identify which human being or algorithm decided to trade. They have to impose price collars, limits on the value of single orders, on the total volume or on the number of messages to be received, as well as order cancellation or kill switch functionalities. In addition to the need to test these market disorder prevention measures – which becomes more demanding when algorithms come into play – there are monitoring requirements under MiFID II as well as under MAR³, plus the various testing requirements that emanate from the relevant IT governance regulations at national or supranational level. These regulatory-driven dynamics have added to the pressure on testing cycles which, due to shorter times to market, have become shorter anyway.
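
To make those controls concrete, here is a minimal Python sketch of pre-trade risk checks of the kind described above: a price collar, a single-order value limit and a message-rate throttle with a kill switch. All names, thresholds and the overall structure are illustrative assumptions, not Spectrum’s actual implementation.

    import time
    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Order:
        member_id: str
        algo_id: str    # identifies the human or algorithm behind the order
        price: float
        quantity: int

    class PreTradeRiskGate:
        """Hypothetical pre-trade gate: every order passes these checks first."""

        def __init__(self, reference_price, collar_pct=0.05,
                     max_order_value=1_000_000.0, max_msgs_per_sec=100):
            self.reference_price = reference_price
            self.collar_pct = collar_pct
            self.max_order_value = max_order_value
            self.max_msgs_per_sec = max_msgs_per_sec
            self._msg_times = deque()   # timestamps of recent messages
            self.kill_switch = False    # operator-controlled: blocks all flow

        def check(self, order):
            """Return (accepted, reason) for a single incoming order."""
            if self.kill_switch:
                return False, "kill switch engaged"
            # Message throttle: count messages in the trailing one-second window.
            now = time.monotonic()
            self._msg_times.append(now)
            while self._msg_times and now - self._msg_times[0] > 1.0:
                self._msg_times.popleft()
            if len(self._msg_times) > self.max_msgs_per_sec:
                return False, "message rate limit exceeded"
            # Price collar: reject orders too far from the reference price.
            if abs(order.price - self.reference_price) > self.reference_price * self.collar_pct:
                return False, "price outside collar"
            # Limit on the value of a single order.
            if order.price * order.quantity > self.max_order_value:
                return False, "single-order value limit exceeded"
            return True, "accepted"

Each of these checks is itself a test subject: the market disorder prevention measures only count if the venue can demonstrate that they actually fire.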

What kind of tests or which aspects of testing would you consider the most challenging ones?

A very significant effort is, of course, attributable to algorithmic trading, which requires trading venues to provide members with a testing environment that offers simulation facilities for testing all relevant order and order flow scenarios. Capacity testing can be challenging: it includes testing upstream connectivity, order submission capacity, throttling capacity and whether order flow can be balanced by receiving orders through different gateways. The trading engine itself must be tested in the course of the capacity tests and must show that it can match orders with acceptable latency, as must the downstream connectivity and the monitoring infrastructure that measures the performance of these functions. A trading venue must test whether its systems’ performance is still adequate when the number of messages per second exceeds the highest number of messages recorded within the last five years. Where this is not the case, the competent authority must be informed of the measures and the horizon envisaged to fix any capacity shortcomings. While these tests regularly prove challenging, the most serious problems in the course of testing, in my view, are associated with integration aspects.
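
As a rough illustration of the capacity-testing idea, the following Python sketch feeds a burst of synthetic messages to a stand-in for the order entry path and checks throughput and tail latency against a target derived from a historical peak message rate. The engine stub, the message format and the thresholds are hypothetical placeholders.

    import time
    import statistics

    def noop_matching_engine(message):
        pass    # stand-in for the real order entry / matching path

    def capacity_test(engine, n_messages, peak_msgs_per_sec, latency_budget_ms=1.0):
        latencies = []
        start = time.perf_counter()
        for i in range(n_messages):
            t0 = time.perf_counter()
            engine({"seq": i, "type": "NewOrderSingle"})
            latencies.append((time.perf_counter() - t0) * 1000.0)
        elapsed = time.perf_counter() - start
        throughput = n_messages / elapsed
        p99 = statistics.quantiles(latencies, n=100)[98]   # 99th-percentile latency
        # Headroom rule from the interview: stay adequate beyond the highest
        # message rate recorded within the last five years.
        ok = throughput > peak_msgs_per_sec and p99 < latency_budget_ms
        print(f"throughput={throughput:,.0f} msg/s, p99={p99:.3f} ms, pass={ok}")
        return ok

    capacity_test(noop_matching_engine, n_messages=200_000, peak_msgs_per_sec=50_000)

A real harness would of course drive the full network path through the gateways rather than an in-process stub.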

What are these integration aspects?

Well, as discussed, trading infrastructures are often grown, multi-application environments where different technologies, APIs⁴ and protocols collide. This can make test design and performance complicated. But even if we discuss a modern venue with a state-of-the-industry technology stack such as Spectrum, integration remains key, since it involves not only your own infrastructure but also the seamless processing of all kinds of flows from your members’ infrastructures to your own platform. That is, members must have access in the first place; then their trading system, algorithm or strategy must comply with the trading venue’s conditions. Testing these consistency aspects is called conformance testing. As part of the conformance tests, trading venues are regulatorily required to request evidence from their members that their systems interact smoothly with the trading venue’s matching engine and that the bidirectional data flow runs adequately. Members must verify that the submission, modification or cancellation of orders, IOIs, static and market data downloads and all business data flows work frictionlessly. Aside from these basic functionalities, members must show that their systems can handle the cancel-on-disconnect command, market data feed loss and throttling, including recovery from these events, the intra-day resumption of trading, and the handling of suspended instruments or non-updated market data.
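
The conformance scenarios just listed lend themselves to a checklist-style harness. The sketch below assumes a hypothetical member-session object whose methods exercise each scenario and return truthy on success; every method name is invented for illustration and does not correspond to any real venue API.

    def run_conformance(session):
        """Run each conformance scenario against a member test session."""
        scenarios = {
            "submit_order":          lambda: session.submit_order(),
            "modify_order":          lambda: session.modify_order(),
            "cancel_order":          lambda: session.cancel_order(),
            "ioi_flow":              lambda: session.send_ioi(),
            "static_data_download":  lambda: session.download_static_data(),
            "market_data_flow":      lambda: session.subscribe_market_data(),
            "cancel_on_disconnect":  lambda: session.drop_connection_and_verify_cancels(),
            "feed_loss_recovery":    lambda: session.simulate_feed_loss_and_recover(),
            "throttle_handling":     lambda: session.exceed_throttle_and_recover(),
            "suspended_instruments": lambda: session.handle_suspended_instrument(),
        }
        results = {}
        for name, scenario in scenarios.items():
            try:
                results[name] = bool(scenario())
            except Exception:
                results[name] = False   # any unexpected error fails the scenario
        for name, passed in results.items():
            print(f"{name:24s} {'PASS' if passed else 'FAIL'}")
        return all(results.values())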

You mentioned our own platform – to what extent are integration issues relevant for Spectrum?

Integration with member infrastructure is about connectivity; within our own and our clients’ systems, there aren’t any technology issues. Putting a strong emphasis on all aspects of connectivity, we have consciously avoided proprietary formats or protocols to the largest possible extent. We use the FIX⁵ protocol – FIXT.1.1 for the session layer and FIX 5.0 SP2 for the application layer. However, we notice that there are various versions of FIX in use, and even slight differences can pose connectivity challenges, the testing and detection of which takes time and effort.
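
For readers unfamiliar with that layering, here is a minimal sketch of how such a message is assembled: tag 8 carries the session-layer version (FIXT.1.1), tag 1128 with ApplVerID=9 signals FIX 5.0 SP2 on the application layer, and tag 10 is the mod-256 checksum over all preceding bytes. The example deliberately omits mandatory header fields such as sender and target IDs and sequence numbers, so it is illustrative rather than a valid wire message.

    SOH = "\x01"    # FIX field delimiter

    def build_fix(fields):
        """Assemble a tag=value FIX message with body length and checksum."""
        body = SOH.join(f"{tag}={val}" for tag, val in fields) + SOH
        head = f"8=FIXT.1.1{SOH}9={len(body)}{SOH}"
        checksum = sum((head + body).encode()) % 256    # tag 10: mod-256 byte sum
        return f"{head}{body}10={checksum:03d}{SOH}"

    msg = build_fix([
        (35, "D"),        # MsgType = NewOrderSingle
        (1128, "9"),      # ApplVerID = FIX 5.0 SP2 on the application layer
        (55, "EXAMPLE"),  # Symbol (placeholder)
        (54, "1"),        # Side = Buy
        (38, "100"),      # OrderQty
        (44, "10.5"),     # Price
    ])
    print(msg.replace(SOH, "|"))    # render the delimiter visibly

Subtle version differences of the kind mentioned above typically surface in exactly these details – which tags are required, how they are encoded and where they sit.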

What is your approach to dealing with that challenge?

We have decided to deploy an automated testing solution that allows our clients maximum flexibility, with a wide variety of features that make testing efficient and convenient, and that, at the same time, provides the reporting and analysis functionality needed to gain really insightful results from the testing exercise. The solution we’ve opted for, Verifix from Itivity⁶, provides a sandbox environment where the core matching engine functionality can be tested before connecting to the production or even the test instance. Effectively, this allows for separating client conformance/onboarding from core exchange functions by simulating exchange operations – a very helpful tool for clients to iteratively plan and configure their testing. A dedicated test role allows us to simulate client flow and thus self-test in the course of our mandatory regular testing obligation, or whenever deployments or incidents require testing, whereby repeatable playback stress testing from real-life FIX traffic is possible at any time. Last but not least, the audit-trail capability enables us to evidence, for supervisory purposes, that substantial testing was actually conducted before connecting.
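
To give a flavour of what repeatable playback from captured traffic involves, here is a small, hypothetical Python sketch that re-sends recorded FIX messages while preserving (or compressing) their original inter-message timing. This is not how Verifix works internally; the capture format and the send target are assumptions.

    import time

    def replay(capture, send, speedup=1.0):
        """capture: list of (timestamp_seconds, raw_fix_message) pairs,
        ordered by timestamp; send: callable delivering one raw message."""
        t0 = capture[0][0]
        start = time.monotonic()
        for ts, raw in capture:
            # Sleep until this message's (scaled) offset from the first one.
            delay = (ts - t0) / speedup - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            send(raw)

Because the same capture can be replayed after every deployment or incident, results stay comparable over time.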

Steven, thank you very much!

1. In January 1886, Carl Benz applied for a patent for his “vehicle with gas engine operation” (patent specification DRP 37435); the vehicle is considered to be the first automobile in the world.
2. Directive 2014/65/EU, the Markets in Financial Instruments Directive
3. Regulation (EU) No. 596/2014, the Market Abuse Regulation
4. Application Programming Interface (a set of commands, functions, protocols and objects that programmers can use to make systems interact)
5. Financial Information eXchange
6. A Broadridge Business
