LiquidMetrix Short Articles

Oct 2014

Who is my Peer?

Short Article

Following any presentation to a buy side of the top line performance metrics in a TCA / Execution Quality Report, the natural question is: 'So is this performance good or bad?'

An absolute Implementation Shortfall (IS) number of 8.6 BPS may sound good - most people talk about IS numbers in double digits - but is it really good? Or is it simply that the types of orders you are doing are relatively 'easy'? Similarly, if Broker A has a shortfall of 4.6 BPS and Broker B has a shortfall of 12.6 BPS, is this simply because you've trusted Broker B with your tougher orders?

One approach to the difficulty of interpreting performance metrics is to compare each of your order's outcomes to some kind of pre-trade estimate of IS and risk, based on characteristics of your order such as the instrument traded, percentage of ADV, etc. This may work fairly well when looking at the relative performance of Brokers for your own flow, where you can take relative order difficulty into account. However, in terms of your overall market wide TCA performance, how can you be sure that the pre-trade estimates you're using are at the right absolute levels? Many pre-trade estimates themselves come from Broker models. How realistic are these for all Brokers' trading styles, and how can you be certain that they're not over- or under-estimating costs relative to the market as a whole, and thus giving you a false picture of your real performance?

To get an idea of how well your orders are performing versus the market you need to compare your own performance to orders done by other Buy Sides.

The principal difficulty with any kind of Peer Comparison lies in the fact that you will be comparing your orders with orders from many different Buy Sides; each with different investment styles in different types of instruments given to different Brokers with different algorithms. So if you compare your orders to some kind of 'market average' how meaningful is it? Who exactly are your Peers?

For the results to be meaningful and believable it's necessary to be open on how exactly Peer Comparisons are constructed so as to be sure that we're comparing like with like.

Order Similarity Metrics

The starting point in any type of Peer Analysis is that your orders should be measured against other 'similar' orders. But how do we measure similarity?

Consider an order to buy 6% ADV of a European mid cap stock, starting at around 11AM and finishing no later than end of day. Apart from the basics of the order, pre or post trade we can also determine many other details, such as: the average on book spread of the stock being traded, the price movement from the open of the trading day to the start of the order, the price movement in this stock the previous day, the annual daily volatility of the stock, the average amount of resting lit liquidity at the top of the order book for this stock, the number of venues it trades on, etc. It's easy to come up with a list of many different potential 'features' that can be extracted and used to characterise an order.
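As a sketch of what this feature extraction step might look like in code (the feature names and values here are purely illustrative, not an actual production feature set), each order can be reduced to a numeric vector:

```python
# Hypothetical sketch: representing an order as a numeric feature vector.
# All feature names and example values are illustrative assumptions.

def extract_features(order: dict, market_stats: dict) -> list:
    """Turn raw order details and market data into a numeric feature vector."""
    return [
        order["pct_adv"],                       # order size as % of average daily volume
        market_stats["avg_spread_bps"],         # average on-book spread (bps)
        market_stats["open_to_start_ret_bps"],  # price move from open to order start (bps)
        market_stats["prev_day_ret_bps"],       # previous day's price move (bps)
        market_stats["annual_volatility"],      # annualised daily volatility
        market_stats["touch_liquidity"],        # avg resting lit liquidity at the touch
        market_stats["num_venues"],             # number of venues the stock trades on
    ]

# Example: the 6% ADV order described above.
order = {"pct_adv": 6.0}
stats = {"avg_spread_bps": 12.5, "open_to_start_ret_bps": -30.0,
         "prev_day_ret_bps": 85.0, "annual_volatility": 0.28,
         "touch_liquidity": 15000, "num_venues": 4}
vec = extract_features(order, stats)
```

The point is simply that every order, whatever its stock or broker, ends up as a point in the same feature space, which is what makes cross-order comparison possible.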

Each of these features may or may not be helpful in determining how 'easy' it might be to execute an order at a price close to the arrival price. Some features make intuitive sense. Trying to execute 100% ADV in a highly volatile stock with a wide spread is likely to be much more expensive and risky than executing 0.1% ADV in a highly liquid stock with tight spreads and little volatility. But how relatively important might yesterday's trading volume in the stock, or market beta be in predicting trade costs? Which are the best features to use?

Assume we've come up with 100 different potential features that might help characterise an order. If we're designing a similarity metric that uses these features to find similar orders to compare ourselves to, we need to do one or more of the following:

  • Identify which amongst the 100 features are best at predicting outcomes such as Implementation Shortfall (Feature Selection).
  • Combine some of our selected features to reduce data redundancy, i.e. duplication of features which are telling us basically the same thing (Dimension Reduction).
  • Come up with a weighting of how important each remaining feature is when looking for similar orders (Supervised Statistical Learning).
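To make the first step concrete, here is a minimal sketch of correlation-based feature selection: rank candidate features by the absolute correlation of each with realised Implementation Shortfall, and keep the strongest. This is one simple stand-in for the techniques named below, not the method actually used; all data is made up.

```python
# Sketch of feature selection (illustrative method and data): score each
# candidate feature by |correlation| with realised IS, keep the top ones.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx, sy = pstdev(xs), pstdev(ys)
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(feature_matrix, is_outcomes, top_k=2):
    """feature_matrix: {name: [value per order]}; returns top_k feature names."""
    scores = {name: abs(pearson(vals, is_outcomes))
              for name, vals in feature_matrix.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy history of 5 past orders: two features track IS, one (beta) barely does.
features = {
    "pct_adv":    [0.1, 2.0, 6.0, 10.0, 25.0],
    "spread_bps": [5.0, 8.0, 12.0, 20.0, 30.0],
    "beta":       [1.1, 0.9, 1.0, 1.2, 0.8],
}
is_bps = [1.0, 4.0, 9.0, 15.0, 35.0]   # realised IS per order, in bps
selected = select_features(features, is_bps)
```

In practice one would use the more robust methods listed in the next paragraph, but the shape of the problem - many candidate features in, a small weighted subset out - is the same.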

The good news is that there is decades' worth of academic research on how to do most of the above. Methods such as stepwise regression, Principal Components Analysis, Discriminant Analysis and statistical learning (KNN, Support Vector Machines, Neural Nets) all lend themselves well to this type of analysis.

The upshot of all this is that for each Buy Side order we wish to analyse, we produce a similarity measure that can be used to find, from a market wide order outcome database, a set of the, say, 100 most similar orders done by other Buy Sides. They do not necessarily have to be orders done on the same stock, just on the same type of stock! Assuming we've done our analysis well, the TCA outcomes of these orders should represent a good target to compare our order with. If we do this for each of our orders, we discover how well we've really done versus 'The Market'!
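The matching step itself can be sketched as a weighted nearest-neighbour search. The weights, feature values and order identifiers below are all hypothetical; the point is only the mechanics of ranking a database of orders by distance to our order's feature vector:

```python
# Illustrative sketch of the matching step: given one client order's feature
# vector, find the k most similar orders in a market wide database using a
# weighted Euclidean distance. Weights, features and data are hypothetical.
import math

def k_most_similar(query, database, weights, k=3):
    """database: list of (order_id, feature_vector); returns the k closest ids."""
    def dist(vec):
        return math.sqrt(sum(w * (q - v) ** 2
                             for w, q, v in zip(weights, query, vec)))
    ranked = sorted(database, key=lambda rec: dist(rec[1]))
    return [order_id for order_id, _ in ranked[:k]]

weights = [1.0, 0.5]            # e.g. %ADV weighted twice as heavily as spread
query = [6.0, 12.0]             # our order: 6% ADV, 12 bps average spread
db = [("A", [5.5, 11.0]), ("B", [25.0, 30.0]),
      ("C", [6.2, 13.0]), ("D", [0.1, 5.0]), ("E", [7.0, 10.0])]
peers = k_most_similar(query, db, weights)
```

Note that nothing in the distance function refers to the stock itself - orders "B" and "D" are excluded simply because their characteristics differ, which is exactly the "same type of stock, not same stock" matching described above.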

An Example of Peer Analysis Done Well

What might this kind of analysis look like in practice? Figure 1 shows one way of presenting peer results. A set of Client orders has been matched, using a similarity metric as described above, against a market wide order database. We're looking in this example at Implementation Shortfall; one can look at other TCA metrics such as VWAP or post order reversion in exactly the same way. Using the matched orders we're able to compare the distribution of our buy side Client IS outcomes with market wide IS outcomes (green).

This tells us how well both the IS and risk (standard deviation of outcome) of our Client orders compare to the market average for similar orders. Based on the number of orders analysed and a measure close to a statistical 't test', we can then also translate the differences in performance into a significance scale from 0 to 100, qualifying how much better or worse than average a Client's performance is.
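One way such a 0-100 scale could be constructed (a hedged sketch - the actual transform used is not described here) is to compute a two-sample t statistic between client and peer IS outcomes and map it through a logistic curve, so that 50 means "in line with peers" and the tails indicate statistically significant out- or under-performance:

```python
# Hypothetical sketch: map a Welch two-sample t statistic between client and
# peer IS outcomes onto a 0-100 significance scale via a logistic curve.
# With IS measured as a cost, scores above 50 mean higher costs than peers.
import math

def significance_score(client_is, peer_is):
    """client_is, peer_is: lists of IS outcomes in bps; returns a score in (0, 100)."""
    n1, n2 = len(client_is), len(peer_is)
    m1, m2 = sum(client_is) / n1, sum(peer_is) / n2
    v1 = sum((x - m1) ** 2 for x in client_is) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in peer_is) / (n2 - 1)
    t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)            # Welch's t statistic
    return 100.0 / (1.0 + math.exp(-t))                     # logistic map to (0, 100)

# Identical performance sits exactly at the midpoint of the scale.
in_line = significance_score([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])

# A client with clearly cheaper orders than its matched peers scores low.
better = significance_score([5.0, 6.0, 7.0], [10.0, 11.0, 12.0, 13.0])
```

Because the t statistic grows with sample size, the same average cost gap scores as more significant when it is observed over many orders, which matches the intuition that more data gives a more confident verdict.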


A fundamental question buy sides want answered by any kind of top level TCA analysis is how the costs associated with their orders compare to their Peers'. The danger with any kind of Peer Analysis is that you cannot be certain you are really being compared fairly to other participants. The solution is to ensure that any Peer Comparison compares orders only with like orders, rather than with orders from similar companies, preferably drawing on a large database of orders from different buy sides.

The above analysis was done using a LiquidMetrix WorkStation.

