Transaction cost analysis and pre-trade analysis


Transaction cost analysis (TCA) is the framework for achieving best execution in a trading context. TCA can be split into three groups: pre-trade analysis, intraday analysis, and post-trade measurement.

Pre-trade analysis gives us insight into the future volatility of the price and lets us forecast intra-day and daily volumes as well as market impact. It evaluates the candidate strategies and recommends the one most consistent with the manager's risk preferences.

Intraday analysis is real-time analysis, where the system ensures that the strategy performs in line with the forecast.

Post-trade analysis measures the implementation of the investment decision to ensure that the pre-trade models are accurate and that best execution is delivered.

If you want to learn more about transaction cost analysis, I highly recommend Optimal Trading Strategies: Quantitative Approaches for Managing Market Impact and Trading Risk by Robert Kissell. The book not only covers all the types of analysis mentioned above, but also considers the practical aspects of implementing trading strategies and algorithms. It includes a very interesting part about Principal Bid Transactions, or blind bids.

To build a forecast model we need some input parameters. Pre-trade analysis can be split into two parts – a fundamental part and a statistically based part. It is much easier to obtain fundamental facts, such as market capitalization, company profile and the sectors in which the company operates, than the statistically based parameters. For the latter I'm going to use R and raw stock price data; the former can easily be found on the Internet, Bloomberg or Reuters.

The first input parameter that I'm going to derive is the average daily trading volume of the traded security. It is very easy to obtain – only daily data is necessary. I used a 20-day rolling window to get the average volume. Here's an example for the IBM security:
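The post does not show its code for this step, but a minimal sketch with quantmod might look as follows; the 20-day window comes from the text, while the Yahoo Finance data source is an assumption.

```r
# Minimal sketch: 20-day rolling average of IBM's daily volume.
# The data source (Yahoo Finance via quantmod) is an assumption.
library(quantmod)   # also loads TTR, which provides runMean()

getSymbols("IBM", src = "yahoo")        # daily OHLCV data into `IBM`
vol     <- Vo(IBM)                      # daily volume column
avg.vol <- runMean(vol, n = 20)         # 20-day rolling average

plot(avg.vol, main = "IBM: 20-day average daily volume")
```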

The graph shows how the 20-day average volume has evolved since last September – the minimum was 3.5 million and the maximum was 6 million shares traded a day. But the average can be misleading – it takes the last 20 days and reduces them to a single number. In most cases we will be dealing with short-term predictions, such as one day or one hour, so a 20-day approximation does not make a lot of sense.
One possible solution is to use confidence intervals to estimate the daily volume. The 95% confidence interval is widely used in finance – I will use the same interval to answer the following question: based on this interval, what volume will be traded the next day?
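A sketch of this quantile idea, reusing the `vol` series from above; the cut-off is whatever the empirical 5% quantile turns out to be, not a hard-coded number:

```r
# Empirical 5% quantile of daily volume: the level tomorrow's volume
# should exceed with roughly 95% probability.
v     <- as.numeric(vol)                        # plain numeric vector
lower <- quantile(v, probs = 0.05, na.rm = TRUE)

plot(density(v, na.rm = TRUE), main = "IBM: daily volume distribution")
abline(v = lower, lty = 2)                      # mark the 5% quantile
```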

The density diagram shows the distribution of the daily volume of the IBM security. As you can see, on most days the volume was above 3 million shares; only on 5% of the days was it below 3 million. Based on this diagram, we can predict that tomorrow's volume will be higher than 3 million.

Before we finish with the daily volume, we have to test for weekly seasonality. If there is a weekday seasonality, the volume forecast has to be adjusted.
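One way to check this, reusing `vol` and `v` from above, is to compare each weekday's average volume with the overall average (a sketch, not the post's original code):

```r
# Relative deviation of each weekday's average volume from the overall
# average; values near zero mean no weekday effect.
wd     <- weekdays(index(vol))
by.day <- tapply(v, wd, mean, na.rm = TRUE)
round(by.day / mean(v, na.rm = TRUE) - 1, 3)
```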

The chart above clearly indicates that the traded volume on Monday is below the daily average by about 5%. There is an increase in the volume on Friday, but its significance is questionable. Let's check the density diagram to remove any doubt about the volume on Monday and Friday.

The density diagram above shows that Monday's peak and body are shifted to the left. On the other hand, Friday's body is aligned with the other days'; it only has a fatter tail. Based on this diagram we can drop Friday's volume adjustment and apply Monday's adjustment only.
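A sketch of how this Monday-only adjustment could be wired into the volume forecast, reusing `by.day`, `v`, and `lower` from above; the factor is whatever the data gives, roughly 0.95 per the discussion:

```r
# Scale the forecast down on Mondays only; Friday is left unadjusted.
monday.factor <- by.day["Monday"] / mean(v, na.rm = TRUE)   # ~0.95
forecast <- lower
if (weekdays(Sys.Date() + 1) == "Monday")
    forecast <- forecast * monday.factor
forecast
```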

Once we are finished with the daily data, we can move on to intra-day data. When liquidating (or acquiring) a position in a security, the position has to be sliced into smaller parts to avoid influencing the market and to hide our intentions. For this reason, intra-day volume patterns have to be known. With that in mind, let's look at how the volume is distributed in relation to the trading hour.
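The post does not say where its intraday data comes from; the sketch below assumes one-minute bars in a data frame `bars1m` with a POSIXct `time` and a numeric `volume` column (hypothetical names):

```r
# Sum one-minute volume into day/hour buckets, then look at the
# distribution of each hour's volume across days.
bars1m$hour <- as.integer(format(bars1m$time, "%H"))
bars1m$day  <- as.Date(bars1m$time)
hourly      <- aggregate(volume ~ day + hour, data = bars1m, FUN = sum)

boxplot(volume ~ hour, data = hourly, main = "Volume by trading hour")
points(tapply(hourly$volume, hourly$hour, median), pch = 19)  # medians
```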

The graph above shows the hourly volume pattern, where the volume is grouped by hour. The black dots indicate the median of each hour. Please keep in mind that trading starts at 9:30, so the first trading hour has only 30 minutes (if you want to align the first hour with the others, multiply its volume by two). As we can see, the first and the last hours are the most traded, and the volume drops in the middle of the day.

The next useful thing when liquidating a big position is the average trade size. We need to know how the average Joe behaves and what size he trades.
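Given tick data in a data frame `trades` with a POSIXct `time` and a per-trade `size` column (again, hypothetical names), this grouping is a one-liner:

```r
# Median trade size for each trading hour.
trades$hour <- as.integer(format(trades$time, "%H"))
tapply(trades$size, trades$hour, median)
```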

The chart above shows all trades grouped by hour – the black dots indicate the median trade size. The following table is supplementary to the chart – here you can find the median for each hour.

Hour                  10     11     12     13     14     15     16
Median size (shares)  842.5  565.5  394.0  300.0  297.0  369.5  708.5

Once again, the average trade size is much higher in the first and last hours, and it drops roughly 2.5 times in the middle of the trading session.

The final chart shows the volatility grouped by hour. There is a lot of jitter when the market opens; it becomes calmer around lunch time and picks up slightly again as the market closes. These are the numbers for each hour:

Hour                         10     11     12     13     14     15     16
Volatility (std. deviation)  0.16%  0.09%  0.08%  0.05%  0.05%  0.06%  0.07%
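These figures can be reproduced, under the same assumed one-minute-bar layout with a `close` column, as the standard deviation of one-minute log returns within each hour (a sketch, not the post's original code):

```r
# Standard deviation of one-minute log returns, grouped by hour,
# reported in percent.
bars1m$ret <- c(NA, diff(log(bars1m$close)))
round(100 * tapply(bars1m$ret, bars1m$hour, sd, na.rm = TRUE), 2)
```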

In this article we analyzed a list of potential input parameters for pre-trade analysis and further forecasting. The list is not static and can be extended with supplementary parameters, such as the bid-ask spread distribution. The next step is to aggregate these parameters and build models to forecast volatility and volume.

