It's been a long time since I reviewed one of these papers, but reading this one made a frisson of excitement run down my spine. OK, OK, so call me sad! Once upon a time in the dim and distant past, I used to muck around on the trading floor. One of the main problems for people on the trading side is that it's tough to work out the losses you might incur. In other words, you make money when you exit a position, not when you enter it, so you want to know when you can exit. One of the biggest pieces of information you can have, then, is the potential level of losses you could suffer given certain parameters such as the confidence interval, time period, and so on.
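To make that concrete, here is a minimal sketch of the idea behind VaR: given a history of P&L, the VaR at some confidence level is just the loss threshold you exceed with the complementary probability. The figures and the normal-P&L assumption below are purely illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical daily P&L series for a single position (in dollars),
# drawn from a normal distribution purely for illustration.
pnl = rng.normal(loc=50.0, scale=1000.0, size=2500)

def historical_var(pnl, confidence=0.99):
    """One-day historical VaR: the loss level exceeded with
    probability (1 - confidence) over the sample."""
    return -np.quantile(pnl, 1.0 - confidence)

var_99 = historical_var(pnl, 0.99)
print(f"1-day 99% VaR: ${var_99:,.0f}")
```

So "99% one-day VaR of $2,300" reads as: on 99 days out of 100 you expect to lose less than $2,300 over one day. Everything interesting in practice lies in how you model the P&L distribution, which is where the paper comes in.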
So if somebody could sit, like Jiminy Cricket, on your shoulder while you are trading and keep telling you the potential maximum loss you could suffer, that would give you an independent way of judging how much risk you can take. Putting it another way, if you were a racing driver and you had a small voice in the back of your helmet constantly calculating the probability of a skid or of ruining your tyres, then you would know how hard you could push your car.
Value at Risk, or VaR for short, does this for you. It has its faults and deficiencies, but by and large it's a good market risk management indicator. The major problem from an operational perspective, though, is that it's a pigging pig to run and execute. It is highly data intensive and takes a fair bit of time to calculate, which means you are lucky if you can get it calculated on an overnight basis for large portfolios. So for intraday traders, market makers and the like, who operate on a tick-by-tick basis, it's almost impossible to determine. But this paper seems to present a good solution to this problem.
What I found interesting (and it makes perfect sense) is that there is informational content in the time distance between ticks. And it's perfectly intuitive: the longer the gap between ticks, the lower the depth and density of the market, so liquidity risk has arisen (because you are missing one side or both sides of the spread) and therefore the price formation process is coasting a bit. Now, how much it coasts is perhaps a discussion and research topic for another day. Buggering around with microstructure is so much fun, and just think about the sheer amount of data you get to play with. Nice paper; makes me go all nostalgic for the days of LISP programming and tick-by-tick terabytes of data :)
This paper investigates the use of tick-by-tick data for intraday market risk measurement. We propose a method to compute an Intraday Value at Risk (IVaR) based on irregularly spaced high-frequency data and an intraday Monte Carlo simulation. A log-ACD-ARMA-EGARCH model is used to specify the joint density of the marked point process of durations and high-frequency returns. We apply our methodology to transaction data for three stocks actively traded on the Toronto Stock Exchange. Compared to traditional techniques applied to intraday data, our methodology has two main advantages. First, our risk measure has a higher informational content, as it takes into account all observations. Within the total risk measure, our method allows us to distinguish the effect of random trade durations from the effect of random returns, and to analyze the interaction between these factors. Thus, we find that the information contained in the time between transactions is relevant to risk analysis, which is consistent with predictions from asymmetric-information models in the market microstructure literature. Second, once the model has been estimated, the IVaR can be computed by any trader for any time horizon based on the same information, with no need to resample the data and re-estimate the model when the horizon changes. Backtesting results show that our approach constitutes a reliable means of measuring intraday risk for traders who are very active in the market.
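The core simulation idea in the abstract can be sketched very simply: simulate random trade durations until a chosen horizon is filled, accumulate the simulated per-trade returns along the way, and take the loss quantile across paths. The sketch below uses i.i.d. exponential durations and i.i.d. normal returns as crude stand-ins for the paper's log-ACD-ARMA-EGARCH joint model, and all parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_ivar(horizon_s=3600, n_paths=5000, mean_dur=5.0,
                  ret_std_per_trade=2e-4, confidence=0.99):
    """Monte Carlo Intraday VaR sketch: simulate random trade
    durations and per-trade log returns until the horizon is
    filled, then take the loss quantile of the cumulative
    returns. Durations are i.i.d. exponential and returns
    i.i.d. normal here; the paper instead draws both from an
    estimated log-ACD-ARMA-EGARCH joint density."""
    total_returns = np.empty(n_paths)
    for i in range(n_paths):
        t, r = 0.0, 0.0
        while t < horizon_s:
            t += rng.exponential(mean_dur)           # random trade duration (s)
            r += rng.normal(0.0, ret_std_per_trade)  # per-trade log return
        total_returns[i] = r
    return -np.quantile(total_returns, 1.0 - confidence)

ivar = simulate_ivar()
print(f"1-hour 99% IVaR (log-return units): {ivar:.4f}")
```

Note how the horizon enters only as a stopping rule for the simulated clock: this is why, once the model is estimated, the same machinery serves any horizon without resampling the data, which is the paper's second advantage.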