Regime Discovery in High-Frequency FX and Futures
Jeff S., Student at Queensland University of Technology
Tuesday, September 29, 2015
I will shortly begin conducting research into HFT, specifically in FX and futures markets, and would like to start a discussion with the ATA community about how best to pursue it. My current thesis is that the two simplest, most widely investigated and verified phenomena, momentum and mean-reversion, suffice to classify the regimes present in any of these markets: if a market is not mean-reverting, it is exhibiting momentum, and vice versa.
There are myriad trading strategies that quite profitably trade mean-reversion and momentum separately. I'm not looking to combine them (I'm not that foolish), but I am interested in quantitatively identifying which is more appropriate at any given time.
There are a number of quantitative measures that can be used to identify mean-reversion and momentum in a time series, including the Hurst exponent and the variance ratio test. I have also recently read (https://quantivity.wordpress.com/2011/02/24/delay-embedding-as-regime-signal/) that volatility is a good indicator: periods of low volatility are indicative of mean-reversion, and periods of high volatility are indicative of momentum. In the above link, the author develops a delay-embedding scheme that captures the dynamics of time series volatility, and proposes an indicator that signals momentum when volatility increases and mean-reversion when volatility decreases, period-on-period.
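For reference, the Hurst exponent can be estimated quickly from how the dispersion of lagged price differences scales with the lag. This is a minimal sketch (not the only estimator; the `max_lag` choice and the synthetic random-walk test series are my own assumptions): H below 0.5 hints at mean-reversion, above 0.5 at momentum.

```python
import numpy as np

def hurst_exponent(prices, max_lag=100):
    """Estimate the Hurst exponent from a price series.

    Uses the scaling law std(x[t+lag] - x[t]) ~ lag**H on log prices.
    H < 0.5 suggests mean-reversion, H > 0.5 suggests momentum.
    """
    x = np.log(prices)
    lags = np.arange(2, max_lag)
    # Standard deviation of lagged differences for each lag
    tau = [np.std(x[lag:] - x[:-lag]) for lag in lags]
    # Slope of the log-log fit is the Hurst estimate
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return H

# Sanity check on a pure random walk, which should give H near 0.5
rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.standard_normal(10_000)) * 0.01)
print(hurst_exponent(prices))
```

On real tick data the estimate is noisy, so it would presumably be computed on a rolling window rather than the full history.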
This volatility-based regime discovery method makes some intuitive sense. It also has the added benefit of prescribing mean-reversion strategies for times of low volatility. Since mean-reversion strategies have a natural profit cap but no natural stop loss, it would be beneficial to use them only in times of low volatility to ensure that losses don't blow out. Conversely, since momentum strategies have a natural stop loss but no natural profit cap, they're best used in times of high volatility to capture extreme market moves. I am aware, however, that mean-reversion at lower frequencies (daily, weekly, etc.) is more profitable when there is high volatility, so that's something that I'll have to reconcile.
In terms of implementation, I have thought to calculate the ratio of period-on-period absolute log returns, abs(r(t))/abs(r(t-1)), to identify whether volatility is increasing or decreasing. Suppose you use an adaptive Kalman filter to estimate:
y(t) = Beta(t) + e(t)
where y(t) is the above measure, abs(r(t))/abs(r(t-1)), Beta(t) is the regression coefficient and e(t) is the error term. The Kalman filter estimate, Beta_hat(t), would therefore track the dynamic mean of the ratio of period-on-period absolute log returns. If this estimate were greater than 1, then volatility would be increasing and you would employ momentum strategies, and if it were less than 1, volatility would be decreasing and you would employ mean-reversion strategies.
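A scalar Kalman filter for this is just the local-level model, with Beta(t) assumed to follow a random walk. This is a minimal sketch of the idea; the state-noise and observation-noise variances (`q`, `r`) are hypothetical tuning choices, and the synthetic return series is only for illustration:

```python
import numpy as np

def kalman_level(y, q=1e-4, r=1.0):
    """Scalar Kalman filter for the local-level model
        y(t) = Beta(t) + e(t),   Beta(t) = Beta(t-1) + w(t),
    with Var(w) = q and Var(e) = r (hypothetical settings).
    Returns the filtered estimate Beta_hat(t)."""
    beta_hat = np.empty(len(y), dtype=float)
    b, P = y[0], 1.0            # initialise state at first observation
    for t, yt in enumerate(y):
        P += q                  # predict: state uncertainty grows
        K = P / (P + r)         # Kalman gain
        b += K * (yt - b)       # update with the innovation
        P *= (1.0 - K)
        beta_hat[t] = b
    return beta_hat

# Ratio of period-on-period absolute log returns, as in the post
rng = np.random.default_rng(1)
log_returns = rng.standard_normal(5_000) * 0.01
y = np.abs(log_returns[1:]) / np.abs(log_returns[:-1])
beta_hat = kalman_level(y)

# Beta_hat > 1: volatility rising, favour momentum;
# Beta_hat < 1: volatility falling, favour mean-reversion
regime = np.where(beta_hat > 1.0, "momentum", "mean-reversion")
```

One caveat worth flagging: the ratio blows up when abs(r(t-1)) is near zero, so in practice it may need clipping or a small floor on the denominator before filtering.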
It seems to me that an implementation of this kind would be the most sensible way to track volatility, rather than having to specify some sort of lookback window for an ARCH/GARCH model, which could potentially introduce data-snooping bias. I also suspect that this method has an implicit connection to the variance ratio test and possibly the Hurst exponent, but that's something I'll have to investigate further.
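For comparison, the variance ratio statistic is straightforward to compute. This is a bare-bones Lo-MacKinlay-style sketch without the bias correction or test statistics (the horizon `q=5` and the i.i.d. test series are my own assumptions): values below 1 hint at mean-reversion, above 1 at momentum.

```python
import numpy as np

def variance_ratio(log_returns, q=5):
    """Variance ratio of q-period to 1-period log returns:
        VR(q) = Var(q-period returns) / (q * Var(1-period returns)).
    VR < 1 hints at mean-reversion, VR > 1 at momentum.
    (No small-sample bias correction or standard errors.)"""
    r = np.asarray(log_returns, dtype=float)
    # Overlapping q-period returns via a rolling sum
    rq = np.convolve(r, np.ones(q), mode="valid")
    return np.var(rq, ddof=1) / (q * np.var(r, ddof=1))

# An uncorrelated series should give a ratio near 1
rng = np.random.default_rng(2)
iid = rng.standard_normal(20_000) * 0.01
print(variance_ratio(iid, q=5))
```

Since the Kalman scheme above effectively tracks period-on-period variance changes, comparing its signal against rolling variance ratios on the same data would be one way to probe the connection empirically.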
So what does the ATA community think about my proposed method? Can anyone identify faults or problems with my logic, or propose an alternative? Has anyone used a regime discovery scheme similar to this, or perhaps other methods (such as Hurst and variance ratio) that have proved successful? All feedback is welcome and much appreciated.