
Archived Discussions


The Algorithmic Traders' Association prides itself on providing a forum for the publication and dissemination of its members' white papers, research, reflections, works in progress, and other contributions. Please note that archive searches and some of our members' publications are reserved for members only.

How do you prevent curve fitting?


 Ken Duke, VP Operations at Beyond Organic | Start-ups | Change Management | M&A | Strategic Partnerships

 Thursday, December 25, 2014

How do you keep yourself honest when backtesting? In other words, how do you personally differentiate between merely adding rules to your backtested trading system to improve historical results and trying to determine which rules will perform best in the future?



100 comments on article "How do you prevent curve fitting?"


 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Friday, December 26, 2014



The standard way is to optimize using dates that leave a bit of the future out and then run the system on the remaining part and evaluate performance. When working with neural nets a standard procedure is to divide the data into three parts: Optimize and Validate, where Optimize is used to establish which inputs to select and Validate is used to establish coefficients, plus a bit of the future to run the system on as OOS (out-of-sample) data. I am using MetaTrader 5, where the inputs are given, so I only divide into two parts.
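The three-way division Ingvar describes can be sketched in a few lines. A minimal illustration in Python (the function name and split fractions are my own, not from any MetaTrader workflow):

```python
import numpy as np

def split_train_validate_oos(prices, train_frac=0.6, validate_frac=0.2):
    """Chronological three-way split: optimize on the first slice, tune on
    the second, and touch the final out-of-sample slice only once."""
    n = len(prices)
    i = int(n * train_frac)
    j = int(n * (train_frac + validate_frac))
    return prices[:i], prices[i:j], prices[j:]

prices = np.arange(1000)  # stand-in for a price series
train, validate, oos = split_train_validate_oos(prices)
print(len(train), len(validate), len(oos))  # 600 200 200
```

The key property is that the split is chronological, never shuffled, so the OOS slice really is "unseen future" relative to the optimization data.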



 Rob Terpilowski, Software Architect

 Friday, December 26, 2014



I agree with Ingvar. I do the initial strategy development and optimization using 1 subset of data, and then do a walk-forward test with the optimized parameters on a different subset of data.

Keep in mind also that the more rules and input variables you add to a strategy, the greater the risk of curve fitting the strategy's historical performance to the data used in testing.

I try to distill the trading strategy rules to the bare minimum and then use test data that covers a variety of different types of market environments in order to develop something that has a higher probability of being robust in different market conditions.



 Anton Vrba, Partner and Director of iMarketSignals.com

 Friday, December 26, 2014



Models with many parameters are prone to curve fitting and will bomb out in the future - full stop. The robust models that will perform in the future are those with the fewest parameters.



Here is an interesting study: Pseudo-Mathematics and Financial Charlatanism: The Effects of Backtest Overfitting on Out-of-Sample Performance http://www.ams.org/notices/201405/rnoti-p458.pdf



For example, take iMarketSignals' MAC, a moving average crossover system that was developed on 1965-2012 data; the model was later tested from 1950 to 2014 and still performed well. Subsequently, the MAC was further analyzed by an unrelated party, who concluded "It's my opinion the MAC system will likely work into the future"; see http://systemtradersuccess.com/mac-system-overly-optimized/



 Alexander Horn, Private Investor | Entrepreneur | Market Analyst

 Friday, December 26, 2014



For me the best article on this topic is "Pseudo-Mathematics and Financial Charlatanism: The Effects of Backtest Overfitting on Out-of-Sample Performance" by Marcos Lopez de Prado, see here http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2308659 Heavy stuff, but smart solutions outlined.





 Martin Gay, Risk Management, Developer & Trading experience

 Friday, December 26, 2014



Agree with the idea of a small number of parameters. The strategy needs to be based on something that makes sense for that market or that price action, so quite often the research starts with a hypothesis, and your backtesting is a process of proving that hypothesis.

In addition, if possible, produce as many trades as possible. Not just because you want as many opportunities as possible to make money (success is proportional to opportunity times edge), but also because more trades means lower standard error in the estimated statistics. This then determines how much in-sample data you will need.

Then look for OK performance, not outstanding performance. Discount trades that dominate the P&L if possible. Then look for parameter settings such that a shift in any of the params does not alter the performance too much -> robustness.

Also, check some of the trades to ensure that there is no look-ahead, e.g. that your code does not have an order to buy at the low where it should have been buying at yesterday's low. It's nice also if your idea works on a few markets and not just one, but that is not always the case.

And finally (but not exhaustively), come up with a lot of strategies so you diversify across markets and strategies; the offsetting orders help smooth volatility in your overall performance (provided of course you have enough capital base) as well as lower commissions.

Very rough as I just woke up ;-) Have fun.
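Martin's point about parameter shifts not altering performance too much can be turned into a simple check. A toy sketch (the grid of backtest scores and the function name are hypothetical):

```python
import numpy as np

def stability_score(grid):
    """Given a 2-D array of backtest scores over a parameter grid, measure
    the average jump between neighbouring settings. Robust parameter
    regions have small neighbour-to-neighbour jumps; a single spike
    suggests a lucky, curve-fit combination."""
    dx = np.abs(np.diff(grid, axis=0)).mean()
    dy = np.abs(np.diff(grid, axis=1)).mean()
    return float(dx + dy)

flat_region = np.full((5, 5), 1.0)  # every setting performs alike
spiky = np.zeros((5, 5))
spiky[2, 2] = 10.0                  # one lucky combination dominates
print(stability_score(flat_region))  # 0.0
print(stability_score(spiky) > stability_score(flat_region))  # True
```

In practice one would prefer the parameter setting at the center of a flat, profitable plateau over the single best cell of the grid.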



 Leonardo O.

 Friday, December 26, 2014



You definitely need OOS data to check your optimization. But you have to be careful not to check your OOS too many times, or you end up optimizing on it indirectly. One discussion that sometimes comes up is whether you should use the most recent data as OOS. On one hand, recent data is more representative of the future, so holding it out gives you more confidence that your system will work as you want. On the other hand, that same relevance means it would be excellent for training your system properly.

What we generally do is leave the future data for OOS if the historical data is very long and the market is fairly uniform. If there is not enough data, or the market has changed its behavior recently, use the oldest data as OOS.

Another thing you may want to look at is walk-forward optimization. It is a way to keep optimizing while still checking OOS.
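Walk-forward optimization can be sketched compactly: optimize on each in-sample window, then score the winning parameter on the slice that immediately follows it. A minimal illustration (the toy scoring function stands in for a real backtest metric; all names are my own):

```python
import numpy as np

def walk_forward(returns, param_grid, window, step, score):
    """Walk-forward loop: pick the best parameter on each in-sample window,
    then record how that choice scores on the next (out-of-sample) step."""
    oos_scores = []
    for start in range(0, len(returns) - window - step + 1, step):
        ins = returns[start:start + window]
        oos = returns[start + window:start + window + step]
        best = max(param_grid, key=lambda p: score(ins, p))
        oos_scores.append(score(oos, best))
    return oos_scores

rng = np.random.default_rng(0)
rets = rng.normal(0, 1, 300)
toy_score = lambda data, p: float(np.mean(data) * p)  # stand-in metric
out = walk_forward(rets, [1, 2, 3], window=100, step=50, score=toy_score)
print(len(out))  # 4 out-of-sample windows
```

The chain of `oos_scores` is the honest performance estimate; the in-sample scores are only used to pick parameters.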



 Noah Walsh, Construction Director.

 Friday, December 26, 2014



Has anyone here done any walk-forward optimization? Definitely your testing needs to be done with out-of-sample and in-sample data, but in order to really put a strategy through its paces, to see if you have something that's worth progressing, you need to be aware of and employ WFO procedures. As far as I have seen, this is currently the best way to test the robustness of your strategy. After that you can get into cluster analysis and all that stuff, but don't bother unless you get the WFO pinned down first to see what the robustness numbers look like.

One other thing to be aware of is getting advantageous fills in your test. The historical fill processes in some platforms will not be very accurate on this point, and this has the potential to pull your strat to bits. On this point you will need to pick the test apart trade by trade to be very sure that wysiwyg.

Anyone out there fancy sharing some decent short or medium term strategies with me via pm?

As the song goes, you show me yours and I'll show you mine...!!!!! Anyone?

Happy New Year to all...!!



 Valerii Salov, Director, Quant Risk Management at CME Group

 Friday, December 26, 2014



Ken Duke: "How do you prevent curve fitting?"

The following techniques are applied to prevent overfitting: Bayesian optimization, regularization, cross-validation, early stopping, and pruning.

Their penetration into trading and trading-system development, and the investigation of their applicability, is accelerating. At the same time, markets are a rich source of information for developing these and new methods.

Best Regards,

Valerii
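Of the techniques Valerii lists, cross-validation needs care with market data: ordinary shuffled k-fold leaks future information. A minimal expanding-window variant (names and fold layout are my own sketch):

```python
import numpy as np

def time_series_splits(n, n_folds):
    """Expanding-window cross-validation indices: each fold trains on all
    data strictly before its test block, never after it, so no look-ahead
    leaks into the evaluation."""
    fold = n // (n_folds + 1)
    for k in range(1, n_folds + 1):
        train_idx = np.arange(0, k * fold)
        test_idx = np.arange(k * fold, (k + 1) * fold)
        yield train_idx, test_idx

for tr, te in time_series_splits(100, 4):
    print(len(tr), len(te), bool(tr[-1] < te[0]))
```

Each successive fold trains on a longer history, which mirrors how a live system accumulates data over time.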



 Brian Nichols, Currency and commodities futures trading

 Friday, December 26, 2014



Kevin Davey's relatively recent book "Building Winning Algorithmic Trading Systems" summarizes issues that arise in systems development, including overfitting, and proposes a development methodology that incorporates a number of best practices to deal with them. No surprise that testing with out-of-sample ("OOS") data plays a significant part. (I'm not affiliated with KD or his book; for what it's worth, his methodology simply matches my experience.)

IMO, however, models that depend primarily on parameters derived from raw statistics of price and/or returns are overly subject to interpretation, which is the root cause of overfitting, if not of the plethora of academic articles on the topic.

In an effort to translate some of the art of trading into machine code my own approach lately has been to interpret price action the way we interpret images (from a spatial perspective) or language & music (temporal perspective), reducing it to a hierarchy of patterns as images can be reduced to scale-invariant features, music to a composer's or performer's style or speech to phonemes ("The smallest contrastive [] unit of price action which may bring about a change of meaning", to paraphrase Noam Chomsky). From this perspective models become a sort of grammar that can incorporate context explicitly (e.g., fundamentals, if we don't want to infer them from price behaviour), which in turns allows us to complete sentences (so to speak), for better or worse.

In regard to expectation of future results based on past performance, trading constantly reminds us the future remains unknown except perhaps to clairvoyants and occasionally to very large banks, and money management is what separates experienced traders who make money from experienced traders who lose money. All we can hope is that price will keep doing what it's doing long enough for us to squeeze a trade out of it, to the extent we can deduce what it's actually doing at all.



 Valerii Salov, Director, Quant Risk Management at CME Group

 Friday, December 26, 2014



I have read a few messages in this thread where dividing a data set into two or more parts is treated as the norm. One part is used for parameter optimization, and the "unseen" data is given to the trading system in order to reach a final conclusion about its performance. This is viewed as a method to overcome overfitting. An obvious disadvantage of this approach is the uneconomical use of the available data. However, using the entire data set to optimize parameters is considered a cause of overfitting.

These messages make me think it would surprise their writers to learn that there are methods that overcome overfitting while using the entire data set. One is Bayesian optimization. It has the following attractive properties: sequential learning, use of the entire data set for learning the model parameters, and avoidance of overfitting on small data sets where the number of parameters is close to the number of data points.

Best Regards,

Valerii
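The mechanism Valerii points to, a prior that regularizes the fit while the whole sample is used, can be shown with the posterior mean of a conjugate Bayesian linear regression. A self-contained sketch (the data and parameter names are illustrative, not from any cited paper):

```python
import numpy as np

def bayesian_linear_fit(X, y, alpha=1.0, noise_var=1.0):
    """Posterior mean of Bayesian linear regression with a zero-mean
    Gaussian prior of precision alpha on the weights. The prior shrinks
    the weights, which is what tames overfitting even though the entire
    data set is used for fitting."""
    d = X.shape[1]
    A = X.T @ X / noise_var + alpha * np.eye(d)
    return np.linalg.solve(A, X.T @ y / noise_var)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
true_w = np.zeros(10)
true_w[0] = 2.0                       # only one feature actually matters
y = X @ true_w + rng.normal(scale=0.5, size=50)

w_weak = bayesian_linear_fit(X, y, alpha=1e-6)    # essentially least squares
w_strong = bayesian_linear_fit(X, y, alpha=10.0)  # strong shrinkage
print(np.linalg.norm(w_strong) < np.linalg.norm(w_weak))  # True
```

A larger `alpha` always shrinks the weight norm, trading a little bias for much lower variance, which is the Bayesian answer to the train/OOS split.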



 Volker Knapp, Consultant bei WealthLab

 Saturday, December 27, 2014



I guess whatever I say here has been said before in one way or another. Still, you will see different opinions from different experts (even within my team). You might have a year with little or no profit and still your strategy might be right.

Here the list:

1. Do not use too many parameters or indicators.

2. When optimizing see if the combinations at a certain range are all profitable. I call it Parameter Stability Testing.

3. Use walk forward optimization with the expanding window. WealthLab has it nicely integrated and gives you a better feel for the results you can expect.

4. Get clean data! I have not seen a single data provider with really clean "tradable" data. That is why we created our own data, which produces much more realistic results. I speak from years of experience!

5. Finally, test your strategy on foreign markets. If you worked on US stocks, try it on the German market (DAX or MDAX), or maybe even the Singapore market. Don't expect the results to be just as good, but they should be good. Each market has different characteristics based on local rules and local behavior/habits. But in the end it should work out.

6. I forgot to mention money management settings, one of the things people keep forgetting. I also assume that you are testing on a portfolio of symbols with certain money management settings. One thing I always do is use the Worst Case Scenario, a very useful feature that lets you test your money management method assuming you are the unluckiest guy who always gets the worst trades...

Puh, there is so much more to say about just one question... I'll stop here.

Merry Xmas.

VK



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Saturday, December 27, 2014



By basing the trading mechanism on a set of manual rules that has proved to work reasonably well over a long period, you can basically say that the optimizing part has been "done" manually. Or someone has done it: the method is already defined. As a simple example, if you have the method defined (crossing moving averages), what is left is validation (juggling the parameters of the moving averages). I then run the validation on two years of data up to the current date and do the OOS testing by running a demo account for a month to see if it lives up to expectations.

It is bending the strict rules, but it works for me.



 Volker Knapp, Consultant bei WealthLab

 Saturday, December 27, 2014



I would call or your periods too short. I wouldn't even say that it depends on the time frames you choose. Are you saying you are just using two years of data?



 Volker Knapp, Consultant bei WealthLab

 Saturday, December 27, 2014



wanted to say: "all of your periods" ....



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Saturday, December 27, 2014



The method is based on Daily, 4H, 1H, 5M, and 1M data.

The basic idea is buying dips in an uptrend and selling tops in a downtrend. Daily, 4H and 1H are used to determine the trend; 5M and 1M to determine the entry.

Yes, 2 years is plenty. It is a multicurrency EA, and it uses different parameter settings for short and long trades and, in some instances, more than one set of rules for a specific forex pair.

@Volker

Stating that 2 years is too short implies that it is possible to create an EA that can adapt to very different market conditions: the Holy Grail. I do not agree. I adapt to varying market conditions in 3 ways: by incorporating volatility in the code, by rerunning the validation once a month, and by continuously evolving the EA.
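Ingvar's "trend from the higher timeframe, entry from the lower" rule can be illustrated with a toy signal. This is my own minimal sketch, not his EA; the lookbacks and dip threshold are invented:

```python
import numpy as np

def dip_buy_signal(daily_close, intraday_close, trend_lookback=50, dip_pct=0.01):
    """Toy 'buy dips in an uptrend': the daily series defines the trend
    (last price above its moving average), the intraday series times the
    entry (a pullback of dip_pct from the recent intraday high)."""
    uptrend = daily_close[-1] > np.mean(daily_close[-trend_lookback:])
    recent_high = np.max(intraday_close[-20:])
    dipped = intraday_close[-1] <= recent_high * (1 - dip_pct)
    return bool(uptrend and dipped)

daily = np.linspace(90, 110, 200)  # steadily rising daily closes
intraday = np.concatenate([np.linspace(108, 110, 19), [108.5]])  # pullback
print(dip_buy_signal(daily, intraday))  # True
```

Only two free parameters here, which is in the spirit of the thread: fewer degrees of freedom, less room to curve-fit.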



 Volker Knapp, Consultant bei WealthLab

 Saturday, December 27, 2014



EA?



 Marc Verleysen, founder at TSA-Europe -systematic trading

 Saturday, December 27, 2014



"How do you prevent curve fitting ?"

By allowing yourself to fail.

We use automated strategies to get rid of the human emotions of fear and greed. Those emotions, however, only come into play once we are already trading.

Another human emotion is PRIDE. If you think up a possible strategy and, out of pride, cannot accept failure on your part, you will start optimizing and curve fitting a (failing) strategy just to make it work (usually only in the short run) rather than admit you got it wrong. If the basic framework does not work well, accept that your idea is incorrect and move on to another one. Don't fiddle with parameters until you come up with the "right" ones.

As someone once said (I think it was Newton): "I have not failed, I have discovered 100 ways how not to do it."

kind regards



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Saturday, December 27, 2014



EA: Expert Advisor. A program written in a C-like language specifically for the MetaTrader platform; a "robot". Excellent platform, especially the new MT5.



 Volker Knapp, Consultant bei WealthLab

 Saturday, December 27, 2014



If only I knew when there is an uptrend and when there is a downtrend. That alone would be good enough for me. I wonder how your EA was doing on USD-RUB lately?



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Saturday, December 27, 2014



@Volker

:-)

Knowing the trend: it is "unknowable". I have not checked it, but I think you can get it right more than 50% of the time, and with a good profit/loss ratio that will be enough.

I have not tried to model the USD-RUB but I could give it a try later. Set OOS date at Sept. 1 2014?



 Valerii Salov, Director, Quant Risk Management at CME Group

 Saturday, December 27, 2014



Marc Verleysen: "As someone once said (I think it was Newton) :" i have not failed, I have discovered 100 ways how not to do it".

Thomas Alva Edison is cited for saying: "I have not failed. I've just found 10,000 ways that won't work".

Best Regards,

Valerii



 Marc Verleysen, founder at TSA-Europe -systematic trading

 Saturday, December 27, 2014



Thanks Valerii. Getting older and memory does not get any better :-)

But I hope people understood what I was trying to convey here.



 Marc Verleysen, founder at TSA-Europe -systematic trading

 Saturday, December 27, 2014



@Volker

"If only I would know when there is an uptrend and when there is a downtrend. That alone would be good enough for me"

Our trend model (daily data analysis) has been hitting home about 75% of the time in EURUSD since its inception (the model was created in 2003). We get even better odds on stock market indices. So getting the trend right is not the big question. The key issue is whether one has the guts to ride it (and this is often more difficult if one is too close to the market).



 Andrey Gorshkov, Lead Quantitative Researcher, C++ Developer

 Saturday, December 27, 2014



My opinion is that we should NOT use OOS. I suggest adding some noise to the prices instead, and, of course, varying the parameters as well.

Why? Because leaving too much data for OOS makes us miss the most recent events and thus fit to an "old" market, whereas holding out only a little OOS data creates an illusion of safety.

Which kind of noise to add, and how much, depends on the algorithm type. Adding a mean-reverting process (e.g. Ornstein-Uhlenbeck) as noise can even give extra profit to a reversion-type strategy, so the noise should be relevant.

One more thing: stress tests. The idea is the same as with noise, but this time you take bigger price shifts and measure not profit stability but how the shifts influence your strategy.
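Andrey's noise-injection idea can be sketched directly: perturb the price path with a discretized mean-reverting (Ornstein-Uhlenbeck) process and re-run the backtest on many perturbed paths. A minimal version, with invented parameter values:

```python
import numpy as np

def add_ou_noise(prices, theta=0.1, sigma=0.5, seed=0):
    """Perturb a price path with discretized Ornstein-Uhlenbeck noise
    (mean-reverting toward zero). Re-running a backtest on many perturbed
    paths shows whether results survive small distortions or were fitted
    to one exact history."""
    rng = np.random.default_rng(seed)
    noise = np.zeros(len(prices))
    for t in range(1, len(prices)):
        noise[t] = noise[t - 1] - theta * noise[t - 1] + sigma * rng.normal()
    return prices + noise

path = np.cumsum(np.random.default_rng(1).normal(size=500)) + 100
perturbed = add_ou_noise(path)
print(perturbed.shape, bool(np.any(perturbed != path)))
```

Varying `seed` yields an ensemble of distorted histories; a robust strategy should keep roughly the same statistics across the ensemble, while a curve-fit one falls apart.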



 Yoshiharu (Josh) Sato, C++ Algo Quant Developer

 Saturday, December 27, 2014



I believe my previous thread on the same topic would be informative for you all.



Ways to mitigate curvefitting in trading algorithm development


https://www.linkedin.com/groupItem?view=&gid=1813979&type=member&item=5934034290051424259



Best regards,


Josh



 Rob Terpilowski, Software Architect

 Saturday, December 27, 2014



@Valerii do you have any white papers or books that you could recommend on Bayesian Optimization in relation to trading strategy development and optimization?



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Saturday, December 27, 2014



@Andrey

Are you saying that after you have optimized up to the current date you start trading with real money? Or do you first run it on a demo account to see how it behaves in "real life"?

If the latter, then it is basically the same as leaving an OOS period.



 Graeme Smith, Investment Manager at The Tourists Portfolio

 Saturday, December 27, 2014



I think one of the big difficulties with financial-market models is that the markets are constantly evolving. This means the OOS can become just a sample of one, and thereby insufficient (and insignificant) to confirm or contradict a model.



 Noah Walsh, Construction Director.

 Saturday, December 27, 2014



Ingvar & Volker: it would be interesting to see how someone would get on trying to trade USDRUB. I looked at it a few months ago but found it almost untradeable from a retail standpoint, purely due to the massive spread.

Well said, Marc!

I have found it always best to do a lot of what I call bar browsing. This is basically a whole load of screen time, just watching the world go by and absorbing what's going on in front of you. While a longer and manual process, it is a far healthier way to determine whether you have something that works well than just trying to optimize the snot out of it, hoping to God it works. If after a good amount of browsing you then decide to let the optimizer do some work, let it be minimal. Remember, the less optimizing and mucking about you do, the better. Do not allow yourself to be seduced by the optimizer on your platform.

The folks over at NinjaTrader are on record as saying that while the WFO is probably the most important tool in strategy development, it is also the least understood. I use NT and have found it to be an excellent platform for development and trading.



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Saturday, December 27, 2014



@Graeme

That is true. But: if I have an algorithm that shows a reasonably straight equity curve over 2 years of history, I judge that it is capable of handling some variation in the market, and I take a chance on sticking with it for the near future. I then rerun the optimization at regular intervals and keep a keen eye on the performance. That takes computing power, since I am running approx 30 forex pairs and several "variations" on each pair. I use 5 Windows machines (3 local and 2 server) to run optimization agents. Each pair takes approx a day to run, so the system runs more or less 24/7 to get a refresh run for each pair once a month. With MetaTrader 5 you can buy this computing power very cheaply on the net. Unfortunately I discovered a bug with this method, so currently I am stuck with my own computer capacity. When that gets fixed I can reduce this time by a factor of 5-10 for about $50 a month.



 Noah Walsh, Construction Director.

 Saturday, December 27, 2014



Ingvar, for your optimization runs, how big are your in-sample and out-of-sample data sets in terms of days? Do you optimize right up to the latest data/day?

Also, how often do you think is best to re-optimize? I have heard of some people who do this on a daily basis :-)



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Saturday, December 27, 2014



@Noah

I use Daily, 4H, 1H, 5M, and 1M data starting at 2013-01-01. Yes, I optimize up to today, or to within a couple of days. As I mentioned above, each optimization for a pair takes around a full day, and since my system runs about 30 pairs, reoptimization happens approx once a month. I do not think there can be any fixed rule for how often to rerun optimization; it all depends on how "robust" the method is. It also depends on how long your optimization period is. If your optimization period is 2 years, one day or a couple of days at the end of the period will hardly influence the result. It is even doubtful whether a month will make much of a difference, but I do it anyway. I do not use an OOS period at all, other than running the system on a demo account for a while to see that it behaves in "real time".



 Joe Duffy, Principal at SabioTrade Inc.

 Saturday, December 27, 2014



Most of the discussion seems to be centered around trading one market. It is frankly not a difficult task to create a set of algorithmic rules to fit one market, at least historically. Take those exact same rules and exact same parameters and run them on, say, 500 different stocks over a long test period of decades: if 95%+ of the stocks are winners, and every year is a winner, we would be comfortable saying we have not over-optimized. Always open to hearing why we might be wrong, though.
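Joe's cross-market test reduces to running one fixed rule set over many symbols and counting winners. A toy sketch (the momentum "backtest" and all names here are invented placeholders for a real backtest engine):

```python
import numpy as np

def cross_market_hit_rate(backtest, symbol_data, params):
    """Run one fixed rule set with one fixed parameter vector across many
    symbols and report the fraction that are profitable. A high fraction
    (Joe's 95%+ bar) is evidence the rules were not fitted to one market."""
    pnls = [backtest(data, params) for data in symbol_data.values()]
    return sum(p > 0 for p in pnls) / len(pnls)

# toy backtest: follow the sign of yesterday's return
toy_backtest = lambda rets, p: float(np.sum(np.sign(rets[:-1]) * rets[1:]))
rng = np.random.default_rng(2)
data = {f"SYM{i}": rng.normal(0.0, 1.0, 250) for i in range(20)}
rate = cross_market_hit_rate(toy_backtest, data, params=None)
print(0.0 <= rate <= 1.0)  # True
```

On random data the hit rate hovers near 50%; it is the gap above that baseline, sustained across symbols and years, that matters.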



 Volker Knapp, Consultant bei WealthLab

 Saturday, December 27, 2014



@Marc

Very interesting results. It wasn't clear when you really started trading the various strategies. Also, I wonder why you aren't using futures data for the indices? I remember, in the back of my head, that trends were easy to find in the indices themselves, but when I wanted to convert that to the futures it failed. Some strategies aren't updated for this year.

@Noah

I haven't even looked into where to trade USDRUB forex. I haven't tested it and I really have no interest. It was just a question for Ingvar.

@Ingvar

Since you offered, stop optimizing on 1 August 2014. I know the results will be hypothetical, since the spread seems to be too high to make it really tradable.

I also took the chance to look at your website. I haven't figured out your opinion on climate change, but I did see your blog from 2008. How did that one end? I couldn't find it either.

... and I still think that optimizing on a short period of time (like 2 years) and then applying the result to whatever period follows is a form of optimization, or rather picking a lucky number. Why should the market behave for another day like it did the past 2 years?

@Graeme

I don't believe in measurable cycles, but I believe that markets go through various phases and that those repeat themselves in different forms and shapes. That is why I believe WFO with an expanding window is the way to go: that way you cover as many market conditions as possible. The goal is to not lose any money when conditions are unfavorable.

@ Joe

I was not talking about single markets; for me it is always a group of symbols or a portfolio. One of the reasons why I love Wealth-Lab so much. I kind of agree with what you say; still, one could use too many indicators and parameters to achieve exactly this goal.



 Noah Walsh, Construction Director.

 Saturday, December 27, 2014



Ingvar, very interesting. So when you run your final setup on a demo, provided you are still happy with how it performs, how long will you stay on demo before going to a real-money account?



 Valerii Salov, Director, Quant Risk Management at CME Group

 Saturday, December 27, 2014



Rob Terpilowski,



"Valerii do you have any white papers or books that you could recommend on Bayesian Optimization in relation to trading strategy development and optimization?"



A friendly introduction to Bayesian optimization and other machine learning techniques, with multiple examples, illustrations, and literature references, is Bishop, Christopher M. Pattern Recognition and Machine Learning. New York: Springer, 2006. It does not consider trading applications, but the ideas carry over.



Two examples related to trading, where Bayesian inference and optimization were applied, are a paper and patent:



Bai-ling Zhang, Richard Coggins, Marwan Anwar Jabri, Senior Member, Dominik Dersch, Barry Flower. "Multiresolution forecasting for futures trading using wavelet decompositions", IEEE Transactions on Neural Networks, Vol. 12, No. 4, July 2001, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.454.3205.



Ferris, Gavin. "Method Of Lowering The Computational Overhead Involved In Money Management For Systematic Multi-Strategy Hedge Funds.", http://www.google.com/patents/US20080097884.



The ideas described in this discussion thread, in my opinion, mainly relate to walk-forward analysis, out-of-sample testing, or double-blind testing. Walk-forward tests and analysis applied to the development of trading systems and the selection of robust candidates were greatly popularized by Robert Pardo, "Design, Testing, and Optimization of Trading Systems", New York: John Wiley and Sons, 1992 (after 2008, this counts as the first edition). To some extent, in terms of trading applications, he can be considered the author of the method. I "graduated from this school" at the beginning of the 1990s. I have great respect for Pardo's books and studied the first edition, which we named the "Black Bible" in a team developing robot-trader software; my copy has many pen and pencil marks. Bob and I maintain friendly relations.



If you want to go beyond walk-forward analysis and extend the scope of methods potentially interesting for fighting the overfitting problem, then the list of techniques in one of my previous messages might serve as a guide. You should be prepared that adapting them to trading will require thinking and working.



Best Regards,


Valerii



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Saturday, December 27, 2014



@Noah

About a month. Like I said before, this is a work in progress where changes and improvements are still being made. The total is around 4,300 lines of code (including comments). The latest addition is a "variation" doing trailing stops, plus code to handle "add-on" positions. Doing optimization now and will start a demo run when it finishes; that will take some time. Contemplating an "anti-martingale" module.



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Saturday, December 27, 2014



@Volker.

August 1. See if I can get some time to do it.

"Climate Change", "Global Warming", "Climate Disruption". Biggest scam ever. I do not post anything about it any longer; there are lots of much better sites worldwide. Same with the blog. Concentrating on EA development.

"Why should the market behave for another day like it did the past 2 years?"

Obviously it does, to some degree; otherwise I would not get a reasonably straight rising equity curve over 2 years. It behaves basically the same in the first year and the second. Since there are traders around doing well using various "patterns", that also tells you it does.

Some forex pairs do it more than others. Curve fitting is a real and serious problem when trying to use neural nets and genetic algorithms. Been there, done that: too many degrees of freedom. Any net using more than 4 inputs is very prone to curve fitting. The net I developed had 2 inputs and worked pretty well until one of the inputs was discontinued.

Use of "patterns" in a wider sense is what many traders make money on. The specific pattern I use is "buy dips in an uptrend and...". Of course it is not foolproof, but with the right hit percentage and profit/loss ratio it will make money. Rules and patterns do not have as many degrees of freedom as neural nets and genetic algorithms.



 Federico M. Dominguez, Founder & Boardmember at GAANNA

 Sunday, December 28, 2014



Besides the obvious clean data, orthodox backtesting, OOS and forward testing, a sound strategy has to be mature; that is, commit money only once several significant periods of time have passed in proper paper trading. IMHO the most important thing to consider in order to prevent data snooping or curve fitting is chaos.

How do you introduce chaos and chance into your models? How does my "animal" have to differ in order to compete in an environment full of run-of-the-mill systems resting on the same run-of-the-mill technical indicators?

Systems prey on one another: you rake in the blood of greedy people/systems and the blood of fearful people/systems that interact with each other.

The simplest systems are the most profitable, but the most innovative outperform. Ask your child how he would beat a cheetah; clean, novel, out-of-the-field minds provide fabulous feedback. Can't say more than #BETHEHUNTER, #DONTBETHEPREY, #GAANNA



 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Sunday, December 28, 2014



@Volker

I cannot model USD-RUB. Alpari has removed it from the MT5 platform. I know they stopped accepting new orders some time ago and only allowed closes.


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Sunday, December 28, 2014



Ask yourself why Fibonacci levels work. There is absolutely no rational reason why they should. So why do they work? Lots of traders use them, so they become a self-fulfilling prophecy. Greed, fear, and psychology are powerful forces. Why do support and resistance work? Same reason. And key levels. And pivots. Are there no "real" factors? Yes, there are some fundamentals too. What should small-time traders like me stay out of? High-frequency trading. Try riding the back of the big movers.


photo

 Marc Verleysen, founder at TSA-Europe -systematic trading

 Sunday, December 28, 2014



@volker

If interested, let's discuss this outside this thread. Contact details on website

kind regards


photo

 Volker Knapp, Consultant bei WealthLab

 Sunday, December 28, 2014



@Ingvar

You cannot even get the historical data? What kind of service is it that changes which markets are tradable? That sounds very unprofessional and unreliable. I would rethink all of MT5 and Alpari (whatever they are).


photo

 Volker Knapp, Consultant bei WealthLab

 Sunday, December 28, 2014



@Marc

Yes interested...will do.


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Sunday, December 28, 2014



@Volker

First time ever I have seen it happen. I have traded live with Alpari and MT4/MT5 for a couple of years and never had any problems. USD-RUB is a very special case.


photo

 Volker Knapp, Consultant bei WealthLab

 Sunday, December 28, 2014



@Marc

Not even data available?


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Sunday, December 28, 2014



@Volker

No data. Since USDRUB is not even in the list, I cannot select it, so I cannot get at the historical data. Getting historical data from "outside" does not help either; it is integrated.


photo

 Graeme Smith, Investment Manager at The Tourists Portfolio

 Sunday, December 28, 2014



I'm probably developing different models than a lot of people on this thread. 15 years ago I would have called my models short term, they try to predict price a month or a quarter ahead. Nowadays with HFT, even a month can seem like an eon, so I will call my strategies medium term.

I use all of the data to create my final model; I can't see any reason anyone wouldn't. But prior to that I run two in-sample/out-of-sample tests. The first divides companies randomly into IS and OOS. This gives an indication of whether curve fitting is happening. Generally my models have similar statistics in the IS and OOS. In fact I occasionally have better stats in the OOS, which is a pretty good indication that I'm not over-fitting.

The second test is a lot more difficult, and a lot less promising. I divide the data based on date: IS is pre-2008, OOS is post-2008. I chose this date because it stress-tests a model (before 2008, I used 2000 as the cutoff for the same reason). When stress-testing models like this, no model has performed better than 50% of its IS results, and some promising models have OOS performance close to zero.
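Graeme's two splits might look like this in outline (a hypothetical record layout with `company` and `year` keys, not his actual code):

```python
import random

def split_random(records, oos_frac=0.3, seed=42):
    """First test: assign whole companies at random to IS or OOS."""
    companies = sorted({r["company"] for r in records})
    rng = random.Random(seed)
    oos_set = set(rng.sample(companies, int(len(companies) * oos_frac)))
    is_data = [r for r in records if r["company"] not in oos_set]
    oos_data = [r for r in records if r["company"] in oos_set]
    return is_data, oos_data

def split_by_date(records, cutoff_year=2008):
    """Second, harsher test: train pre-cutoff, validate post-cutoff."""
    is_data = [r for r in records if r["year"] < cutoff_year]
    oos_data = [r for r in records if r["year"] >= cutoff_year]
    return is_data, oos_data
```

The random split detects fitting to particular companies; only the date split detects fitting to a particular market regime, which is why its results are so much harsher.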


photo

 Volker Knapp, Consultant bei WealthLab

 Sunday, December 28, 2014



@Marc

Time to move on. ;) I would find it highly awkward to be so dependent on things like that. I believe you could still test it if you had ever downloaded the data, right?


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Sunday, December 28, 2014



@Volker

I guess your last comment was actually meant for me.

Probably.


photo

 Graeme Smith, Investment Manager at The Tourists Portfolio

 Sunday, December 28, 2014



@Ingvar, I'd be quite worried about a period of only two years. I deal mostly in equity markets, which have been in a bull market since 2009, so I am likely to be sceptical of backtests/optimisations that don't include 2008. Admittedly 2008 was mostly remarkable in equities, but some very strange things happened in currencies also; just look at my Aussie dollar.


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Sunday, December 28, 2014



@Graeme, I am not too worried. I only trade a specific pattern (setup) that should get me into trending pairs, and I only bet 1% of my free margin on each setup. I trade approx. 25 pairs and the maximum number of concurrent trades is around 6. There are also variations on each pair, so the total number of trade possibilities is currently 97. Each setup produced between 20 and 70 trades over the one-year optimization period (not many). All have a point where the SL is moved to break even, and some use trailing stops. In order to get into a trade, a number of criteria have to be fulfilled, so there are not many trades per forex pair. This is compensated by trading many pairs, which also gives a better risk spread. Since I don't have a supercomputer, each pair takes something like 12-14 hours to optimize (8 runs), so doubling that time by optimizing on 2 years is really not realistic.
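Betting 1% of free margin per setup is standard fixed-fractional sizing; a minimal sketch with hypothetical parameter names (the pip value and stop distance would come from the broker and the specific setup):

```python
def position_size(free_margin, stop_distance, pip_value, risk_frac=0.01):
    """Lots sized so that hitting the stop loses ~risk_frac of free margin.

    stop_distance: distance to the stop loss, in pips
    pip_value:     account-currency value of one pip for one lot
    """
    risk_amount = free_margin * risk_frac      # e.g. 1% of free margin
    loss_per_lot = stop_distance * pip_value   # loss at the stop for one lot
    return risk_amount / loss_per_lot
```

For example, with 10,000 of free margin, a 50-pip stop, and a pip value of 10 per lot, the 1% rule gives a 0.2-lot position.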


photo

 James Tann, Commodity Trading Advisor, Commodity Pool

 Tuesday, December 30, 2014



The only thing missing from the discussion above, from my point of view, is a major sensitivity study applying Monte Carlo methods to the out-of-sample period.

I agree that more parameters mean more potential for "curve fitting", but it also depends on what those parameters are for.

As an example, I never use any testing that relies on limit orders. I'm paranoid that I will not get a fill in real time. I always use market orders at the start of the bar and then apply a significant amount of slippage to model the fills.

Here is a piece of advice from the last 10 years: when trading at the open, half a tick of slippage for the E-mini ES has proven to be just about right. That is, 50% of the time you get the open as your fill and 50% of the time you get one tick worse. It is actually a little better than this, but I want to be conservative here. This is on top of any commissions and exchange fees. I see many people selling "crap" that assumes limit orders and no slippage. Beware!

Now back to the parameters. If you have 20 input parameters and you are optimizing them in backtesting, the first reaction would be: great chance of curve-fitted results. But let's say only three parameters are for the key oscillator and two more for the trend, while the other 15 are for stops, ratchets, profit targets, and money management. If the original 5 parameters perform acceptably on the out-of-sample data but the volatility is too high (big drawdowns to live through), and the remaining parameters are there to mitigate that, then my view of the parameter count may change.
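James's half-tick rule of thumb is easy to bolt onto a backtest; a sketch under his stated assumption (50% fill at the open, 50% one tick worse; `simulated_fill` is a hypothetical helper, not his software):

```python
import random

TICK = 0.25  # E-mini S&P 500 tick size

def simulated_fill(open_price, side, rng):
    """Half the time the fill is the open, half the time one tick worse
    (a buy fills higher, a sell fills lower)."""
    worse = TICK if side == "buy" else -TICK
    return open_price + (worse if rng.random() < 0.5 else 0.0)

# Averaged over many simulated entries, slippage converges to half a tick.
rng = random.Random(0)
fills = [simulated_fill(2000.00, "buy", rng) for _ in range(10000)]
avg_slip = sum(f - 2000.00 for f in fills) / len(fills)
```

The same helper can be reused on the exit side, so a round trip carries roughly one tick of modeled slippage on top of commissions and fees.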


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Tuesday, December 30, 2014



@James

On backtesting: I agree with your approach of dividing parameters in that manner, like "principals" and "moderators".


photo

 Graeme Smith, Investment Manager at The Tourists Portfolio

 Tuesday, December 30, 2014



The number of inputs you can include depends on the size of your data. I am happy having several hundred inputs into my equity model since I have tens of millions of data points to optimise it against.

I'm having more problems trying to develop a market-timing model. My first sensible idea was to use 12 different markets to increase my sample size. This seemed like a pretty good starting point, until the worst possible outcome occurred. There is a lot more data available for the US. I wasn't developing a multi-country model; I was trying to develop a robust model.


photo

 Graeme Smith, Investment Manager at The Tourists Portfolio

 Tuesday, December 30, 2014



I was trying to develop a robust model that I could apply to the US. As it turned out, I couldn't. In the twelve-country model, rather awkwardly, the US turned out to be the exception to the rule. It was the one country that lost money based on the rules that had been profitable for the other 11 countries.


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Tuesday, December 30, 2014



@Graeme

Do I understand you correctly that you are trying to develop a model that handles 12 different markets, and within each market many different securities? The Holy Grail?

What tools do you use? Do you have a Cray supercomputer or something equivalent?

Does "input" mean the number of securities? The restriction on the number of inputs mentioned in this context is not the number of securities; it is the number of "parameters" used, for example the period of a moving average and the other parameters involved in the trading algorithm.


photo

 Noah Walsh, Construction Director.

 Tuesday, December 30, 2014



USDRUB: yes, I just had a quick look at my NinjaTrader/FXCM USDRUB chart. The last bit of data they are showing was on the evening of 16th December, before they discontinued it too.


photo

 James Tann, Commodity Trading Advisor, Commodity Pool

 Tuesday, December 30, 2014



Ken, good question. I never do that. I have a standard set of stops and profit targets that I have used for 10 years; the same ones are used in my real trading. The actual proof of the concept is that no degradation has been seen between the backtests and real-time trading.


photo

 Valerii Salov, Director, Quant Risk Management at CME Group

 Tuesday, December 30, 2014



James Tann: "I'm paranoid that I will not get a fill in real time. I always use market orders at the start of the bar and then use a significant amount of slippage to manage the fills."

Being conservative often pays off. You are likely talking about testing rather than real trading. The mentioned S&P 500 E-mini contract, if it is a nearby one with at least three weeks before expiration, is one of the most liquid futures contracts. Execution of an order depends on your broker and/or access (I assume you are a retail trader trading 1 - 3 contracts) and a few other factors. Practice shows that a "stop-limit" entry order placed by a retail trader with online access at least two to three minutes before the actual event works well with the stop and limit prices one tick apart. To be concrete: if there is an intention to sell short (entry) at 2076.50 when the price is around 2078.00, then there is enough time to place a sell stop-limit with the stop price 2076.75 and limit price 2076.50. This will normally put you in during active hours (08:30 - 15:00). During the night session (17:00 - 08:30), using market orders even on the E-mini can create unreasonable slippage. The slippage will be less if the order is placed many minutes before the actual event. Of course, this is the market, and there are times when events develop very fast. For a backtesting system it is better to trace the size (volume) associated with prices and times around the execution point; this gives additional information about the reliability of a fill.

Best Regards,

Valerii
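Valerii's stop-limit placement, with the stop and limit one tick apart, can be checked against a tick stream with a toy simulation (a sketch of the triggering logic only, not real exchange matching):

```python
def sell_stop_limit(ticks, stop, limit):
    """Simulate a resting sell stop-limit entry: once any tick trades at
    or below the stop price, the order becomes a sell limit that can
    only fill at or above the limit price."""
    triggered = False
    for price in ticks:
        if price <= stop:
            triggered = True            # stop touched: now a limit order
        if triggered and price >= limit:
            return price                # filled at or above the limit
    return None                          # price gapped through: no fill
```

The second test case below shows the trade-off Valerii accepts: if the market gaps straight through both prices, the limit protects against slippage but the order goes unfilled.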


photo

 Oscar Cartaya, Insurance Med. Director

 Tuesday, December 30, 2014



I would like to add a brief comment on the use of NNs. You have to do more than split the data into several groupings and check how the results hold up on the portion of the data unused in training. Many other conditions affect curve fitting: for example, the number of iterations you allow the NN to run during training (longer runs increase curve fitting) and the number of hidden nodes used (more nodes, more likelihood of curve fitting). In general, the more exhaustive the parameter setup (number of iterations, number of nodes, number of operations allowed per iteration, number of constants allowed, etc.), the greater the likelihood of curve fitting. The general idea in using NNs is the KISS principle. These are general rules only, and they are applicable only to the operation of NNs, not to the wide world of trading.
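Oscar's point about limiting training iterations is usually implemented as early stopping against a validation set; a generic sketch (`train_step` and `val_loss` are hypothetical callbacks standing in for any trainable model):

```python
def train_with_early_stopping(train_step, val_loss, max_iters=1000, patience=20):
    """Generic early stopping: halt once the validation loss has not
    improved for `patience` iterations; report the best iteration seen.

    train_step(i): advances training by one iteration
    val_loss():    scores the current model on a held-out set
    """
    best, best_iter = float("inf"), 0
    for i in range(max_iters):
        train_step(i)
        loss = val_loss()
        if loss < best:
            best, best_iter = loss, i    # remember the best checkpoint
        elif i - best_iter >= patience:
            break                        # validation stopped improving
    return best_iter, best
```

Training error keeps falling as the net memorizes the training set, but validation error eventually turns back up; stopping at that turn is what caps the curve fitting.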


photo

 Valerii Salov, Director, Quant Risk Management at CME Group

 Tuesday, December 30, 2014



Oscar Cartaya: "You have to do more than split the data into several groupings and see how the results work in a portion of the data unused in the training portion."

Robert Pardo popularized this technique under the names "walk-forward test", "walk-forward analysis", "out-of-sample testing", and "double-blind testing". It is a general technique, applicable not only to neural network training. In my list posted earlier in this thread a more generic term is used: "cross-validation". Cross-validation is one of the methods to combat overfitting, but certainly not the only one. Many recommendations in this thread reduce to walk-forward analysis.

"...the number of iterations you allow the NN to run during the training period (large runs increase curve fitting)"

This is called "early stopping" and assumes the development of "early-stopping rules". This technique is also in my list, and it is likewise not specific to neural networks.

"... the number of hidden nodes used (more nodes more likelihood of curve fitting."

The number of nodes and their connections (the topology) of a neural network characterize its complexity. It is like the degree of a polynomial fitted to a set of data points. An interpolating polynomial fits all N points exactly, "predicting" them perfectly, but fluctuates considerably between the points, showing poor generalization properties: curve overfitting. It is known that for polynomials the number of data points must considerably exceed the degree of the polynomial.
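The polynomial analogy can be made concrete with Runge's classic example: an interpolating polynomial through 11 equally spaced samples of 1/(1+25x²) reproduces every node exactly yet swings far from the function between the nodes (a pure-Python sketch using Lagrange interpolation):

```python
def lagrange(xs, ys, x):
    """Evaluate the unique interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # Lagrange basis factor
        total += term
    return total

# Runge's function sampled at 11 equally spaced points on [-1, 1]
f = lambda x: 1.0 / (1.0 + 25.0 * x * x)
xs = [-1.0 + 0.2 * i for i in range(11)]
ys = [f(x) for x in xs]

fit_error = abs(lagrange(xs, ys, xs[3]) - ys[3])   # exact at the nodes
gap_error = abs(lagrange(xs, ys, 0.9) - f(0.9))    # wild between them
```

At a node the error is zero (a perfect "in-sample" fit), while between the outer nodes the degree-10 interpolant misses the true value by more than 1.0: the polynomial version of an over-fitted backtest.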

However, it is also well known that regularization methods and Bayesian optimization are good at solving the latter problem. In other words, a data set can be equal in size to the number of optimized parameters, or even smaller (!), and the system may still generate reasonable results. Regularization and Bayesian optimization are in my list posted earlier.

This is why it is not quite exact (in its soft form) to say "more nodes, more likelihood of overfitting". For some training methods it is so. Others can select the complexity of a model from consideration of the problem rather than the size of the available data set; some view this property as an advantage of the method.

"The general idea in using NNs is the KISS principle."

This is an example confirming my old axiom: in order to become wrong, it is enough just to say or write something. This axiom fully applies to my own words too.

The biggest problem in trading is not how to train a network or optimize another method or strategy. It is known that in the limit (as the data set grows) all methods and training/optimization variations will bring us to the same set of optimal parameters. The complication in trading is that this set is not constant. It is likely not the case that one has a small sample from a fixed general population; it is likely that the general population itself changes, and an alternative conclusion would be that it does not exist at all. Given your last sentence, "These are general rules only, and they are applicable only to the operation of NNs, not to the wide world of trading", I would feel comfortable with it.

This is one of the reasons why I pressed "Like" for Graeme Smith stating "I think one of the big difficulties in financial market models is that the markets are constantly evolving". I (and others elsewhere) have expressed this in different forms, but emphasizing it one more time within a discussion of optimization, training, and overfitting makes sense to me, because the words "over fitting" hide the actual problem. If it were merely overfitting, then, as Valerii Salov has written: "The following techniques are applied to prevent over fitting: Bayesian optimization, regularization, cross-validation, early stopping, pruning." The problem is deeper. It is still possible to approach it with the KISS principle, with the market itself as a tool for finding that answer.

Best Regards,

Valerii


photo

 Oscar Cartaya, Insurance Med. Director

 Tuesday, December 30, 2014



@ Valeri, thank you. I assume you know what the KISS principle means: "Keep It Simple, Stupid." The more complexity you add to the NN the more difficult it is to get a robust result.


photo

 Valerii Salov, Director, Quant Risk Management at CME Group

 Tuesday, December 30, 2014



Oscar,

I assumed that "KISS principle" is "Keep It Short and Simple", "Keep It Simple and Straightforward", or "Keep It Simple, Stupid."

Valerii


photo

 Oscar Cartaya, Insurance Med. Director

 Tuesday, December 30, 2014



My primary experience with the KISS principle came from the military, and it was drilled into my head in an unforgettable way.


photo

 Alex Krishtop, trader, researcher, consultant in forex and futures

 Sunday, January 4, 2015



It's interesting to see that very quickly any discussion of the overoptimization problem turns into a discussion of IS/OOS vs all-data-set backtesting and mathematical methods for working with sufficient/insufficient data sets. I wonder why no one in this thread (if I missed someone's post please accept my apologies, I've only looked through the discussion quickly) bases his argumentation on the fact that any market model should serve as a representation of an underlying market process. Or, rather, we can put it the following way: there are models whose own creator doesn't know why they are profitable, and there are models that do represent something that actually happens in the market. Normally the former include machine learning of all kinds, Bayesian models, AI, neural networks, and similar. The latter are mostly 3-5 lines of code that hit the apple of the eye of a particular market process.

The key problem for both approaches is that the market regime changes from time to time, and the rapid development of electronic trading, along with simplified market access for a very wide audience, makes these changes happen faster. Generally speaking (and very roughly speaking), we could represent the price time series as sequences of "understandable" and "transitional" periods. An "understandable" period is a time of common prosperity; it is when virtually any classical model works if applied to an appropriate time frame (resolution). A "transitional" period is when the very market structure is undergoing changes and the big players prefer to stay away from the market, thereby making the market efficient. During such a period it is impossible to employ any single approach such as trend-following, breakout, or mean reversion, as any of them would suffer from the market's efficiency.

An "understandable" period normally lasts 6-8 years and a "transitional" period lasts 1-2 years. The key to success is the ability to identify the transitional period properly and in as timely a manner as possible. I am not aware of any fully automated system that could do that, since it is definitely not possible by analysing price/volume data alone. It is far easier to do by keeping a hand on the pulse of the markets, and especially on changes in their environment (by environment I mean liquidity, ease of access, infrastructure, and regulations altogether). Therefore, if you understand why your strategy makes money during a particular period of time, you can quite safely make an educated guess about what to do with exposure, and perhaps about strategy "rotation" depending on conditions.

Regardless of the model, you wouldn't have been able to model USDRUB properly lately; on the other hand, you didn't need any model to trade such a unique opportunity. So this is a great example of whether you understand what is going on in the market or not. Indeed, you only needed to exit a couple of days before December 15 to be safe.


photo

 Oscar Cartaya, Insurance Med. Director

 Sunday, January 4, 2015



Hello Alex, yes indeed, you are right to say that when you feed data into an AI engine or an NN, the results are generally not intelligible in terms of a broad understanding of the market. Like everything else, there are degrees to this "solution obscurity", if you wish to call it that. I think that feeding these systems fairly raw data gives very obscure solutions that then have to be processed further. That is fine with me; however, it may not be fine with many others.

An NN or AI never gives you a firm answer; they produce approximations. Remember, what you are really doing is producing a solution for the "training portion" of the curve. The solution can be close to optimal, or perfect if you let the engine run for an unlimited amount of time. But in that case all you are doing is producing a perfect curve fit of the training portion of the curve, which is not useful for trading.

You may ask how an NN-generated solution that appears illogical in terms of your personal market view can nonetheless be useful for trading. My own answer is that the market is a system of infinite complexity, and all attempts to rationalize it into understandable phases or periods produce exactly the same thing the AIs do: approximations, or educated guesses.

This is not a criticism of your post, but when you divide the market into rational and transitional periods, you are just bundling broad characteristics shown by the market into rational groupings that remain, to some degree or other, applicable to broad periods of time. It should also be said that the shorter the time period you use in your trading and your vision, the more variability will be observable within those periods. From my point of view, there is absolutely nothing wrong with applying NN solutions after you filter them through a logical construct as you define it; indeed, doing so oftentimes improves the results. However, using AI or an NN means that the trading depends much more on machine-produced results than on your own educated guesses or emotions.

Let's take the USDRUB pair (I do not trade currencies, by the way). Any analysis will show the USD gaining against the RUB. The causes of this movement are very complex, however simple they may appear, and educated guesses can always be broken by events that are not widely understood. Russia's economy depends on oil, that is true, but it is a vast economy with many complex factors involved, factors that are not all economic in nature and can change rapidly. Educated guesses are OK, but you must maintain a very good handle on multiple issues and data, not all of which are economic in nature, to make these guesses and to trade based on them. This is a LOT of work. It also means that not all educated guesses have the same value; some are much better than others, probably depending on the degree of knowledge and the correctness of the personal vision of a given trader.

What AIs and NNs do is free you from this continual, incessant need to maintain a level of knowledge of all these factors sufficient to make the educated guesses required to trade successfully. This does not mean that AIs or NNs will totally free you from keeping some kind of view of the market, but as far as I am concerned it simplifies the task. They also help moderate the influence of personal emotions on trading.

I hope you take this as my own point of view, not as an absolute denial of your post. Indeed, your method is valid, and it possibly describes the method the vast majority of traders use in one way or another. What I really want to say is that there is room for AIs and NNs in trading, and that using these tools will reduce the amount of work you must do to keep up with the market, and limit to some degree the impact of personal emotions on trading.


photo

 Alex Krishtop, trader, researcher, consultant in forex and futures

 Monday, January 5, 2015



Oscar, the situation with USDRUB is absolutely not complex at all, and oil prices play only a minor role in this process. The corporate debt of our key companies was denominated in USD and EUR, because these companies borrowed in these currencies from foreign banks. This information is freely available to any interested party. Now as the US prohibited certain financial activity with these companies and basically made restructuring of the debt impossible, and the time to pay was approaching, is there any wonder about what happened to the ruble? Do you need to perform any complex analysis to understand it and act accordingly?


photo

 Oscar Cartaya, Insurance Med. Director

 Monday, January 5, 2015



Alex, I think you have boiled the USDRUB situation down to its most basic level. If I am not mistaken, there are political overtones (Ukraine, sanctions) to the present predicament Russian companies are having refinancing their debt. The Ukraine issue is, to say the least, complex and very political in nature; it involves military force (Ukrainian, separatist, and Russian) and geopolitical and military considerations. Without the Ukraine issue there would be no sanctions, and without sanctions there would be no specter of default hanging over Russian companies. And finally, since oil has been the cash engine fueling the Russian economy and financing its armed forces' resurgence, the drastic drop in the oil price has to be taken into consideration as well.

I do not think the USDRUB pair is an issue that lacks complexity, though you are free to analyze and trade it in any way you wish. I believe the true level of complexity of any issue you (plural you) may wish to analyze is seldom taken into consideration by many traders. Approximation works, Alex, as you (singular) have probably happily found out trading the USDRUB pair.


photo

 Alex Krishtop, trader, researcher, consultant in forex and futures

 Monday, January 5, 2015



Oscar, the reasons you quoted are all correct, but the immediate reason that provoked the rush in the ruble is exactly what I named. The level of complexity required to make proper trading decisions is far lower than that required for politico-socio-psycho-whatever analysis.

Your example sounds like the story of a watch that arrives dead on arrival after being sent via USPS. It is possible to explain it by numerous factors, including weather conditions that caused a thin layer of ice on the pavement (at which point it's tempting to discuss differences in ice thickness depending on temperature), issues with the soles of shoes of a particular make (don't forget to discuss the history of shoemaking here), and a psychological description of the postman, including his biography and that of his relatives, finally concluding that only this great number of factors can explain that the watch broke when the postman dropped the parcel on the slippery ground.

In general, analysing _all_ possible reasons that cause a certain effect leads to frustration. One needs to analyse only the immediate reason for the market process that is to be exploited.


photo

 Oscar Cartaya, Insurance Med. Director

 Monday, January 5, 2015



Alex, this is true. There is a phrase that summarizes the attempt to analyze all possible reasons causing a market move: "analysis paralysis." I could not agree more that a personal approach to trading has to be simplified to a level that allows trading and permits decent results, and that analyzing all the factors that may be involved in a market move is both exhausting and frustrating. This is basically the point I was making in my long post before, which attempted to describe ways of getting to the point where your analysis is valid enough to produce the results you want. My post concentrated on the advantages of NNs and AIs in achieving this; you, on the other hand, use the expertise approach. Reaching the point where your own expert opinion or analysis of a market situation is generally correct and adequate, relying on your experience to simplify the number of factors you take into consideration, is doable. However, this expert approach is not as easy as it may seem.

In my prior long post I indicated that all attempts to simplify the market into manageable portions, where the risk can be more or less easily estimated, are basically a form of classification: grouping common factors into an approximation that will hold for a security or group of securities over a period of time. This is not a totally correct solution, but it is a good enough one to allow profitable trading of the underlying security. Yes, you can use approximations for trading; indeed, I think we all do in one way or another.

However, developing this workable approximation by relying on your expertise is not simple. Legions of failed wannabe traders show how difficult it is to develop the necessary level of expertise. Experience helps a lot in this regard, and yes, there are people around us who seem linked to the market in inexplicable ways that allow them to be successful traders. Whatever works for you is, by definition, OK. No one, certainly not me, is attacking you for explaining the way you trade.

I prefer to approximate the market using NNs or AIs; you prefer to let your knowledge of the market guide you. Both ways are fine as long as they work. The issue is really NOT to be right against all challengers, or to present the ONLY solution to the market; the issue is to make money. My hat is off to you for your ability to simplify issues and for being able to trade successfully. I have my own ways, different from yours, that also work. I can live with this and hope you can as well.


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Tuesday, January 6, 2015



@Oscar

I have experience using NN and AI tools (Bioprofit). I had some success with the e-mini, but I did not have the financial muscle to trade it live. The major input was a rather strange transformation of the VIX; then they changed the way the VIX was calculated, and it did not work well after that. I have switched to the forex market. The tools I used were EOD tools, so they were not what I needed, and I am now using more traditional TA tools. I am a bit curious as to what AI tools you use and what markets/time frames you use them on.


photo

 Oscar Cartaya, Insurance Med. Director

 Tuesday, January 6, 2015



I use Neuroshell and Chaos Hunter in my work. Besides market analysis, I have met people who use this combination for dealing with a variety of issues which have little to do with market trading, for example they may use them to analyze seismic plots in oil exploration.


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Tuesday, January 6, 2015



@Oscar

Thanks

:-)

>they may use them to analyze seismic plots in oil exploration

Same as the developer of Bioprofit; they also make tools for oil prospecting (Biocompsystems).


photo

 Oscar Cartaya, Insurance Med. Director

 Tuesday, January 6, 2015



Ingvar, in my experience it is not the engine or tool you use that creates problems; the issue is the steep learning curve and the process of familiarization with the jillion details not described in the documentation. Keep that in mind with Biocomp as well. Oh, and Ward Systems (the makers of Neuroshell and Chaos Hunter) also has a number of separate products intended for general use which, I believe, allow you to go deeper into the guts of the AI engine (AI Trilogy and Gene Hunter). All the best.


photo

 Oscar Cartaya, Insurance Med. Director

 Tuesday, January 6, 2015



Oh, I forgot to make my disclaimer: I am not related in any way to Ward Systems or their products, except for the fact that I bought them, use them, and like them.


photo

 Ingvar Engelbrecht, CEO, developer, janitor at Nova Data Skr. AB and www.maieutic.com

 Tuesday, January 6, 2015



Oscar :-)

The same disclaimer applies to me. And I like Biocompsystems products, especially "Dakota".


photo

 Ersin D., ICT & Telecom Profession

 Wednesday, January 14, 2015



@Oscar, I think we are looking from different angles, but this is normal; everybody has different opinions.

Curve fitting is one of the most important issues in algorithmic trading, and I've stated before how to reduce it; there are also very good ideas in this topic.

My point about NNs is this: we cannot solve the curve-fitting problem by switching from conventional methods to NNs. NNs also curve fit, and the reasons are usually similar. Some people consider NNs the ultimate method for getting away from these problems. I don't agree with this.

There are of course some advantages to using NNs, but they also bring additional parameters to deal with; depending on the system, they may or may not be worth using. But they are not a complete solution to curve fitting.

Lastly, about the English, French, etc. discussion: every function f(x1, x2, x3, ..., xn) = y has some inputs and produces an output. A neural network is nothing but a function. You can teach anything to NNs, but the input-output relation must be meaningful (it can be extremely complex). There is no such relation in stocks; maybe you find one for 6 months, and then you need to re-train the NNs, just as we do in optimization. The same problem continues, and training NNs is much more complicated than running an optimization.


photo

 Jim Damschroder, Chief Investment Officer at Gravity Capital Partners

 Wednesday, January 14, 2015



It looks like the group is consistently advocating an out-of-sample (OOS) walk-forward backtest. Depending on your strategy, you may also need to check against survivorship bias, which is typically not alleviated by simple OOS, though its impact is usually under 2% and potentially irrelevant. We recently added the out-of-sample walk-forward strategy backtest as an automated feature to our platform, www.gsphere.net, which works at the portfolio level, not for an individual security; for that I believe Tradestation offers this technology.
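As a sketch of the walk-forward mechanics being advocated (the window sizes here are arbitrary placeholders, not a recommendation): each in-sample window is used for optimization, and the strategy is then evaluated only on the out-of-sample window that immediately follows it, rolling forward through the history.

```python
# A minimal walk-forward split sketch: optimize on each in-sample window,
# then evaluate on the out-of-sample window that immediately follows it,
# rolling forward through the available history.
def walk_forward_windows(n_bars, in_sample, out_sample):
    windows = []
    start = 0
    while start + in_sample + out_sample <= n_bars:
        opt = (start, start + in_sample)                           # optimize here
        oos = (start + in_sample, start + in_sample + out_sample)  # test here
        windows.append((opt, oos))
        start += out_sample                                        # roll forward
    return windows

wins = walk_forward_windows(1000, in_sample=500, out_sample=100)
# -> [((0, 500), (500, 600)), ((100, 600), (600, 700)), ...]
```

Stitching the out-of-sample segments together gives one continuous equity curve that was never seen during optimization.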


photo

 Oscar Cartaya, Insurance Med. Director

 Wednesday, January 14, 2015



@Ersin. Yes, you are absolutely correct, we do disagree on very fundamental issues. However, there are a few issues we agree on. We agree that breakdowns of solutions or trading systems happen, sometimes after a shorter period of trading, sometimes after a longer one. You are absolutely correct in this regard. However, old trading systems may recover their trading effectiveness after an unspecified period of time and may be put into active trading again.

We absolutely disagree on the amount of time and effort required to set up NNs and obtain adequate OOS results from them. I think getting acceptable OOS solutions from an NN is not that difficult. I have run multiple NN runs and obtained different sets of acceptable solutions for a single security within a period of several hours. I think the time spent doing this is quite acceptable. Of course, the results have to be worked into a trading system, which takes a bit longer, and tested, which takes much longer, but that is the way it is.

Judging by your comments you appear to spend a significant amount of time tooling and dealing with the internal workings of the NN. You also worry about maintaining a "meaningful relationship" between inputs and outputs in an NN. I have no idea what you mean by a meaningful relationship between inputs and outputs but it sounds like you are trying to fully understand what goes on inside the black box. This seems to me like a significant waste of time.

Finally, you appear to doubt the validity of anything you cannot rationally imagine, express mathematically, and validate by some means. OK, I agree with testing and paper trading new systems before sinking money into them, but I really do not worry much about these other mathematical and rationalization issues. I do not try to shoehorn the market into a math formula, and I do not believe I can understand the detailed workings of something as stupendously complex as the market. I just aim to develop enough knowledge of a few stocks to trade them successfully.


photo

 Avi Messica, Owner at Private Business

 Thursday, January 15, 2015



Google for k-fold cross validation.
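For reference, a minimal k-fold split sketch (note that for market data shuffled folds leak future information into the training set, so a chronological walk-forward split is usually safer):

```python
import numpy as np

# A minimal k-fold cross-validation index generator: shuffle the sample
# indices once, split them into k folds, and use each fold in turn as the
# held-out test set while training on the rest.
def k_fold_indices(n_samples, k, seed=0):
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

splits = list(k_fold_indices(100, k=5))
# Each sample appears in exactly one test fold across the 5 splits.
```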


photo

 Kevin Saunders, Director at Tribelet Capital Management Pty Ltd and Investment Manager at Non Correlated Capital Pty Ltd

 Thursday, January 15, 2015



I use a 2-stage OOS process, including running rulesets across several different markets. Then a 1000+ iteration Monte Carlo is useful, where you:

1. Randomize Trade Order

2. Randomly skip trades

3. Randomize strategy parameters +/- 10%

4. Randomize start bars

5. Randomize history data

6. Randomize slippage

Ideally, I like to know where my 95% confidence is on the Monte Carlo to see a general trend of probability.
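A minimal sketch of steps 1, 2, and 6 above, applied to a list of per-trade returns from a backtest (the trade returns, skip probability, and slippage bound here are placeholders, not real results):

```python
import numpy as np

rng = np.random.default_rng(42)
trade_returns = rng.normal(0.002, 0.02, size=250)   # hypothetical trades

def one_path(returns, rng, skip_prob=0.1, slip=0.0005):
    r = rng.permutation(returns)                    # 1. randomize trade order
    r = r[rng.random(r.size) > skip_prob]           # 2. randomly skip trades
    r = r - rng.uniform(0.0, slip, size=r.size)     # 6. randomize slippage
    return np.prod(1.0 + r) - 1.0                   # compounded return

finals = np.array([one_path(trade_returns, rng) for _ in range(1000)])
worst_5pct = np.percentile(finals, 5)               # the 95% confidence floor
```

The 5th percentile of the simulated outcomes is the figure to plan around; if the strategy only looks attractive at the median, the edge is fragile.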


photo

 Volker Knapp, Consultant bei WealthLab

 Friday, January 16, 2015



Another thing that I like to use, especially when using portfolio money management methods that exceed my possible buying power, is:

1. Use worst-case trade execution. Very powerful! Very helpful.

VK


photo

 Shalini Gupta, Financial Consultant at Investment Planning Counsel

 Friday, January 16, 2015



This might be too simplistic, but could you not test in different market conditions as well as test forward?


photo

 Valerii Salov, Director, Quant Risk Management at CME Group

 Saturday, January 17, 2015



This complements the previous message and presents 20 iterations of the so-called logistic map x[j+1] = 4*x[j]*(1 - x[j]). The first column is j. The second is x[j] starting from x[1] = 0.4. The third is x[j] starting from x[1] = 0.41. The fourth is x[j] starting from x[1] = 0.4 but "shifted up" by one step to simplify plotting of x[j+1] vs. x[j].

1 0.4 0.41 0.96

2 0.96 0.9676 0.1536

3 0.1536 0.12540096 0.52002816

4 0.52002816 0.438702237 0.998395491

5 0.998395491 0.984970337 0.006407737

6 0.006407737 0.059215089 0.025466713

7 0.025466713 0.222834649 0.099272637

8 0.099272637 0.692717473 0.357670323

9 0.357670323 0.851439902 0.918969052

10 0.918969052 0.50595998 0.297859733

11 0.297859733 0.999857915 0.836557249

12 0.836557249 0.000568261 0.546916872

13 0.546916872 0.002271753 0.991195228

14 0.991195228 0.009066367 0.03490899

15 0.03490899 0.035936672 0.13476141

16 0.13476141 0.138580909 0.466403089

17 0.466403089 0.477504963 0.99548499

18 0.99548499 0.997975893 0.017978498

19 0.017978498 0.008080039 0.070621086

20 0.070621086 0.032059008 0.262534992
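The columns above can be reproduced, and the point about sensitivity to initial conditions made explicit, with a few lines (a sketch; any language would do):

```python
# Reproduces the logistic-map iterations above.  The trajectories from
# x = 0.40 and x = 0.41 diverge completely within roughly 10 steps, even
# though the map itself is fully deterministic.
def logistic_map(x0, n):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.40, 20)   # second column
b = logistic_map(0.41, 20)   # third column
# a[:4] -> 0.4, 0.96, 0.1536, 0.52002816
divergence = max(abs(p - q) for p, q in zip(a, b))
```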

Best Regards,

Valerii


photo

 Valerii Salov, Director, Quant Risk Management at CME Group

 Saturday, January 17, 2015



Ersin Demirbas: "A neural network is nothing but a function."

A neural network is a function of several variables. What usually confuses people beginning to study neural networks, and keeps them from reaching this simple conclusion, is the manner in which neural networks are described. The issue is not only the terminological parallel built between evolutionary and neural science. It starts already with the diagrams connecting the nodes of a neural network. These hide the fact, well known to mathematicians, that a neural network is a function of several variables.

Moreover, such diagrams often lead to notoriously inefficient computer implementations of neural networks. Software designers begin to treat them as a system of interconnected nodes with a specific flow of information along the links. It is, of course, possible to map such diagrams onto nodes dynamically allocated in computer memory with embedded interconnections. But anyone who is aware that operator new (C++) and the functions malloc or alloc (C) allocate memory for a block containing not only the node but also a block header will always look for something more memory-efficient. It is important that a neural network admits a matrix representation.

A neural network is a function of several variables. Using suitable functions with nice differential properties (like the sigmoid) allows one to build neural networks with nice differentiability, which in turn allows many efficient optimization (learning) algorithms to be applied. This relates directly to Kolmogorov's work:

Kolmogorov, Andrey N. "On the representation of continuous functions of several variables by superpositions of continuous functions of one variable and addition". Doklady Akademii Nauk SSSR 114, 1957, pp. 953-956.

Under typical conditions defining a neural network with two intermediate layers we get the function

f(x1, ..., xn) = SUM[from q = 1 to q = 2n + 1] {Gq(SUM[from p = 1 to p = n]Hpq(xp))}

(Sorry, I am not sure how to use ordinary math notation on this site.) Proper application of Kolmogorov's theorem on the representation properties of functions of the above form requires a deeper understanding of its conditions and of the specifics of computers (rounding or truncating real numbers makes them rational). However, nothing changes the essential point: a neural network is a function of several variables.
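To make the point concrete, here is a minimal sketch (plain NumPy, illustrative random weights only, not a trained trading model) of a perceptron with two hidden layers written as exactly what it is: a function of several variables built from matrix products and sigmoids.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp(x, W1, b1, W2, b2, W3, b3):
    """A two-hidden-layer perceptron as a plain function of several
    variables: nothing but matrix products and elementwise sigmoids."""
    h1 = sigmoid(W1 @ x + b1)   # first hidden layer
    h2 = sigmoid(W2 @ h1 + b2)  # second hidden layer
    return W3 @ h2 + b3         # linear output

# Illustrative random weights for n = 3 inputs and 7 hidden units
rng = np.random.default_rng(1)
n = 3
W1, b1 = rng.normal(size=(7, n)), rng.normal(size=7)
W2, b2 = rng.normal(size=(7, 7)), rng.normal(size=7)
W3, b3 = rng.normal(size=(1, 7)), rng.normal(size=1)

y = mlp(np.array([0.1, 0.2, 0.3]), W1, b1, W3=W3, b3=b3, W2=W2, b2=b2)
# y is a single number: f(x1, x2, x3)
```

The matrix form is also why such networks can be evaluated and differentiated efficiently, with no per-node memory allocation at all.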

Best Regards,

Valerii


photo

 Ersin D., ICT & Telecom Profession

 Saturday, January 17, 2015



@Valerii: I didn't say that it's a function of one variable! I also didn't make a simple conclusion about neural networks. Calling an NN a function does not make it simple, and as I stated before, it can learn extremely complex relations. Please also read my previous posts.

I've successfully used NNs in several projects (other than financial markets). However, I have doubts about whether they are capable of solving the issues in trading systems. If you have built one for financial markets, I would like to know about it.


photo

 Kevin Saunders, Director at Tribelet Capital Management Pty Ltd and Investment Manager at Non Correlated Capital Pty Ltd

 Saturday, January 17, 2015



Thanks Valerii.

Your point is made but you are not suggesting that these mathematical manipulations are anything like what occurs in the market?

The key feature of the beast we deal with is unbounded variance, something much harder to express mathematically. In fact, I would say that it is impossible to express. It is not a stochastic process, as so many would like to believe. The recent move in the Swiss franc is a case in point.


photo

 Valerii Salov, Director, Quant Risk Management at CME Group

 Saturday, January 17, 2015



Ersin,

I did not write my message in opposition to yours. I liked your sentence "a neural network is nothing but a function". I wrote my message alongside yours in order to use a good opportunity to draw attention to Kolmogorov's contribution to the subject of neural networks, well known to specialists, made 58 years ago. This is one of the most fundamental results in the subject, related to the multilayer perceptron with two hidden layers. I also presented the exact expression of the function obtained under such conditions.

As for the central thread of this discussion, overfitting: I have already described the ordinary techniques used to prevent it (you can also read my previous messages). I also stated that, independently of the technique, in the limit of increasing samples drawn from the same general population, all methods (Bayesian optimization, least squares, maximizing a likelihood function, etc.) lead to the same set of optimal parameters. However, if the samples are obtained from different populations (with changing stochastic properties), then it is not right to call this an "overfitting problem". A better term would be incorrect use of probability theory. That does not mean the task itself is incorrect.

Another thing (I was waiting to see who would say it here) is that overfitting is often associated with the least squares method, LSM. LSM optimizes to the errors. For two points, LSM draws the line exactly through them (even though they contain errors)! For a coin that by coincidence shows three tails (tail is 1, head is 0) in a row of three trials, it states that the estimate of the mean is 1 and the estimate of the variance is 0, as if one could take that coin, go, and win all bets with this "very special coin" by betting on tails. But all this is known, and there are ways to overcome this overfitting (Bayesian optimization is one). The problem is that we solve a different problem on the markets.

Best Regards,

Valerii


photo

 Oscar Cartaya, Insurance Med. Director

 Saturday, January 17, 2015



Valerii, I think Kevin indicated that the market cannot be described mathematically; at least that is my interpretation of his last post. I see these attempts to reduce the market to a mathematical expression like the attempts of Cinderella's evil stepsisters to stick their feet into the crystal slipper. Like the evil sisters' feet, which were far too big for the slipper, the market's complexity is far too vast to be reduced to a mathematical expression. I think the very best mathematical expression of the market you may get will be nothing better than an approximation.


photo

 Valerii Salov, Director, Quant Risk Management at CME Group

 Saturday, January 17, 2015



Oscar,

I believe my message is long and you have not yet read it to the end, so I am quoting myself:

"If your sentences mean that mathematically one cannot describe markets, then, I, indeed, do not know one who can do it. How can one describe systems involving human beings' minds? Some primitive descriptions of some selective properties exist but it is so far for one to be systematically successful in speculation. A mind itself can become a more robust tool than a computer or mathematics. But for this it should be put in trading. I mean not an abstract mind but experienced and specifically trained by the market itself."

Best Regards,

Valerii


photo

 Oscar Cartaya, Insurance Med. Director

 Saturday, January 17, 2015



Valerii, that is a fine self-quote that makes good sense, if I may say so. I did read Kevin's whole message.


photo

 Piotr Pietrzak, Director at IVP FINANCE (CYPRUS) LTD

 Sunday, January 18, 2015



Take an uncorrelated instrument for the same period of time and check if the strategy would still work.


photo

 Oscar Cartaya, Insurance Med. Director

 Sunday, January 18, 2015



@Piotr. Could you be a bit more specific? Off the top of my head, I would say the results would depend upon the strategy used and the way you arrived at it, but this is just a guess since the question is so open-ended.


photo

 Kevin Saunders, Director at Tribelet Capital Management Pty Ltd and Investment Manager at Non Correlated Capital Pty Ltd

 Monday, January 19, 2015



Valerii, I suggest we resist the urge to turn general commentary into rigorous scientific criticism. I do recognise that it is edifying to discuss why 0.9 recurring is actually not equal to one. I want to contribute as a person who operates systematic methods in the market with real money. If brevity is a sin, perhaps I can expand somewhat by offering an opinion. The problem is not one of overfitting; rather, the problem is the search for an "answer". One of the great traders of the early '80s (if his story is to be believed), Curtis Faith, once wrote that no one likes to admit the role luck plays within the market. I find quants are often the least able to accept this. I build systems with the expectation that they will fail. This is a major source of edge. Cheers.


photo

 Valerii Salov, Director, Quant Risk Management at CME Group

 Monday, January 19, 2015



Kevin,

"One of the great traders of the early '80's ..., Curtis Faith ..."

Curtis Faith is a pupil of Richard Dennis. Richard Dennis was a great trader. Curtis would agree with me.

"I find Quants to often be the least able to accept this aspect."

Many quants are not traders. It would be pointless to seek trading advice from them. A trader does not need advice other than that given by the market.

As for overfitting, there are confusing and/or incorrect statements in this thread. Even Curtis Faith could not avoid them. For instance, in Faith, Curtis M., "Way of the Turtle: The Secret Methods that Turned Ordinary People into Legendary Traders", New York: McGraw-Hill, 2007, on page 153 we read:

"Overfitting or curve fitting: The system may be so complicated that it has no predictive value. Because it is turned to the historical data so closely, a slight alternation in market behavior will produce markedly poorer results."

This is one of the delusions associated with "overfitting", and this thread is not an exception. Often it is claimed that if the number of fitting points is close to the number of parameters to be optimized in the system, then it essentially overfits. But fitting depends greatly not only on these circumstances but also on the optimization criterion. An illustrative example enormously simpler than the market is fitting a set of points with a polynomial.

If the number of points is equal to the degree of the polynomial plus one, then the least squares method produces the so-called interpolating polynomial (also known as the Lagrange interpolating polynomial). The latter is unique and passes through each point. However, outside of the given points it oscillates and has no predictive power if the underlying real system was, let us say, cos(x). This is often named "overfitting". In this case the reality is cos(x), the data set is 10 points, and the model fitting them is a polynomial of degree nine. It is well known that in this illustrative case overfitting takes place if the least squares method is applied. However, if one applies a regularization technique bounding the values of the coefficients, or Bayesian optimization, which also justifies ("explains") the usefulness of the regularization term in the sum of squared deviations, then no overfitting occurs. The same system and data set, but no overfitting.
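That paragraph can be sketched in a few lines (synthetic noisy data, and a ridge penalty standing in for "bounding the values of the coefficients"): the degree-nine least squares fit passes exactly through all 10 noisy samples of cos(x), including their errors, while the regularized fit gives up that perfect in-sample fit in exchange for smaller coefficients and smoother behavior between the points.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2 * np.pi, 10)
y = np.cos(x) + rng.normal(0.0, 0.2, size=x.size)  # 10 points "with errors"

t = (x - np.pi) / np.pi                  # rescale to [-1, 1] for conditioning
V = np.vander(t, 10, increasing=True)    # degree-9 basis -> square 10x10 matrix

# Plain least squares: the unique interpolating polynomial (fits the noise)
c_lsm = np.linalg.lstsq(V, y, rcond=None)[0]

# Ridge regularization: penalize large coefficients, trading fit for smoothness
lam = 1e-2
c_ridge = np.linalg.solve(V.T @ V + lam * np.eye(10), V.T @ y)

resid_lsm = np.max(np.abs(V @ c_lsm - y))      # ~0: perfect in-sample fit
resid_ridge = np.max(np.abs(V @ c_ridge - y))  # nonzero, but smoother fit
```

Same data, same model class; only the optimization criterion changed.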

If one does not understand these details, known for a system as simple as a polynomial, then his or her considerations of a much more complex market, and of a trading system adapting to it, make me skeptical. A good point is that in order to be successful on the market, a knowledge of arithmetic can be sufficient. Jesse Livermore, Dennis, Faith, Larry Williams, Tom Baldwin, and many others did not have a Ph.D. Their minds, trained by markets, are good equivalents of a Ph.D. Losing a few hundred dollars in trading is a better and faster teacher than writing and reading a few messages here.

Best Regards,

Valerii


photo

 John Burchfield, Financial Engineer

 Monday, January 19, 2015



@Kevin Saunders




%%%



One of the great traders of the early '80's ..., Curtis Faith ..."



%%%




Folks, I highly recommend reading about Richard Dennis.



As for Curtis being a great trader, I believe that statement is relative.



%%%



http://turtletrader.com/curtis-faith-performance/



%%%




I recall early in the 1990s receiving my first advertisement brochure from Curtis, letting the world know the "secret" turtle method. Curtis was good at marketing. He flooded my mailbox regularly.



Folks, I highly recommend learning WHY the Turtle Method works.



As the Turtle Method came to be public knowledge, I learned that it is a variation of J.M. Hurst's ideas.



Folks, the ATR idea in the trading method can be informative.



Best,



John


photo

 John Burchfield, Financial Engineer

 Monday, January 19, 2015



Regarding the issue of overfitting, I believe that an ounce of prevention is worth a pound of cure. Yes, we can address the variance/bias issue too.



%%%%



http://nlp.stanford.edu/IR-book/html/htmledition/the-bias-variance-tradeoff-1.html



%%%





I believe that a significant issue in functional approximation has to do with choosing the appropriate set of features/determinants. The issue of collinearity is especially prevalent in designing systems. This is akin to some traders thinking that adding more indicators will improve the numbers. Essentially, we are looking at homomorphisms with collinear indicators.
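A small illustration of that collinearity point (synthetic prices, with two simple moving averages standing in for any pair of closely related indicators): two SMAs of nearly the same length are almost perfectly correlated, so the second one adds little independent information to a system.

```python
import numpy as np

# Synthetic random-walk prices; placeholders for real market data
rng = np.random.default_rng(7)
price = np.cumsum(rng.normal(0.0, 1.0, 500)) + 100.0

def sma(x, n):
    """Simple moving average via a cumulative-sum trick."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    return (c[n:] - c[:-n]) / n

s10 = sma(price, 10)
s12 = sma(price, 12)             # nearly collinear with the 10-bar SMA
m = min(len(s10), len(s12))
corr = np.corrcoef(s10[:m], s12[:m])[0, 1]
# corr is close to 1.0: the two "indicators" are nearly redundant
```

Pairwise correlations (or variance inflation factors) computed this way are a cheap first screen before adding yet another indicator to a system.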



%%%%%



https://books.google.com/books?id=XQXiSHXkQDcC&pg=PA178&lpg=PA178&dq=collinearity+and+homomorphisms&source=bl&ots=On1TXtXv50&sig=ajLma7WxtKI5QecuF1f6cTZElg4&hl=en&sa=X&ei=UHu9VNadGtGMsQTO6YDYDw&ved=0CEgQ6AEwBQ#v=onepage&q=collinearity%20and%20homomorphisms&f=false



%%%%%%%%%%%




The goal is to derive an independent set of measures, without performing any rotations or other adjustments to a basis space. It is possible. In 2014, I derived an independent set, from first principles in physics, solely from price, with no adjustments. The beginning of the journey was the realization that we are working in a conservative space. You can find some of my work at Mark Jurik's Yahoo Group under the Cronus section. I am the discoverer of the Cronus measure, which is not derived from the price of any instrument. FYI, Mark is a GREAT teacher.



Early in my practice, I fell into the trap of adding indicators galore. What happens is the curse of dimensionality leads to analysis paralysis.



Another issue is inexperienced quants developing systems. I am defining inexperienced as lacking the expert knowledge of market processes. Here is an analogy. We can hire a stats/analytics person to model and improve a system. However, the stats person still needs an expert operator for general characteristics of the process. Also, the expert operator can bring insight regarding nuances particular to the process. No one "knows" the system better than the person at the line. Some give the line people way too little credit.



We have some fellow Bayesians here. Geoff Webb has some interesting research.



%%%%%%



http://www.csse.monash.edu.au/~webb/



The Knowledge Factory is an interactive machine learning environment that provides tight integration between machine learning and knowledge acquisition from experts.




Statistically sound association discovery. Association discovery includes association rule discovery, k-optimal rule discovery, emerging pattern discovery and contrast discovery




MultiBoosting [also known as Boost Bagging] combines boosting and bagging, obtaining most of boosting's superior bias reduction together with most of bagging's superior variance reduction. MultiBoosting is an example of Multi-Strategy Ensemble Learning. We have shown that combining ensemble learning techniques can substantially reduce error.




and LOTS more....



%%%%%%%%%%%%%%



He has a FREE version of software too available for download to implement ideas.




Best,



John


photo

 Kevin Saunders, Director at Tribelet Capital Management Pty Ltd and Investment Manager at Non Correlated Capital Pty Ltd

 Monday, January 19, 2015



Interesting page on Curtis, John. Thanks.


TRADING FUTURES AND OPTIONS INVOLVES SUBSTANTIAL RISK OF LOSS AND IS NOT SUITABLE FOR ALL INVESTORS
Terms Of Use | Privacy Statement | Copyright 2018 Algorithmic Traders Association