
Algo-system ranking -- uniform world-wide approach


 Oleksandr M., Algo development enthusiast – Talent and harmony in trading

 Monday, August 31, 2015

Regarding evaluation of algo performance (i.e., Outstanding, Mediocre, Poor): as you know, this topic has been touched upon several times in the past (as well as in other neighboring groups on algorithmic/systematic trading) without a clear conclusion; so many people, so many opinions. So the question remains three-fold:

1.) How does a "Big Investor" actually compare two totally different algos (also for intelligent capital allocation)?

2.) What are a "Big Investor's" quality evaluation criteria for the results of automated trading (on history and live on a real account), as well as for the algo as a product? Passed or didn't pass.

3.) Is there any internationally recognised outfit/portal that has developed a set of meaningful and comprehensive assessment standards to fill today's gap between "Big Investor" expectations and algo developers' achievements?

P.S. Is there an actual "Big Investor" in this group who could provide first-hand info with examples?

P.P.S. Does anybody know what's cooking on this subject at all those eminent "TradeTech" conferences?

P.P.P.S. Ultimately I wish a respective stand-alone paragraph (split separately for forex and other markets) were introduced in ISO/IEC 90003:2004 (software products and related services): http://www.iso.org/iso/catalogue_detail?csnumber=35867



31 comments on article "Algo-system ranking -- uniform world-wide approach"


 Oleksandr Medviediev, Algo development enthusiast – Talent and harmony in trading

 Wednesday, September 2, 2015



Asking Alex Krishtop: am I right in assuming that every case is different, and:

#1 - in 33% of cases the final decision is subjective (big boss gut feel)

#2 - in 33% of cases the final decision is a coin toss

#3 - in 33% of cases some critical data is missing (average win vs average loss, etc.), so rule #1 becomes applicable?!



 Oleksandr Medviediev, Algo development enthusiast – Talent and harmony in trading

 Thursday, September 3, 2015



...and, of course, the system must be sexy, right?



 Alex Krishtop, trader, researcher, consultant in forex and futures

 Thursday, September 3, 2015



First and above all: personal relationships with the system/strategy developer/trader/introducer/friend/whoever. That's 90% of the decision. The remaining 9% is the Sharpe ratio.



 Oleksandr Medviediev, Algo development enthusiast – Talent and harmony in trading

 Thursday, September 3, 2015



The Sharpe ratio was originally introduced for the stock market. Does the 9% stay applicable for algos trading stocks as well?



 Alex Krishtop, trader, researcher, consultant in forex and futures

 Thursday, September 3, 2015



You asked about the reasons for the final decisions, and I answered. Who cares if the Sharpe ratio is meant only for evaluating the performance of stock pickers? Decision makers mostly don't even know what it is or what it means, but they know that the reading should be greater than 2. Easy, right?
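
For reference, a minimal sketch (nobody's production code) of how the annualized Sharpe ratio behind that "greater than 2" rule of thumb is usually computed from monthly returns, assuming a zero risk-free rate; the return series below is made up purely for illustration:

import numpy as np

def annualized_sharpe(monthly_returns, risk_free_monthly=0.0):
    """Annualized Sharpe ratio from a series of monthly returns."""
    excess = np.asarray(monthly_returns) - risk_free_monthly
    # Monthly Sharpe scaled by sqrt(12) to annualize.
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)

# Hypothetical monthly returns, for illustration only.
returns = [0.012, -0.004, 0.020, 0.007, -0.011, 0.015,
           0.009, 0.003, -0.006, 0.018, 0.001, 0.010]
print(annualized_sharpe(returns))   # the "should be greater than 2" check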



 Oleksandr Medviediev, Algo development enthusiast – Talent and harmony in trading

 Thursday, September 3, 2015



2 (!) I can hardly imagine an algo consistently making greater than 0.4 (which would be outstanding for forex, in my view). Anyhow, what about "Robustness Assessment"? Is it already a standard practice available for review, or is it for ATASSN internal use only?



 Jon Grah, Trading Systems Automation Expert @ AwarenessForex.com

 Thursday, September 3, 2015



The "Big Investors" often do not have the time to review individual strategies or monitor performance. They use asset/fund managers that do the majority of the heavy lifting on their behalf; so money is still fragmented that way. Sample selection bias is very easy to fall victim to even for seasoned fund managers. None of the "Big Investors" use things like zulutrade or other public copiers; those are primarily for uninformed investors. They are solicited or used by personal introductions and databases like evestment.com

Remember, lots of this [allocations] are based on trust and confidence (of the asset manager, allocator, etc) and not every investor wants their name out there (they want to remain private, for security reasons). At the same time, they do want to filter out scam artists and people who have no real interest in risk management.

My research so far has concluded:

* more and more funds are looking for 100% algo strategies. I think this points to #3 below, but also because they want to know that you have identified strengths AND weaknesses of when the algo is best suited for. It keeps the selection process more objective on both sides.

* When reviewing sustainability, they want to see how a portfolio will perform, not just individual strategies. Tick level backtests are ok for shorter term strategies. But they want to see a basket of different instruments traded at the same time if possible vs 10 individual instruments. But the main thing is....

* Transparency is more important. Remember that a genuine investor has no interest in trading themselves. That's your job. But you should be able to give them a basic idea of how you intend to be profitable. You are not required to give away your edge, but there should be a rational reason for you to be profitable over the long haul, whether that be dealing (market making), statistical arbitrage, value trading (mean reversion), etc.

You put yourself in their shoes. Not just having the $100MM, but being responsible to other investors or counter-parties. How would you establish trust, where would you place the funds, which brokers to use, etc?



 Alex Krishtop, trader, researcher, consultant in forex and futures

 Thursday, September 3, 2015



I'm surprised that by now no one has pointed out the inconsistency in my calculations: 90% + 9% = 99%, not 100%. What about the 1% left? Yes, it is for robustness assessment, which is a standard practice among sophisticated investors who dare to invest in new, unknown, emerging talent. This is not a single mathematical procedure: in most cases we rely on our understanding of how markets change over time and look at how the strategy/portfolio performs in different market environments. This is something that we consider very seriously in the ATA's course for systematic traders, and it is also what I recommend to sophisticated investors.
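
As a concrete illustration of checking performance across "different market environments", here is a sketch only, not the ATA's actual procedure: bucket history into volatility regimes of the traded market and summarize the strategy in each bucket. The window length and the three-bucket split are assumptions for illustration.

import numpy as np
import pandas as pd

def regime_report(strategy_returns, market_returns, window=63):
    """Split history into low/mid/high volatility regimes of the market
    and summarize strategy performance in each (illustrative sketch)."""
    df = pd.DataFrame({"strat": strategy_returns, "mkt": market_returns})
    # Rolling annualized volatility of the market defines the regime.
    vol = df["mkt"].rolling(window).std() * np.sqrt(252)
    df["regime"] = pd.qcut(vol, 3, labels=["low vol", "mid vol", "high vol"])
    return df.groupby("regime")["strat"].agg(
        mean_daily="mean",
        worst_day="min",
        hit_rate=lambda r: (r > 0).mean(),
    )

A strategy whose hit rate and worst day hold up in the "high vol" bucket is the kind of evidence this sort of robustness review looks for.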



 Jon Grah, Trading Systems Automation Expert @ AwarenessForex.com

 Friday, September 4, 2015



@Alex, I forgot to mention 'consistency' in what I learned. It goes hand in hand with 'transparency'; the two complement each other. Part of the portfolio assessment that builds investor confidence is determining how well the strategy can handle a variety of market conditions (robustness). Myself, I am adding some on-the-fly volatility detection and risk adjustment to handle that.

I had already been considering, since the Jan 15 SNB event, how to handle extreme/unexpected volatility on the fly. After taking a hit from the unexpected USDJPY volatility on 24 August, I'm speeding up that aspect of the programming. I expect to see more of that [unannounced volatility] as China, Bitcoin, etc. become more mainstream. In hindsight (which is always 20/20), I can't see why I didn't have such a filter before. It opens up a whole new insight into how to handle risk.
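
A minimal sketch of that kind of on-the-fly volatility detection and risk adjustment (not Jon's actual implementation; the window lengths and the maximum size cut are illustrative assumptions): scale position size down when short-horizon realized volatility spikes relative to a longer baseline.

import numpy as np

def vol_scaled_size(prices, base_size=1.0, fast=12, slow=96, max_cut=0.9):
    """Reduce position size when recent volatility spikes versus its baseline.

    prices: recent price series (e.g. minute closes), most recent last;
    should contain at least `slow` + 1 points. Returns a size between
    base_size * (1 - max_cut) and base_size.
    """
    returns = np.diff(np.log(prices))
    fast_vol = returns[-fast:].std()
    slow_vol = returns[-slow:].std()
    if slow_vol == 0:
        return base_size
    spike = fast_vol / slow_vol          # > 1 means volatility is elevated
    cut = min(max(spike - 1.0, 0.0), max_cut)
    return base_size * (1.0 - cut)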



 Oleksandr Medviediev, Algo development enthusiast – Talent and harmony in trading

 Friday, September 4, 2015



Alex, Jon - thank you for sharing your thoughts. If I were a big investor (coming from, let's say, the engineering/construction business - not an expert in financial markets), I would probably do the following:

#1 - seek an opinion from an expert, like ATASSN -- an option at present, for a reasonable fee, OR

#2 - order a formal audit of the algo by a big, internationally recognised firm -- which doesn't seem to be an option at present, because there are NO uniform industry standards in this relatively new field.

Next, 90% belongs to the personal relationship with the algo developer? Yes, agreed - absolutely crucial. But it never happens overnight; it takes time. Also, there is one potential "issue": usually, by the time an algo developer has grown as a professional and become recognizable, he/she already has smaller investor(s), who unavoidably clash with the new "Big Kahuna".

Anyway, let's put the "Big Investor" aside; they have a different mindset - the kind we would never fully understand. What about the algo developer himself, who has worked really hard, burned the midnight oil, worked his tail off on testing/optimization puzzles, and spent endless hours in front of the charts?

What are his/her options for figuring out the real dollar value of the product - vital in order to ensure proper marketing? Maybe a 100-dollar price tag is too much, maybe 1M is too low...

Bottom line: my original question on algo ranking is, in essence, left unanswered. There must be something better than 90+9+1. More profound/specific information and/or a referral to appropriate resources would be greatly appreciated.



 Oleksandr Medviediev, Algo development enthusiast – Talent and harmony in trading

 Friday, September 4, 2015



@Jon, in regards to risk-reduction measures: doesn't the broker's spread go up during adverse/volatile markets? I mean, just cancel the signal:

if Current Spread >= normal

-- that seems to be a faster and thus more secure/conservative permanent solution.
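
A sketch of what such a spread filter could look like (illustrative only; the rolling-median definition of a "normal" spread and the 1.5x threshold are assumptions, not anyone's stated rule):

from collections import deque

class SpreadFilter:
    """Block new entries when the current spread is abnormally wide
    relative to a rolling baseline (illustrative sketch)."""

    def __init__(self, window=500, max_ratio=1.5):
        self.history = deque(maxlen=window)
        self.max_ratio = max_ratio

    def allow_entry(self, bid, ask):
        spread = ask - bid
        self.history.append(spread)
        normal = sorted(self.history)[len(self.history) // 2]  # rolling median
        return spread <= self.max_ratio * normal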



 Rostant Ramlochan, Quantitative Analyst / Strategist: Asian and US/Canada markets

 Saturday, September 5, 2015



There is a set of commonly used performance measures that funds of funds use to compare strategies/funds in general; this forms a good starting point. These quantitative measures are used in conjunction with qualitative measures, such as how you feel about the manager's expertise and knowledge...



 Jon Grah, Trading Systems Automation Expert @ AwarenessForex.com

 Monday, September 7, 2015



@Olek, not necessarily. Just because prices are more volatile (faster velocity of price in a given time period) does not necessarily mean a lack of liquidity (availability of buyers/sellers). When the spread is wide, the cause is a lack of takers on the opposite side of the direction in which the market is moving. When trading with a b-book broker, the broker is often the "taker" and will widen the spread to guard against adverse selection (real or perceived).

It is possible for the market to move 100 pips or more in a short period of time with plenty of liquidity and therefore normal spreads. This just happened in the mini-'crash' on 24 Aug in USDJPY, where the broker quoted regular spreads for that pair for almost the entire 250+ pip move that occurred over 3 minutes. It also happened at the 18 Mar FOMC announcement, where a delayed reaction caused a sudden 200 pip takeoff in prices for EURUSD and GBPUSD (maybe other pairs also) over a 2-3 minute period. The spread didn't widen significantly until 1/2-3/4 of the way up.

Also, if your spread filter is too tight, you might not get in properly so that you can average out in time. A spread filter alone is insufficient in these cases.

Of course, this also depends on the venue you are using for execution. We are assuming direct market access with an ECN/STP broker with Prime Broker or Prime-of-Prime clearing. If the broker is warehousing trades (b-book), it is possible to have artificially wider spreads, disconnects, requotes, etc. This can also work in your favor, but only if you know exactly how to handle it. Even during the SNB event on Jan 15th, pricing was still taking place with proper ECN brokers (with very wide spreads... nearly no liquidity). All the b-book brokers and some agency models (LMAX) also suspended their services on CHF pairs for several hours or more during that period.

BTW, I might have saved you about 3-5 years of figuring that out on your own ;)

@Rostant, can you elaborate on the 'commonly used performance measures' that the Funds of Funds use to compare strategies?



 Oleksandr Medviediev, Portfolio Analyst – FIBI

 Monday, September 7, 2015



@Jon, thank you for the great overview and important insight into the basics of market spreads - I owe you a cow! Yes, indeed ))

Regarding the 24/08 USDJPY 250+ pip move: for that particular "phenomenon" there seems to be an easy fix -- a "never trade on Mondays and during important news" rule (that's been my Rule #2 in recent years), because on Mondays markets usually digest whatever happened over the weekend, with mostly unpredictable results.

Regarding warehousing "b-book" brokers (that's called a "scam", right?) - there is a handy portal, forexpeacearmy . com, designed to help traders stay away from that.

So, that's good news =)) On the flip side: again, my original question on algo ranking stays unanswered.



 Guy R. Fleury, Independent Computer Software Professional

 Monday, September 7, 2015



The performance comparison debate has come up here a few times in the past. Ranking trading systems is relatively easy; there are not that many parameters in this problem. Let's see.

Time: it should be the same trading interval for all (read years, not minutes). Otherwise, one system has an advantage over the other.

Initial capital: it should be the same for each trading strategy. It's not hard to figure out that if the total return is A(0)*(1+r)^t, then putting more capital on the table should produce more, other things being equal. Evidently, if the initial capital is not of equal size it will skew end results.

There is only one element left, and that is r, the compounded annual growth rate (CAGR).

I don't really care if a trading system does one trade or thousands of trades over its long-term trading interval.

It's the output, the final result, how much you have at the finish line, that really matters.

The rest, the trading itself, is just how you got there and could be considered trivia of the game you played. Things like: ah, I did it like this or like that, see how I placed a bet here, got out there. Technically, trivia. It's the finish line, the total end result, that counts.

Nonetheless, a single trade can be expressed as q*Δp; whether it ends with a profit or a loss is almost irrelevant. Your interest is in Σ q*Δp, the net sum of all the generated profits and losses from all the trades taken over the entire trading interval. It is the reason behind wanting to rank systems.

How many trades it took to get there is also kind of irrelevant. If I have to do a thousand trades more than another system to end up way ahead, I would do it. That's why we have machines; they can do the work.

Whatever trading systems you design, each of them will end up with a positive total profit if and only if Σ q*Δp > 0: either the net sum of all the trading activity generated a profit over the trading interval, or it did not.

Then the ranking is simple: order each strategy's long-term output, A(0) + Σ q*Δp, and problem solved.

After the ranking you will be left with: do I still prefer strategy A over strategy B, even if B came in first place? And from there, you could find other subjective preferences that could weigh in on your choice. To end up with: I prefer strategy A over B, because... at least you would have some reasons, some basis for comparing.
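
Guy's criterion reduces to a few lines of code once each strategy's trades over the same interval are expressed as (quantity, price change) pairs and all strategies start from the same capital; the trade lists and numbers below are hypothetical, for illustration only:

def final_equity(initial_capital, trades):
    """A(0) + sum(q * delta_p) over all trades of a strategy."""
    return initial_capital + sum(q * dp for q, dp in trades)

# Hypothetical (quantity, price-change) pairs for two strategies
# traded over the same interval with the same starting capital.
strategies = {
    "A": [(100, 1.2), (100, -0.4), (200, 0.9)],
    "B": [(50, 3.0), (50, -1.0)],
}
ranking = sorted(strategies,
                 key=lambda s: final_equity(10_000, strategies[s]),
                 reverse=True)
print(ranking)   # strategies ordered by final output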



 Oleksandr Medviediev, Portfolio Analyst – FIBI

 Monday, September 7, 2015



@Guy, so here is what we've got:

#1. Time: should be the same trading interval (goes without saying).

#2. Initial capital: should be the same (goes without saying).

#3. Time-Frame: should be the same (goes without saying).

#4. Leverage used: should be the same (goes without saying).

#5. Markets applied: should be the same (goes without saying).

+ other critical items:

#6. Guy R. Fleury - CAGR (I personally like it too; at least it's an objective parameter).

#7. Rostant Ramlochan - feel about Algo developer expertise/knowledge.

#8. Alex Krishtop - personal relationships with Algo developer (90%), Sharpe (9%), Robustness Assessment (1% - secret ingredient).

Well, as said before: so many people, so many opinions. We haven't got a bit closer to the answer.

So far my impression is the following:

* either this question didn't get enough attention/exposure/elevation in this Group, or

* a "uniform world-wide approach" does not exist at the moment.



 Guy R. Fleury, Independent Computer Software Professional

 Monday, September 7, 2015



@Oleksandr, you can attribute #1, #2 and #6 to what I said; nothing else. I used "other things being equal" when comparing two similar items. I thought that was evidently clear. Apparently not.

You could add a #9 too: how about adding a peanut butter sandwich to the mix, will that change things? Yes, it most certainly will. You could end up with an extra sandwich tipping the scale.

The point is: if you keep adding ingredients to your comparison basis, you keep changing the set of criteria of the comparison. So asking "what about this or that criterion" after the question is either irrelevant, or you should formulate your question better.

For instance, #4 (leverage) can be used to make a difference between trading strategies. It is first applied to an existing strategy to see whether it produces a higher output or not; it's more a matter of comparing a strategy to its former self. The notion of strategy A using x amount of leverage compared to strategy B is not that good a comparison basis, since inherently you are comparing a risk measure. Comparing strategy A with and without x amount of leverage can answer the question of what the impact of leverage was on a particular trading strategy.

But adding leverage also adds risk; if you add risk, you should produce more, and indirectly it is this added risk you are comparing. From there, you could compare strategy B, A(0) + 1.2*Σ q*Δp, to strategy A with leverage, A(0) + 1.1*Σ q*Δp, or without, A(0) + Σ q*Δp. But if you give both the same amount of leverage, and technically the same amount of added risk, you are back to the original problem: still comparing the final output on an equal footing.

As for #3 (time-frame), who should care how you do it and under what time-frame? Whether you get to your end results doing 100 trades or 100,000 trades does not change the math: Σ q*Δp > 0. Did you get to the finish line with a profit? And how big was it under the same total clocked time? The path you took to get there is just anecdotal. Do it in whatever time-frame you like, or can do, or whatever your trading strategy dictates. It technically should not be a real concern. The time-frame can only be used to describe how you did it; it will affect the number of trades done over the strategy's lifetime. I would say whatever time-frame you like is fine. I'm interested in the final output: A(0) + Σ q*Δp.

#5 is indirectly evident. One should compare strategies within the same market. There is no universal trading system, meaning there is no trading strategy that works fine in all markets and produces above-average performance levels in all of them. However, trading is trading, and one can always compare the final output, whatever the trading activity and whatever was traded: A(0) + Σ q*Δp. You can compare end results where it counts.

As for #8, who cares about the personal relationship with the developer? Do you think I would put money on a trading strategy because I like the guy? Think again. The needed output is a worthwhile trading strategy, a program, some code to be executed. And if the code is no good, it is no good, whoever the developer was.

To me, it is as if you just want to cultivate confusion when saying "...Didn't get closer to the answer a bit," even when the nature of the problem can be well defined.



 Samir Halim, President, Systematic Asset Management

 Monday, September 7, 2015



I would agree with all the comments, as big investors have different comfort levels and risk appetites. All in all, I believe a good starting point is performance statistics plus a narrative addressing the system development process, robustness, risk management, continuous monitoring and upgrades, as well as non-correlation to the S&P 500. An example of the performance stats of the SPY EODE system that I am currently trading and presenting to potential investors:

Total Net Profit (775 days): 61.94%

Average Monthly Return: 1.72%

Annualized Return: 20.65%

Std Dev of Monthly Returns: 1.60%

Annualized Std Dev: 5.53%

1-Year Treasury Note: 0.19%

Sharpe Ratio: 3.7

MDD (Trade Close): -1.16%

Calmar Ratio: 17.74

t-Test: 7.26

Correlation to SPY: -11.24%
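
For anyone wanting to produce a comparable summary from their own track record, here is a sketch computing several of these statistics from monthly returns. The figures above are Samir's reported numbers, not outputs of this code; the default risk-free rate simply reuses the 0.19% Treasury figure, and the Calmar ratio is simplified here to annualized return divided by the full-sample maximum drawdown.

import numpy as np

def performance_summary(monthly_returns, benchmark_returns, rf_annual=0.0019):
    """Basic performance statistics from monthly strategy and benchmark returns."""
    r = np.asarray(monthly_returns)
    b = np.asarray(benchmark_returns)
    ann_ret = (1 + r).prod() ** (12 / len(r)) - 1
    ann_std = r.std(ddof=1) * np.sqrt(12)
    sharpe = (ann_ret - rf_annual) / ann_std
    equity = (1 + r).cumprod()
    drawdown = equity / np.maximum.accumulate(equity) - 1
    max_dd = drawdown.min()
    calmar = ann_ret / abs(max_dd) if max_dd != 0 else float("inf")
    corr = np.corrcoef(r, b)[0, 1]
    return {"annualized return": ann_ret, "annualized std dev": ann_std,
            "Sharpe": sharpe, "max drawdown": max_dd,
            "Calmar": calmar, "correlation to benchmark": corr}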



 Oleksandr Medviediev, Portfolio Analyst – FIBI

 Monday, September 7, 2015



@Sam, thank you very much for sharing this info; that's more like it. Clarifications:

#1 - are the above results simulated or live?

#2 - what is the trading platform?

#3 - what's the SL size in points, apart from thresholds?

#4 - what's the average holding time?

Also, please check the CAGR by running the RATE function in MS Excel:

=RATE(A3,,-A1,A2)

where A1 is the starting amount, A2 the ending amount, and A3 the number of years, and let us know.
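
The same CAGR can be computed directly, without Excel; the function below is equivalent to that RATE call with no periodic payments (the sample figures are hypothetical, for illustration only):

def cagr(start_amount, end_amount, years):
    """Compound annual growth rate; matches =RATE(years,, -start, end)."""
    return (end_amount / start_amount) ** (1 / years) - 1

print(cagr(100_000, 150_000, 3))   # hypothetical figures: ~14.5% per year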



 Alex Krishtop, trader, researcher, consultant in forex and futures

 Tuesday, September 8, 2015



Guys, please refrain from discussing a particular model's performance. There is the "Promotions" section in our group for this purpose.



 Rostant Ramlochan, Quantitative Analyst / Strategist: Asian and US/Canada markets

 Wednesday, September 9, 2015



@Jon Grah - re: the 'commonly used performance measures' that funds of funds use to compare strategies:

We used the following: annualized returns, volatility, Sharpe and Sortino ratios, Calmar ratio, % of months profitable, the 5 largest drawdowns (peak to trough) and time to recover (back past the previous high). If you plot cumulative returns, the shape of the graph can tell you a lot about long-term behaviour. Some use more complicated measures, but they are not standard across the board; if you look at a hedge fund's monthly report you can see the statistics funds report, and all of them can be derived from just the monthly returns. I also liked benchmarking against HF strategy indices (would this fund rank in the top 20% of funds with the same general type of strategy?), and also correlation to certain market indices and to other well-known funds.

Other things we looked at include leverage used, gross and net exposure, turnover, commissions paid as a % of profits, etc., but this was to get a better idea of how they were trading, not how well they were doing. This is a problem that has been looked at for a long time, with a fairly standard approach both quantitatively and qualitatively.
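
A sketch of the drawdown part of that checklist, assuming a monthly return series (an illustrative implementation, not Rostant's): the largest peak-to-trough drawdowns and the number of months needed to get back above the prior high.

import numpy as np

def largest_drawdowns(monthly_returns, top_n=5):
    """Largest peak-to-trough drawdowns and months back above the prior peak."""
    equity = np.cumprod(1 + np.asarray(monthly_returns))
    events = []
    peak, peak_idx = equity[0], 0
    trough = equity[0]
    in_dd = False
    for i, v in enumerate(equity):
        if v >= peak:
            if in_dd:
                # Drawdown recovered: record depth and months from peak to new high.
                events.append({"depth": trough / peak - 1,
                               "months_to_recover": i - peak_idx})
                in_dd = False
            peak, peak_idx = v, i
        else:
            if not in_dd:
                trough = v        # first month below the peak starts a new drawdown
            elif v < trough:
                trough = v
            in_dd = True
    if in_dd:
        # Still underwater at the end of the series.
        events.append({"depth": trough / peak - 1, "months_to_recover": None})
    # Deepest (most negative) drawdowns first.
    return sorted(events, key=lambda e: e["depth"])[:top_n]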



 Mark Brown mark@markbrown.com, Global Quantitative Financial Research, International Institutional Trading, Algorithmic Modeling.

 Friday, September 11, 2015



Big investors like low returns with close to zero risk; they have the money and just want to keep it. The higher the performance, the stronger the magnetism for gambler-type clients - all the way to lottos.



 William Schamp, President/Quantitative Analyst - Beacon Logic LLC

 Tuesday, September 15, 2015



In regards to evaluating an algorithm's performance (i.e., Outstanding, Mediocre, Poor), I suggest that a 4th alternative exists: "Needs-to-be-hidden". This is reserved for an algorithm so good and so risk-free that it should never be allowed to be traded, or even be in the hands of the majority of those seeking this level of profitability.



 Oleksandr Medviediev, Portfolio Analyst – FIBI

 Tuesday, September 15, 2015



@William, the only truly risk-free thing is interbank arbitrage. Forget it, it's a mystery - people talking... (it could be illegal too).

Any algo falls under "Needs-to-be-hidden" - after a non-disclosure agreement is signed ))

Anyhow, "Outstanding" should cover it all - we just need to pinpoint realistic limits.



 William Schamp, President/Quantitative Analyst - Beacon Logic LLC

 Tuesday, September 15, 2015



I should have said nearly risk-free. You are absolutely right . . . impossible . . . a fable . . . probably illegal, and an old wives' tale. Have a good day.



 Fadi Abdo, Managing Partner at FADinvest

 Saturday, September 19, 2015



There is no such thing as a risk-free model; even the big banks that make the market in forex get hammered hard sometimes. Yet I agree they have a better edge, by trying to profit from the flow of deals.



 William Schamp, President/Quantitative Analyst - Beacon Logic LLC

 Saturday, September 19, 2015



@Fadi . . . If you think the big banks have all of the answers, you are living on the wrong planet. FOREX is not an environment where anyone but the banks can have consistent success. As long as the big banks are allowed to hide transactions in the currency pairs, it will never be a level playing field. That isn't the case elsewhere, though.



 Shelly Baker, CFP, CTFA, CDFA, Assistant Vice President, Private Wealth Management at SunTrust

 Saturday, September 19, 2015



we



 Ignatius Bose, Trader/ Analyst

 Monday, September 21, 2015



One question, probably asked a million times: is there a particular profit/loss ratio that should be considered while developing an algo system?



 Zeev Ami, --

 Monday, September 21, 2015



Mr. William Schamp, let me ask you this: why not trade spot FX but use data from the futures market, where it is more transparent?



 William Schamp, President/Quantitative Analyst - Beacon Logic LLC

 Monday, September 21, 2015



@Zeev - They aren't the same. That would be like predicting the weather in Florida by looking at a weather map of Louisiana.
