[SHARE] GAB - Gekko Automated Backtests
#8
(03-24-2018, 09:22 AM)simpsus Wrote: Now that I got it to work I can start asking content related questions.

Any Strategy is only as good as the past data you test it with. Or in other words, we are fitting strategies to past data. The success depends on the correlation between past and future. Nobody knows the future, so we try to make the correlation as robust as possible. 

Is that the reason for averaging?
Do you plan to automate the fitting further? I.e. automatically iterating one parameter. At the moment I just change a 50 to 40:60,10 and so on.

If so, then you could also iterate over strategies to detect fundamental changes in the market and switch to a better-fitting strategy: bigger-scale trend changes, increases in volatility, etc.

Good questions,

Past vs Future

Yes, and thus we should not read too much into a single backtest. People posting insane results for a single backtest are quite annoying, since they do not understand that a single backtest is worth next to nothing. I could post results that yield 2+ TRILLION percent profit, but I simply don't, since some users somehow expect such insane results to be doable in the future. It only leads to a lot of silly questions that I simply don't have time for (I do appreciate good questions though!).

But... if you have 1000+ runs over a larger spread of values that give you an "OK -- this shit actually was profitable", you should, statistically, have something that is much better than just a single run. It's the same in science, where one test yielding a positive outcome isn't considered valid, since we only have one data point from one setup that has proved it. The level of certainty is commonly expressed as "sigma".

This is a very scientific topic though, so I won't go too deep; for further reading see e.g.:
http://www.physics.org/article-questions.asp?id=103

This is also the reason why one can sort by parameters other than "Most profitable": "Most profitable" just means most profitable for that specific combination of parameters at those specific points in time. I consider the "most profitable" to be the ceiling of what is possible, not something close to "these params will yield this result", since that would, statistically, be a very uneducated stance (in short: it would be insane).

Please note, though, that if you optimize parameters too quickly you might also be overfitting the strategy (Google 'backtest overfitting').

This is a typical example of overfitting:
1. Use the tool to do 100 runs (way too few)
2. Stop it and check the parameters of the most profitable run
3. Use those as a baseline and do +/-10 on the params
4. All runs profit!

Good way:
long = 100:2000,100
short = 10:90,10

The bad way:
long = 100:2000,1
short = 10:90,1

Good: The good version uses large stepping, which gives us an idea, an "in the ballpark" value, which is what we are after.
Bad: The bad version only tries to find the most profitable result and doesn't care about the "in the ballpark" value (we don't know the future, so the 100% best backtest matters less). It will also yield a lot of near-identical results, and thus the average of "this works" will be worse.
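To make the stepping concrete, here's a small sketch (not part of GAB; `expand_range` is a made-up helper) that expands the "A:B,C" range syntax and counts how many backtests each sweep would launch:

```python
def expand_range(spec):
    """Expand 'A:B,C' (min:max,step) into a list of values; a plain number stays fixed."""
    if ":" not in spec:
        return [int(spec)]
    lo_hi, step = spec.split(",")
    lo, hi = lo_hi.split(":")
    return list(range(int(lo), int(hi) + 1, int(step)))

# Coarse stepping: few runs, gives an "in the ballpark" picture.
good = len(expand_range("100:2000,100")) * len(expand_range("10:90,10"))
# Fine stepping: a huge grid that mostly invites overfitting.
bad = len(expand_range("100:2000,1")) * len(expand_range("10:90,1"))

print(good)  # 180 runs
print(bad)   # 153981 runs
```

The point: the coarse grid covers the same parameter space with ~850x fewer runs, and each result tells you something about a whole region instead of a single point.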

-

Averaging the results

Yes -- this is the reason the average is interesting. Since a single backtest does not matter, one optimally wants an average of at least 1000 runs. That average will most probably be more future-proof, since an average of, say, 1000 runs is something that has proven to work 1000 times (and not just once).

If we get a little bit more scientific, I would even say that one needs at least 10,000 runs in order to have some level of certainty.
You might now understand why I needed a tool to do this automatically; imagine doing this in the regular Gekko UI... there are nicer ways to commit suicide.
1000 runs should get one in the ballpark though, since it starts painting a pretty clear picture of what works and what doesn't.
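As a toy illustration (the profit numbers below are made up, not real backtest output), the average tells a very different story than the single best run:

```python
# Hypothetical profits (%) from a batch of backtest runs -- made-up numbers.
runs = [12.0, -3.5, 8.1, 25.0, 4.2, -1.0, 9.9, 15.3, 2.7, 6.4]

best = max(runs)                 # the "most profitable" ceiling
average = sum(runs) / len(runs)  # what this parameter region delivers on average

print(best)               # 25.0
print(round(average, 2))  # 7.91
```

The 25% run is the ceiling; the ~8% average is the statistic that actually says something about the parameter region.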

Here's an example of using the RBB ADX strategy with a XMR/USDT pair:
https://i.imgur.com/RCKYm0k.png


So 1.4 trillion profit?!
No -- it's safe to say that the 1.4 trillion result is basically garbage, because -- see the rest of the 19 runs? All of those, together, are much more important. The 1.4T+ result is an outlier, an odd set of params that happened to yield the best profit in this specific case. It should be treated as such.
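A quick sketch of why the outlier barely matters (again with made-up numbers): a robust statistic like the median hardly moves, while the mean gets completely dominated:

```python
import statistics

# 20 modest runs plus one absurd outlier -- all numbers are invented.
profits = [4.0, 7.5, -2.0, 11.0, 3.3] * 4  # 20 "normal" runs (percent profit)
profits[0] = 1.4e12                        # the 1.4-trillion freak result

mean = statistics.mean(profits)
median = statistics.median(profits)

print(median)       # 4.0 -- a fair picture of the typical run
print(mean > 1e10)  # True -- the mean is wrecked by the single outlier
```

This is also why trimming or down-weighting extreme results before averaging tends to give a more honest picture of a parameter region.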

The average part of the 'view' will also be expanded upon.
I'm rewriting the core at the moment, so expect the next version of the tool to have breaking changes (meaning old results will not work, since things such as the database columns have changed and the parsing will change to reflect that).

-

Iterate-per-param

No -- there are no plans to iterate over one parameter at a time, changing it one after the other, since I don't see the point. If you just want to vary a single parameter, make that specific parameter dynamic (A:B,C) and leave the rest static.
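For example, to sweep only `short` while keeping `long` fixed (the values here are just placeholders, using the same range syntax as above):

```
long = 500
short = 10:90,5
```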

-

Auto-change strategy

This is outside the scope of the tool, since it doesn't actually do any of the testing itself; it just sends params to Gekko to test.
If one wants such functionality, one would have to code a strategy that works that way. There have been examples of people doing that, so it should be possible.


Messages In This Thread
RE: [SHARE] GAB - Gekko Automated Backtests - by tommiehansen - 03-24-2018, 09:46 AM
