Increase performance? - Printable Version

+- Gekko Forum (https://forum.gekko.wizb.it)
+-- Forum: Gekko (https://forum.gekko.wizb.it/forum-13.html)
+--- Forum: Technical Discussion (https://forum.gekko.wizb.it/forum-23.html)
+--- Thread: Increase performance? (/thread-1438.html)
Increase performance? - tommiehansen - 02-05-2018

Any idea where to start when debugging backtest performance? For example, I see that moment.js is used, and it seems to be used on every candle, which could be one of the culprits.

Article, Moment.js vs Native Performance: https://medium.com/@jerrylowm/moment-js-vs-native-performance-issues-5b85d6518014

RE: Increase performance? - askmike - 02-06-2018

I haven't done extensive analysis, but I have a feeling there are two main bottlenecks right now that keep the backtester from being fast:

- logging: all log messages are piped from the child process to the main process to stdout. This should be behind a debug flag or something.
- io: how candles are stored (in SQLite by default) and prepared when doing a backtest is definitely not optimized in any way. Especially in situations where people want to run backtests over the same data (with different strategy settings, for example), the candles could be calculated once and stored on disk.

I'd look at those before we micro-optimize things like moment (which should not be that hard; moment is only really used in the backtest-related parts of the codebase for logging purposes and such). Nonetheless, I am super open to anyone doing any kind of benchmark to figure out exactly what is slow.

RE: Increase performance? - tommiehansen - 02-06-2018

- logging: Yes, disabling logging increases performance, but not by much: from ~9s to ~7.8s using the same data. I use a simple this.debug true/false for my own stuff to enable or disable debugging messages, since one doesn't seem to be able to control the default debugging state within strategies.

- io: By default Gekko also seems to use journal mode FILE (instead of RAM), synchronous FULL etc., which aren't the best settings for fast READs, see e.g.: https://blog.devart.com/increasing-sqlite-performance.html WRITE is another thing; the setting(s) should be dynamic depending on whether you are writing or reading.
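For reference, this is the kind of read-oriented PRAGMA tuning the linked article describes. The values are illustrative, not Gekko's defaults, and they trade durability for speed, so they would only be safe for a disposable backtest read session, not for the importer writing new candles:

```sql
PRAGMA journal_mode = MEMORY;  -- keep the rollback journal in RAM instead of a file
PRAGMA synchronous = OFF;      -- don't fsync after writes; irrelevant for pure reads
PRAGMA temp_store = MEMORY;    -- temporary tables and indices in RAM
PRAGMA cache_size = -64000;    -- ~64 MB page cache (negative value = size in KiB)
```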
Also -- just getting all the data for over 1 year only takes around ~200ms (or ~450ms with ORDER BY id DESC) when querying the db manually (with bad settings). Just doing a large READ doesn't take that long even if the number of rows is over 1 million. One would also of course get all the data in a single query and not query the db all the time. With that in mind I really don't see how SQLite could be a culprit during a backtest, since we only have (or should have) a single and simple SELECT?

Simple perf check:

    var strat = {
        init: function() {
            // remember when the backtest started
            this.startTime = new Date();
        },
        end: function() {
            // elapsed wall-clock time since init()
            var seconds = (new Date() - this.startTime) / 1000,
                minutes = seconds / 60,
                str;
            if (minutes < 1) str = seconds + ' seconds';
            else str = minutes + ' minutes';
            log.debug('Finished in ' + str);
        }
    }

RE: Increase performance? - askmike - 02-07-2018

> - logging: Yes, disabling logging increases performance, but not by much: from ~9s to ~7.8s using the same data. I use a simple this.debug true/false for my own stuff to enable or disable debugging messages, since one doesn't seem to be able to control the default debugging state within strategies.

Can you try setting config.silent to true? This disables literally all logging; setting config.debug to false will additionally noop your strat's log function (if you have one). Keep in mind that logging might not affect your situation very much, but it might very well on other types of systems.

> Also -- just getting all the data for over 1 year only takes around ~200ms (or ~450ms with ORDER BY id DESC) when querying the db manually (with bad settings). [...] With that in mind I really don't see how SQLite could be a culprit during a backtest, since we only have (or should have) a single and simple SELECT?

Gekko doesn't query everything at once, but in batches.
This way Gekko does not need to store all the candles you need in memory (which would be hard on servers and embedded devices). You can configure config.backtest.batchSize to change this.

> From ~9s to ~7.8s using the same data.

Was this using your perf check? What candleSize and what dateRange was this?

RE: Increase performance? - tommiehansen - 02-07-2018

I have no idea what conditions I ran under when I posted that, so here are new tests with more defined settings. It is also impossible to answer your questions without a lot of data and comparisons, so....

Conditions

Candle Size: 15 (minutes)
System: Windows WSL (bash, my home laptop)
Strategy: RSI_BULL_BEAR.js
Data: USDT-XRP, 2017-12-01 - 2018-02-03 (Poloniex)
Perf measure method: custom function posted before

Settings (TOML)

# SMA Trends
SMA_long = 1000
SMA_short = 50

# BULL
BULL_RSI = 10
BULL_RSI_high = 80
BULL_RSI_low = 60

# BEAR
BEAR_RSI = 15
BEAR_RSI_high = 50
BEAR_RSI_low = 20

---

Perf, with logging (runs 1-3):
#1 10.877s
#2 11.2s
#3 10.844s

Perf, with custom this.debug @ false (which disables most messages):
#1 8.424s
#2 8.222s
#3 8.234s

+ config.backtest.batchSize = 2000 @ init():
#1 7.415s
#2 7.447s
#3 7.474s

+ config.backtest.batchSize = 10000000 (10 million) @ init():
#1 8.031s
#2 7.966s
#3 7.965s

Oddly enough, performance does not increase when loading every possible candle into RAM, which is what you say a very high batch size would do. It would seem there's another culprit. 1 million candles @ 1 min equals ~1.1 years; 10 million equals ~11-13 years.

+ config.backtest.batchSize = 2000 + config.silent = true @ init():
#1 7.616s
#2 7.515s
#3 7.466s

No change.

+ config.backtest.batchSize = 2000 + config.silent = true + config.debug = false @ init():
#1 7.788s
#2 7.511s
#3 7.37s

---

USDT-ETH 2016-01-01 18:17 - 2017-10-05 20:25 (1 year, 9 months, Poloniex)
All other settings same as with XRP.
All logging off + batchSize @ 20 million candles:
#1 1.38m (minutes)
#2 1.38m
#3 1.36m
Node MAX RAM usage: ~1580 MB

Looking at the Windows system performance monitor, I saw that RAM usage increased incrementally over the duration of the backtest when using huge batch sizes; it doesn't seem that Gekko loads everything into RAM at once (as one would want) even though batchSize is over 20 million? It's like the opposite of what one wants is happening.

All logging off + batchSize @ 2000 candles:
#1 1.28m
#2 1.23m
#3 1.21m
Node MAX RAM usage: ~82 MB

Oddly enough, smaller batch sizes are faster than larger ones. This doesn't make much sense, since if SQLite were the culprit, making 1x large SELECT would always be faster than making a lot of smaller ones.

All logging off + batchSize @ 1000 candles:
#1 1.20m
#2 1.19m
#3 1.20m
Node MAX RAM usage: ~79 MB

All logging off + no batchSize set (default):
#1 1.33m
#2 1.33m
#3 1.33m
Node MAX RAM usage: ~71 MB

---

I get why one would not want to have 20 million candles in RAM while running LIVE etc., but backtesting is something entirely different, since one of the main goals is to run it as fast as possible in order to test again with different settings. Durations of ~1.2-1.3 minutes may seem really fast when backtesting 1 year and 9 months of data, but imagine doing 10x of these in a row, since that is what one would want (test + change settings + test + change settings + test...). It quickly becomes very cumbersome.

Also, it would seem the DB isn't the culprit; that's why I initially wondered about moment.js, since if it is run on each candle it could be one of the culprits.

RE: Increase performance? - SirTificate - 02-14-2018

I'm not sure a Windows notebook running bash is a good choice for benchmarking. The bash emulation takes resources, and after a lot of backtesting your CPU might run into throttling, so you can't really rely on the results.
An idle server without power management would be a better place to measure performance.

RE: Increase performance? - tommiehansen - 02-14-2018

It's actually just as stable as the version I have at work running in a Docker container on one of the rack servers (the dual Xeon is a bit better at running multiple backtests at the same time, though, obviously). There's not much difference in speed for a regular test, since Gekko simply won't use that many resources (even if one really wants it to).

Anyway -- it's all relative, so it doesn't matter too much. I mostly wanted to know what it actually is that takes time, since there's little way to know that (short of debugging the entire core...).
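A minimal sketch of the kind of debugging tommiehansen is after: wrap the suspected hot paths in timers and aggregate where the time actually goes. The stage names and the work done inside them here are illustrative placeholders, not Gekko's real internals:

```javascript
// Accumulate wall-clock time per labelled stage using process.hrtime.bigint().
function timed(label, fn, totals) {
  const start = process.hrtime.bigint();
  const result = fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  totals[label] = (totals[label] || 0) + elapsedMs;
  return result;
}

// Illustrative stages standing in for candle fetching / indicator updates.
const totals = {};
for (let i = 0; i < 10000; i++) {
  timed('fetchCandle', () => ({ close: 100 + Math.sin(i) }), totals);
  timed('updateIndicators', () => Math.sqrt(i) * Math.log(i + 1), totals);
}
console.log(Object.keys(totals).sort()); // [ 'fetchCandle', 'updateIndicators' ]
```

Running a backtest with a harness like this (or with Node's built-in profiler, node --prof) would show whether candle IO, indicator math, or logging dominates, without having to debug the entire core.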