I have no idea what conditions I ran under when I posted that, so here are new tests with more clearly defined settings.
It's also impossible to answer your questions without a lot of data and comparisons, so....
Conditions
Candle Size: 15 (minutes)
System: Windows WSL (bash, my home laptop)
Strategy:
RSI_BULL_BEAR.js
Data: USDT-XRP, 2017-12-01 - 2018-02-03 (Poloniex)
Perf measure method: Custom function posted before
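Not the exact function (see earlier post), but the idea is just a plain wall-clock measurement in Node, something like:

```js
// Minimal wall-clock timer sketch; not the exact function referenced above.
const start = process.hrtime();

// ...run the backtest / code being measured here...

const [sec, nano] = process.hrtime(start);
console.log(`Elapsed: ${(sec + nano / 1e9).toFixed(3)}s`);
```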
Settings (TOML)
```toml
# SMA Trends
SMA_long = 1000
SMA_short = 50

# BULL
BULL_RSI = 10
BULL_RSI_high = 80
BULL_RSI_low = 60

# BEAR
BEAR_RSI = 15
BEAR_RSI_high = 50
BEAR_RSI_low = 20
```
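For context, these TOML values end up on this.settings inside the strategy. A simplified sketch of how they are consumed (the real RSI_BULL_BEAR.js does quite a bit more than this):

```js
// Simplified sketch of how the TOML values above reach a Gekko strategy
// via this.settings; not the full RSI_BULL_BEAR.js.
const strat = {};

strat.init = function () {
  this.addIndicator('maSlow', 'SMA', this.settings.SMA_long);
  this.addIndicator('maFast', 'SMA', this.settings.SMA_short);
  this.addIndicator('BULL_RSI', 'RSI', { interval: this.settings.BULL_RSI });
  this.addIndicator('BEAR_RSI', 'RSI', { interval: this.settings.BEAR_RSI });
};

module.exports = strat;
```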
---
Perf, with logging (runs 1-3):
#1 10.877s
#2 11.2s
#3 10.844s
Perf, with custom this.debug @ false (which disables most messages):
#1 8.424s
#2 8.222s
#3 8.234s
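The this.debug flag is nothing fancy; it just wraps the strategy's own log calls so they can be switched off during backtests. Roughly like this (sketch only):

```js
// Sketch of the custom debug flag: strategy log output is only produced
// when this.debug is true, so backtests run without console spam.
const log = require('../core/log');

const strat = {};

strat.init = function () {
  this.debug = false; // flip to true to get per-candle log messages again
};

strat.check = function (candle) {
  if (this.debug) log.debug('candle close:', candle.close);
  // ...actual advice logic goes here...
};

module.exports = strat;
```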
+ config.backtest.batchSize = 2000 @ init():
#1 7.415s
#2 7.447s
#3 7.474s
+ config.backtest.batchSize = 10000000 (10 million) @ init():
#1 8.031s
#2 7.966s
#3 7.965s
Oddly enough, performance is not increased when loading every possible candle into RAM, which you say the batch size would do if set very high. It would seem there's another culprit. 1 million candles @ 1 min equals roughly 1.9 years of data, and 10 million roughly 19 years, so 10 million easily covers this whole dataset.
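For reference, the conversion I'm using (candles stored at 1-minute resolution):

```js
// Rough sanity check: how much history N one-minute candles cover.
const minutesPerYear = 60 * 24 * 365;                    // 525,600
console.log((1e6 / minutesPerYear).toFixed(1), 'years'); // ~1.9
console.log((1e7 / minutesPerYear).toFixed(1), 'years'); // ~19.0
```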
+ config.backtest.batchSize = 2000 + config.silent = true @ init():
#1 7.616s
#2 7.515s
#3 7.466s
No change.
+ config.backtest.batchSize = 2000 + config.silent = true + config.debug = false @ init():
#1 7.788s
#2 7.511s
#3 7.37s
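For anyone wanting to replicate: the overrides above are set from the strategy's init(), roughly like this (sketch; assumes Gekko's core/util getConfig() helper is reachable from the strategies folder):

```js
// Sketch of the config overrides used in the runs above. The require path
// assumes the file lives in Gekko's strategies/ folder.
const util = require('../core/util');
const config = util.getConfig();

const strat = {};

strat.init = function () {
  config.backtest.batchSize = 2000; // candles pulled from SQLite per batch
  config.silent = true;             // suppress most console output
  config.debug = false;             // turn off core debug logging
  // ...normal indicator setup follows...
};

module.exports = strat;
```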
---
USDT-ETH 2016-01-01 18:17 - 2017-10-05 20:25 (1 year, 9 months, Poloniex)
All other settings same as with XRP.
All logging off + batchSize @ 20 million candles:
#1 1.38m (minutes)
#2 1.38m
#3 1.36m
Node MAX RAM-usage: ~1580 MB
Looking at the Windows system performance monitor, I saw that the amount of RAM used increased
incrementally over the duration of the backtest when using huge batch sizes; it doesn't seem that Gekko loads everything into RAM at once (as one would want) even though batchSize is over 20 million. It's like the opposite of what one wants is happening.
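Side note: the RAM numbers here are from the Windows performance monitor; the same thing can be sampled from inside Node, something like:

```js
// Periodically sample Node's resident set size (RSS) during a backtest.
let maxRss = 0;
const timer = setInterval(() => {
  const rss = process.memoryUsage().rss / 1024 / 1024; // MB
  if (rss > maxRss) maxRss = rss;
  console.log(`RSS: ${rss.toFixed(0)} MB (max ${maxRss.toFixed(0)} MB)`);
}, 5000);
timer.unref(); // don't keep the process alive just for the sampler
```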
All logging off + batchSize @ 2000 candles:
#1 1.28m
#2 1.23m
#3 1.21m
Node MAX RAM-usage: ~82 MB
Oddly enough, smaller batch sizes are faster than larger ones. This doesn't make much sense, since if SQLite were the culprit, making one large SELECT should always be faster than making a lot of smaller ones.
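To actually test that claim outside of Gekko, something like the sketch below could be used (better-sqlite3 here just for convenience, it is not what Gekko itself uses; the table/column names and timestamps are my assumptions about the candle store and may need adjusting):

```js
// Rough comparison: one big SELECT vs. many 2000-candle SELECTs against a
// Gekko history database. Table/column names are assumptions.
const Database = require('better-sqlite3');
const db = new Database('history/poloniex_0.1.db', { readonly: true });

const TABLE = 'candles_USDT_ETH';
const FROM = 1451606400; // ~2016-01-01 (unix seconds)
const TO = 1507235100;   // ~2017-10-05 20:25

console.time('one big SELECT');
db.prepare(`SELECT * FROM ${TABLE} WHERE start BETWEEN ? AND ?`).all(FROM, TO);
console.timeEnd('one big SELECT');

console.time('many SELECTs of 2000 candles');
const stmt = db.prepare(`SELECT * FROM ${TABLE} WHERE start >= ? AND start < ?`);
const rows = [];
for (let t = FROM; t <= TO; t += 2000 * 60) {
  rows.push(...stmt.all(t, t + 2000 * 60));
}
console.timeEnd('many SELECTs of 2000 candles');
```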
All logging off + batchSize @ 1000 candles:
#1 1.20m
#2 1.19m
#3 1.20m
Node MAX RAM-usage: ~79 MB
All logging off + no batchSize set (default):
#1 1.33m
#2 1.33m
#3 1.33m
Node MAX RAM-usage: ~71 MB
---
I get why one would not want to have 20 million candles in RAM while running it LIVE etc., but backtesting is something entirely different, since one of the things one wants there is to run it as fast as possible in order to test again with different settings.
Durations of ~1.2-1.3 minutes may also seem really fast when backtesting 1 year and 9 months of data, but imagine doing 10 of these in a row, since that is what one would want (test + change settings + test + change settings + test ...). It quickly becomes very cumbersome.
Also, it would seem the DB isn't the culprit. That's why I initially wondered about moment.js, since if it is run on each candle it could be one of the culprits.
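A quick way to check the moment.js suspicion in isolation (micro-benchmark only, not representative of Gekko's real code path):

```js
// Micro-benchmark: constructing a moment object per candle vs. a plain Date.
const moment = require('moment');

const N = 1e6; // roughly 1.9 years of 1-minute candles
const base = Date.now();

console.time('moment() per candle');
for (let i = 0; i < N; i++) moment(base + i * 60000);
console.timeEnd('moment() per candle');

console.time('new Date() per candle');
for (let i = 0; i < N; i++) new Date(base + i * 60000);
console.timeEnd('new Date() per candle');
```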