Complete Gekko datasets - ready-made files to download and use
#1
Ready-to-use Gekko SQLite database dumps, with no need to import via Gekko's importer and the exchange APIs.
Just copy a file to the history directory and you have, for example, the full history of the Binance exchange. The files are updated daily after 23:15 GMT.
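For example (a rough sketch only - the archive name and Gekko path are assumptions, adjust them to your setup):
Code:
unzip binance.zip                    # archive downloaded from the links below
cp binance_0.1.db ~/gekko/history/   # Gekko reads candle databases from history/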

Included
- Binance - FULL history
- Poloniex - all pairs - in progress
- Kraken - all pairs - in progress

Download
Links and all up-to-date information are on GitHub: https://github.com/xFFFFF/Gekko-Datasets

Soon
- Kraken all pairs
- Bitfinex all pairs
- Poloniex all pairs
- Gdax all pairs

How can I import pairs from Coinfalcon, BTCC and Bitstamp?
#2
Thank you very much. Without these dumps, half or more of the strategies could not be tested, because we simply could not reach that data. Now you can rest a little.
#3
Awesome, thank you!
#4
Thank you.

------------------------------------------
binance-btc  = binance_0.1.db
binance-usdt = binance_0.1.db

The file names are the same, so how can I put both of them in the history directory?
Thank you.
#5
You can merge the files in an external app. This one supports merging databases: https://www.codeproject.com/Articles/220...re-Utility
Here is a terminal solution: http://sqlite.1065341.n5.nabble.com/How-...19362.html
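For the terminal route, the idea boils down to something like this (just a sketch; the file and table names are examples - rename one of the dumps first so both files can exist side by side, and check your actual table names with .tables):
Code:
# merge candle tables from one Gekko sqlite dump into another
sqlite3 binance_0.1.db <<'SQL'
ATTACH DATABASE 'binance_usdt_0.1.db' AS src;
-- copy a pair table that only exists in the attached dump (name is an example)
CREATE TABLE candles_USDT_BTC AS SELECT * FROM src.candles_USDT_BTC;
DETACH DATABASE src;
SQL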

A database that is too large will decrease Gekko UI performance on every database scan.

In my opinion the best solution is to use the Gekko CLI and, for a separate binance-bnb database, create a new folder history_bnb and set something like this in config.js:
Code:
config.adapter = 'sqlite';
config.sqlite = {
  path: 'plugins/sqlite',
  dataDirectory: 'history_bnb',
  version: 0.1,
  dependencies: []
}
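Then run Gekko headless with that config, for example (just an illustration of the CLI usage):
Code:
# Gekko will read candles from history_bnb/ instead of history/
node gekko --config config.js --backtest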
#6
(04-04-2018, 10:14 PM)xFFFFF Wrote: Ready-to-use Gekko SQLite database dumps, with no need to import via Gekko's importer and the exchange APIs.
Just copy a file to the history directory and you have, for example, the full history of the Binance exchange. The files are updated daily after 23:15 GMT.
Great work bro!
#7
Great,

You could use my CLI tools (or copy them) to upload, and users could use the download part to fetch the data easily (and then simply refresh via git and pull your latest data by just doing git pull && . download.sh).

One of the benefits is that it creates a single tar.gz archive, automatically uploads it to transfer.sh and updates a simple txt file with a link to the latest upload. The limit on transfer.sh is 10 GB per file, but the compressed files usually take up 30-40% of the original size, so it should be good for at least 20 GB of data (if not, one could easily split it into multiple archives).
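Roughly, the upload part does something like this (a simplified sketch, not the exact script; file names are placeholders):
Code:
# pack the datasets and push them to transfer.sh, saving the link for downloaders
tar -czf datasets.tar.gz history/
curl --upload-file datasets.tar.gz https://transfer.sh/datasets.tar.gz > latest_upload.txt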

1. $ git clone https://github.com/tommiehansen/gekko_tools
2. Go to the cli dir within the gekko_tools folder
3. Copy the sample config file:
$ cp config.sample.sh config.sh
4. Edit config.sh and set your Gekko directory if it differs (it defaults to ../../gekko/, two levels up from the cli directory)
5. Run:
$ . upload.sh
#8
I don't understand. What is the difference between my solution and yours? Did you see my script? https://github.com/xFFFFF/Gekko-Datasets...atasets.sh
#9
The difference for the upload part is that it becomes a single file, which is much easier to download than X number of different files, and it doesn't use Google Drive and doesn't need Perl either.

Also, users could just clone the git repository and do $ . download.sh to refresh the dataset, uncompress it and add it to their Gekko directory, all with one single command.
So basically what I already wrote.
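Something along these lines (only a sketch of the idea, not the actual download.sh; the Gekko path and file names are assumptions):
Code:
GEKKO_DIR="../../gekko"               # adjust to your Gekko directory
URL=$(cat latest_upload.txt)          # link written by the upload step
curl -L "$URL" -o datasets.tar.gz
tar -xzf datasets.tar.gz -C "$GEKKO_DIR"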
#10
Now users can choose which pairs they want to download; they do not have to download 4 GB of data. I did that on purpose. I do not see the point in writing a download script, because the user can do it with one command: wget https://drive.google.com/file/d/18izvqp3...sp=sharing && unzip binance.zip && cp binance_0.1.db gekko/history

Update
GDAX EUR pairs added to gdrive

