
TensorForce Bitcoin Trading Bot

Home Page: http://ocdevel.com/podcasts/machine-learning/26

License: GNU Affero General Public License v3.0

Python 42.55% JavaScript 11.08% HTML 0.22% CSS 0.19% Jupyter Notebook 45.95%

tforce_btc_trader's Introduction

TensorForce Bitcoin Trading Bot

Update 2018-08-14

Tag v0.1 has the code which follows this README. Tag v0.2 is a major overhaul after lessons learned in a finance job, and much of this README won't match the new code. I can't get tests to converge in either case, so something is fundamentally missing from this project - ie, don't count on making money (use it as a starting-point / education instead). I'm stepping away for a while and won't be very active here, but I'm not completely abandoning it.


Join the chat at https://gitter.im/lefnire/tforce_btc_trader

A TensorForce-based Bitcoin trading bot (algo-trader). Uses deep reinforcement learning to automatically buy/sell/hold BTC based on price history.

This project goes with Episode 26+ of Machine Learning Guide. Those episodes are a tutorial for this project, including an intro to deep RL, hyperparameter decisions, etc.

1. Setup

  • Python 3.6+ (I use template strings a lot)
  • Install & setup Postgres
    • Create two databases: btc_history and hyper_runs. You can call these whatever you want, and just use one db instead of two if you prefer (see Data section).
    • cp config.example.json config.json, and put those database names into config.json
  • Install TA-Lib manually.
  • pip install -r requirements.txt
    • If issues, try installing these deps manually.
  • Install TensorForce from git repo (constantly changing, we chase HEAD)
    • git clone https://github.com/lefnire/tensorforce.git
    • cd tensorforce && pip install -e .

Note: you'll wanna run this on a GPU rig with some RAM. I'm using a 1080ti and 16GB RAM; 8GB+ is often in use. You can use a standard PC, no GPU (CPU-only); in that case pip install -I tensorflow==1.5.0rc1 (instead of tensorflow-gpu). The only downside is performance; CPU is way slower than GPU for ConvNet computations. Worth evaluating this repo on a CPU before you decide "yeah, it's worth the upgrade."

2. Populate Data

  • Download mczielinski/bitcoin-historical-data
  • Extract to data/bitcoin-historical-data
  • python -c 'from data.data import setup_runs_table;setup_runs_table()'
    • if you get ModuleNotFoundError: No module named 'data.data', prefix commands with PYTHONPATH=. python ...
    • If you have trouble with that, just copy/paste the SQL from that file, execute against your hyper_runs DB from above.
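If the populate scripts give you trouble, the import itself is simple enough to do by hand. Here's a minimal sketch (assuming pandas and SQLAlchemy are installed; the `load_history` helper, table name, and column handling are illustrative - this is not the project's actual data/populate code, so adapt it to your config.json):

```python
# Hypothetical helper: stream the Kaggle minute-bar CSV into the history DB
# in chunks, so a multi-GB file doesn't blow out RAM.
import pandas as pd
from sqlalchemy import create_engine

def load_history(csv_path, db_url, table='coinbase', chunksize=100_000):
    engine = create_engine(db_url)
    rows = 0
    for chunk in pd.read_csv(csv_path, chunksize=chunksize):
        # Normalize column names like "Volume (BTC)" into SQL-friendly ones.
        chunk.columns = [c.lower().replace(' ', '_') for c in chunk.columns]
        chunk.to_sql(table, engine, if_exists='append', index=False)
        rows += len(chunk)
    return rows
```

Point `db_url` at your btc_history Postgres database (e.g. the URL from config.json).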

3. Hypersearch

The crux of practical reinforcement learning is finding the right hyper-parameter combo (things like neural-network width/depth, L1 / L2 / Dropout numbers, etc). Some papers have listed optimal default hypers. Eg, the Proximal Policy Optimization (PPO) paper has a set of good defaults. But in my experience, they don't work well for our purposes (time-series / trading). I'll keep my own "best defaults" updated in this project, but YMMV and you'll very likely need to try different hyper combos yourself. The file hypersearch.py will search hypers forever, ever honing in on better and better combos (using Bayesian Optimization (BO), see gp.py). See Hypersearch section below for more details.

python hypersearch.py

Optional flags:

  • --guess <int>: sometimes you don't want BO, which is pretty willy-nilly at first, to do the searching. Instead you want to try a hunch or two of your own first. See instructions in utils.py#guess_overrides.
  • --net-type <lstm|conv2d>: see discussion below (LSTM v CNN)
  • --boost: you can optionally use gradient boosting when searching for the best hyper combo, instead of BO. BO is more exploratory and thorough, gradient boosting is more "find the best solution now". I tend to use --boost after say 100 runs are in the database, since BO may still be dilly-dallying till 200-300 and daylight's burning. Boost will suck in the early runs.
  • --autoencode: many of you might hit some GPU RAM constraints (hypersearch crashes due to maxed memory). If so, use this flag. It dimensionality-reduces the price-history timesteps so more can fit into RAM. It does so destructively - think of lossy image compression - but might be required for your case. See #6 for info on what leads to mem-maxing.
  • --n-steps <int>, --n-tests <int>: vary how long to train and how often to report. n-steps is number of timesteps to train (in 10k; ie --n-steps 100 means 1M). n-tests is how many times to split that and report back to you / save an entry for viz.
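For example, the arithmetic behind those two flags (values hypothetical, and the report interval is my reading of "split that and report back"):

```python
n_steps, n_tests = 100, 20
total_timesteps = n_steps * 10_000          # --n-steps is in units of 10k timesteps
report_every = total_timesteps // n_tests   # report / save a viz entry this often
print(total_timesteps, report_every)        # 1000000 50000
```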

4. Run

Once you've found a good hyper combo from above (this could take days or weeks!), it's time to run your results.

python run.py --name <str>

  • --name <str> (required): name of the folder to save your run (during training) or load from (during --live/--test-live).
  • --id <int>: the id of some winning hyper-combo you want to run with. Without this, it'll run from the hard-coded hyper defaults.
  • --early-stop <int>: sometimes your models can overfit. In particular, PPO can give you great performance for a long time and then crash-and-burn. That kind of behavior will be obvious in your visualization (below), so you can tell your run to stop after x consecutive positive episodes (depends on the agent - some find an optimum and roll for 3 positive episodes, some 8, just eyeball your graph).
  • --live: whooa boy, time to put your agent on GDAX and make real trades! I'm gonna let you figure out how to plug it in on your own, 'cause that's danger territory. I ain't responsible for shit. In fact, let's make that real - disclaimer at the end of README.
  • --test-live: same as live, but without making the real trades. This will start monitoring a live-updated database (from config.json), same as live, but instead of making the actual trade, it pretends it did and reports back how much you would have made/lost. Dry-run. You'll definitely want to run this once or twice before running --live.

First, run python run.py [--id 10] --name test. This will train your model using run 10 (from hypersearch.py) and save to saves/test. Without --id it will use the hard-coded defaults. You can hit Ctrl-C once during training to kill training (in case you see a sweet-spot and don't want to overfit). Second, run python run.py [--id 10] --name test --[test-]live to run in live/test-live mode. If you used --id before, use it again here so that loading the model matches it to its net architecture.

5. Visualize

TensorForce comes pre-built with reward visualization in TensorBoard. Check out their Github, you'll see. I needed much more customization than that for viz, so we're not using TensorBoard. I created a mini Flask server (2 routes) and a D3/React dashboard where you can slice & dice hyper combos, visualize progression, etc. If you click on a single run, it'll display a graph of the buy/sell signals that agent took in a time-slice (test-set) so you can eyeball whether it's being smart.

  • Server: cd visualize;FLASK_APP=server.py flask run
  • Client:
    • cd visualize/client
    • npm install;npm install -g webpack-dev-server
    • npm start => localhost:8080


About

This project is a TensorForce-based Bitcoin trading bot (algo-trader). It uses deep reinforcement learning to automatically buy/sell/hold BTC based on what it learns about BTC price history. Most blogs / tutorials / boilerplate BTC trading-bots you'll find out there use supervised machine learning, likely an LSTM. That's well and good - supervised learning learns what makes a time-series tick so it can predict the next-step future. But that's where it stops. It says "the price will go up next", but it doesn't tell you what to do. Well that's simple, buy, right? Ah, buy low, sell high - it's not that simple. Thousands of lines of code go into trading rules, "if this then that" style. Reinforcement learning takes supervised to the next level - it embeds supervised within its architecture, and then decides what to do. It's beautiful stuff! Check out:

This project goes with Episode 26+ of Machine Learning Guide. Those episodes are a tutorial for this project, including an intro to deep RL, hyperparameter decisions, etc.

Data

For this project I recommend using the Kaggle dataset described in Setup. It's a really solid dataset, best I've found! I'm personally using a friend's live-ticker DB. Unfortunately you can't. It's his personal thing, he may one day open it up as a paid API or something, we'll see. There are also some files in data/populate which use the CryptoWat.ch API. Great API going forward, but it doesn't have the history you'll need to train on. If any of y'all find anything better than the Kaggle set, LMK.

So here's how this project splits up databases (see config.json). We start with a history DB, which has all the historical BTC prices for multiple exchanges. Import it, train on it. Then we have an optionally separate runs database, which saves the results of each of your hypersearch.py runs. This data is used by our BO or Boost algo to search for better hyper combos. You can have the runs table in your history database if you want, one-and-the-same. I have them separate because I want the history DB on localhost for performance reasons (it's a major perf difference, you'll see), and the runs DB publicly hosted, which allows me to collect runs from separate AWS p3.8xlarge running instances.

Then, when you're ready for live mode, you'll want a live database which is real-time, constantly collecting exchange ticker data. --live will handle keeping up with that database. Again, these can all 3 be the same database if you want, I'm just doing it my way for performance.

LSTM v CNN

You'll notice the --net-type <lstm|conv2d> flag in hypersearch.py and run.py. This selects between an LSTM recurrent neural network (RNN) and a convolutional neural network (CNN). I have them broken out of the hypersearch since they're so different they kinda deserve their own runs DB each - but if someone can consolidate them into the hypersearch framework, please do. You may be thinking, "BTC prices are a time-series, time-series means LSTM... why CNN?" It strangely turns out that LSTM doesn't do so hot here. In my own experience, in colleagues' experience, and in 2-3 papers I've read (here's one) - we're all coming to the same conclusion. We're not sure why... the running theory is vanishing/exploding gradients. LSTMs work well in NLP, which has some maximum 50-word sentences or so. LSTMs mitigated vanilla RNNs' vanishing/exploding gradients for such sentences, true - but BTC history is infinite (on-going). Maybe LSTM can only go so far with time-series. Another possibility is that deep reinforcement learning is most commonly researched, published, and open-sourced using CNNs. This is because RL is super video-game centric: self-driving cars, all the vision stuff. So maybe the math behind these models lends itself better to CNNs? Who knows. The point is - experiment with both. Report back on Github with your own findings.

So how does CNN even make sense for time-series? Well we construct an "image" of a time-slice, where the x-axis is time (obviously), the y-axis (height) is nothing... it's [1]. The z-axis (channels) is features (OHLCV, VWAP, bid/ask, etc). This is kinda like our agent literally looking at an image of price actions, like we do when day-trading, but a bit more robot-friendly / less human-friendly.
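Concretely, that "image" construction might look like this (a minimal numpy sketch; the function name is mine, not the project's actual preprocessing code):

```python
# Shape a (timesteps, features) window into the conv2d "image" described above:
# x-axis = time, y-axis (height) = 1, channels = features (OHLCV, VWAP, etc).
import numpy as np

def to_conv2d_input(window):
    """window: (timesteps, n_features) -> (timesteps, 1, n_features)."""
    arr = np.asarray(window, dtype=np.float32)
    return arr[:, np.newaxis, :]

img = to_conv2d_input(np.random.rand(400, 7))  # 400 timesteps, 7 features
# img.shape is (400, 1, 7): time x height-of-1 x feature-channels
```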

[Update March 04 2018]: I'm having better success recently w/ LSTMs and have made that the default. A change in TensorForce perhaps?

Reinforcement Models

TensorForce has all sorts of models you can play with. This project currently only supports Proximal Policy Optimization (PPO), but I encourage y'all to add in other models (esp VPG, TRPO, DDPG, ACKTR, etc) and submit PRs. ACKTR is the current state-of-the-art policy gradient model, but not yet available in TensorForce. PPO is the second-most state-of-the-art, so we're using that. TRPO is 3rd, VPG is old. DDPG I haven't put much thought into.

Those are the Policy Gradient models. Then there's the Q-Learning approaches (DQNs, etc). We're not using those because they only support discrete actions, not continuous actions. Our agent has one discrete action (buy|sell|hold), and one continuous action (how much?). Without that "how much" continuous flexibility, building an algo-trader would be... well, not so cool. You could do something like (discrete action = (buy-$200, sell-$200, hold)), but I dunno man... continuous is slicker.
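For reference, a mixed discrete + continuous action space in a TensorForce-style spec might be declared something like this (a sketch only - the key names and the $200 bound are illustrative, and the exact spec format depends on your TensorForce version):

```python
# One discrete head (what to do) plus one continuous head (how much).
actions = {
    'action': {'type': 'int', 'num_actions': 3},  # 0=buy, 1=sell, 2=hold
    'amount': {'type': 'float', 'min_value': 0.0, 'max_value': 200.0},  # USD to trade
}
```

A DQN-style agent would have to collapse this into a single discrete set like (buy-$200, sell-$200, hold), which is exactly the inflexibility described above.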

Hypersearch

You're likely familiar with grid search and random search when searching for optimal hyperparameters for machine learning models. Grid search tries literally every possible combo - exhaustive, but takes infinity years (especially w/ the number of hypers we work with in this project). Random search throws a dart at random hyper combos over and over, and you just kill it eventually and take the best. Super naive - it works ok for other ML setups, but in RL, hypers are make-or-break; more so than model selection. Seriously, I've found L1 / L2 / Dropout selection more consequential than PPO vs DQN, LSTM vs CNN, etc.

That's why we're using Bayesian Optimization (BO). Or sometimes you'll hear Gaussian Processes (GP), the thing you're optimizing with BO. See gp.py. BO starts off like random search, since it doesn't have anything to work with; and over time it hones in on the best hyper combo using Bayesian inference. Super meta - use ML to find the best hypers for your ML - but makes sense. Wait, why not use RL to find the best hypers? We could (and I tried), but deep RL takes 10s of thousands of runs before it starts converging; and each run takes some 8hrs. BO converges much quicker. I've also implemented my own flavor of hypersearch via Gradient Boosting (if you use --boost during training); more for my own experimentation.

We're using gp.py, which comes from thuijskens/bayesian-optimization. It uses scikit-learn's built-in GP functions. I also considered dedicated BO modules, like GPyOpt. I found gp.py easier to work with, but haven't compared its relative performance, nor its optimal hypers (yes, BO has its own hypers... it's turtles all the way down. But luckily I hear you can pretty safely use BO's defaults). If anyone wants to explore any of that territory, please do!
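If the loop is hard to picture, here's a toy 1-D version of the idea behind gp.py: fit a Gaussian process to the (hyper-combo, score) pairs seen so far, then propose the next combo by expected improvement. The objective below is a stand-in for "train an agent, return its loss", and all names are illustrative - this is a sketch of the technique, not the project's gp.py:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                       # stand-in for "train agent, return loss"
    return (x - 2.0) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(4, 1))      # a few random probes to seed the GP
y = objective(X).ravel()
grid = np.linspace(0, 5, 500).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    # Expected improvement (we're minimizing the objective).
    with np.errstate(divide='ignore', invalid='ignore'):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        ei[sigma == 0] = 0.0
    x_next = grid[np.argmax(ei)]        # most promising point so far
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next[0]))

best_x = X[np.argmin(y)][0]             # should hone in near the optimum at x = 2
```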

License: AGPLv3.0

GPL bit so we share our findings. Community effort, right? Boats and tides. Affero bit so we can all run our own trading instances w/ personal configs / mods. Heck, any of us could run this as a service / hedge fund. I'm pretty keen on this license, having used it in a prior internet company I'd founded; but if someone feels strongly about a different license, please open an issue & LMK - open to suggestions. See LICENSE.

Disclaimer

By using this code you accept all responsibility for money lost because of this code.

FYI, I haven't made a dime. Doubtful the project as-is will fly. It could benefit from add-ons, like some NLP fundamentals functionality. But it's a start!

tforce_btc_trader's People

Contributors

alirezaseifi, gitter-badger, lefnire, methenol, talhaasmal, tmorgan4, willex, yuntianlong2002


tforce_btc_trader's Issues

Error: Cannot find module 'webpack'

It shows an error while trying "5. Visualize".
What could be the cause of this issue?
Appreciate your help.
(The npm install; npm install -g webpack-dev-server command ran with NO ERROR.)

npm start => localhost:8080
module.js:478
throw err;
^

Error: Cannot find module 'webpack'
at Function.Module._resolveFilename (module.js:476:15)
at Function.Module._load (module.js:424:25)
at Module.require (module.js:504:17)
at require (internal/module.js:20:19)
at Object. (/usr/lib/node_modules/webpack-dev-server/lib/Server.js:22:17)
at Module._compile (module.js:577:32)
at Object.Module._extensions..js (module.js:586:10)
at Module.load (module.js:494:32)
at tryModuleLoad (module.js:453:12)
at Function.Module._load (module.js:445:3)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: webpack-dev-server "="
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

log
0 info it worked if it ends with ok
1 verbose cli [ '/usr/bin/node', '/usr/bin/npm', 'start', '=' ]
2 info using [email protected]
3 info using [email protected]
4 verbose run-script [ 'prestart', 'start', 'poststart' ]
5 info lifecycle [email protected]prestart: [email protected]
6 info lifecycle [email protected]start: [email protected]
7 verbose lifecycle [email protected]start: unsafe-perm in lifecycle true
8 verbose lifecycle [email protected]start: PATH: /usr/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/home/ubuntu/tforce_btc_trader/visualize/client/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
9 verbose lifecycle [email protected]start: CWD: /home/ubuntu/tforce_btc_trader/visualize/client
10 silly lifecycle [email protected]start: Args: [ '-c', 'webpack-dev-server "="' ]
11 silly lifecycle [email protected]start: Returned: code: 1 signal: null
12 info lifecycle [email protected]start: Failed to exec start script
13 verbose stack Error: [email protected] start: webpack-dev-server "="
13 verbose stack Exit status 1
13 verbose stack at EventEmitter. (/usr/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:285:16)
13 verbose stack at emitTwo (events.js:106:13)
13 verbose stack at EventEmitter.emit (events.js:191:7)
13 verbose stack at ChildProcess. (/usr/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
13 verbose stack at emitTwo (events.js:106:13)
13 verbose stack at ChildProcess.emit (events.js:191:7)
13 verbose stack at maybeClose (internal/child_process.js:920:16)
13 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:230:5)
14 verbose pkgid [email protected]
15 verbose cwd /home/ubuntu/tforce_btc_trader/visualize/client
16 verbose Linux 4.4.0-1052-aws
17 verbose argv "/usr/bin/node" "/usr/bin/npm" "start" "="
18 verbose node v6.13.1
19 verbose npm v5.7.1
20 error code ELIFECYCLE
21 error errno 1
22 error [email protected] start: webpack-dev-server "="
22 error Exit status 1
23 error Failed at the [email protected] start script.
23 error This is probably not a problem with npm. There is likely additional logging output above.
24 verbose exit [ 1, true ]

v0.2 Save trials object to resume on next run

It's going to be a few days before I can test this, but if anyone wants to give this a shot and report back, it would be greatly appreciated. As long as there isn't anything heavily nested being passed back in trials, this should work. Tested with hyperas but should work with hyperopt. Saving the pickle to a local file is a temporary solution until it can be pushed to a SQL table and pulled back down.

Somewhere at the top of hypersearch add:
import pickle

Then replace lines 344-346 with this:

    # Set initial max_evals and attempt to load a saved Trials object from pickle;
    # if that fails, start fresh. Grab how many trials were previously run and add
    # max_evals to it for the next run. This allows the hyperparameter search to
    # resume where it left off last.
    # TODO: save trials to a SQL table and restore from there instead of a local pickle.
    max_evals = 20
    try:
        with open('./trial.pickle', 'rb') as trial_pickle:
            trials = pickle.load(trial_pickle)
        max_evals += len(trials.trials)
    except (FileNotFoundError, EOFError, pickle.UnpicklingError):
        trials = Trials()

    best = fmin(loss_fn, space=space, algo=tpe.suggest, max_evals=max_evals, trials=trials)

    with open('./trial.pickle', 'wb') as f:
        pickle.dump(trials, f)

Hyperopt seems to support saving this data to MongoDB; however, we can probably get it into a JSON-friendly format and keep the data in a SQL table similar to the runs table:
https://github.com/hyperopt/hyperopt/wiki/Parallelizing-Evaluations-During-Search-via-MongoDB

Create TensorForce PR for manually closing agent/model

Create a PR to TensorForce adding this commit, which manually removes auto-closing agent/model w/in runner.py so that we can work with the model a bit after training (to test, to use in live-mode, etc) - we'll close manually.

The above commit is insufficient. Will need to change all their code instances currently using runner.py and add agent.close(); model.close() after runner.run(). This is the only commit in my fork required for this project, so getting that into upstream will remove my fork as a dep.

ModuleNotFoundError: No module named 'data.data'

I get the following issue when running the kaggle.py script:

Traceback (most recent call last):
  File "data/populate/kaggle.py", line 10, in <module>
    from data.data import engine
ModuleNotFoundError: No module named 'data.data'

Data Source

Just got here and read the readme. Data sourcing is no problem with BTC or any other crypto. We can get it straight from the exchanges for free. Check out the library CCXT.

Typeerror occurred at tensorforce 0.4.3 When run hypersearch.py

I just ran hypersearch.py with no arguments and got the following error.
I'm not familiar with tensorforce, but I know it happened at a statement in tensorforce/execution/runner.py which shows how episodes progress.

I could run hypersearch.py with my fixed tensorforce, in which I commented out this erroring statement, btw.

My environment

  • Tensorforce 0.4.3
  • Tensorflow-gpu 1.13.1
  • tforce_btc_trader v0.2

the error

Traceback (most recent call last):
  File "hypersearch.py",
 line 364, in <module>
    main()
  File "hypersearch.py", line 357, in main
    best = fmin(loss_fn, space=space, algo=tpe.suggest, max_evals=max_evals, trials=trials)
  File "/usr/local/lib/python3.6/site-packages/hyperopt/fmin.py", line 388, in fmin
    show_progressbar=show_progressbar,
  File "/usr/local/lib/python3.6/site-packages/hyperopt/base.py", line 639, in fmin
    show_progressbar=show_progressbar)
  File "/usr/local/lib/python3.6/site-packages/hyperopt/fmin.py", line 407, in fmin
    rval.exhaust()
  File "/usr/local/lib/python3.6/site-packages/hyperopt/fmin.py", line 262, in exhaust
    self.run(self.max_evals - n_done, block_until_done=self.asynchronous)
  File "/usr/local/lib/python3.6/site-packages/hyperopt/fmin.py", line 227, in run
    self.serial_evaluate()
  File "/usr/local/lib/python3.6/site-packages/hyperopt/fmin.py", line 141, in serial_evaluate
    result = self.domain.evaluate(spec, ctrl)
  File "/usr/local/lib/python3.6/site-packages/hyperopt/base.py", line 844, in evaluate
    rval = self.fn(pyll_rval)
  File "hypersearch.py", line 315, in loss_fn
    env.train_and_test(agent)
  File "/root/tforce_btc_trader/btc_env.py", line 255, in train_and_test
    runner.run(timesteps=train_steps)
  File "/root/tensorforce2/tensorforce/execution/runner.py", line 149, in run
    pbar.update(num_episodes - self.global_episode)
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'

kaggle.py: "ModuleNotFoundError: No module named 'data'"

Hey,

I'm trying to set the project up and am running into a problem when populating the data.

When running python data/populate/kaggle.py as described as part of step 2 I'm seeing this error message.

$ python data/populate/kaggle.py

Traceback (most recent call last):
  File "data/populate/kaggle.py", line 7, in <module>
    from data.data import engine
ModuleNotFoundError: No module named 'data'

My first guess was that it had to have something to do with the working directory. So i tried some different ones, but still can't get it to work.

Any of you guys having the same issue?

Cheers

Google Colab Tutorial not showing

For some reason when I import your new tutorial into Google Colab, only the headings show and none of the code. I can't figure it out.

Proper backtesting (stop/limit orders, trades based on bid/ask, etc)

Current implementation is very crude - just enough to ballpark strategy & hyper combo. Backtesting is custom-built. It relies on next-state close prices for PNL; buys/sells on close (instead of buying at ASK and selling at BID); only supports market-orders as such (no stop/limits); etc. Then there's more complex bits like order depth (how much can I buy at this price before that guy runs out and I have to fill the rest of that order at a different price). Very importantly, w/o stop/limit orders, there's no risk-control, which is half the game (could double the agent's performance). (Accounting for order types will require augmenting the Env's action-space).
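To see why close-price fills overstate PNL, compare a close-based round-trip with one that buys at the ask and sells at the bid (toy numbers and function names - this is an illustration of the issue, not real backtest code):

```python
# Buying at the ask and selling at the bid pays the spread on every round-trip;
# filling at close ignores that cost entirely.
def pnl_close(closes, qty=1.0):
    return (closes[-1] - closes[0]) * qty           # current crude backtest style

def pnl_bid_ask(closes, spread=0.5, qty=1.0):
    ask = closes[0] + spread / 2                    # buy at ask
    bid = closes[-1] - spread / 2                   # sell at bid
    return (bid - ask) * qty

closes = [100.0, 100.2, 100.4]
print(pnl_close(closes))     # optimistic: +0.4
print(pnl_bid_ask(closes))   # the spread eats the edge: -0.1
```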

All this could be solved w/ a backtesting library, and it's high-time. I've scoped out a few, my notes here. I'm leaning on backtrader currently, and possibly paired with ccxt

Adapt to tensorforce#memory

TensorForce's memory branch supposedly fixes a major flaw in the PPO implementation. It's a pretty wild-west branch for now, and I've started my own branch to follow it. I'm gonna put that on hold until they merge their branch into master and cut a release. We'll need to adapt to the new hyperparameters introduced (adding them to our hypersearch framework).

Add DDPG agent

Per #6 (comment), I'd like to try the DDPG RL agent (compared to PPO agent). DDPG hypers will need to be added to hypersearch, and likely some other code adjustments. I once had DQN support, when I removed it I may have tailored the code to be too PPO-centric.

Visualisation issue

I'm running node 8.11.1 on Ubuntu 16.04. The Flask server is running and returning on port 500, but when I run npm start I get a config issue.

npm start => localhost:8080
✖ 「wds」: Invalid configuration object. Webpack has been initialised using a configuration object that does not match the API schema.

  • configuration.entry should be one of these:
    object { : non-empty string | [non-empty string] } | non-empty string | [non-empty string] | function
    -> The entry point(s) of the compilation.

Switching to conv1d/ LSTM-FCN representation

  1. Perhaps switch to conv1d instead of conv2d layers in hyperparameter.py? I'm pretty sure conv1d is just a layer on top of conv2d, but it's probably more reliable if things change in the future, since I believe TensorForce calls TensorFlow's conv layers.

Have you tried using this representation:
https://arxiv.org/abs/1801.04503
for your data?

I'm working on a similar problem with equities, and this representation was giving me somewhat better results on the data. It gives the network two views of the data (detailed in the paper), which seems to help.

Live-mode NotImplementedError

This is a TODO, so I'm creating it as an issue to fix if I figure it out.

File "run.py", line 48, in main
    env.run_live(agent, test=args.test_live)
File "tforce_btc_trader/btc_env.py", line 438, in run_live
    self.run_deterministic(runner, print_results=True)
File "tforce_btc_trader/btc_env.py", line 389, in run_deterministic
    next_state, terminal, reward = self.execute(runner.agent.act(next_state, deterministic=True, independent=True))
File "tforce_btc_trader/btc_env.py", line 343, in execute
    raise NotImplementedError
NotImplementedError

Hint

See 6fc4ed2 for the prior live-mode code which worked. Much has changed since then and it won't work in that state, so I'm removing it and leaving it to you to fix (and submit a PR please!)

This is a question

Is it profitable, or just a test application? How does it do in real-world trading? I just downloaded it; I will try to run it in my spare time.

Use conv2d depth (channel / z-axis) instead of height (y-axis) for features

Currently the conv2d set uses time as the x-axis (naturally), and features as the y-axis (not-so-naturally). Then we have window/stride sort of finding patterns positionally. Makes perfect sense to have a window finding positional patterns on the time axis, but really the features should all boil down together, not in chunks. So TODO: experiment with features as channels/depth instead of height.
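The two layouts side by side (a toy numpy sketch; the variable names are mine):

```python
import numpy as np

window = np.random.rand(400, 7)          # 400 timesteps, 7 features (OHLCV etc)

# Current layout: features on the y-axis (height), a single channel, so the
# conv window/stride finds "positional" patterns across features.
as_height = window[:, :, np.newaxis]     # shape (400, 7, 1)

# Proposed layout: height of 1, features as channels/depth, so the window
# slides over time only and each filter mixes all features together.
as_channels = window[:, np.newaxis, :]   # shape (400, 1, 7)
```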

TypeError: __init__() got an unexpected keyword argument 'cell_clip'

It shows an error while trying python run.py --name test.
What could be the cause of this issue?
Appreciate your help.

Traceback (most recent call last):
  File "run.py", line 57, in <module>
    main()
  File "run.py", line 43, in main
    **hydrated
  File "c:\users\administrator\rl\tensorforce\tensorforce\agents\ppo_agent.py", line 151, in __init__
    entropy_regularization=entropy_regularization
  File "c:\users\administrator\rl\tensorforce\tensorforce\agents\learning_agent.py", line 149, in __init__
    batching_capacity=batching_capacity
  File "c:\users\administrator\rl\tensorforce\tensorforce\agents\agent.py", line 79, in __init__
    self.model = self.initialize_model()
  File "c:\users\administrator\rl\tensorforce\tensorforce\agents\ppo_agent.py", line 179, in initialize_model
    likelihood_ratio_clipping=self.likelihood_ratio_clipping
  File "c:\users\administrator\rl\tensorforce\tensorforce\models\pg_prob_ratio_model.py", line 88, in __init__
    gae_lambda=gae_lambda
  File "c:\users\administrator\rl\tensorforce\tensorforce\models\pg_model.py", line 95, in __init__
    requires_deterministic=False
  File "c:\users\administrator\rl\tensorforce\tensorforce\models\distribution_model.py", line 86, in __init__
    discount=discount
  File "c:\users\administrator\rl\tensorforce\tensorforce\models\memory_model.py", line 106, in __init__
    reward_preprocessing=reward_preprocessing
  File "c:\users\administrator\rl\tensorforce\tensorforce\models\model.py", line 200, in __init__
    self.setup()
  File "c:\users\administrator\rl\tensorforce\tensorforce\models\model.py", line 307, in setup
    self.initialize(custom_getter=custom_getter)
  File "c:\users\administrator\rl\tensorforce\tensorforce\models\pg_model.py", line 107, in initialize
    super(PGModel, self).initialize(custom_getter)
  File "c:\users\administrator\rl\tensorforce\tensorforce\models\distribution_model.py", line 93, in initialize
    kwargs=dict(summary_labels=self.summary_labels)
  File "c:\users\administrator\rl\tensorforce\tensorforce\core\networks\network.py", line 180, in from_spec
    kwargs=kwargs
  File "c:\users\administrator\rl\tensorforce\tensorforce\util.py", line 159, in get_object
    return obj(*args, **kwargs)
  File "C:\Users\Administrator\RL\tforce_btc_trader\hypersearch.py", line 155, in __init__
    super(CustomNet, self).__init__(layers_spec, **kwargs)
  File "c:\users\administrator\rl\tensorforce\tensorforce\core\networks\network.py", line 270, in __init__
    kwargs=dict(scope=scope, summary_labels=summary_labels)
  File "c:\users\administrator\rl\tensorforce\tensorforce\core\networks\layer.py", line 143, in from_spec
    kwargs=kwargs
  File "c:\users\administrator\rl\tensorforce\tensorforce\util.py", line 159, in get_object
    return obj(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'cell_clip'

run.py - TypeError: Argument 'obj' has incorrect type (expected list, got BoxList)

I am trying to get started with the project and I am getting the following error:

python run.py --name test
Traceback (most recent call last):
  File "run.py", line 57, in <module>
    main()
  File "run.py", line 50, in main
    env.train_and_test(agent, args.n_steps, args.n_tests, args.early_stop)
  File "/Users/bvz/Documents/tforce_btc_trader/btc_env.py", line 599, in train_and_test
    runner.run(timesteps=timesteps_each)
  File "/Users/bvz/Documents/tensorforce/tensorforce/execution/runner.py", line 126, in run
    state, terminal, reward = self.environment.execute(actions=action)
  File "/Users/bvz/Documents/tforce_btc_trader/btc_env.py", line 462, in execute
    custom = self.end_episode_score()
  File "/Users/bvz/Documents/tforce_btc_trader/btc_env.py", line 545, in end_episode_score
    sharpe = self.sharpe()
  File "/Users/bvz/Documents/tforce_btc_trader/btc_env.py", line 532, in sharpe
    diff = (pd.Series(totals.trade).pct_change() - pd.Series(totals.hold).pct_change())[1:]
  File "/Users/bvz/anaconda3/lib/python3.6/site-packages/pandas/core/series.py", line 227, in __init__
    raise_cast_failure=True)
  File "/Users/bvz/anaconda3/lib/python3.6/site-packages/pandas/core/series.py", line 2868, in _sanitize_array
    subarr = _possibly_convert_platform(data)
  File "/Users/bvz/anaconda3/lib/python3.6/site-packages/pandas/core/common.py", line 1002, in _possibly_convert_platform
    values = lib.list_to_object_array(values)
TypeError: Argument 'obj' has incorrect type (expected list, got BoxList)

I am working out of the memory branch.

I have installed all of the dependencies, gathered the data from kaggle, imported it into postgres, and updated the config.json.

Is this a data or dependency issue on my end?
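For what it's worth, the traceback suggests the older pandas version rejects a BoxList (the list subclass from the python-box package) where it expects a plain list. A hedged workaround sketch: coerce to a plain list before building the Series. Pure-Python stand-ins for pct_change keep the sketch self-contained (function names are mine, not project code):

```python
# Hypothetical workaround sketch: pandas rejects Box's BoxList subclass,
# so coerce with list() before doing Series-style math on it.

def pct_change(values):
    """Percent change between consecutive values; first entry is None."""
    values = list(values)  # BoxList (or any sequence) -> plain list
    out = [None]
    for prev, cur in zip(values, values[1:]):
        out.append((cur - prev) / prev)
    return out

def trade_minus_hold(trade, hold):
    """Element-wise difference of percent-changes, skipping the leading None."""
    t, h = pct_change(trade), pct_change(hold)
    return [a - b for a, b in zip(t[1:], h[1:])]
```

In btc_env.py's sharpe(), the equivalent one-line fix would be wrapping totals.trade / totals.hold in list() before passing them to pd.Series.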

Actions exploration

I'm working outside of hypersearch right now, so these are probably not ideal parameters. The model seems to become a little more robust to less-than-perfect parameters (and to the randomness of its initial state) when actions exploration is defined.

https://reinforce.io/blog/introduction-to-tensorforce/
actions_exploration=dict(
    type='ornstein_uhlenbeck',
    sigma=0.1,
    mu=0.0,
    theta=0.1
),

These parameters are from the example in the link above and are not optimized.

Any benefit to adding parameters for actions exploration to hypersearch?
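For context, Ornstein-Uhlenbeck exploration adds temporally correlated noise to continuous actions: theta pulls the noise back toward mu and sigma scales the random kicks, so consecutive actions are perturbed smoothly rather than independently. A minimal sketch of the process itself (not TensorForce's implementation):

```python
import random

class OrnsteinUhlenbeckNoise:
    """Temporally correlated exploration noise: dx = theta*(mu - x) + sigma*N(0,1)."""

    def __init__(self, mu=0.0, sigma=0.1, theta=0.1, seed=None):
        self.mu, self.sigma, self.theta = mu, sigma, theta
        self.state = mu
        self.rng = random.Random(seed)

    def sample(self):
        # Mean-reverting step plus a Gaussian kick.
        dx = self.theta * (self.mu - self.state) + self.sigma * self.rng.gauss(0.0, 1.0)
        self.state += dx
        return self.state
```

During training you'd add noise.sample() to each continuous action; at test time the noise is dropped.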

Table runs does not exist

Running hypersearch.py is giving me this error:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "runs" does not exist
LINE 1: select hypers, returns from runs where flag='conv2d'
^
[SQL: 'select hypers, returns from runs where flag=%(f)s'] [parameters: {'f': 'conv2d'}]

Which looks to me like the runs table does not exist in hyper_runs, which makes sense: nothing in the code seems to create it.

I've created the table, but I don't know what data types to use.

Update: I used bigint[] for everything apart from agent and flag, which are text, and it works!

Update 2: I'm not seeing any runs written to the table, and given the data I see on the console I assume my data types are very wrong. A note on what the correct data types are would be great.

FYI using Ubuntu 16.04, python 3.6
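For anyone else hitting this: I don't have the canonical schema at hand, but based on the query (select hypers, returns from runs where flag=...) and the update above, a plausible guess is hypers as jsonb, returns as a float array rather than bigint[], and agent/flag as text. A hypothetical sketch; any column not named in the query or the update is a guess:

```sql
-- Hypothetical schema guess for the hyper_runs database; only hypers,
-- returns, flag (and agent, per the update above) are confirmed columns.
create table if not exists runs (
  id serial primary key,
  hypers jsonb not null,        -- the sampled hyperparameter dict
  returns double precision[],   -- per-episode returns are floats, not bigint
  agent text,
  flag text
);
```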

Volatile GPU-Util is low

I am testing with a GTX 1070. While running python hypersearch.py, nvidia-smi shows Volatile GPU-Util is very low (only 6%), with Perf = P2 and 7773MB of GPU memory in use.

Hypersearch is also not much faster than on a CPU; I expected roughly a 10x speedup. Is that expected?

best regards

AttributeError: 'NoneType' object has no attribute 'run'

I just ran python hypersearch.py directly and got the following error. Digging in, I found that self.monitored_session.run is called many times and eventually the underlying session is None. I am not familiar with TensorForce and don't know what happened. Any help is appreciated.


Traceback (most recent call last):
  File "hypersearch.py", line 772, in <module>
    main()
  File "hypersearch.py", line 768, in main
    y_list=Y
  File "/home/RL/tforce_btc_trader/gp.py", line 193, in bayesian_optimisation2
    y_list.append(loss_fn(params))
  File "hypersearch.py", line 719, in loss_fn
    reward = hsearch.execute(vec2hypers(params))
  File "hypersearch.py", line 547, in execute
    env.train_and_test(agent, self.cli_args.n_steps, self.cli_args.n_tests, -1)
  File "/home/RL/tforce_btc_trader/btc_env.py", line 487, in train_and_test
    self.run_deterministic(runner, print_results=True)
  File "/home/RL/tforce_btc_trader/btc_env.py", line 471, in run_deterministic
    runner.agent.act(next_state, deterministic=False)
  File "/home/anaconda3/lib/python3.6/site-packages/tensorforce/agents/agent.py", line 145, in act
    deterministic=deterministic
  File "/home/anaconda3/lib/python3.6/site-packages/tensorforce/models/model.py", line 1268, in act
    actions, internals, timestep = self.monitored_session.run(fetches=fetches, feed_dict=feed_dict)
  File "/home/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 536, in run
    return self._sess.run(fetches,
AttributeError: 'NoneType' object has no attribute 'run'

(psycopg2.ProgrammingError) relation "coinbase" does not exist

I'm getting the following error when running python3 run.py --id 10 --name test --test

  File "tforce_btc_trader/env/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "coinbase" does not exist
LINE 1: ...se.weighted_price as coinbase_weighted_price from coinbase o...
                                                             ^
 [SQL: 'select coinbase.open as coinbase_open, coinbase.high as coinbase_high, coinbase.low as coinbase_low, coinbase.close as coinbase_close, coinbase.volume_btc as coinbase_volume_btc, coinbase.volume_currency as coinbase_volume_currency, coinbase.weighted_price as coinbase_weighted_price from coinbase order by timestamp desc limit 100000 offset 0'] (Background on this error at: http://sqlalche.me/e/f405)
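The table is expected to come from importing the Kaggle BTC minute-history CSV into the btc_history database (see the Data section of the README). As a rough illustration of the shape of that import — sqlite3 stands in for the project's Postgres here, and the column names just follow the SELECT in the error, so treat this as a sketch rather than the project's actual loader:

```python
import csv
import io
import sqlite3

# Hypothetical import sketch: the error means the price-history table was
# never created. The real project loads the Kaggle minute-data CSV into
# Postgres; sqlite3 is used here only as a self-contained stand-in.
DDL = """create table if not exists coinbase (
    timestamp integer, open real, high real, low real, close real,
    volume_btc real, volume_currency real, weighted_price real)"""

def import_history(conn, csv_file):
    """Create the table, bulk-insert rows (skipping the header), return row count."""
    conn.execute(DDL)
    rows = [tuple(r) for r in csv.reader(csv_file)]
    conn.executemany("insert into coinbase values (?,?,?,?,?,?,?,?)", rows[1:])
    conn.commit()
    return conn.execute("select count(*) from coinbase").fetchone()[0]

# Tiny demo with an in-memory CSV (the real data comes from the Kaggle dump):
SAMPLE = ("timestamp,open,high,low,close,volume_btc,volume_currency,weighted_price\n"
          "1,10,11,9,10.5,1,10,10.2\n"
          "2,10.5,12,10,11,2,22,10.9\n")
conn = sqlite3.connect(":memory:")
n_rows = import_history(conn, io.StringIO(SAMPLE))
```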

Try ray/RLlib

[Update 2018-07-27] It seems Coach has slowed down (without much community) and rllab has stopped. A more recently popular framework is rllib (one letter different from rllab).


I'd like to try replacing TensorForce with Coach and see if we get any better performance. Use Clipped PPO for an apples-to-apples comparison; instructions on converting our btc_env.py here.

History: before landing on TensorForce I tried rll/rllab and openai/baselines. Baselines is backed by OpenAI, the company behind half these algorithms (they're behind PPO, the model we're using). But baselines isn't a plug-n-play framework intended for developer use; it's a dumping ground for each paper's sample code. I couldn't get any of it customized to our use case; all runs eventually resulted in NaNs everywhere. Coach is new; no hunch as to whether it'll outperform, though it's backed by Intel, which bodes well. I want to give it a whirl but don't have time right now, so if anyone wants to take a stab at it, please do!

sqlalchemy.exc.ResourceClosedError when running hypersearch.py

Hi,

I get the following error and stacktrace when running hypersearch.py. I'm using Python 3.6.4 64-bit on Windows 10 with Postgres 10.1.3 64-bit. I've installed all the requirements in the requirements.txt file with the same version as specified, but I still get the following error.

2018-02-02 22:59:09.443734: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
2018-02-02 22:59:09.708364: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1105] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7845
pciBusID: 0000:01:00.0
totalMemory: 8.00GiB freeMemory: 6.64GiB
2018-02-02 22:59:09.708511: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
row_count: 1574274
Traceback (most recent call last):
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\engine\base.py", line 1112, in _execute_context
conn = self.__connection
AttributeError: 'Connection' object has no attribute '_Connection__connection'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\engine\base.py", line 1114, in _execute_context
conn = self._revalidate_connection()
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\engine\base.py", line 429, in _revalidate_connection
raise exc.ResourceClosedError("This Connection is closed")
sqlalchemy.exc.ResourceClosedError: This Connection is closed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "hypersearch.py", line 775, in <module>
main()
File "hypersearch.py", line 771, in main
y_list=Y
File "I:\toolkits\tforce_btc_trader_clean\gp.py", line 193, in bayesian_optimisation2
y_list.append(loss_fn(params))
File "hypersearch.py", line 722, in loss_fn
reward = hsearch.execute(vec2hypers(params))
File "hypersearch.py", line 549, in execute
env.train_and_test(agent, self.cli_args.n_steps, self.cli_args.n_tests, -1)
File "I:\toolkits\tforce_btc_trader_clean\btc_env.py", line 466, in train_and_test
self.use_dataset(Mode.TEST)
File "I:\toolkits\tforce_btc_trader_clean\btc_env.py", line 273, in use_dataset
df = data.db_to_dataframe(self.conn, limit=limit, offset=offset, arbitrage=self.hypers.arbitrage)
File "I:\toolkits\tforce_btc_trader_clean\data\data.py", line 201, in _db_to_dataframe_main
df = pd.read_sql_query(query, conn).iloc[::-1]
File "i:\python_venv\tforce_btc_trader\lib\site-packages\pandas\io\sql.py", line 332, in read_sql_query
parse_dates=parse_dates, chunksize=chunksize)
File "i:\python_venv\tforce_btc_trader\lib\site-packages\pandas\io\sql.py", line 1087, in read_query
result = self.execute(*args)
File "i:\python_venv\tforce_btc_trader\lib\site-packages\pandas\io\sql.py", line 978, in execute
return self.connectable.execute(*args, **kwargs)
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\engine\base.py", line 939, in execute
return self._execute_text(object, multiparams, params)
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\engine\base.py", line 1097, in _execute_text
statement, parameters
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\engine\base.py", line 1121, in _execute_context
None, None)
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\engine\base.py", line 1402, in _handle_dbapi_exception
exc_info
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\util\compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\util\compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\engine\base.py", line 1114, in _execute_context
conn = self._revalidate_connection()
File "i:\python_venv\tforce_btc_trader\lib\site-packages\sqlalchemy\engine\base.py", line 429, in _revalidate_connection
raise exc.ResourceClosedError("This Connection is closed")
sqlalchemy.exc.StatementError: (sqlalchemy.exc.ResourceClosedError) This Connection is closed [SQL: 'select coinbase.open as coinbase_open, coinbase.high as coinbase_high, coinbase.low as coinbase_low, coinbase.close as coinbase_close, coinbase.volume_btc as coinbase_volume_btc, coinbase.volume_currency as coinbase_volume_currency, coinbase.weighted_price as coinbase_weighted_price, coincheck.open as coincheck_open, coincheck.high as coincheck_high, coincheck.low as coincheck_low, coincheck.close as coincheck_close, coincheck.volume_btc as coincheck_volume_btc, coincheck.volume_currency as coincheck_volume_currency, coincheck.weighted_price as coincheck_weighted_price from coinbase\n left join lateral (\n select open, high, low, close, volume_btc, volume_currency, weighted_price\n from coincheck\n where coincheck.timestamp <= coinbase.timestamp\n order by coincheck.timestamp desc\n limit 1 \n ) coincheck on true\n order by coinbase.timestamp desc limit 157427 offset 1416846']

Is this a Postgres version issue, or something related to Windows? It seems to happen when running the SELECT statement for the test data.
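One defensive pattern worth trying (a sketch, not the project's actual fix; sqlite3 stands in for the SQLAlchemy/Postgres engine): open a short-lived connection per query instead of holding one long-lived connection that can end up closed between the train and test dataset loads:

```python
import sqlite3
from contextlib import contextmanager

# Hypothetical defensive pattern: rather than reusing self.conn across
# use_dataset() calls (where it can be closed out from under us), open a
# fresh connection per query and close it deterministically.
@contextmanager
def fresh_conn(db=":memory:"):
    conn = sqlite3.connect(db)
    try:
        yield conn
    finally:
        conn.close()

def fetch_one(query, db=":memory:"):
    """Run a query on a short-lived connection and return the first row."""
    with fresh_conn(db) as conn:
        return conn.execute(query).fetchone()
```

The cost is reconnect overhead per query; for this project's few large reads per train/test cycle that's negligible compared to a ResourceClosedError mid-run.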

IndexError when running hypersearch.py

Hi,

The following error occurs right at the end of a training run when running hypersearch.py with no other arguments:

100% Custom: 0.885 Sharpe: 0.000 Return: 0.000 Trades: 0[<0] 10000[=0] 0[>0]
Running no-kill test-set
Traceback (most recent call last):
File "hypersearch.py", line 852, in <module>
main()
File "hypersearch.py", line 848, in main
y_list=Y
File "I:\toolkits\tforce_btc_trader\gp.py", line 193, in bayesian_optimisation2
y_list.append(loss_fn(params))
File "hypersearch.py", line 799, in loss_fn
reward = hsearch.execute(vec2hypers(params))
File "hypersearch.py", line 624, in execute
env.train_and_test(agent, self.cli_args.n_steps, self.cli_args.n_tests, -1)
File "I:\toolkits\tforce_btc_trader\btc_env.py", line 619, in train_and_test
self.run_deterministic(runner, print_results=True)
File "I:\toolkits\tforce_btc_trader\btc_env.py", line 587, in run_deterministic
next_state, terminal, reward = self.execute(runner.agent.act(next_state, deterministic=True))
File "I:\toolkits\tforce_btc_trader\btc_env.py", line 425, in execute
pct_change = self.prices_diff[step_acc.i + 1]
IndexError: index 12864 is out of bounds for axis 0 with size 12864
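The traceback shows the deterministic test run reading prices_diff[step_acc.i + 1] one index past the end of the array. A hypothetical guard (the function name is mine, not project code) is to treat the step whose lookahead would run off the array as terminal:

```python
# Hypothetical guard for the off-by-one in btc_env.execute(): the test run
# reads prices_diff[i + 1], which overruns on the final step. Terminating
# when the next index would leave the array avoids the IndexError.
def step_pct_change(prices_diff, i):
    """Return (pct_change, terminal). Terminal when i + 1 is out of bounds."""
    if i + 1 >= len(prices_diff):
        return 0.0, True
    return prices_diff[i + 1], False
```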

Write unit tests for everything

Going on faith that hypersearch.py / run.py are even acting the way they're supposed to. We really, truly need unit tests in this project. This ain't no weekend project; this is money we're dealing with.
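As a starting point, here's the shape such tests could take with stdlib unittest; buy_and_hold_return is a stand-in for btc_env's accounting logic, not actual project code:

```python
import unittest

# Hypothetical example of the kind of test this project is missing;
# buy_and_hold_return stands in for the env's accounting/reward logic.
def buy_and_hold_return(prices):
    """Total return from buying at the first price and holding to the last."""
    if len(prices) < 2:
        return 0.0
    return prices[-1] / prices[0] - 1.0

class TestAccounting(unittest.TestCase):
    def test_positive_return(self):
        self.assertAlmostEqual(buy_and_hold_return([100, 150]), 0.5)

    def test_degenerate_series(self):
        self.assertEqual(buy_and_hold_return([100]), 0.0)
```

The real targets would be the reward calculation, the train/test split, and the step accounting in btc_env.py, since bugs there silently corrupt every hypersearch result built on top of them.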

Python hung (likely because of too many psycopg2 connections) during attempted implementation of https://github.com/deepmind/scalable_agent

The graph assembles just fine; after finalization, once training starts, one of the threads (the main thread, I assume?) gets hung. This is the traceback:

-------------------- Thread 4590249408 --------------------
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/threading.py", line 884, in _bootstrap
self._bootstrap_inner()
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/site-packages/tensorflow/python/training/queue_runner_impl.py", line 257, in _run
enqueue_callable()
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1257, in _single_operation_run
self._call_tf_sessionrun(None, {}, [], target_list, None)
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)

I also get this traceback for the same thread (is it getting hung here as well?):

File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/threading.py", line 884, in _bootstrap
self._bootstrap_inner()
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/site-packages/tensorflow/python/training/queue_runner_impl.py", line 293, in _close_on_stop
coord.wait_for_stop()
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/site-packages/tensorflow/python/training/coordinator.py", line 311, in wait_for_stop
return self._stop_event.wait(timeout)
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/threading.py", line 551, in wait
signaled = self._cond.wait(timeout)
File "/Users/hughalessi/miniconda3/envs/rl/lib/python3.6/threading.py", line 295, in wait
waiter.acquire()

Thread traceback acquired using https://gist.github.com/niccokunzmann/6038331. I'm assuming this is because each instance of btc_env creates a new psycopg2 connection to the history database hosted by PostgreSQL, but I don't know how to fix it.

Although I know this is an outside project, any insight would be greatly appreciated. If the DeepMind IMPALA implementation demonstrates decent results, I'd be happy to share the implementation here if I can get it working.
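One mitigation sketch (not a tested fix): share a single bounded pool across env instances instead of opening a fresh psycopg2 connection per btc_env, capping the total number of open connections. sqlite3 stands in for psycopg2 to keep the example self-contained:

```python
import queue
import sqlite3

# Hypothetical bounded connection pool: every btc_env instance would check
# a connection out and return it, instead of opening its own. When the pool
# is exhausted, acquire() blocks (or times out) rather than opening more.
class ConnectionPool:
    def __init__(self, factory, size=4):
        self._q = queue.Queue(maxsize=size)
        for _ in range(size):
            self._q.put(factory())

    def acquire(self, timeout=5.0):
        # Blocks until a connection is returned; raises queue.Empty on timeout.
        return self._q.get(timeout=timeout)

    def release(self, conn):
        self._q.put(conn)
```

With psycopg2 you'd likely just use the library's own psycopg2.pool.ThreadedConnectionPool instead, but the idea is the same: a hard ceiling on connections so a hung thread can't exhaust Postgres.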

TypeError: __init__() got an unexpected keyword argument 'cell_clip'

It shows this error when running python run.py --name test (the command from lefnire/tforce_btc_trader.git).
What could be the cause of this problem?
Appreciate your help.

Traceback (most recent call last):
File "run.py", line 57, in <module>
main()
File "run.py", line 43, in main
**hydrated
File "c:\users\administrator\rl\tensorforce\tensorforce\agents\ppo_agent.py", line 151, in __init__
entropy_regularization=entropy_regularization
File "c:\users\administrator\rl\tensorforce\tensorforce\agents\learning_agent.py", line 149, in __init__
batching_capacity=batching_capacity
File "c:\users\administrator\rl\tensorforce\tensorforce\agents\agent.py", line 79, in __init__
self.model = self.initialize_model()
File "c:\users\administrator\rl\tensorforce\tensorforce\agents\ppo_agent.py", line 179, in initialize_model
likelihood_ratio_clipping=self.likelihood_ratio_clipping
File "c:\users\administrator\rl\tensorforce\tensorforce\models\pg_prob_ratio_model.py", line 88, in __init__
gae_lambda=gae_lambda
File "c:\users\administrator\rl\tensorforce\tensorforce\models\pg_model.py", line 95, in __init__
requires_deterministic=False
File "c:\users\administrator\rl\tensorforce\tensorforce\models\distribution_model.py", line 86, in __init__
discount=discount
File "c:\users\administrator\rl\tensorforce\tensorforce\models\memory_model.py", line 106, in __init__
reward_preprocessing=reward_preprocessing
File "c:\users\administrator\rl\tensorforce\tensorforce\models\model.py", line 200, in __init__
self.setup()
File "c:\users\administrator\rl\tensorforce\tensorforce\models\model.py", line 307, in setup
self.initialize(custom_getter=custom_getter)
File "c:\users\administrator\rl\tensorforce\tensorforce\models\pg_model.py", line 107, in initialize
super(PGModel, self).initialize(custom_getter)
File "c:\users\administrator\rl\tensorforce\tensorforce\models\distribution_model.py", line 93, in initialize
kwargs=dict(summary_labels=self.summary_labels)
File "c:\users\administrator\rl\tensorforce\tensorforce\core\networks\network.py", line 180, in from_spec
kwargs=kwargs
File "c:\users\administrator\rl\tensorforce\tensorforce\util.py", line 159, in get_object
return obj(*args, **kwargs)
File "C:\Users\Administrator\RL\tforce_btc_trader\hypersearch.py", line 155, in __init__
super(CustomNet, self).__init__(layers_spec, **kwargs)
File "c:\users\administrator\rl\tensorforce\tensorforce\core\networks\network.py", line 270, in __init__
kwargs=dict(scope=scope, summary_labels=summary_labels)
File "c:\users\administrator\rl\tensorforce\tensorforce\core\networks\layer.py", line 143, in from_spec
kwargs=kwargs
File "c:\users\administrator\rl\tensorforce\tensorforce\util.py", line 159, in get_object
return obj(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'cell_clip'
