
pytrader's Introduction

pytrader

Build Status

What

pytrader is a cryptocurrency trading robot that uses machine learning to predict price movements at confidence intervals, and sometimes executes trades. It is programmed to work on the poloniex.com cryptocurrency exchange.

Pretty much, this:

I (@owocki) built this as a side project in January / February 2016, as a practical means of getting some experience with machine learning, quantitative finance, and of course hopefully making some profit ;).


3/20/2017 - This project has been put on ice by its contributors. Comment here if you would like to see a revival of development => #80

3/26/2016 - My test portfolio was initialized with a 1 BTC deposit, and after 2 months and 23,413 trades, exited with 0.955 BTC. The system paid 2.486 BTC in fees to poloniex. CALL TO ACTION -- Get this trader to profitability. A strategy is being fleshed out here.

3/27/2016 - Sign up for the pytrader slack channel here.

4/3/2016 - New documentation -- How to trade with pytrader


How

pytrader uses pybrain and sklearn to make trade (buy/sell/hold) decisions, and then acts upon them.

1. sklearn classifiers

Supported classifiers are as follows:

["Nearest Neighbors", "Linear SVM", "RBF SVM", "Decision Tree",
                 "Random Forest", "AdaBoost", "Naive Bayes", "Linear Discriminant Analysis",
                 "Quadratic Discriminant Analysis"]

Here's an example of a Decision Tree classifier being used to make a buy (blue), sell (red), or hold (green) decision on the BTC_ETH pair.

and here's a Naive Bayes classifier for the USDT_BTC pair

On both graphs, the x axis is the most recent price movement and the y axis is the previous price movement, the length of which is determined by a parameter called granularity. These graphs show only the last two price movements. The graphing library is constrained to two-dimensional space, but you could generate a classifier that acts upon n price movements (n-dimensional space).
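As a sketch of that feature construction (hypothetical helper names; the repo's actual code lives in its models and management commands), turning a price series into rows of the last `datasetinputs` movements plus a buy/sell/hold label might look like:

```python
# Hypothetical sketch: turning a raw price series into (features, label)
# rows for a buy/sell/hold classifier. `datasetinputs` is the number of
# past movements used as features, as described above; the names mirror
# the prose, not the repo's API.

def make_dataset(prices, datasetinputs=2):
    """Each row: the last `datasetinputs` percent movements; label: next move."""
    movements = [(b - a) / a for a, b in zip(prices, prices[1:])]
    rows = []
    for i in range(datasetinputs, len(movements)):
        features = movements[i - datasetinputs:i]
        nxt = movements[i]
        label = 'BUY' if nxt > 0 else ('SELL' if nxt < 0 else 'HOLD')
        rows.append((features, label))
    return rows

rows = make_dataset([100, 101, 100, 102, 102], datasetinputs=2)
```

With `datasetinputs=2` this matches the two-dimensional graphs above; raising it yields the n-dimensional case the text mentions.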

There are many different parameters one could use to train a ClassifierTest. This problem space is enumerated by the management command predict_many_sk.py. For each permutation of parameters, a percent_correct value is generated against actual price-movement data. Using this brute-force methodology, we are able to discover which classifiers are up to the job of trading.

By testing and tuning various parameters to the ClassifierTest, I was able to consistently predict buy/sell/hold movements 55-65% of the time, depending upon the currency pair and the parameters to the test.
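The brute-force enumeration can be sketched as follows; the grid values here are illustrative, not the exact ones predict_many_sk.py uses:

```python
# Hedged sketch of the brute-force search predict_many_sk.py performs:
# enumerate every permutation of the parameter grid, then (in the real
# system) score each against held-out price data to get percent_correct.
import itertools

param_grid = {
    'name': ['Decision Tree', 'Naive Bayes', 'AdaBoost'],
    'datasetinputs': [2, 3, 5],
    'granularity': [15, 30, 60],
    'minutes_back': [500, 1000],
}

def enumerate_configs(grid):
    """Yield one dict per permutation of the grid's values."""
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(enumerate_configs(param_grid))
# 3 classifiers x 3 input sizes x 3 granularities x 2 lookbacks = 54 tests
```

Each resulting config would become one ClassifierTest; the best percent_correct values identify the configurations worth promoting into trade.py.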

2. pybrain neural networks

In addition to using sklearn Classifiers, Pybrain Supervised Learning tools were used to predict price movement. This is represented in the data model as a PredictionTest, and the problem space is enumerated in predict_many_v2.py. By testing and tuning various parameters in the pybrain NN, I was able to consistently predict directional price movements around 55% of the time.
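The quoted ~55% figure is a directional hit rate; a minimal sketch of that metric (assumed, matching the prose rather than the repo's exact code):

```python
# Directional accuracy: the fraction of intervals where the predictor
# called the sign of the next price movement correctly.

def directional_accuracy(predicted, actual):
    hits = sum(
        1 for p, a in zip(predicted, actual)
        if (p > 0) == (a > 0)  # same direction, regardless of magnitude
    )
    return hits / len(actual)

acc = directional_accuracy([0.1, -0.2, 0.3, 0.1], [0.05, -0.1, -0.2, 0.2])
# 3 of 4 signs match -> 0.75
```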

Database model

Administration & Optimization

Administration of this tool is primarily done through the django admin.

There's a series of graphs in the admin that show trades and portfolio profitability over time.

... and allow the graphical debugging of trade decisions ...

... and allow the tuning of PredictionTests and ClassifierTests ...

by each of the native pybrain (PredictionTest) and sklearn (ClassifierTest) parameters...

Once a NN or classifier is found that is better than what is being used, trade.py is updated with the most profitable configurations.

        self.predictor_configs = [
            {'type' : 'nn',
                'name' : 'ETH / 5',
                'symbol': 'BTC_ETH',
                'weight' : 0.1,
                'granularity' : granularity,
                'datasetinputs': 5},
            {'type' : 'nn',
                'name' : 'ETH / 5',
                'symbol': 'BTC_ETH',
                'weight' : 0.1,
                'granularity' : granularity,
                'datasetinputs': 4},
            {'type' : 'classifier',
                'symbol': 'USDT_BTC',
                'name' : 'AdaBoost',
                'weight' : 0.1,
                'granularity' : granularity,
                'datasetinputs' : 2,
                'minutes_back': 1000},
            {'type' : 'classifier',
                'symbol': 'USDT_BTC',
                'name' : 'Naive Bayes',
                'weight' : 0.1,
                'granularity' : granularity,
                'datasetinputs' : 2,
                'minutes_back': 1000},
            {'type' : 'classifier',
                'symbol': 'BTC_ETH',
                'name' : 'Naive Bayes',
                'weight' : 2,
                'granularity' : granularity * 3,
                'datasetinputs' : 2,
                'minutes_back': 1000},
        ]

trade.py

trade.py is the system's always-running trading engine. At a high level, it creates and trains ClassifierTests and PredictionTests based upon the most profitable indicators. Once those tests are trained, it runs a loop that makes trades when a certain confidence threshold is reached amongst its self.predictor_configs.
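A minimal sketch of that weighted-vote loop, under assumed semantics (the real trade.py logic is more involved):

```python
# Assumed semantics: each predictor in predictor_configs votes
# BUY/SELL/HOLD with its configured weight; a trade fires only if the
# winning side clears a confidence threshold.

def decide(votes, threshold=0.5):
    """votes: list of (recommendation, weight). Returns an action or 'HOLD'."""
    totals = {}
    for rec, weight in votes:
        totals[rec] = totals.get(rec, 0.0) + weight
    total_weight = sum(totals.values())
    best = max(totals, key=totals.get)
    confidence = totals[best] / total_weight
    return best if confidence >= threshold else 'HOLD'

# e.g. three light predictors and one heavily weighted Naive Bayes vote
action = decide([('BUY', 0.1), ('BUY', 0.1), ('SELL', 0.1), ('BUY', 2.0)])
```

This mirrors the weight fields in the config above: the BTC_ETH Naive Bayes entry (weight 2) can dominate the 0.1-weight predictors.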

Why open source this?

Although I am able to predict price movements with a degree of accuracy that beats random, I was never able to build a robot that traded profitably after fees, especially after poloniex changed their fee structure.

My test portfolio was initialized with a 1 BTC deposit, and after 2 months and 23,413 trades, exited with 0.955 BTC. The system paid 2.486 BTC in fees to poloniex.

The code is not perfect. This was a pre-product/market-fit side project. Please feel free to open an Issue if you do not understand something. CALL TO ACTION -- Get this trader to profitability. A strategy is being fleshed out here.

Deployment

After you've cloned the repo, you'll want to create a pypolo/local_settings.py file with the following information in it:

import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

API_KEY = "<POLO_API_KEY>"
API_SECRET = "<POLO_SECRET>"

# Additional Django apps you wish to enable only in debug mode.
DEBUG_APPS = []

# this defines whether trade.py will actually submit trades to the poloniex API. Setting it to `False` is useful for testing.
MAKE_TRADES = True

# the following 4 lines are needed only if you want to be alerted of fail cases (when the trader is not running, etc)
ALERT_EMAIL = '<your_email>'
SMTP_USERNAME = '<smtp_user>'
SMTP_HOST = '<smtp_host>'
SMTP_PASSWORD = '<smtp_pass>'

LOG_FILE = '/var/log/django.log'

# Number of threads to be used for predictions. Set to CPU cores + 1 on a dedicated machine.
NUM_THREADS = 5

# only required for pull_twitter.py
# get this info @ https://apps.twitter.com/
TWITTER_CONSUMER_KEY = ''
TWITTER_CONSUMER_SECRET = ''
TWITTER_ACCESS_TOKEN_KEY = ''
TWITTER_ACCESS_TOKEN_SECRET = ''
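The NUM_THREADS suggestion above ("CPU cores + 1 if dedicated machine") can be derived rather than hardcoded, e.g.:

```python
# Compute the suggested NUM_THREADS value for a dedicated machine
# instead of hardcoding it in local_settings.py.
import multiprocessing

NUM_THREADS = multiprocessing.cpu_count() + 1
```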


install your requirements

pip install -r requirements.txt

set up your database.. here are some sample DB configs (place in pypolo/local_settings.py):

#postgres

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # or 'mysql', 'sqlite3', 'oracle'
        'NAME': 'trader',     # or path to database file if using sqlite3
        # The following settings are not used with sqlite3:
        'USER': 'trader',
        'PASSWORD': '<pw>',
        'HOST': '127.0.0.1',  # empty for localhost via domain sockets, '127.0.0.1' for TCP
        'PORT': '',           # empty string for default
        'ATOMIC_REQUESTS': True,
    },
}

#sqlite

import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

run migration commands

./manage.py syncdb
./manage.py migrate

and then install the system crontab

crontab scripts/crontab.txt

... and your system is installed.

Once enough Price objects are stored in the database, you'll be able to begin training your NN / classifiers. (see example command ./manage.py pull_prices below or download a seed database of prices here).

How do I optimize pytrader's trades?

See the next document, How to trade with pytrader.

Docker dev setup

Install docker (at least version 1.10.3) and docker-compose (at least version 1.6.2).

  • initialize your environment:
cp docker-compose.yml.example docker-compose.yml
cp docker/env.example docker/env
cp pypolo/local_settings.py.example pypolo/local_settings.py
  • Add your POLONIEX_API_KEY and POLONIEX_API_SECRET to docker/env (it's gitignored, don't worry)
  • Set your PYTRADER_LOGIN and PYTRADER_PASSWORD in docker/env. This sets the login you use to access the site.
  • Build Docker image (compiling stuff for scipy and numpy takes time): docker-compose build or pull the images from Docker Hub: docker-compose pull
  • Run the containers: docker-compose up
  • Seed your db with newest dump from http://repo.snipanet.com:
docker exec -it pytrader_web_1 /root/pytrader/scripts/load_newest_data.sh

Get shell in pytrader container

docker exec -it pytrader_web_1 /bin/bash

Postgres dumps

Seed dumps are available from http://repo.snipanet.com. Thanks Snipa22!


pytrader's Issues

Migrations do not migrate

I've tried running the shipped migrations and it seems that they may have gone out of sync with one another at some point, around migration 12 or 13. I tried removing the migrations directory and running makemigrations anew, and everything seems to run well, so it may be good to do this permanently in the repo: ditch the existing migration files and ship a single base migration, with any further OSS changes applied on top.

It looks like the only data migrations in place are:

  • 0013_auto_20160222_2046.py
  • 0025_auto_20160224_1450.py
  • 0027_auto_20160224_1543.py
  • 0031_auto_20160224_2219.py

All of which appear to be simple updates for existing data during development, and are irrelevant to any newly created databases.

Further analysis of initial results

My test portfolio was initialized with a 1 BTC deposit, and after 2 months and 23,413 trades, exited with 0.955 BTC. The system paid 2.486 BTC in fees to poloniex.

Investigate whether I was lucky or good at trading.

  1. Were my results just because ETH had a spectacular rise ( #5 (comment) ) ?
  2. How would my results have changed if I was on a low/no-volume exchange?

Bonus points
3. How does this affect how we should robustly backtest our trading algos against past data?

Contributions Business Model

Hi @owocki ,

As I told you, I was checking your code. I didn't deploy it yet, but I will. I have some knowledge of AI and was trying to understand the approach. Anyhow, I think you may be training your machine with poor info; as I understood it, you are doing that with the Poloniex trading book behavior.

  • First -
    As you are trading the BTC/USD and BTC/ETH pairs, you should get the best info for them. As I understand it, the job here is to make the bot predict, take the best decision, and execute. The financial market anticipates moves, so you have to give the bot a good source; in this case you should work with OKCOIN, KRAKEN, BITFINEX and BTC-e, because Poloniex traders follow those companies' books. If you combine that source of learning with the Poloniex book, you will execute before the others and probably have more profit.
  • Second -
    Instruments
    Spot, Futures and Swaps have different behaviors and trade mechanics.
    As your bot (for now) works only with Spot, doing arbitrage between pairs, you first have to decide which pair will be your net asset reference (BTC, USD, ETH, ...), because doing arbitrage you only increase your asset in one. (Example: imagine you buy 1 BTC today for 420 and at the same time you buy ETH with those BTCs; ETH goes up 3% and you sell, but at the same time BTC goes down 12%. If your reference asset is USD, you are losing money.)
    Futures and Swaps are another discussion, and we can have it further on.

  • Third -
    Size
    Your size (amount per trade) is very important and is part of your risk management.
    Some exchanges' trade books don't have enough liquidity for your minute/hour/daily volume, and if you push too many executions into the book, you can disturb the market behavior, compromising a bullish strategy. (Example: you have 10 BTC to liquidate, and the bot knows it can start to sell, but the market is not yet confident enough to place big orders at that level; if you place an "anchor order" (a big order) or start to execute big orders, you can clean out the buy side of the book before it reaches the level that you need.)

  • Fourth -
    Timeframe
    The timeframe is extremely important, especially on Spot doing arbitrage, and should be considered according to your size.
    You have to analyze, and teach the bot, the 12h, 6h, 4h, 2h, 1h, 30min, 15min, 5min and 1min timeframes, in order to understand the pivots, support, resistance and trade volume. That information is crucial, especially for arbitrage. You can't allow the bot to buy at the top of resistance or sell at support (depending on the market trend). There is a lot of reference for this at tradingview.com, and I can share some charts with you to understand better.

Well, there is a lot more to include in this machine, but you have already done a lot of very nice work. If we start to follow those statements we can improve your code and make a money machine.

As soon as I deploy your code I'll let you know.
RM

kernel unicode issue

Traceback (most recent call last):
  File "./manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 330, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 390, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 441, in execute
    output = self.handle(*args, **options)
  File "/root/pytrader/history/management/commands/predict_many_sk.py", line 42, in handle
    ct.get_classifier()
  File "/root/pytrader/history/models.py", line 329, in get_classifier
    clf.fit(self.X_train, self.y_train)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 193, in fit
    fit(X, y, sample_weight, solver_type, kernel, random_seed=seed)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 251, in _dense_fit
    max_iter=self.max_iter, random_seed=random_seed)
TypeError: Argument 'kernel' has incorrect type (expected str, got unicode)
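One common workaround for this Python 2 issue (an assumption, not a fix shipped in the repo) is coercing the parameter to the native str type before calling fit:

```python
# Under Python 2, values loaded via Django's ORM arrive as unicode, but
# sklearn's compiled SVM layer requires the native str type for `kernel`.
kernel = u'rbf'       # parameter value as it arrives from the database
kernel = str(kernel)  # coerce to native str before passing to clf.fit()
```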

Correlations Study

Direct or Inverse Correlations may be occurring between different pairs; and, sometimes delayed on different timeframes.

Price Action data might reveal / indicate strategic buying; selling; hedging opportunities from one pair to another.

classifier images

Hi,

the classifier images are not generated in the static/ folder. Any idea which dependency may be missing?

Thanks

support for `ClassifierTests`.`datasetinputs` != 2

from slack

for some reason whenever you change datasetinput to anything other than 2 for the classifiers, trade barfs…

this is something i never got around to in original coding.

resolution paths:

  1. fix this
  2. update the documentation

Shared Reference Implementation

It might be good to set up a shared server, where vetted collaborators can deploy the master branch of this repo. This would allow us to:

  • have a shared price history table
  • have a shared history of any future training data (social data, sentiment analysis, volume, etc)
  • have a shared reference implementation of the system.

We should also add a programmatic way of backing up / restoring the shared data (prices, social data, etc) to our individual systems.

Seed Database

I have a database of price, volume, and bid/ask spread history for the entire poloniex index, at minute-by-minute granularity, for the last several months (from 2/21 to present). I will post it if there is enough interest.

All timezones in UTC

There's a little bit of hackery in the repo transforming UTC to MST (for example, subtracting timedelta(hours=7) instead of doing true timezone transforms) in order to get the correct local time in the UI. I'll button this up.
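For reference, the true-timezone version of that transform looks like this (zoneinfo is Python 3.9+ stdlib; the original repo targeted Python 2, where pytz would play the same role):

```python
# Replace the hardcoded timedelta(hours=7) offset with a real timezone
# conversion, so daylight saving time is handled correctly.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

utc_time = datetime(2016, 3, 26, 12, 0, tzinfo=timezone.utc)
local = utc_time.astimezone(ZoneInfo('America/Denver'))
# On 2016-03-26 Denver is on MDT (UTC-6), so the hardcoded -7 would be wrong
```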

`/admin/optimize` graphs are not helpful

These graphs:

were originally designed as a way to test whether trade.py parameters were correctly predicting the market, but they're just not all that useful. We need to come up with a better way of gauging how well our trade parameters are following the market.

docker container

it seems a lot of folks are having a hard time getting the repo up and running (dependency issues, getting postgres imports working, etc). I was thinking of just creating a VM people can use so that we don't waste more time on installing libraries.

Incentive structures for a successful pytrader

I've been thinking a fair bit about the concerns raised in #7 and what it will take in order to make this system successful over time.

What success looks like

Success being defined as :

  1. traders gain alpha, and subsequently, can make a profit using pytrader.
  2. the greater the contributions to pytrader, the greater the alpha.
    • a. contributions to pytrader compound upon each other, creating greater alpha.
    • b. there is sufficient market cap & trading volume available such that pytrader contributors are not competing with one another for profit.
    • c. if pytraders do compete with each other, those who have made greater contributions will beat those have made lesser contributions.
  3. 1 & 2 are maintained over time.

Actors

Since open sourcing the repo, I've been chatting with people on the pytrader slack ( #23 ) about the development roadmap.

So far, I see the following actor archetypes:

  1. core contributor -- just me so far
  2. contributor - python, machine learning, or quant. has made PR with meaningful contributions to repo.
  3. potential contributor - same skills as contributor. potential to make meaningful contributions.
  4. trader -- had enough coding experience and/or interest to run the repo, but not to make meaningful contributions.
  5. other -- anything else

Balance

A key part of achieving success as defined above will be maintaining balance. Balance between

  • Public and private information
  • Incentives for value transfer between actors in the system

Given the zero sum nature of the trading market, the incentive structure of pytrader should be set up to avoid competing with one another.

Given the compounding nature of the collective contributions of pytrader, the incentive structure should be set up to encourage PRs back to the repository.

In Practice

I propose the following setup:

  1. code is public, maintained in the main pytrader fork.
  2. data is private, maintained by individual contributors and traders.
  3. some data may be made public such that it encourages compounding the value of this repository, such as prices, bid/ask spread, and (in the future) order books, social semantics.

TODOs & Future considerations

  • Profitable NN configurations -- These are currently public. They may be made private soon, after documentation is written on how to find profitable NN configurations on one's own.
  • Tipping -- In order to encourage PRs to the repository, a BTC bounty may be attached to roadmap items. I am looking for feedback on this idea.
  • Private Slack Room -- I will be creating a private slack room, entitled #contributors. To get an invite, you must make a contribution to the repository.

linting standard

Choose a standard and start enforcing it so that the code does not become a mess, style-wise.

pep8?

LICENSE file

A LICENSE file might help when inviting contributions, so people know what the ground rules are.

Support BTCC

Interesting project. I could see that you have made some bitcoins from these trades but the Poloniex fee rates are high. Why not support BTCC? It doesn't charge fees for trading. :-)

smart selection of size of bets

make the trade bot consider balances (and portfolio distribution) before deciding the trade amount. It'd also be nice if the trade bot could specify whether it wants to liquidate its position after n granularity increments.
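A sketch of balance-aware sizing (function name and the cap are illustrative, not from the repo):

```python
# Size each trade as a fraction of the current balance, scaled by the
# predictor's confidence and capped so a single position can't dominate
# the portfolio.

def trade_amount(balance, confidence, max_fraction=0.1):
    """Scale bet size with confidence, never exceeding max_fraction of balance."""
    return balance * min(confidence * max_fraction, max_fraction)

amt = trade_amount(balance=2.0, confidence=0.8)  # 2.0 * 0.08 = 0.16
```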

Web UI for non-python audience

Predict crypto trades with machine learning right from your browser

Posted my idea in the Slack group and started working with @jchomali on a web version to bring all the potential capabilities of the trader to people not familiar with python, and to non-developers.

The main intention is twofold:

  • To finally create a good-looking, clear and easy-to-use dashboard to see the situation of all your investments using the Poloniex API
  • To receive suggestions, backed by machine learning, about your next movements.

portfolio view

Main view with a quick recap of all your investments and main situation

statistics view

Detailed view with statistics about each one of the coins where you invested and some more graphs about your detailed activity

Just a quick draft, lots of improvements to be added but that’s the main idea.
Would love to receive your feedback, thoughts and if someone wants to work on it, just ping us!

Profile jobs

Profile the jobs based on the stats at #42 and see if we can get the runtime down.

Wrong requirement versions

numpy pinned at 1.7.1 causes:
RuntimeError: module compiled against API version 9 but this version of numpy is 7

Using django-chartit causes:
'chartit' is not a valid tag library: ImportError raised loading chartit.templatetags.chartit: cannot import name simplejson. This is a known issue of django-chartit, which has not been updated for 1.5 years.

Prior research

While the utility of this project is subject to debate (see #7 and #1) , compiling existing work in one place would be undeniably useful.

I suggest we list any previous attempts at

  • open source trading algorithms
  • algorithms for trading, even without code
  • tests with data of trading algorithms
  • etc

I'll start by citing http://arxiv.org/abs/1410.1231, http://arxiv.org/abs/1406.7577, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.466.4798&rep=rep1&type=pdf, and https://pdfs.semanticscholar.org/43f4/b31d321d90f724ad5edd9b855d06db4be4a6.pdf, plus several github repos:

Disclaimer: these were found by searching; some may be deprecated or not working.

AttributeError: 'module' object has no attribute 'getmro'

I am trying to get this system running once and for all; I've re-cloned and gotten to the ./manage.py syncdb part, where I get the following output:

Traceback (most recent call last):
  File "./manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
    utility.execute()
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/core/management/__init__.py", line 312, in execute
    django.setup()
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/__init__.py", line 15, in setup
    from django.utils.log import configure_logging
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/utils/log.py", line 16, in <module>
    from django.views.debug import ExceptionReporter, get_exception_reporter_filter
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/views/debug.py", line 9, in <module>
    from django.core.urlresolvers import Resolver404, resolve
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 18, in <module>
    from django.http import Http404
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/http/__init__.py", line 4, in <module>
    from django.http.response import (
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/http/response.py", line 13, in <module>
    from django.core.serializers.json import DjangoJSONEncoder
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/core/serializers/__init__.py", line 24, in <module>
    from django.core.serializers.base import SerializerDoesNotExist
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/core/serializers/base.py", line 6, in <module>
    from django.db import models
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/db/models/__init__.py", line 6, in <module>
    from django.db.models.query import Q, QuerySet, Prefetch  # NOQA
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/db/models/query.py", line 16, in <module>
    from django.db.models import sql
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/db/models/sql/__init__.py", line 2, in <module>
    from django.db.models.sql.subqueries import *  # NOQA
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/db/models/sql/subqueries.py", line 9, in <module>
    from django.db.models.sql.query import Query
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/db/models/sql/query.py", line 17, in <module>
    from django.db.models.aggregates import Count
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/db/models/aggregates.py", line 5, in <module>
    from django.db.models.expressions import Func, Value
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/db/models/expressions.py", line 7, in <module>
    from django.db.models import fields
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 19, in <module>
    from django import forms
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/forms/__init__.py", line 6, in <module>
    from django.forms.fields import *  # NOQA
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/forms/fields.py", line 57, in <module>
    class Field(six.with_metaclass(RenameFieldMethods, object)):
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/utils/six.py", line 778, in __new__
    return meta(name, bases, d)
  File "/home/syntacs/.local/lib/python2.7/site-packages/django/utils/deprecation.py", line 50, in __new__
    for base in inspect.getmro(new_class):
AttributeError: 'module' object has no attribute 'getmro'

NameError: name 'DEBUG_APPS' is not defined

Hey Kevin, any idea what's causing this?

Including stack trace and my local_settings...

Traceback (most recent call last):
  File "./manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/home/ccf/tools/anaconda2/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
    utility.execute()
  File "/home/ccf/tools/anaconda2/lib/python2.7/site-packages/django/core/management/__init__.py", line 303, in execute
    settings.INSTALLED_APPS
  File "/home/ccf/tools/anaconda2/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__
    self._setup(name)
  File "/home/ccf/tools/anaconda2/lib/python2.7/site-packages/django/conf/__init__.py", line 44, in _setup
    self._wrapped = Settings(settings_module)
  File "/home/ccf/tools/anaconda2/lib/python2.7/site-packages/django/conf/__init__.py", line 92, in __init__
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/home/ccf/tools/anaconda2/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/home/ccf/projects/pytrader/pypolo/settings.py", line 124, in <module>
    INSTALLED_APPS += DEBUG_APPS
NameError: name 'DEBUG_APPS' is not defined

[ccf@li286-238 pytrader]$ cat pypolo/local_settings.py


import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

MAKE_TRADES=False

API_KEY="foo"
API_SECRET="bar"
# sqllite

_DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

Get trader to profitability in at least one market

One could presumably increase the margin for a classifier such that, to classify a sequence of price movements as a BUY or SELL, the predicted movement must be higher than it is now (the 0.002% increase/decrease threshold).
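That widened-margin idea can be sketched as a labeling function with a HOLD dead zone (the 0.002% figure above corresponds to a fractional threshold of 0.00002):

```python
# Widen the HOLD band so only movements larger than `threshold` are
# labeled BUY/SELL; everything inside the band becomes HOLD.

def label_movement(pct_change, threshold=0.00002):
    if pct_change > threshold:
        return 'BUY'
    if pct_change < -threshold:
        return 'SELL'
    return 'HOLD'

label_movement(0.0005, threshold=0.001)  # -> 'HOLD' once the band is widened
```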

wont migrate; django error

Hi, I have set everything up and am trying to run manage.py migrate, but I get a long error message ending in:

base.py, line 21, in complain: raise ImproperlyConfigured("settings.DATABASES is improperly configured. ")
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.

Documentation on how to optimize the NN/classifiers

predict_many_sk and predict_many_v2 jobs, how to optimize them.

I'm going to write up a little more on this; there are a handful of folks who have gotten the repo up and running and have been asking questions about how to optimize the trading engine.

Database Setup

Hi! First of all, thanks for the great contribution! Second, when I try to deploy it locally, ./manage.py migrate gives an error that settings.DATABASES is improperly configured. But even after adding a database config to settings I still get an error, this time: django.db.utils.OperationalError: no such column: history_predictiontest.type

Trade Model Evaluation ideas

Ideas for evaluating / backtesting trading models.

From #5 (comment) :

A quick way would be to create a theoretical control group (if you bought 1 BTC/ETH and held it for the same two months) and compare the profits.

or

A second, much more rigorous method (what I would do before even investing 1 real BTC) would be to get a collection of data from different markets and simulate pytrader on random segments of them. This would be a true experiment and give a good idea of the actual profitability of the algorithm in the general market. But, I imagine this would take quite a bit of computation power. Do you have a general estimate of how long it would take a $900 PC running 24/7 to simulate 2 months of trading? (assuming all the code is modified somehow for this to happen)

Thanks @jeff-hykin for these ideas. Lets discuss these and other ideas below.
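The first idea, a buy-and-hold control group, is easy to sketch (prices and names here are illustrative):

```python
# Compare the bot's ending balance against simply buying at the start of
# the window and holding to the end.

def buy_and_hold_return(prices, initial_btc=1.0):
    """Ending balance if initial_btc were converted at prices[0]
    and converted back at prices[-1]."""
    units = initial_btc / prices[0]
    return units * prices[-1]

baseline = buy_and_hold_return([0.020, 0.025], initial_btc=1.0)  # 1 BTC -> 1.25 BTC
bot_result = 0.955  # ending balance from the post-mortem above
beat_market = bot_result > baseline
```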

Ideas for managing fees

  • Integrate with a low-fee or no-fee exchange.
  • Raise the threshold for which the trader decides to buy/sell, and lower the threshold for which it decides to hold.
  • Manage fees when pytrader takes multiple positions in the same instrument in different directions
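The second bullet amounts to an expected-value check against round-trip fees; a sketch with an assumed 0.25% per-side fee (Poloniex's actual maker/taker rates varied):

```python
# Only trade when the predicted move exceeds the round-trip fee:
# a buy plus a later sell pays the fee twice, so the edge must beat that.

def worth_trading(predicted_move, fee_per_side=0.0025):
    return abs(predicted_move) > 2 * fee_per_side

worth_trading(0.01)   # a 1% predicted move beats the 0.5% round-trip fee
worth_trading(0.004)  # a 0.4% move is eaten by fees
```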

Investigate Adding social semantics

I have not incorporated social semantics in this yet, but I'm sure there's some signal there. I'd bet reddit, bitcointalk, and twitter would be good places to start.
