asavinov / intelligent-trading-bot

Intelligent Trading Bot: Automatically generating signals and trading based on machine learning and feature engineering

Home Page: https://t.me/intelligent_trading_signals

License: MIT License

Python 100.00%
bitcoin machine-learning artificial-intelligence feature-engineering crypto trading algorithmic-trading crypto-trading trading-bots cryptocurrency

intelligent-trading-bot's Introduction

 ___       _       _ _ _                  _     _____              _ _               ____        _ 
|_ _|_ __ | |_ ___| | (_) __ _  ___ _ __ | |_  |_   _| __ __ _  __| (_)_ __   __ _  | __ )  ___ | |_
 | || '_ \| __/ _ \ | | |/ _` |/ _ \ '_ \| __|   | || '__/ _` |/ _` | | '_ \ / _` | |  _ \ / _ \| __|
 | || | | | ||  __/ | | | (_| |  __/ | | | |_    | || | | (_| | (_| | | | | | (_| | | |_) | (_) | |_ 
|___|_| |_|\__\___|_|_|_|\__, |\___|_| |_|\__|   |_||_|  \__,_|\__,_|_|_| |_|\__, | |____/ \___/ \__|
                         |___/                                               |___/                   
₿   Ξ   ₳   ₮   ✕   ◎   ●   Ð   Ł   Ƀ   Ⱥ   ∞   ξ   ◈   ꜩ   ɱ   ε   ɨ   Ɓ   Μ   Đ  ⓩ  Ο   Ӿ   Ɍ  ȿ

📈 Intelligent Trading Signals 📉 https://t.me/intelligent_trading_signals

Intelligent trading bot

The project is aimed at developing an intelligent trading bot for automated trading of cryptocurrencies using state-of-the-art machine learning (ML) algorithms and feature engineering. The project provides the following major functionalities:

  • Defining derived features using custom (Python) functions, including technical indicators
  • Analyzing historic data and training machine learning models in batch (offline) mode
  • Analyzing the predicted scores and choosing the best signal parameters
  • A signaling service which regularly requests new data from the exchange and generates buy-sell signals by applying the previously trained models in online mode
  • A trading service which does real trading by buying or selling assets according to the generated signals

Intelligent trading channel

The signaling service runs in the cloud and sends its signals to this Telegram channel:

📈 Intelligent Trading Signals 📉 https://t.me/intelligent_trading_signals

Anyone can subscribe to the channel to get an impression of the signals this bot generates.

Currently, the bot is configured using the following parameters:

  • Exchange: Binance
  • Cryptocurrency: ₿ Bitcoin
  • Analysis frequency: 1 minute (currently the only option)
  • Score between -1 and +1: < 0 means the price is likely to decrease, > 0 means it is likely to increase
  • Filter: notifications are sent only if the score is greater than ±0.20 (may change)
  • One increase/decrease sign is added for each 0.05 step exceeding the filter threshold

There are silent periods when the score is lower than the threshold and no notifications are sent to the channel. If the score is greater than the threshold, then every minute a notification is sent which looks like this:

₿ 24.518 📉📉📉 Score: -0.26

The first number is the latest close price. A score of -0.26 means that the price is quite likely to fall below the current close price.

If the score exceeds a threshold specified in the model, a buy or sell signal is generated, which means that it is a good time to trade. Such notifications look as follows:

🟢 BUY: ₿ 24,033 Score: +0.34

Training machine learning models (offline)

Batch data processing pipeline

For the signaler service to work, a number of ML models must be trained and their model files must be available to the service. All scripts run in batch mode, loading input data and storing output files. The batch scripts are located in the scripts module.

Once everything is configured, the following scripts have to be executed:

  • python -m scripts.download_binance -c config.json
  • python -m scripts.merge -c config.json
  • python -m scripts.features -c config.json
  • python -m scripts.labels -c config.json
  • python -m scripts.train -c config.json
  • python -m scripts.signals -c config.json
  • python -m scripts.train_signals -c config.json

Without a configuration file, the scripts use default parameters, which is useful for testing but not intended to show good performance. Use the sample configuration files provided for each release, such as config-sample-v0.6.0.jsonc.

Downloading and merging source data

The main configuration parameter for both scripts is the list of sources in data_sources. Each entry in this list specifies a data source as well as a column_prefix used to distinguish columns with the same name coming from different sources.

  • Download the latest historic data: python -m scripts.download_binance -c config.json

    • It uses the Binance API, but you can use any other data source or download data manually using other scripts
  • Merge several historic datasets into one dataset: python -m scripts.merge -c config.json

    • This script solves two problems: 1) there could be other sources, like depth data or futures; 2) a data source may have gaps, so we need to produce a regular time raster in the output file
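As an illustration, a data_sources list might look like the sketch below. Only column_prefix is named in the text above; the other field names and values are assumptions for illustration, so consult a release sample config for the real structure.

```jsonc
// Hypothetical sketch of a data_sources list; field names other than
// column_prefix are assumptions -- see the release sample configs.
"data_sources": [
    {"source": "binance", "column_prefix": ""},        // klines, columns keep original names
    {"source": "futures", "column_prefix": "futures"}  // same column names get a "futures" prefix
]
```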

Generate features

This script is intended for computing derived features:

  • Script: python -m scripts.features -c config.json
  • Currently it runs in non-incremental mode, computing features for all available input records (not only the latest update), so it may take hours for complex configurations
  • The script loads the merged input data, applies the feature generation procedures and stores all derived features in an output file
  • Not all generated features will be used for training and prediction; a separate list of features is specified for the train/predict phases
  • Feature functions receive additional parameters, such as windows, from their config section
  • The same features must be used for online feature generation (in the service, where they are generated for a micro-batch) and offline feature generation

The list of features to be generated is configured via the feature_sets list in the configuration file. How features are generated is defined by the feature generator, each having parameters specified in its config section.

  • The talib feature generator relies on the TA-lib technical analysis library. Here is an example of its configuration: "config": {"columns": ["close"], "functions": ["SMA"], "windows": [5, 10, 15]}
  • The itbstats feature generator implements functions which can be found in tsfresh, like scipy_skew, scipy_kurtosis, lsbm (longest strike below mean), fmax (first location of maximum), mean, std, area, slope. Here are typical parameters: "config": {"columns": ["close"], "functions": ["skew", "fmax"], "windows": [5, 10, 15]}
  • The itblib feature generator is implemented in ITB, but most of its features can be generated (much faster) via talib
  • The tsfresh generator computes functions from the tsfresh library
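As a rough illustration of what a windowed feature generator produces, the pandas sketch below mimics the talib SMA example above (the real generator calls the TA-lib library; the function here is a stand-in, while the column naming such as close_SMA_5 follows the convention seen in the sample configurations):

```python
import pandas as pd

# Illustrative sketch of a windowed feature generator (not the project's code):
# for config {"columns": ["close"], "functions": ["SMA"], "windows": [5, 10]}
# one output column is produced per (column, function, window) combination.
def generate_sma_features(df: pd.DataFrame, columns, windows) -> pd.DataFrame:
    out = df.copy()
    for col in columns:
        for w in windows:
            # Simple moving average over the last w rows
            out[f"{col}_SMA_{w}"] = out[col].rolling(window=w).mean()
    return out

df = pd.DataFrame({"close": range(1, 21)})
df = generate_sma_features(df, ["close"], [5, 10])
print(df[["close_SMA_5", "close_SMA_10"]].tail(1))
```

Note that the first w-1 rows of each derived column are NaN, which is one reason the service needs a history window larger than the largest feature window.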

Generate labels

This script is similar to feature generation in that it adds new columns to the input file. However, these columns describe something that we want to predict and that is not known when executing in online mode, for example, a future price increase:

  • Script: python -m scripts.labels -c config.json
  • The script loads the features, computes the label columns and stores the result in an output file
  • Not all generated labels have to be used. The labels used for training are specified in a separate list

The list of labels to be generated is configured via label_sets list in the configuration. One label set points to the function which generates additional columns. Their configuration is very similar to feature configurations.

  • highlow label generator returns True if the price is higher than the specified threshold within some future horizon
  • highlow2 Computes future increases (decreases) with the conditions that there are no significant decreases (increases) before that. Here is its typical configuration: "config": {"columns": ["close", "high", "low"], "function": "high", "thresholds": [1.0, 1.5, 2.0], "tolerance": 0.2, "horizon": 10080, "names": ["first_high_10", "first_high_15", "first_high_20"]}
  • topbot Deprecated
  • topbot2 Computes maximum and minimum values (labeled as True). Every labelled maximum (minimum) is guaranteed to be surrounded by minimums (maximums) lower (higher) than the specified level. The required minimum difference between adjacent minimums and maximums is specified via the level parameter. The tolerance parameter also allows including points close to the maximum/minimum. Here is a typical configuration: "config": {"columns": "close", "function": "bot", "level": 0.02, "tolerances": [0.1, 0.2], "names": ["bot2_1", "bot2_2"]}
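To illustrate the idea behind a "future high" label, here is a simplified sketch (not the project's highlow2 implementation; it assumes the thresholds are expressed in percent and ignores the tolerance logic):

```python
import pandas as pd

# Simplified sketch of a "future high" label (not the project's code):
# True if the high price exceeds the current close by at least
# `threshold` percent somewhere within the next `horizon` rows.
def high_label(df: pd.DataFrame, threshold: float, horizon: int) -> pd.Series:
    # Rolling max over a *future* window: reverse, roll, reverse back,
    # then shift(-1) so the current row itself is excluded.
    future_max = df["high"][::-1].rolling(horizon, min_periods=1).max()[::-1].shift(-1)
    return future_max >= df["close"] * (1 + threshold / 100.0)

df = pd.DataFrame({"close": [100, 100, 100], "high": [100, 103, 100]})
print(high_label(df, threshold=2.0, horizon=2).tolist())  # → [True, False, False]
```

This also shows why the last horizon rows of a dataset cannot be labelled reliably, which is what the label_horizon parameter accounts for during training.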

Train prediction models

This script uses the specified input features and labels to train several ML models:

  • Script: python -m scripts.train -c config.json
  • Hyper-parameter tuning is not part of this procedure; the hyper-parameters are supposed to be known
  • The algorithm descriptions and hyper-parameters are specified in the model store
  • The results are stored as multiple model files in the model folder. File names are equal to the predicted column names and follow the pattern (label_name, algorithm_name)
  • The script trains models for all specified labels and all specified algorithms
  • It also generates a prediction-metrics.txt file with the prediction scores of all models

Configuration:

  • Models and hyper-parameters are described in model_store.py
  • Features to be used for training are specified in train_features
  • List of labels is specified in labels
  • List of algorithms is specified in algorithms
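The overall shape of this step can be sketched as a loop over (label, algorithm) pairs, saving one model per pair. This is an illustration, not the project's train script: the logistic-classifier stand-in, the synthetic data and the .pickle suffix are assumptions; only the (label_name, algorithm_name) naming pattern comes from the description above.

```python
import pickle
import tempfile
from pathlib import Path

import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in data: 200 rows, 4 features, two boolean labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
labels = {"high_20": X[:, 0] > 0, "low_20": X[:, 1] > 0}
algorithms = {"lc": lambda: LogisticRegression(max_iter=200)}

model_dir = Path(tempfile.mkdtemp())
for label_name, y in labels.items():
    for algo_name, make_model in algorithms.items():
        model = make_model().fit(X, y)
        # One model file per (label_name, algorithm_name) pair
        with open(model_dir / f"{label_name}_{algo_name}.pickle", "wb") as f:
            pickle.dump(model, f)

print(sorted(p.name for p in model_dir.iterdir()))
# → ['high_20_lc.pickle', 'low_20_lc.pickle']
```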

Aggregation and post-processing

The goal of this step is to aggregate the prediction scores generated by different algorithms for different labels. The result is one score which is consumed by the signal rules in the next step. The aggregation parameters are specified in the score_aggregation section. buy_labels and sell_labels specify the input prediction scores processed by the aggregation procedure, window is the number of previous steps used for rolling aggregation, and combine defines how the two score types (buy and sell) are combined into one output score.
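A minimal sketch of such an aggregation, assuming the rolling window applies a mean and combine takes the difference of the buy and sell sides (both are assumptions about the semantics; the column names are illustrative):

```python
import pandas as pd

# Illustrative sketch of score aggregation; the exact semantics of
# window and combine in the project are assumptions here.
def aggregate_scores(df, buy_labels, sell_labels, window):
    buy = df[buy_labels].mean(axis=1).rolling(window, min_periods=1).mean()
    sell = df[sell_labels].mean(axis=1).rolling(window, min_periods=1).mean()
    # combine = "difference": positive -> upward trend, negative -> downward
    return buy - sell

df = pd.DataFrame({
    "high_20_lc": [0.6, 0.7, 0.8],  # buy-side prediction scores
    "low_20_lc":  [0.2, 0.1, 0.1],  # sell-side prediction scores
})
print(aggregate_scores(df, ["high_20_lc"], ["low_20_lc"], window=2).round(2).tolist())
# → [0.4, 0.5, 0.65]
```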

Signal generation

The score produced by the aggregation procedure is just a number, and the goal of the signal rules is to make trading decisions: buy, sell, or do nothing. The parameters of the signal rules are described in the trade_model section.
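For illustration, the simplest possible signal rule is a pair of thresholds on the aggregated score (the threshold values and the rule itself are assumptions; the real parameters live in trade_model):

```python
# Hypothetical sketch of a threshold signal rule; the thresholds are
# illustrative, not the project's trade_model parameters.
def signal(score: float, buy_threshold: float = 0.3, sell_threshold: float = -0.3) -> str:
    if score >= buy_threshold:
        return "buy"
    if score <= sell_threshold:
        return "sell"
    return "none"  # score too weak in either direction: do nothing

print([signal(s) for s in (0.7, 0.1, -0.8)])  # → ['buy', 'none', 'sell']
```

The next step, train_signals, exists precisely because such thresholds are not obvious: it searches for values that would have performed best on historic data.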

Train signal models

This script simulates trades using many buy-sell signal parameters and then chooses the best performing signal parameters:

  • Script: python -m scripts.train_signals -c config.json

Prediction online based on trained models (service)

This script starts a service which periodically executes the same sequence of tasks: load the latest data, generate features, make predictions, generate signals, notify subscribers:

  • Start script: python -m service.server -c config.json
  • The service assumes that the models were trained using the features specified in the configuration
  • The service uses credentials to access the exchange which are specified in the configuration

Hyper-parameter tuning

There are two problems:

  • How to choose the best hyper-parameters for the ML models. This problem is solved in the classical way, e.g., by grid search. For example, for gradient boosting, we train the model on the same data with different hyper-parameters and then select those showing the best score. This approach has one drawback: we optimize for the best prediction score, which is not trading performance. This means that good trading performance is not guaranteed (and in fact it will not be good). Therefore, we use this score as an intermediate feature, with the goal of optimizing trading performance at later stages.
  • If we compute the final aggregated score (like +0.21), then the question is: should we buy, sell or do nothing? In fact, this is the most difficult question. To help answer it, additional scripts were developed for backtesting and optimizing buy-sell signal generation:
    • Generate rolling predictions, which simulates regularly re-training the models and using them for prediction: python -m scripts.predict_rolling -c config.json
    • Train signal models to choose the thresholds for buy-sell signals that produce the best performance on historic data: python -m scripts.train_signals -c config.json

Configuration parameters

The configuration parameters are specified in two files:

  • service.App.py, in the config field of the App class
  • the -c config.json argument to the services and scripts. The values from this config file overwrite those in App.config when the file is loaded into a script or service

Here are some of the most important fields (in both App.py and config.json):

  • data_folder - location of the data files, which are needed only for the batch offline scripts
  • symbol - the trading pair, like BTCUSDT
  • Analyzer parameters. These are mainly column names:
    • labels - list of column names which are treated as labels. If you define a new label used for training and then for prediction, you need to specify its name here
    • algorithms - list of algorithm names used for training
    • train_features - list of all column names used as input features for training and prediction
  • Signaler parameters:
    • buy_labels and sell_labels - lists of predicted columns used for signals
    • trade_model - parameters of the signaler (mainly some thresholds)
  • trader - a section with trader parameters. Currently not thoroughly tested.
  • collector - this parameter section is intended for the data collection services. There are two types of data collection services: a synchronous one with regular requests to the data provider, and an asynchronous streaming service which subscribes to the data provider and gets notifications as soon as new data is available. They work but are not thoroughly tested or integrated into the main service. The current main usage pattern relies on manual batch data updates, feature generation and model training. The reasons for having these data collection services are 1) faster updates and 2) access to data not available via the normal API, like the order book (there exist some features which use this data, but they are not integrated into the main workflow).

See sample configuration files and comments in App.config for more details.
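Putting the fields above together, a minimal configuration might look like the following sketch. The field names come from this section; all values are illustrative, so treat the release sample configs as the authoritative reference.

```jsonc
{
    "data_folder": "DATA_ITB",          // read/written by the batch scripts only
    "symbol": "BTCUSDT",                // trading pair

    "labels": ["high_20", "low_20"],    // label columns used for training
    "algorithms": [ /* algorithm descriptions, see the model store */ ],
    "train_features": ["close_SMA_5", "close_SMA_10"],

    "buy_labels": ["high_20_lc"],       // inputs to score aggregation
    "sell_labels": ["low_20_lc"],
    "trade_model": { /* signaler thresholds */ }
}
```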

Signaler service

Every minute, the signaler performs the following steps to make a prediction about whether the price is likely to increase or decrease:

  • Retrieve the latest data from the server and update the current data window which includes some history (the history length is defined by a configuration parameter)
  • Compute derived features based on the nearest history collected (which now includes the latest data). The features to be computed are described in the configuration file and are exactly the same as used in batch mode during model training
  • Apply several (previously trained) ML models by forecasting some future values (not necessarily prices) which are also treated as (more complex) derived features. We apply several forecasting models (currently, Gradient Boosting, Neural network, and Linear regression) to several target variables (labels)
  • Aggregate the results of forecasting produced by different ML models and compute the final signal score which reflects the strength of the upward or downward trend. Here we use many previously computed scores as inputs and derive one output score. Currently, it is implemented as an aggregation procedure but it could be based on a dedicated ML model trained on previously collected scores and the target variable. Positive score means growth and negative score means fall
  • Use the final score for notifications
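The steps above can be summarized as one cycle of the service loop. All helper functions below are hypothetical placeholders stubbed with toy values, not the project's API; only the order of the steps follows the description:

```python
# Hypothetical sketch of one signaler cycle; every helper is a stub.
def retrieve_latest(window):
    return window                     # extend the data window with fresh klines

def compute_features(window):
    return window                     # same feature definitions as batch mode

def predict(models, features):
    return [0.10, 0.20]               # one score per (label, algorithm) model

def aggregate(scores):
    return sum(scores) / len(scores)  # positive: growth, negative: fall

def one_cycle(window, models, threshold=0.20):
    window = retrieve_latest(window)
    features = compute_features(window)
    score = aggregate(predict(models, features))
    if abs(score) > threshold:        # notify only above the filter threshold
        print(f"Score: {score:+.2f}")
    return window, score

_, score = one_cycle(window=[], models=[])
```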

Notes:

  • The final result of the signaler is the score (between -1 and +1). The score should be used for further decisions about buying or selling by taking into account other parameters and data sources
  • For the signaler service to work, trained models have to be available and stored in the "MODELS" folder. The models are trained in batch mode and the process is described in the corresponding section.

Starting the service: python3 -m service.server -c config.json

Trader

The trader is working but not thoroughly debugged, particularly, not tested for stability and reliability. Therefore, it should be considered a prototype with basic functionality. It is currently integrated with the Signaler but in a better design should be a separate service.

Related projects

Backtesting

External integrations

intelligent-trading-bot's People

Contributors

asavinov, woehrer12


intelligent-trading-bot's Issues

Signals performance database

Hi, cool project, man.

It would be interesting if we could build a signal performance database.
Also, it is giving vice-versa signals, like buying when it should be selling.

I'll love to help if you need assistance.

Not an issue, more a question

I have set up this bot and trained it. I lowered the threshold a bit, and the enter-trade signals for sell and buy are pretty much on the spot, more accurate and frequent than the original one. But there is one thing: they are inverted, so when my bot says SOLD I open a SHORT, and vice versa. Does anyone know what is going on and how to fix it?

I use a separate script to monitor my Telegram and open trades accordingly (because I'm on Bybit Inverse BTCUSD), so it's no real issue, but I was just wondering if this can be fixed easily, and what I did wrong in my parameters.

Precision is ill-defined

Hello,

I'm currently playing around with features and stumbled over the following message:

Train 'high_20_lc'. Algorithm lc. Label: high_20. Train length 525600. Train columns 9
C:\Users\Christian\miniconda3\envs\trading-bot\lib\site-packages\sklearn\metrics\_classification.py:1469: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))
Train 'low_20_lc'. Algorithm lc. Label: low_20. Train length 525600. Train columns 9
C:\Users\Christian\miniconda3\envs\trading-bot\lib\site-packages\sklearn\metrics\_classification.py:1469: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
  _warn_prf(average, modifier, msg_start, len(result))

That message appears when I comment out this feature:

// {"column_prefix": "", "generator": "talib", "feature_prefix": "", "config": {"columns": ["close"], "functions": ["STDDEV"], "windows": [5, 10, 15, 60]}}

Part of my config is:

// === GENERATE FEATURES ===

    "feature_sets": [
        {"column_prefix": "", "generator": "talib", "feature_prefix": "", "config":  {"columns": ["close"], "functions": ["SMA"], "windows": [5, 8, 13]}},
        {"column_prefix": "", "generator": "talib", "feature_prefix": "", "config":  {"columns": ["close"], "functions": ["EMA"], "windows": [5, 10]}},
        {"column_prefix": "", "generator": "talib", "feature_prefix": "", "config":  {"columns": ["close"], "functions": ["LINEARREG_SLOPE"], "windows": [5, 10, 15, 60]}}
        // {"column_prefix": "", "generator": "talib", "feature_prefix": "", "config":  {"columns": ["close"], "functions": ["STDDEV"], "windows": [5, 10, 15, 60]}}
        // {"column_prefix": "", "generator": "common.my_feature_example:my_feature_example", "feature_prefix": "", "config":  {"columns": "close", "function": "add", "parameter": 2.0, "names": "close_add"}}
    ],

    // === LABELS ===

    "label_sets": [
        {"column_prefix": "", "generator": "highlow2", "feature_prefix": "", "config":  {"columns": ["close", "high", "low"], "function": "high", "thresholds": [2.0], "tolerance": 0.2, "horizon": 120, "names": ["high_20"]}},
        {"column_prefix": "", "generator": "highlow2", "feature_prefix": "", "config":  {"columns": ["close", "high", "low"], "function": "low", "thresholds": [2.0], "tolerance": 0.2, "horizon": 120, "names": ["low_20"]}}
    ],

    // === TRAIN ===

    "label_horizon": 120,  // Batch/offline: do not use these last rows because their labels might not be correct
    "train_length": 525600,  // Batch/offline: Uses this number of rows for training (if not additionally limited by the algorithm)

    "train_feature_sets": [
    {
        "generator": "train_features", "config": {
        // Use values from the attributes: train_features, labels, algorithms
    }}
    ],

    "train_features": [
        "close_SMA_5", "close_SMA_8", "close_SMA_13",
        "close_EMA_5", "close_EMA_10",
        "close_LINEARREG_SLOPE_5", "close_LINEARREG_SLOPE_10", "close_LINEARREG_SLOPE_15", "close_LINEARREG_SLOPE_60"
      //  "close_STDDEV_5", "close_STDDEV_10", "close_STDDEV_15", "close_STDDEV_60"
    ],

    "labels": ["high_20", "low_20"],

    "algorithms": [
        {
            "name": "lc",  // Unique name will be used as a column suffix
            "algo": "lc",  // Algorithm type is used to choose the train/predict function
            "params": {"penalty": "l2", "C": 1.0, "class_weight": null, "solver": "sag", "max_iter": 100},
            "train": {"is_scale": true, "length": 1000000, "shifts": []},
            "predict": {"length": 1440}
        }
    ],

Any idea what I'm doing wrong?

KeyError: features_kline

Hey, I'm still in the process of figuring things out and feel a bit lost. When I run python -m scripts.grid_search it errors like:

2023-10-13 20:47:06 ⌚ alca in ~/projects/intelligent-trading-bot
± |master ?:5 ✗| → python -m scripts.grid_search
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/home/alca/projects/intelligent-trading-bot/scripts/grid_search.py", line 24, in
features_kline = App.config["features_kline"]
~~~~~~~~~~^^^^^^^^^^^^^^^^^^
KeyError: 'features_kline'

Everything else, like:

python -m scripts.download_binance -c config.jsonc
python -m scripts.merge -c config.jsonc
python -m scripts.features -c config.jsonc
python -m scripts.labels -c config.jsonc
python -m scripts.train -c config.jsonc
python -m scripts.signals -c config.jsonc
python -m scripts.train_signals -c config.jsonc
python -m service.server -c config.jsonc

runs flawlessly.

Error when running python -m scripts.generate_features -c config.json

I have the following error :

Loading data from source data file DATA_ITB/BTCUSDT/data.csv...
Finished loading 2591 records with 23 columns.
Start generating features for 2591 input records.
Start generator klines...
Unknown feature generator klines
Traceback (most recent call last):
  File "/home/gitpod/.pyenv/versions/3.8.13/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/gitpod/.pyenv/versions/3.8.13/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/workspace/intelligent-trading-bot/scripts/generate_features.py", line 202, in <module>
    main()
  File "/workspace/.pyenv_mirror/user/current/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/workspace/.pyenv_mirror/user/current/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/workspace/.pyenv_mirror/user/current/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/workspace/.pyenv_mirror/user/current/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/workspace/intelligent-trading-bot/scripts/generate_features.py", line 66, in main
    df, new_features = generate_feature_set(df, fs, last_rows=0)
TypeError: cannot unpack non-iterable NoneType object

new complementary tool

I want to offer a new point of view, and my collaboration.

Why this stock prediction project?

Things this project offers that I did not find in other free projects:

  • Testing with ~30 models: multiple combinations of features and multiple selections of models (TensorFlow, XGBoost and Sklearn)
  • Threshold and model-quality evaluation
  • Uses 1k technical indicators
  • A method for selecting the best features (technical indicators)
  • A categorical target (do buy, do sell and do nothing), simple and dynamic, instead of a continuous target variable
  • A powerful open-market real-time evaluation system
  • Versatile integration with: Twitter, Telegram and Mail
  • Trains the machine-learning model with fresh same-day stock data

https://github.com/Leci37/stocks-prediction-Machine-learning-RealTime-telegram/tree/develop

Trade

Is trading working on the spot market?

Errors when trying to download binance history

Hello! As in the subject, when I try to download the history I get these errors, with the latest config.
Thanks!

root@buntu:/home/adiif1/bot/intelligent-trading-bot-master# python3 -m scripts.download_binance -c config-sample-v0.5.0.json
Start downloading 'BTCUSDT' ...
File not found. All data will be downloaded and stored in newly created file.
Downloading all available 1m data for BTCUSDT. Be patient..!
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/adiif1/bot/intelligent-trading-bot-master/scripts/download_binance.py", line 387, in <module>
    main()
  File "/usr/lib/python3/dist-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/home/adiif1/bot/intelligent-trading-bot-master/scripts/download_binance.py", line 104, in main
    klines = App.client.get_historical_klines(
  File "/usr/local/lib/python3.10/dist-packages/binance/client.py", line 984, in get_historical_klines
    return self._historical_klines(
  File "/usr/local/lib/python3.10/dist-packages/binance/client.py", line 1019, in _historical_klines
    start_ts = convert_ts_str(start_str)
  File "/usr/local/lib/python3.10/dist-packages/binance/helpers.py", line 76, in convert_ts_str
    return date_to_milliseconds(ts_str)
  File "/usr/local/lib/python3.10/dist-packages/binance/helpers.py", line 24, in date_to_milliseconds
    d: Optional[datetime] = dateparser.parse(date_str, settings={'TIMEZONE': "UTC"})
  File "/usr/local/lib/python3.10/dist-packages/dateparser/conf.py", line 89, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/dateparser/__init__.py", line 54, in parse
    data = parser.get_date_data(date_string, date_formats)
  File "/usr/local/lib/python3.10/dist-packages/dateparser/date.py", line 421, in get_date_data
    parsed_date = _DateLocaleParser.parse(
  File "/usr/local/lib/python3.10/dist-packages/dateparser/date.py", line 178, in parse
    return instance._parse()
  File "/usr/local/lib/python3.10/dist-packages/dateparser/date.py", line 182, in _parse
    date_data = self._parsers[parser_name]()
  File "/usr/local/lib/python3.10/dist-packages/dateparser/date.py", line 196, in _try_freshness_parser
    return freshness_date_parser.get_date_data(self._get_translated_date(), self._settings)
  File "/usr/local/lib/python3.10/dist-packages/dateparser/date.py", line 234, in _get_translated_date
    self._translated_date = self.locale.translate(
  File "/usr/local/lib/python3.10/dist-packages/dateparser/languages/locale.py", line 131, in translate
    relative_translations = self._get_relative_translations(settings=settings)
  File "/usr/local/lib/python3.10/dist-packages/dateparser/languages/locale.py", line 158, in _get_relative_translations
    self._generate_relative_translations(normalize=True))
  File "/usr/local/lib/python3.10/dist-packages/dateparser/languages/locale.py", line 172, in _generate_relative_translations
    pattern = DIGIT_GROUP_PATTERN.sub(r'?P<n>\d+', pattern)
  File "/usr/local/lib/python3.10/dist-packages/regex/regex.py", line 710, in _compile_replacement_helper
    is_group, items = _compile_replacement(source, pattern, is_unicode)
  File "/usr/local/lib/python3.10/dist-packages/regex/_regex_core.py", line 1737, in _compile_replacement
    raise error("bad escape \\%s" % ch, source.string, source.pos)
regex._regex_core.error: bad escape \d at position 7

error during SIGNAL run. dividing to ZERO

Good day Alexander,

I installed all the files; the signal training also worked. But running python -m scripts.features -c config.json after that gives an ERROR: it cannot divide by zero at rows 186, 182 etc.

How can I work with this bot?

Have a nice day

When I change the base_window to the new parameters, like in your example:

"base_window": 40320,
"averaging_windows": [1, 60, 360, 1440, 4320, 10080],
"area_windows": [60, 360, 1440, 4320, 10080],

and I change train_features:

"train_features": [
    "close_1", "close_60", "close_360", "close_1440", "close_4320", "close_10080",
    "close_std_60", "close_std_360", "close_std_1440", "close_std_4320", "close_std_10080",
    "volume_1", "volume_60", "volume_360", "volume_1440", "volume_4320", "volume_10080",
    "span_1", "span_60", "span_360", "span_1440", "span_4320", "span_10080",
    "trades_1", "trades_60", "trades_360", "trades_1440", "trades_4320", "trades_10080",
    "tb_base_1", "tb_base_60", "tb_base_360", "tb_base_1440", "tb_base_4320", "tb_base_10080",
    "close_area_60", "close_area_360", "close_area_1440", "close_area_4320", "close_area_10080",
    "close_trend_60", "close_trend_360", "close_trend_1440", "close_trend_4320", "close_trend_10080",
    "volume_trend_60", "volume_trend_360", "volume_trend_1440", "volume_trend_4320", "volume_trend_10080"
  ],

do I also need to change labels and score_aggregation?

Missing LICENSE

I see you have no LICENSE file for this project. The default is copyright.

I would suggest releasing the code under the GPL-3.0-or-later or AGPL-3.0-or-later license so that others are encouraged to contribute changes back to your project.

Predict rolling

PytzUsageWarning: The localize method is no longer necessary, as this time zone supports the fold attribute (PEP 495). For more details on migrating to a PEP 495-compliant implementation, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
date_obj = stz.localize(date_obj)
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/intelligent-trading-bot/scripts/predict_rolling.py", line 281, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/root/intelligent-trading-bot/scripts/predict_rolling.py", line 100, in main
    prediction_start = find_index(df, P.prediction_start_str)
  File "/root/intelligent-trading-bot/common/utils.py", line 135, in find_index
    id = res.index[0]
  File "/usr/local/lib/python3.8/dist-packages/pandas/core/indexes/base.py", line 5358, in __getitem__
    return getitem(key)
IndexError: index 0 is out of bounds for axis 0 with size 0
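The IndexError means the timestamp in prediction_start_str matched no row of the input data (for example, it lies outside the downloaded date range), so res.index[0] is taken from an empty result. A defensive version of such a lookup, sketched without pandas and with hypothetical names (the real find_index takes a DataFrame):

```python
from datetime import datetime

def find_index(timestamps, start_str, fmt="%Y-%m-%d %H:%M:%S"):
    """Return the position of start_str in timestamps, with a clear error if absent."""
    target = datetime.strptime(start_str, fmt)
    for i, ts in enumerate(timestamps):
        if ts == target:
            return i
    raise ValueError(f"Start time {start_str!r} not found in data range "
                     f"[{timestamps[0]} .. {timestamps[-1]}]")
```

In practice this usually means prediction_start_str in the config should be moved inside the range printed when the data is loaded.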

issue with TA-Lib

When I install the requirements, after a while the following error appears regarding TA-Lib:

  running bdist_wheel
  running build
  running build_py
  creating build
  creating build\lib.win-amd64-cpython-38
  creating build\lib.win-amd64-cpython-38\talib
  copying talib\abstract.py -> build\lib.win-amd64-cpython-38\talib
  copying talib\deprecated.py -> build\lib.win-amd64-cpython-38\talib
  copying talib\stream.py -> build\lib.win-amd64-cpython-38\talib
  copying talib\__init__.py -> build\lib.win-amd64-cpython-38\talib
  running build_ext
  building 'talib._ta_lib' extension
  creating build\temp.win-amd64-cpython-38
  creating build\temp.win-amd64-cpython-38\Release
  creating build\temp.win-amd64-cpython-38\Release\talib
  "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Ic:\ta-lib\c\include -IC:\Users\Marco\AppData\Local\Temp\pip-build-env-c_m_ym55\normal\Lib\site-packages\numpy\core\include -IC:\Users\Marco\anaconda3\envs\intelligenetTradingBot\include -IC:\Users\Marco\anaconda3\envs\intelligenetTradingBot\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tctalib/_ta_lib.c /Fobuild\temp.win-amd64-cpython-38\Release\talib/_ta_lib.obj
  _ta_lib.c
  talib/_ta_lib.c(1080): fatal error C1083: Cannot open include file: 'ta_libc.h': No such file or directory
  <string>:77: UserWarning: Cannot find ta-lib library, installation may fail.
  error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for TA-Lib
Failed to build TA-Lib
ERROR: Could not build wheels for TA-Lib, which is required to install pyproject.toml-based projects
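The C1083 error ("Cannot open include file: 'ta_libc.h'") means the TA-Lib C library itself is not installed; the Python package is only a wrapper, and on Windows the compile line above looks for the headers under c:\ta-lib\c\include. A small, hypothetical pre-flight check along those lines:

```python
from pathlib import Path

def talib_headers_present(prefix: str = "c:/ta-lib") -> bool:
    """True if the ta_libc.h header the TA-Lib wheel build looks for exists under the prefix."""
    return (Path(prefix) / "c" / "include" / "ta_libc.h").is_file()
```

If this returns False, installing the TA-Lib C library (or a prebuilt TA-Lib wheel for your Python version) before re-running pip should resolve the wheel build failure.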

No data is being downloaded

Hello,
I made a simple config.json from config-sample-v0.2.0.json with my Binance API key and Telegram key and let it run for a while, but no data is being downloaded.

It starts with :
Start downloading 'BTCUSDT' ...
File not found. All data will be downloaded and stored in newly created file.
Downloading all available 1m data for BTCUSDT. Be patient..!
...and it stays like that forever.
Maybe some Binance endpoint has changed in the meantime?
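One way to check whether the public Binance klines endpoint itself still responds is to query it directly. The sketch below uses the documented GET /api/v3/klines endpoint; only the URL construction runs on import, and the actual network call sits behind a main guard:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://api.binance.com/api/v3/klines"

def klines_url(symbol: str, interval: str = "1m", limit: int = 5) -> str:
    """Build a GET URL for the public Binance klines endpoint."""
    query = urllib.parse.urlencode({"symbol": symbol, "interval": interval, "limit": limit})
    return f"{BASE}?{query}"

if __name__ == "__main__":
    # Network call: prints the first kline if the endpoint is reachable.
    with urllib.request.urlopen(klines_url("BTCUSDT")) as resp:
        print(json.loads(resp.read())[:1])
```

If this returns data but the bot still downloads nothing, the problem is more likely in the config (symbol, date range) than in the endpoint.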

Trade

The bot logs trade signals, but no trades are applied to my account.

Please contact me.

Hi,

I didn't find your email, so please contact me:

[email protected]

I would like to pay you for making "intelligent-trading-bot" ready to go on a Linux or Windows Oracle VM image.

I would also pay for future updates.

Missing parameter in score_aggregation

When running the signals script I'm getting the following error:
Loading predictions from input file: C:\intelligent-trading-bot-master\DATA_ITB\BTCUSDT\predictions.csv
Predictions loaded. Length: 237774. Width: 27
Traceback (most recent call last):
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\intelligent-trading-bot-master\scripts\signals.py", line 153, in <module>
    main()
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python39_64\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "C:\intelligent-trading-bot-master\scripts\signals.py", line 91, in main
    aggregate_scores(df, model.get('score_aggregation'), 'buy_score_column', buy_labels)
  File "C:\intelligent-trading-bot-master\common\signal_generation.py", line 34, in aggregate_scores
    raise ValueError(f"Configuration must specify 'score_aggregation' parameters")
ValueError: Configuration must specify 'score_aggregation' parameters

I suppose I'm missing a parameter in score_aggregation?
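Yes: the error says the model entry in the config has no score_aggregation section at all, so the fix is to add one (the sample configs for your version should show the expected keys). For intuition, the aggregation itself is typically a weighted mean of the per-label score columns; a generic sketch, not the project's actual function signature:

```python
def aggregate_scores(rows, columns, weights=None):
    """Weighted mean of several score columns, producing one aggregated score per row."""
    weights = weights or [1.0] * len(columns)
    total = sum(weights)
    return [sum(row[c] * w for c, w in zip(columns, weights)) / total for row in rows]

rows = [{"high_10": 0.75, "high_20": 0.25}]
print(aggregate_scores(rows, ["high_10", "high_20"]))  # [0.5]
```

The key point is that the config must name which score columns to aggregate and how to weight them, which is exactly what the missing section provides.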

Some assistance testing intelligent bot

Hello, my name is Innocent, and I am new to algorithmic trading. First of all, I would like to thank you for the hard work you put into this project. I took an interest in it and tried testing it out on my machine, but I ran into some errors. I wonder if you can help me.

The problem I encountered is when trying to run the intelligent bot to make some trades. I had finished setting up the bot server on my local machine and was successfully receiving signals in my Telegram account. However, when I tried to run trader.py using the command python -m services.trader -c config-sample-v0.6.dev.json, I did not receive any response, even after setting up TensorRT on my computer.

I was wondering if you could guide me on how to set up trader.py and other files in the services folder so that I can test them out.

Model Training

Thanks for your work! Any advice on model training? I ran training but got scores around zero.

I need your help

Hello, thank you for publishing this project.
I have been researching digital currency exchanges for some time and I really do not know where to start. I am most interested in working with Python and need guidance on what the database structure and data types should look like. Thanks for taking the time to guide me.

AI trading based on news

Can you please let the AI read crypto news and trade accordingly? Thanks.
I'd like to use Ollama as the AI server.

pandas.errors.EmptyDataError: No columns to parse from file

[root@Aranoch intelligent-trading-bot]# python -m scripts.train -c config.json
2023-05-24 04:10:33.407856: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-05-24 04:10:33.460921: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-05-24 04:10:33.461423: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-24 04:10:34.239841: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Loading data from source data file /home/intelligent-trading-bot/data/BTCUSDT/matrix.csv...
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/intelligent-trading-bot/scripts/train.py", line 193, in <module>
    main()
  File "/usr/local/python3/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python3/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/usr/local/python3/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python3/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/intelligent-trading-bot/scripts/train.py", line 51, in main
    df = pd.read_csv(file_path, parse_dates=[time_column], nrows=P.in_nrows)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python3/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 912, in read_csv
    return _read(filepath_or_buffer, kwds)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python3/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 577, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python3/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 1407, in __init__
    self._engine = self._make_engine(f, self.engine)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python3/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 1679, in _make_engine
    return mapping[engine](f, **self.options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/python3/lib/python3.11/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 93, in __init__
    self._reader = parsers.TextReader(src, **kwds)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pandas/_libs/parsers.pyx", line 555, in pandas._libs.parsers.TextReader.__cinit__
pandas.errors.EmptyDataError: No columns to parse from file
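EmptyDataError here means matrix.csv exists but contains nothing, i.e. an earlier pipeline step (download, merge, or feature generation) produced no rows. A small guard that could run before read_csv, with a hypothetical helper name:

```python
from pathlib import Path

def check_nonempty(file_path: str) -> Path:
    """Fail fast with a helpful message instead of pandas' EmptyDataError."""
    p = Path(file_path)
    if not p.is_file():
        raise FileNotFoundError(f"{p} does not exist - run the download/merge steps first")
    if p.stat().st_size == 0:
        raise ValueError(f"{p} is empty - an earlier pipeline step produced no data")
    return p
```

Re-running the download and merge scripts and checking that matrix.csv has a header and rows should resolve the training error itself.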

Error while analyzing data: 'buy_window'

Hello, I hope you're doing well. I want to take a moment to appreciate the enormous amount of work you have done; it's magical what you have created.
I faced this error while trying to run the project on my Ubuntu server. Any idea how I could get it to work or fix what's going on?

--Error:

(venv) root@ygbanks-com:~/BTCML# python3 -m service.server -c config.json
Initializing server. Trade pair: BTCUSDT.
/root/BTCML/venv/lib/python3.10/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator StandardScaler from version 1.2.1 when using version 1.2.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
/root/BTCML/venv/lib/python3.10/site-packages/sklearn/base.py:318: UserWarning: Trying to unpickle estimator LogisticRegression from version 1.2.1 when using version 1.2.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
https://scikit-learn.org/stable/model_persistence.html#security-maintainability-limitations
warnings.warn(
/root/BTCML/service/server.py:77: DeprecationWarning: There is no current event loop
App.loop = asyncio.get_event_loop()
Finished health check (connection, server status etc.)
Finished initial data collection.
Scheduler started.
Error while analyzing data: 'buy_window'
Error while analyzing data: 'buy_window'
Error while analyzing data: 'buy_window'

Configs samples

Hello! Could you update the sample configs? It seems they are missing parameters needed for the full functionality of the scripts; for example, "time_column" is missing entirely, as is "freq". I tried to start training several times, but the process kept getting interrupted due to an incorrect training configuration.

Sincerely!

Config from telegram

Hello friend, is it possible to share the config currently used for the Telegram channel? And on what hardware does it run? I'm asking about the specifications.

Hi, I encountered an issue while executing the following command after your recent commit: python -m scripts.train_signals -c .\configs\config.jsonc.

python -m scripts.train_signals -c .\configs\config.jsonc
2023-12-12 22:28:15.543659: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
WARNING:tensorflow:From C:\Users\RLDC\AppData\Local\Programs\Python\Python310\lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

Loading signals from input file: C:\DATA_ITB\BTCUSDT\signals.csv
Signals loaded. Length: 525600. Width: 10
Input data size 525600 records. Range: [2022-12-12 10:54:00, 2023-12-12 10:53:00]
MODELS: 0%| | 0/36 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\RLC\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\RLC\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\dell laptop E backup\mainbackup\trd\2025\check\intelligent-trading-bot\scripts\train_signals.py", line 258, in <module>
    main()
  File "C:\Users\RLC\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\RLC\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\RLC\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\RLC\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "D:\dell laptop E backup\mainbackup\trd\2025\check\intelligent-trading-bot\scripts\train_signals.py", line 178, in main
    apply_rule_with_score_thresholds(df, score_column_names, trade_model)
  File "D:\dell laptop E backup\check\intelligent-trading-bot\common\gen_signals.py", line 209, in apply_rule_with_score_thresholds
    signal_column = model.get("signal_columns")[0]
TypeError: 'NoneType' object is not subscriptable
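The TypeError means model.get("signal_columns") returned None, i.e. the trade model section of the config has no signal_columns entry; the expected keys may have changed between commits, so comparing your config with the current sample config should reveal the missing entry. A defensive lookup with a hypothetical helper name gives a readable error instead:

```python
def require(config: dict, key: str):
    """Fetch a config entry, failing with a clear message instead of a TypeError."""
    value = config.get(key)
    if not value:
        raise KeyError(f"Config entry {key!r} is missing or empty; "
                       f"available keys: {sorted(config)}")
    return value
```

Listing the available keys in the error makes it obvious which section of the config is out of date.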

PytzUsageWarning

2022-02-07 06:07:46.418122: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-02-07 06:07:46.418169: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/usr/local/lib/python3.8/dist-packages/statsmodels/compat/pandas.py:65: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
  from pandas import Int64Index as NumericIndex
Initializing server. Trade pair: BTCUSDT.
2022-02-07 06:07:49.759194: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-02-07 06:07:49.759243: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-02-07 06:07:49.759266: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ubuntu-c-4-8gib-fra1-01): /proc/driver/nvidia/version does not exist
2022-02-07 06:07:49.760338: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Finished health check (connection, server status etc.)
Finished initial data collection.
/usr/local/lib/python3.8/dist-packages/apscheduler/util.py:95: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
  if obj.zone == 'local':
/usr/local/lib/python3.8/dist-packages/apscheduler/triggers/cron/__init__.py:146: PytzUsageWarning: The normalize method is no longer necessary, as this time zone supports the fold attribute (PEP 495). For more details on migrating to a PEP 495-compliant implementation, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
  return self.timezone.normalize(dateval + difference), fieldnum
Scheduler started.

Is there any solution to this pytz issue?
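The PytzUsageWarnings are only deprecation notices (the server keeps running), and the migration they point to is simple: on Python 3.9+ the stdlib zoneinfo module replaces pytz, and the fold attribute (PEP 495) makes localize()/normalize() unnecessary. A sketch of the replacement pattern:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib replacement for pytz, Python 3.9+

def localize(naive: datetime, tz_name: str) -> datetime:
    """Attach a time zone directly (PEP 495) instead of pytz's tz.localize()."""
    return naive.replace(tzinfo=ZoneInfo(tz_name))

aware = localize(datetime(2022, 2, 7, 6, 7), "UTC")
print(aware.isoformat())  # 2022-02-07T06:07:00+00:00
```

Note that the warnings in the log above come from apscheduler's internals, so they disappear only when apscheduler itself (or its configured timezone provider) is upgraded, not from changes in this project's code.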

How to combine more than two scores?

Hello,

Currently it's not possible to combine more than two scores, right?
The columns property only allows two values: the buy column and the sell column.
How can I combine the scores of multiple prediction models, e.g. "high_20_lc", "low_20_lc", "high_10_lc" and "low_10_lc", into one final score?

 "signal_sets": [
        {
            // Combine two unsigned scores into one signed score
            "generator": "combine", "config": {
                "columns": ["high_20_lc", "low_20_lc"],  // 2 columns: with grow score and fall score
                "names": "trade_score",  // Output column name: positive values - buy, negative values - sell
                "combine": "difference", // "no_combine" (or empty), "relative", "difference"
                "coefficient": 1.0, "constant": 0.0  // Normalize
            }
        }
    ],
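As a workaround, an N-column combination can be computed outside the two-column "combine" generator: average the grow-score columns, average the fall-score columns, and take the difference as the signed score. A generic sketch (this is not the project's combine generator, just the same idea generalized):

```python
def combine_scores(row, grow_cols, fall_cols):
    """Signed trade score: positive when grow scores dominate, negative otherwise."""
    grow = sum(row[c] for c in grow_cols) / len(grow_cols)
    fall = sum(row[c] for c in fall_cols) / len(fall_cols)
    return grow - fall

row = {"high_20_lc": 0.8, "high_10_lc": 0.6, "low_20_lc": 0.2, "low_10_lc": 0.4}
score = combine_scores(row, ["high_20_lc", "high_10_lc"], ["low_20_lc", "low_10_lc"])
```

An equivalent effect inside the config might be achievable by first aggregating the two high-score columns and the two low-score columns into intermediate columns and then feeding those two into the existing "combine" generator.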

Model performance issue

@asavinov Thanks for sharing; very clear. Going through your code, I found that the trained model does not perform well (precision and recall) at predicting future price declines. Do you have any suggestions?

Trade not run

I try to run the trade service, but it stops for no apparent reason.

Freqtrade Hyperopt

Hi,

I just found your work, and it seems really good.
Maybe you could integrate your signal system as a hyperopt in Freqtrade; then you wouldn't need to spend more time on the trader part, and it would help quite a few people!
If not, I will try your project and see whether my little knowledge of Python can help with the trader part.

Thanks for your work,
Tcheksa

Sample of config file

Do you have a sample of your config file?
Is the "freq" option hard-coded in all scripts?

I get an error with "freq": "1h" but have no idea what the problem is; I'm trying to debug the df.

lightgbm.basic.LightGBMError: Check failed: (num_data) > (0) at /Users/runner/work/1/s/python-package/compile/src/io/dataset.cpp, line 33

Can we have a bit more documentation?

Thx
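The LightGBM message "Check failed: (num_data) > (0)" means the training set was empty when fit() was called; with "freq": "1h" that usually points to the data being resampled or filtered down to zero rows (e.g. a frequency that doesn't match the downloaded candles). A minimal guard before training, with hypothetical names:

```python
def check_training_data(X, y, min_rows: int = 1):
    """Raise a readable error before LightGBM's internal 'num_data > 0' check fails."""
    if len(X) < min_rows or len(X) != len(y):
        raise ValueError(f"Bad training set: {len(X)} feature rows, {len(y)} labels - "
                         "check the 'freq' setting and the date range of the data")
```

Printing the shape of the feature matrix right before training is the quickest way to confirm whether the "freq" setting emptied the dataset.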

Warnings while working

Hello, while working with an already trained model (predict_models), the following warnings occur:

1/1 [==============================] - 0s 13ms/step
C:\Users\sanya\PycharmProjects\intelligent-trading-bot\venv\lib\site-packages\scipy\stats\_stats_mstats_common.py:175: RuntimeWarning: invalid value encountered in double_scalars
  slope = ssxym / ssxm
C:\Users\sanya\PycharmProjects\intelligent-trading-bot\venv\lib\site-packages\scipy\stats\_stats_mstats_common.py:189: RuntimeWarning: invalid value encountered in sqrt
  t = r * np.sqrt(df / ((1.0 - r + TINY)*(1.0 + r + TINY)))
C:\Users\sanya\PycharmProjects\intelligent-trading-bot\venv\lib\site-packages\scipy\stats\_stats_mstats_common.py:192: RuntimeWarning: invalid value encountered in double_scalars
  slope_stderr = np.sqrt((1 - r**2) * ssym / ssxm / df)
C:\Users\sanya\PycharmProjects\intelligent-trading-bot\venv\lib\site-packages\scipy\stats\_stats_mstats_common.py:175: RuntimeWarning: invalid value encountered in double_scalars
  slope = ssxym / ssxm
C:\Users\sanya\PycharmProjects\intelligent-trading-bot\venv\lib\site-packages\scipy\stats\_stats_mstats_common.py:189: RuntimeWarning: invalid value encountered in sqrt
  t = r * np.sqrt(df / ((1.0 - r + TINY)*(1.0 + r + TINY)))
C:\Users\sanya\PycharmProjects\intelligent-trading-bot\venv\lib\site-packages\scipy\stats\_stats_mstats_common.py:192: RuntimeWarning: invalid value encountered in double_scalars
  slope_stderr = np.sqrt((1 - r**2) * ssym / ssxm / df)
1/1 [==============================] - 0s 15ms/step

What can this indicate? Can it somehow affect the trained model and the whole prediction process? I see that the linregress method is only used in the add_linear_trends method (in feature_generation_rolling_agg.py and in the train_signal_models class, which I have not touched).

Is it possible that something failed at the data downloading stage? I used download_data_binance.py. By the way, no such errors occurred while downloading data, generating features, merging data, generating labels, or training.

I also use the latest version of the repository uploaded to GitHub.

I'm new to Python, which is why I'm asking such simple questions. Hope for understanding 😁

Regards!
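Those RuntimeWarnings usually mean linregress was fed a degenerate rolling window (too few points, or values with zero variance, e.g. a flat stretch of data), so intermediate ratios become 0/0 = NaN. They don't break the prediction loop, but the affected trend features come out as NaN for those windows. A standalone sketch of the slope computation with an explicit guard (not the project's add_linear_trends code):

```python
def slope(xs, ys):
    """Least-squares slope of ys over xs; 0.0 for degenerate (flat or tiny) windows."""
    n = len(xs)
    if n < 2:
        return 0.0
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    ssxm = sum((x - mean_x) ** 2 for x in xs)
    if ssxm == 0:  # zero variance in x: linregress would emit 0/0 warnings here
        return 0.0
    ssxym = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    return ssxym / ssxm

print(slope([0, 1, 2], [1, 3, 5]))  # 2.0
print(slope([1, 1, 1], [1, 2, 3]))  # 0.0 (zero-variance guard)
```

So the warnings are not a download failure; checking the raw candles for flat or missing stretches around the warned rows would confirm the cause.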
