Intelligently optimizes technical indicators and optionally selects the least intercorrelated for use in machine learning models

License: MIT License


tuneta's Introduction

tuneTA

TuneTA optimizes technical indicators using a distance correlation measure to a user-defined target feature such as next day return. Indicator parameter(s) are selected using clustering techniques to avoid "peak" or "lucky" values. The set of tuned indicators can be pruned by choosing the most correlated with the target while minimizing correlation with each other (based on a user-defined maximum correlation). TuneTA maintains its state to add all tuned indicators to multiple data sets (train, validation, test).

Features

  • Given financial prices (OHLCV) and a target feature such as return, TuneTA optimizes the parameter(s) of technical indicator(s) using distance correlation to the target feature. Distance correlation captures both linear and non-linear strength and provides a significant benefit over the popular Pearson correlation (see the sketch after this list).
  • Optimal indicator parameters are selected in a multi-step clustering process to avoid values which are not consistent with neighboring values, providing a more robust parameter selection.
  • Prune indicators with a maximum correlation to each other. This is helpful for machine learning models which generally perform better with lower feature intercorrelation.
  • Supports tuning indicator(s) for single or multiple equities. Multiple equities can be combined into a market basket where indicator parameters are optimized across the entire basket of equities.
  • Multiple time ranges (e.g., short, medium, and long term)
  • Supports pruning preexisting features
  • Persists state to generate identical indicators on multiple datasets (train, validation, test)
  • Parallel processing for technical indicator optimization as well as correlation pruning
  • Supports technical indicators produced by the following packages: pandas-ta (pta), TA-Lib (tta), and FinTA (fta)
  • Correlation report of target and features
  • Early stopping
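
To illustrate the distance correlation point above, here is a minimal sketch (not part of TuneTA's API) comparing Pearson and distance correlation on a purely non-linear relationship, using the third-party dcor package (pip install dcor):

# Minimal sketch: Pearson misses a non-linear dependence, distance correlation does not
import numpy as np
import dcor

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x ** 2  # purely non-linear dependence on x

print(f"Pearson:  {np.corrcoef(x, y)[0, 1]:.3f}")         # near 0: misses the relationship
print(f"Distance: {dcor.distance_correlation(x, y):.3f}")  # clearly positive: captures it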

Overview

TuneTA simplifies the process of optimizing many technical indicators while avoiding "peak" values, and selecting the best indicators with minimal correlation between each other (optional). At a high level, TuneTA performs the following steps:

  1. For each indicator, Optuna searches for parameter(s) which maximize its correlation to a user-defined target (for example, next day return).

  2. After the specified Optuna trials are complete, a 3-step KMeans clustering method is used to select the optimal parameter(s) (a simplified sketch follows these steps):

    1. Each trial is placed in its nearest-neighbor cluster based on its distance correlation to the target. The optimal number of clusters is determined using the elbow method. The cluster with the highest average correlation is selected with respect to its membership. In other words, a weighted score selects the cluster with the highest correlation but also the most trials.
    2. After the best correlation cluster is selected, the parameters of the trials within that cluster are also clustered. Again, the best cluster of indicator parameter(s) is selected with respect to its membership.
    3. Finally, the best trial closest to the center of the best parameter cluster is selected.
  3. Optionally, the tuned indicators can be pruned by enforcing a maximum correlation between each indicator and all of the others.

  4. Finally, TuneTA generates all optimized indicators.
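
The following is a simplified sketch of step 2a (clustering trial correlations and selecting the best cluster by a membership-weighted score). It mirrors the scikit-learn KMeans and yellowbrick elbow approach used internally, but it is not TuneTA's exact implementation:

# Simplified sketch of correlation clustering (assumes scikit-learn and yellowbrick)
import numpy as np
from sklearn.cluster import KMeans
from yellowbrick.cluster import KElbowVisualizer

correlations = np.random.rand(100, 1)  # one distance correlation per Optuna trial

viz = KElbowVisualizer(KMeans(n_init=10), k=(2, 10))
viz.fit(correlations)                  # elbow method chooses the cluster count
k = viz.elbow_value_ or 2              # elbow_value_ can be None if no clear elbow

labels = KMeans(n_clusters=k, n_init=10).fit_predict(correlations)

# Summing member correlations rewards clusters that are both highly
# correlated and well populated (mean correlation times membership)
best = max(range(k), key=lambda c: correlations[labels == c].sum())
trial_indices = np.where(labels == best)[0]  # these trials' parameters are clustered next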


Installation

Note: TA-Lib is force-reinstalled as the last step to ensure it is compiled correctly for your environment.

Install the latest development version from GitHub:

pip install -U git+https://github.com/jmrichardson/tuneta
pip install --force-reinstall --no-cache-dir --no-deps TA-Lib

Install the latest release:

pip install -U tuneta
pip install --force-reinstall --no-cache-dir --no-deps TA-Lib

Install using Colab:

!wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz
!tar -xzvf ta-lib-0.4.0-src.tar.gz
%cd ta-lib
!./configure --prefix=/usr
!make
!make install
!pip install TA-Lib
!pip install -U git+https://github.com/jmrichardson/tuneta
!pip install -U git+https://github.com/DistrictDataLabs/yellowbrick.git
!pip install numpy==1.20.3
!pip install numba==0.54.1
!pip install pandas==1.3.4
!pip install scikit-learn==1.0.1

Examples

Tune RSI Indicator

For simplicity, let's optimize a single indicator:

  • RSI Indicator
  • Two time periods (short and long term): 4-30 and 31-180
  • Maximum of 100 trials per time period to search for the best indicator parameter
  • Stop after 20 trials per time period without improvement

The following is a snippet of the complete example found in the examples directory:

tt = TuneTA(n_jobs=4, verbose=True)
tt.fit(X_train, y_train,
    indicators=['tta.RSI'],
    ranges=[(4, 30), (31, 180)],
    trials=100,
    early_stop=20,
)

Two studies are created, one for each time period, with up to 100 trials each to test different indicator length values. During tuning, the correlation value of each trial's parameter set is displayed. The best trial, with its respective parameter value, is saved for both time ranges.
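
Conceptually, each study resembles the following simplified sketch (TuneTA's real objective, in tuneta/optimize.py, also handles multi-parameter indicators, warm-up NaNs, and early stopping via an Optuna callback; the dcor package is assumed for distance correlation):

# Conceptual sketch of a single TuneTA study for RSI over the (4, 30) range
import optuna
import pandas_ta as pta
import dcor

def objective(trial):
    length = trial.suggest_int("length", 4, 30)
    ind = pta.rsi(X_train["Close"], length=length)  # yfinance-style column name
    mask = ind.notna() & y_train.notna()            # score only rows valid in both series
    return dcor.distance_correlation(ind[mask], y_train[mask])

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)  # e.g. {'length': 19}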

To view the correlation of both indicators to the target return as well as each other:

tt.report(target_corr=True, features_corr=True)
Indicator Correlation to Target:

                         Correlation
---------------------  -------------
tta_RSI_timeperiod_19       0.23393
tta_RSI_timeperiod_36       0.227434

Indicator Correlation to Each Other:

                         tta_RSI_timeperiod_19    tta_RSI_timeperiod_36
---------------------  -----------------------  -----------------------
tta_RSI_timeperiod_19                  0                        0.93175
tta_RSI_timeperiod_36                  0.93175                  0

To generate both RSI indicators on a data set:

features = tt.transform(X_train)
            tta_RSI_timeperiod_19  tta_RSI_timeperiod_36
Date                                                    
2011-10-03                    NaN                    NaN
2011-10-04                    NaN                    NaN
2011-10-05                    NaN                    NaN
2011-10-06                    NaN                    NaN
2011-10-07                    NaN                    NaN
...                           ...                    ...
2018-09-25              62.173261              60.713051
2018-09-26              59.185666              59.362731
2018-09-27              61.026238              60.210235
2018-09-28              61.094793              60.241806
2018-10-01              63.384824              61.305540

Tune Multiple Indicators

Building from the previous example, let's optimize a handful of indicators:

tt.fit(X_train, y_train,
    indicators=['pta.slope', 'pta.stoch', 'tta.MACD', 'tta.MOM', 'fta.SMA'],
    ranges=[(4, 60)],
    trials=100,
    early_stop=20,
)

You can view how long it took to optimize each indicator:

tt.fit_times()
    Indicator      Times
--  -----------  -------
 1  pta.stoch      23.56
 0  tta.MACD       12.03
 2  pta.slope       6.82
 4  fta.SMA         6.42
 3  tta.MOM         5.7

Let's have a look at each indicator's distance correlation to target as well as each other:

tt.report(target_corr=True, features_corr=True)
Indicator Correlation to Target:
                                                       Correlation
---------------------------------------------------  -------------
tta_MACD_fastperiod_43_slowperiod_4_signalperiod_52       0.236575
pta_stoch_k_57_d_29_smooth_k_2                            0.231091
pta_slope_length_15                                       0.215603
tta_MOM_timeperiod_15                                     0.215603
fta_SMA_period_30                                         0.080596

Indicator Correlation to Each Other:
                                                       tta_MACD_fastperiod_43_slowperiod_4_signalperiod_52    pta_stoch_k_57_d_29_smooth_k_2    pta_slope_length_15    tta_MOM_timeperiod_15    fta_SMA_period_30
---------------------------------------------------  -----------------------------------------------------  --------------------------------  ---------------------  -----------------------  -------------------
tta_MACD_fastperiod_43_slowperiod_4_signalperiod_52                                               0                                 0.886265               0.779794                 0.779794             0.2209
pta_stoch_k_57_d_29_smooth_k_2                                                                    0.886265                          0                      0.678311                 0.678311             0.110129
pta_slope_length_15                                                                               0.779794                          0.678311               0                        1                    0.167069
tta_MOM_timeperiod_15                                                                             0.779794                          0.678311               1                        0                    0.167069
fta_SMA_period_30                                                                                 0.2209                            0.110129               0.167069                 0.167069             0

Notice above that both slope(15) and mom(15) are perfectly correlated in the intercorrelation report (indicated by the value of 1) and have the same correlation to the target. Initially, I thought this had to be a bug, but they are indeed identical apart from scale (the original README includes a plot of both series with identical heat-map coloring).

Let's remove correlated indicators with a maximum threshold of .85 for demonstration purposes. Based on the above correlation report, the two indicator pairs with a correlation greater than .85 are MACD/Stoch and Slope/Mom. From each pair, we remove the indicator less correlated to the target (Stoch is removed because MACD is more correlated to the target; either Slope or Mom can be removed since they are identically correlated to the target). Notice that all remaining indicators now have an intercorrelation less than .85:

tt.prune(max_inter_correlation=.85)
Indicator Correlation to Target:
                                                       Correlation
---------------------------------------------------  -------------
tta_MACD_fastperiod_43_slowperiod_4_signalperiod_52       0.236576
pta_slope_length_15                                       0.215603
fta_SMA_period_6                                          0.099375
Indicator Correlation to Each Other:
                                                       tta_MACD_fastperiod_43_slowperiod_4_signalperiod_52    pta_slope_length_15    fta_SMA_period_6
---------------------------------------------------  -----------------------------------------------------  ---------------------  ------------------
tta_MACD_fastperiod_43_slowperiod_4_signalperiod_52                                               0                      0.779794            0.252834
pta_slope_length_15                                                                               0.779794               0                   0.188658
fta_SMA_period_6                                                                                  0.252834               0.188658            0
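
The pruning idea can be sketched as follows (a hedged illustration, not TuneTA's exact implementation, using Pearson correlation for brevity where TuneTA uses distance correlation): repeatedly find the most inter-correlated remaining pair and drop the member with the weaker correlation to the target, until no pair exceeds the threshold.

# Greedy intercorrelation pruning sketch
import numpy as np
import pandas as pd

def prune(features: pd.DataFrame, target_corr: pd.Series, max_inter: float) -> list:
    """Drop, from each over-correlated pair, the feature weaker vs the target."""
    keep = list(features.columns)
    while len(keep) > 1:
        corr = features[keep].corr().abs()   # Pearson here; TuneTA uses distance correlation
        np.fill_diagonal(corr.values, 0)
        f1, f2 = corr.stack().idxmax()       # most inter-correlated remaining pair
        if corr.loc[f1, f2] <= max_inter:
            break                            # no pair exceeds the threshold
        keep.remove(f1 if target_corr[f1] < target_corr[f2] else f2)
    return keep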

As in the previous example, we can easily create features:

features = tt.transform(X_train)

Tune and Prune all Indicators

Building from the previous examples, let's optimize all available indicators. Note the addition of min_target_correlation, which removes indicators below the given correlation threshold to the target:

tt.fit(X_train, y_train,
    indicators=['all'],
    ranges=[(4, 30)],
    trials=500,
    early_stop=100,
    min_target_correlation=.05,
)

As in the previous examples we can see the correlation to the target with the report function:

tt.report(target_corr=True, features_corr=False)

For brevity, only the top 10 of the many results are shown:

Indicator Correlation to Target:
                                                                              Correlation
--------------------------------------------------------------------------  -------------
pta_natr_length_4_scalar_27                                                      0.253049
tta_NATR_timeperiod_6                                                            0.247999
tta_MACD_fastperiod_3_slowperiod_29_signalperiod_25                              0.240217
pta_macd_fast_3_slow_29_signal_25                                                0.240217
pta_pgo_length_26                                                                0.239584
pta_tsi_fast_28_slow_2_signal_25_scalar_15                                       0.238303
pta_smi_fast_29_slow_2_signal_20_scalar_26                                       0.238294
fta_TSI_long_3_short_29_signal_26                                                0.234654
tta_RSI_timeperiod_19                                                            0.23393
pta_rsi_length_19_scalar_26                                                      0.23393
...

Let's prune the indicators to have a maximum of .7 correlation with any of the other indicators:

tt.prune(max_inter_correlation=.7)

Show the correlation for both target and intercorrelation after prune:

tt.report(target_corr=True, features_corr=True)

Again, only the top 10 rows are shown for brevity (the intercorrelation table is omitted as well):

                                                       Correlation
---------------------------------------------------  -------------
pta_natr_length_4_scalar_27                               0.253049
tta_MACD_fastperiod_3_slowperiod_29_signalperiod_25       0.240217
pta_pvol_                                                 0.199302
pta_kc_length_3_scalar_27                                 0.193162
fta_VZO_period_20                                         0.171986
fta_DMI_period_4                                          0.148614
pta_pvo_fast_27_slow_28_signal_29_scalar_15               0.14692
pta_cfo_length_28_scalar_26                               0.141013
fta_IFT_RSI_rsi_period_28_wma_period_4                    0.140977
pta_stc_fast_18_slow_27                                   0.140789
...

Tune Market

TuneTA supports tuning indicators across a market of equities. Simply index the input dataframe by both date and symbol, as shown below. Notice the dataframe still contains OHLCV but is indexed by both date and symbol (see tune_market.py in the examples folder):
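
For example, adapted from examples/tune_market.py, a three-symbol basket can be built like this:

# Build a (date, symbol) MultiIndexed basket of OHLCV data with a return target
import pandas as pd
import yfinance as yf
from pandas_ta import percent_return

frames = []
for sym in ["AAPL", "MSFT", "GOOG"]:
    df = yf.download(sym, period="10y", interval="1d", auto_adjust=True)
    df["sym"] = sym
    df.set_index("sym", append=True, inplace=True)  # index becomes (Date, sym)
    df["return"] = percent_return(df.Close, offset=-1)
    frames.append(df)

X = pd.concat(frames).sort_index()
y = X.pop("return")  # next-day return target; X keeps OHLCV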

Use TuneTA in the same way as in the previous examples.

Prune Existing Features

If you have preexisting features in your dataframe (regardless of whether you used TuneTA to create them), I've added a helper function, prune_df, to prune all of the features based on intercorrelation. This is helpful, for example, if you have custom features that you would like to combine with TuneTA's and keep only the features most correlated to the target with minimal intercorrelation. The prune_df helper takes a dataframe and returns the column names of the features to keep. The column names can then be used to filter your datasets:

# Features to keep
feature_names = tt.prune_df(X_train, y_train, min_target_correlation=.05, max_inter_correlation=.7, report=False)

# Filter datasets
X_train = X_train[feature_names]
X_test = X_test[feature_names]

See prune_dataframe.py in the examples folder.

TuneTA fit usage

tt.fit(X, y, indicators, ranges, trials, early_stop)

Parameters:

  • indicators: List of indicators to optimize
    • ['all']: All indicators
    • ['pta']: All pandas-ta indicators
    • ['tta']: All ta-lib indicators
    • ['fta']: All fin-ta indicators
    • ['tta.RSI']: RSI indicator from ta-lib
    • See config.py for available indicators and the parameters that are optimized
  • ranges: Time periods to optimize
    • [(2, 30)]: Single time period (2 to 30 days)
    • [(2, 30), (31, 90)]: Two time periods (short and long term)
  • trials: Number of trials to search for optimal parameters
  • early_stop: Max number of trials without improvement
  • min_target_correlation: Minimum correlation to target required
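
Putting it all together, here is a complete, runnable example (adapted from examples/tune_all.py):

# End-to-end: download data, tune all indicators, prune, and transform
from tuneta.tune_ta import TuneTA
from pandas_ta import percent_return
from sklearn.model_selection import train_test_split
import yfinance as yf

X = yf.download("SPY", period="10y", interval="1d", auto_adjust=True)
y = percent_return(X.Close, offset=-1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, shuffle=False)

tt = TuneTA(n_jobs=6, verbose=True)
tt.fit(X_train, y_train,
    indicators=['all'],
    ranges=[(2, 30)],
    trials=500,
    early_stop=100,
)
tt.prune(max_inter_correlation=.7)

# The fitted state generates identical indicators on any dataset
features_train = tt.transform(X_train)
features_test = tt.transform(X_test)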


tuneta's Issues

Optuna execution error of multiple pta indicators

After adapting the code for my own use, I found that some pandas-ta indicators raise strange errors similar to my previous issue with tos_stdevall, though the message is slightly different.

The error seems common to pta.stochrsi, pta.tsi, and pta.smi. Originally I thought it was an isolated issue with one indicator, so I just dropped it and reran, but after running "all" three times, the errors for pta.stochrsi, pta.tsi, and pta.smi appeared one after another. These errors may have one thing in common: they occur when the indicator has multiple period parameters. So there may be more indicators with this error that I haven't found yet. Hopefully you can solve it soon; thanks again.

Below is the error message for pta.smi.


RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 123, in _objective
    res = [eval_res(X, self.function, self.idx, trial, sym=sym) for sym, X in X.groupby(level=1)]
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 123, in <listcomp>
    res = [eval_res(X, self.function, self.idx, trial, sym=sym) for sym, X in X.groupby(level=1)]
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 98, in eval_res
    res = eval(function)
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.7/dist-packages/pandas_ta/momentum/smi.py", line 23, in smi
    tsi_df = tsi(close, fast=fast, slow=slow, signal=signal, scalar=scalar)
  File "/usr/local/lib/python3.7/dist-packages/pandas_ta/momentum/tsi.py", line 34, in tsi
    tsi_signal = ma(mamode, tsi, length=signal)
  File "/usr/local/lib/python3.7/dist-packages/pandas_ta/overlap/ma.py", line 73, in ma
    else: return ema(source, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/pandas_ta/overlap/ema.py", line 22, in ema
    ema = EMA(close, length)
  File "/usr/local/lib/python3.7/dist-packages/talib/__init__.py", line 35, in wrapper
    result = func(*args, **kwargs)
  File "talib/_func.pxi", line 2931, in talib._ta_lib.EMA
  File "talib/_func.pxi", line 68, in talib._ta_lib.check_begidx1
Exception: inputs are all NaN

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/multiprocess/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 189, in fit
    n_trials=self.n_trials, callbacks=[_early_stopping_opt])
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/study.py", line 409, in optimize
    show_progress_bar=show_progress_bar,
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 76, in _optimize
    progress_bar=progress_bar,
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 163, in _optimize_sequential
    trial = _run_trial(study, func, catch)
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 264, in _run_trial
    raise func_err
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 213, in _run_trial
    value_or_values = func(trial)
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 188, in <lambda>
    self.study.optimize(lambda trial: _objective(self, trial, X, y),
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 128, in _objective
    raise RuntimeError(f"Optuna execution error: {self.function}")
RuntimeError: Optuna execution error: pta.smi(X.close, fast=trial.suggest_int('fast', 2, 30), slow=trial.suggest_int('slow', 2, 30), signal=trial.suggest_int('signal', 2, 30), scalar=trial.suggest_int('scalar', 2, 30), )
"""

The above exception was the direct cause of the following exception:

RuntimeError Traceback (most recent call last)
in ()
7 ranges=[(2, 30)],
8 trials=100,
----> 9 early_stop=50,
10 )
11

2 frames
/usr/local/lib/python3.7/dist-packages/multiprocess/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):

RuntimeError: Optuna execution error: pta.smi(X.close, fast=trial.suggest_int('fast', 2, 30), slow=trial.suggest_int('slow', 2, 30), signal=trial.suggest_int('signal', 2, 30), scalar=trial.suggest_int('scalar', 2, 30), )

---

Below is the error message for pta.stochrsi.

RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 123, in _objective
    res = [eval_res(X, self.function, self.idx, trial, sym=sym) for sym, X in X.groupby(level=1)]
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 123, in <listcomp>
    res = [eval_res(X, self.function, self.idx, trial, sym=sym) for sym, X in X.groupby(level=1)]
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 98, in eval_res
    res = eval(function)
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.7/dist-packages/pandas_ta/momentum/stochrsi.py", line 30, in stochrsi
    stochrsi_d = ma(mamode, stochrsi_k, length=d)
  File "/usr/local/lib/python3.7/dist-packages/pandas_ta/overlap/ma.py", line 65, in ma
    elif name == "sma": return sma(source, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/pandas_ta/overlap/sma.py", line 20, in sma
    sma = SMA(close, length)
  File "/usr/local/lib/python3.7/dist-packages/talib/__init__.py", line 35, in wrapper
    result = func(*args, **kwargs)
  File "talib/_func.pxi", line 4538, in talib._ta_lib.SMA
  File "talib/_func.pxi", line 68, in talib._ta_lib.check_begidx1
Exception: inputs are all NaN

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/multiprocess/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 189, in fit
    n_trials=self.n_trials, callbacks=[_early_stopping_opt])
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/study.py", line 409, in optimize
    show_progress_bar=show_progress_bar,
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 76, in _optimize
    progress_bar=progress_bar,
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 163, in _optimize_sequential
    trial = _run_trial(study, func, catch)
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 264, in _run_trial
    raise func_err
  File "/usr/local/lib/python3.7/dist-packages/optuna/study/_optimize.py", line 213, in _run_trial
    value_or_values = func(trial)
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 188, in <lambda>
    self.study.optimize(lambda trial: _objective(self, trial, X, y),
  File "/usr/local/lib/python3.7/dist-packages/tuneta/optimize.py", line 128, in _objective
    raise RuntimeError(f"Optuna execution error: {self.function}")
RuntimeError: Optuna execution error: pta.stochrsi(X.close, length=trial.suggest_int('length', 2, 30), k=trial.suggest_int('k', 2, 30), d=trial.suggest_int('d', 2, 30), )
"""

The above exception was the direct cause of the following exception:

RuntimeError Traceback (most recent call last)
in ()
27 ranges=[(2, 30)],
28 trials=100,
---> 29 early_stop=50,
30 )
31

2 frames
/usr/local/lib/python3.7/dist-packages/multiprocess/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):

RuntimeError: Optuna execution error: pta.stochrsi(X.close, length=trial.suggest_int('length', 2, 30), k=trial.suggest_int('k', 2, 30), d=trial.suggest_int('d', 2, 30), )

AttributeError: module 'optuna.samplers' has no attribute 'NSGAIISampler'

Hello,
This is a note for all future users of tuneta: you must have Optuna>=2.5.0 in order to avoid the error below.

multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "D:\Anaconda\envs\XGBoost\lib\site-packages\multiprocess\pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "D:\Anaconda\envs\XGBoost\lib\site-packages\tuneta\optimize.py", line 344, in fit
    sampler = optuna.samplers.NSGAIISampler()
AttributeError: module 'optuna.samplers' has no attribute 'NSGAIISampler'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "d:\ReinforcementLearning\KNN\StockKNN.py", line 148, in <module>
    weights=None,  # Optional weights for correlation evaluation
  File "D:\Anaconda\envs\XGBoost\lib\site-packages\tuneta\tune_ta.py", line 100, in fit
    self.fitted = [fit.get() for fit in self.fitted]
  File "D:\Anaconda\envs\XGBoost\lib\site-packages\tuneta\tune_ta.py", line 100, in <listcomp>
    self.fitted = [fit.get() for fit in self.fitted]
  File "D:\Anaconda\envs\XGBoost\lib\site-packages\multiprocess\pool.py", line 644, in get
    raise self._value
  File "D:\Anaconda\envs\XGBoost\lib\site-packages\multiprocess\pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "D:\Anaconda\envs\XGBoost\lib\site-packages\tuneta\optimize.py", line 344, in fit
    sampler = optuna.samplers.NSGAIISampler()
AttributeError: module 'optuna.samplers' has no attribute 'NSGAIISampler'
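
For example, upgrading should resolve it:

pip install -U "optuna>=2.5.0"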

TuneTa idles after tuning

I am interested in fitting indicator settings on a training set.

The issue occurs when tuning with a high number of trials. The run proceeds as expected when launched, but after a few hours of tuning, when it reaches the very last indicator and finishes it due to early stopping, it stays idle in that state.

I'm running in a Jupyter notebook. The kernel goes idle after some time, although the cell is still running (the cmd window is also still active).

I tried the same test with a very low number of trials, which completed successfully. So the problem arises purely when a high number of trials is searched. I already tried to decrease the search space by only searching 'pta', but it did not help.

When I Ctrl-C out of the loop, it immediately prints the number of ProcessPools and then aborts, so this probably indicates the problem lies in completing the tuning phase?

Help would be greatly appreciated!

Exception: cannot reindex from a duplicate axis

Hello, I've hit this error several times. I've checked my pandas dataframe repeatedly; there is no duplicate axis. (The original post included a screenshot of X and y.) The full traceback is:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/tuneta/optimize.py", line 128, in eval_res
    res = eval(function)
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.9/dist-packages/pandas_ta/volume/kvo.py", line 51, in kvo
    df = DataFrame(data)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/frame.py", line 614, in __init__
    mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/construction.py", line 464, in dict_to_mgr
    return arrays_to_mgr(
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/construction.py", line 124, in arrays_to_mgr
    arrays = _homogenize(arrays, index, dtype)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/construction.py", line 571, in _homogenize
    val = val.reindex(index, copy=False)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/series.py", line 4580, in reindex
    return super().reindex(index=index, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/generic.py", line 4818, in reindex
    return self._reindex_axes(
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/generic.py", line 4839, in _reindex_axes
    obj = obj._reindex_with_indexers(
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/generic.py", line 4883, in _reindex_with_indexers
    new_data = new_data.reindex_indexer(
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/internals/managers.py", line 670, in reindex_indexer
    self.axes[axis]._validate_can_reindex(indexer)
  File "/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py", line 3785, in _validate_can_reindex
    raise ValueError("cannot reindex from a duplicate axis")
ValueError: cannot reindex from a duplicate axis

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/multiprocess/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.9/dist-packages/tuneta/optimize.py", line 237, in fit
    self.study.optimize(
  File "/usr/local/lib/python3.9/dist-packages/optuna/study/study.py", line 419, in optimize
    _optimize(
  File "/usr/local/lib/python3.9/dist-packages/optuna/study/_optimize.py", line 66, in _optimize
    _optimize_sequential(
  File "/usr/local/lib/python3.9/dist-packages/optuna/study/_optimize.py", line 160, in _optimize_sequential
    frozen_trial = _run_trial(study, func, catch)
  File "/usr/local/lib/python3.9/dist-packages/optuna/study/_optimize.py", line 234, in _run_trial
    raise func_err
  File "/usr/local/lib/python3.9/dist-packages/optuna/study/_optimize.py", line 196, in _run_trial
    value_or_values = func(trial)
  File "/usr/local/lib/python3.9/dist-packages/tuneta/optimize.py", line 238, in <lambda>
    lambda trial: _objective(self, trial, X, y),
  File "/usr/local/lib/python3.9/dist-packages/tuneta/optimize.py", line 162, in _objective
    res = eval_res(X, self.function, self.idx, trial)
  File "/usr/local/lib/python3.9/dist-packages/tuneta/optimize.py", line 131, in eval_res
    raise Exception(e)
Exception: cannot reindex from a duplicate axis
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/notebooks/feature_select.py", line 197, in <module>
    tt.fit(X_train, y_train,
  File "/usr/local/lib/python3.9/dist-packages/tuneta/tune_ta.py", line 197, in fit
    self.fitted = [fit.get() for fit in self.fitted]
  File "/usr/local/lib/python3.9/dist-packages/tuneta/tune_ta.py", line 197, in <listcomp>
    self.fitted = [fit.get() for fit in self.fitted]
  File "/usr/local/lib/python3.9/dist-packages/multiprocess/pool.py", line 771, in get
    raise self._value
Exception: cannot reindex from a duplicate axis

Enhancement: mamode, db_url, and get_indicator_params func

@jmrichardson ,

I plan to add a PR in about 2 weeks with the following enhancements from @wouldayajustlookatit branch.

  • tunable mamode
  • db_url, so that users can see all runs tried in the optuna-dashboard
  • a function similar to wouldayajustlookatit's get_indicator_params, but I'm thinking of something like the following, so that it can easily be used with pandas-ta's strategies:
    ta=[ {"kind": "sma", "length": 50}, {"kind": "sma", "length": 200}, {"kind": "bbands", "length": 20}, {"kind": "rsi"}, {"kind": "macd", "fast": 8, "slow": 21}, {"kind": "sma", "close": "volume", "length": 20, "prefix": "VOLUME"}, ]

Unfortunately, I didn't have enough time this weekend to finish adding those features and won't have time until the following week. Are these features you'd be interested in merging?

Question: Ranges Variable and Walk Forward Optimization

To Whom It May Concern,

I see in https://github.com/jmrichardson/tuneta/blob/main/examples/tune_all.py that you are tuning all indicators over 10 years of daily data against the next day's returns. What does ranges=[(2, 30)] do in this instance? I see that ranges is defined here, https://github.com/jmrichardson/tuneta/blob/main/tuneta/tune_ta.py#L44; it specifies a parameter search space. It appears to define a low and a high here, https://github.com/jmrichardson/tuneta/blob/main/tuneta/tune_ta.py#L82, which is then used as a parameter in an Optuna trial here, https://github.com/jmrichardson/tuneta/blob/main/tuneta/tune_ta.py#L125. Unfortunately, it's still not clear to me what this parameter does. My apologies for my ignorance.

I'm using pandas-ta on minute data, so I need to adjust the default settings for most of the indicators I'm using. I had hoped to use this library to get these settings using a walk-forward optimization strategy, which ensures that I'm not overfitting my data and/or getting stuck in a local minimum. Just for completeness, as I wasn't aware of this a few months back: walk-forward optimization is defined here, https://en.wikipedia.org/wiki/Walk_forward_optimization and https://www.youtube.com/watch?v=GowmmrSMw9I, shown in code for VectorBT at https://github.com/polakowo/vectorbt/blob/master/examples/WalkForwardOptimization.ipynb, and visualized here, http://www.adaptivetradingsystems.com/blog/modeling_sofware/walk-forward-simulations-in-synergy/.

Is it possible to use tuneta to adjust the settings for pandas-ta indicators in this way? If not, could we discuss how to make this possible?

Thanks for the hard work on this library, as well as your time and attention to this matter. I hope that this message finds you well and that you have a great week. God bless.

Very Respectfully,
CMobley7

n_jobs ignored

Following any example from this repo with n_jobs=1, I still see 100% load on my cores; each process occupies all cores (128 in my case). Any ideas why?

Why are you doing the K-means clustering?

Hi,

First I'd like to say thanks for publishing this repo! It's very helpful.

My question specifically refers to this description in the README:

After the specified Optuna trials are complete, a 3-step KMeans clustering method is used to select the optimal parameter(s):

Each trial is placed in its nearest neighbor cluster based on its distance correlation to the target. The optimal number of clusters is determined using the elbow method. The cluster with the highest average correlation is selected with respect to its membership. In other words, a weighted score is used to select the cluster with the highest correlation but also with the most trials.
After the best correlation cluster is selected, the parameters of the trials within the cluster are also clustered. Again, the best cluster of indicator parameter(s) is selected with respect to its membership.
Finally, the centered best trial is selected from the best parameter cluster.

Since you are clustering by the correlation, and then picking the cluster with the best mean-correlation to the target, I'm not really sure what this is achieving. Why not just use the parameters from the trial with the highest correlation itself?

I can see how this would be useful if you were clustering by the parameters instead of the correlations. (That way you avoid outlier/overfit parameters by making sure you're using a cluster with similar parameters having a high correlation). But the description and the implementation don't seem to be actually using the parameter values in the clustering, they only cluster the scores.

Alternatively doing a k-fold optimization could help control for overfitting as well. Although I guess the user can implement that themselves if they want to.

Thanks again!
-Aakash

TypeError: Object of type Series is not JSON serializable error while saving res_y

I encountered an issue with my setup. I did solve the problem, though, and will open a PR soon that resolves it. This issue is opened to be linked with the PR.

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\optuna\trial\_trial.py", line 681, in set_user_attr
    self.storage.set_trial_user_attr(self._trial_id, key, value)
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\optuna\storages\_cached_storage.py", line 333, in set_trial_user_attr
    self._flush_trial(trial_id)
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\optuna\storages\_cached_storage.py", line 437, in _flush_trial
    datetime_complete=updates.datetime_complete,
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\optuna\storages\_rdb\storage.py", line 710, in _update_trial
    for k, v in user_attrs.items()
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\sqlalchemy\orm\collections.py", line 1276, in extend
    for value in iterable:
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\optuna\storages\_rdb\storage.py", line 711, in <genexpr>
    if k not in trial_user_attrs_dict
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\json\__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\json\encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\json\encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\json\encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Series is not JSON serializable

Optimize on all available indicators in a package

Hello,
I've recently found this library and must say it looks promising. I am wondering if there is a way to optimize all indicators in one of the three packages (or perhaps mix and match indicators across the packages). Right now it looks as if you must input each indicator individually, which is obviously not optimal.

ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required by check_pairwise_arrays.

multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/multiprocess/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.8/dist-packages/tuneta/optimize.py", line 240, in fit
    ke.fit(correlations)
  File "/usr/local/lib/python3.8/dist-packages/yellowbrick/cluster/elbow.py", line 316, in fit
    self.k_scores_.append(self.scoring_metric(X, self.estimator.labels_))
  File "/usr/local/lib/python3.8/dist-packages/yellowbrick/cluster/elbow.py", line 104, in distortion_score
    distances = pairwise_distances(instances, center, metric=metric)
  File "/usr/local/lib/python3.8/dist-packages/sklearn/metrics/pairwise.py", line 1884, in pairwise_distances
    return _parallel_pairwise(X, Y, func, n_jobs, **kwds)
  File "/usr/local/lib/python3.8/dist-packages/sklearn/metrics/pairwise.py", line 1425, in _parallel_pairwise
    return func(X, Y, **kwds)
  File "/usr/local/lib/python3.8/dist-packages/sklearn/metrics/pairwise.py", line 299, in euclidean_distances
    X, Y = check_pairwise_arrays(X, Y)
  File "/usr/local/lib/python3.8/dist-packages/sklearn/metrics/pairwise.py", line 156, in check_pairwise_arrays
    X = check_array(
  File "/usr/local/lib/python3.8/dist-packages/sklearn/utils/validation.py", line 797, in check_array
    raise ValueError(
ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required by check_pairwise_arrays.
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "tuneta_opti.py", line 29, in <module>
    tt.fit(X_train, y_train,
  File "/usr/local/lib/python3.8/dist-packages/tuneta/tune_ta.py", line 137, in fit
    self.fitted = [fit.get() for fit in self.fitted]
  File "/usr/local/lib/python3.8/dist-packages/tuneta/tune_ta.py", line 137, in <listcomp>
    self.fitted = [fit.get() for fit in self.fitted]
  File "/usr/local/lib/python3.8/dist-packages/multiprocess/pool.py", line 771, in get
    raise self._value
ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required by check_pairwise_arrays.

Went smoothly before that until:

[I 2021-11-09 11:17:59,067] Trial 71 finished with value: 0.15180702108510608 and parameters: {'length': 40, 'atr_length': 8}. Best is trial 22 with value: 0.15180702108510608.
[I 2021-11-09 11:17:59,091] A new study created in memory with name: pta.kc(X.high, X.low, X.close, length=trial.suggest_int('length', 2, 48), scalar=trial.suggest_int('scalar', 2, 48), )
[I 2021-11-09 11:17:59,115] Trial 72 finished with value: 0.1516975180327597 and parameters: {'length': 38, 'atr_length': 13}. Best is trial 22 with value: 0.15180702108510608.
[I 2021-11-09 11:17:59,162] Trial 0 finished with value: 0.20201019266930986 and parameters: {'length': 34, 'scalar': 15}. Best is trial 0 with value: 0.20201019266930986.

So it might have been caused by pta.kc.
Any ideas how to solve this?

Thank you for this great package!

<built-in function duplicated_object> returned a result with an error set

Hi, and once again, thanks for this great work!

I encountered an error and wanted to discuss whether this is a bug or whether I'm not using the library as intended. I'm running it via a Jupyter kernel. Here is my driver code:

#%%

import pandas as pd
import pandas_ta as ta
from sklearn.model_selection import train_test_split
from tuneta.tune_ta import TuneTA

#%%

X = pd.read_parquet('D:\data\\finance\candle\BTC-USDT.parquet').loc['2021-1-1':].iloc[:, :5]
y = ta.percent_return(X.close, offset=-1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, shuffle=False)

#%%
tt = TuneTA(verbose=True)
tt.fit(X_train, y_train,
    indicators=['pta.rsi'],
    trials=10,
    early_stop=100,
)

Running this gives me an error at the end with this traceback:

RemoteTraceback                           Traceback (most recent call last)

RemoteTraceback: 
"""
TypeError: unhashable type: 'dict'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\multiprocess\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\tuneta\optimize.py", line 178, in fit
    trials = trials[~trials.params.duplicated(keep='first')]
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\pandas\core\series.py", line 2034, in duplicated
    res = base.IndexOpsMixin.duplicated(self, keep=keep)
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\pandas\core\base.py", line 1302, in duplicated
    return duplicated(self._values, keep=keep)
  File "C:\Users\Furkan\miniconda3\envs\finance\lib\site-packages\pandas\core\algorithms.py", line 900, in duplicated
    return f(values, keep=keep)
SystemError: <built-in function duplicated_object> returned a result with an error set
"""


The above exception was the direct cause of the following exception:

SystemError                               Traceback (most recent call last)

~\AppData\Local\Temp/ipykernel_2116/4239046016.py in <module>
      3     indicators=['pta.rsi'],
      4     trials=10,
----> 5     early_stop=100,
      6 )

~\miniconda3\envs\finance\lib\site-packages\tuneta\tune_ta.py in fit(self, X, y, trials, indicators, ranges, early_stop)
    121 
    122         # Blocking wait to retrieve results
--> 123         self.fitted = [fit.get() for fit in self.fitted]
    124 
    125         # Fits must contain best trial data

~\miniconda3\envs\finance\lib\site-packages\tuneta\tune_ta.py in <listcomp>(.0)
    121 
    122         # Blocking wait to retrieve results
--> 123         self.fitted = [fit.get() for fit in self.fitted]
    124 
    125         # Fits must contain best trial data

~\miniconda3\envs\finance\lib\site-packages\multiprocess\pool.py in get(self, timeout)
    655             return self._value
    656         else:
--> 657             raise self._value
    658 
    659     def _set(self, i, obj):

SystemError: <built-in function duplicated_object> returned a result with an error set

[Question] Is this package available for custom functions?

Thank you for the fantastic package.
I wonder if I can use this package with custom functions, for example SMA-EMA or RSI1-RSI2 (i.e., combinations over different time periods).
It seems to me that it would be a bit difficult to modify this package to support custom functions.

All indicators that don’t require volume?

Some assets, like FX, don’t have true volume data. I would like to be able to generate all features that do not require the volume column. This could be generalized to any of the OHLCV. How would I use tuneta to make all indicators that don’t require the volume column?

How to not search feature correlation with all y target?

Hi,
First of all, thanks for developing this tool; it's excellent for feature selection, not only for the algorithm but also for the integration and processing of all the indicators.

Here's my situation: I have a strategy, and I want to find features that correlate with whether it wins or loses. That is, only a few points in my y target are "activated", rather than using an n-point return or similar. So I tried to build the y target like this: assume 10 days of OHLCV, with the strategy activated on the third and seventh days.
y = {0,0,1,0,0,0,-1,0,0,0} where 1 stands for a win and -1 stands for a loss.

This is probably not a reasonable way to do it, because the tool reports 5-6 features whose correlation to the target is over 0.9, and I realized that those features only correlate with which points are "activated" rather than with win or lose. It would be good if the algorithm could score only the "activated" points and mask the others.

Do you have any suggestions for implementing this kind of usage? Thanks!

Can y.index be a subset of X.index?

AFML introduces the idea of event filtering. Not every period in X is equally predictable, so filter only events we believe are predictable. This leads to fewer targets than all candles. Does TuneTA currently support the index of y being a subset of the index of X?

Using Tuneta on long short portfolio of stocks

Hi

Thanks for creating this library; it is working out well on my dataset. However, I'm trying to use TuneTA over a long-short portfolio (which contains multiple stocks). Is there a way to use TuneTA to tune the same indicator with the same parameters across the group of stocks, or is there another approach you can suggest?

Thanks,
Ankit Aggarwal

Strange errors occur when running on Colab

Hi, very glad to see you made a huge update regarding my previous requests.

However, when I copy and paste the code you provide in the examples folder and execute it on Colab, two strange errors emerge.

The first one is in tune_market.py.


from tuneta.tune_ta import TuneTA
import pandas as pd
from pandas_ta import percent_return
from sklearn.model_selection import train_test_split
import yfinance as yf
import joblib

aapl = yf.download("AAPL", period="10y", interval="1d", auto_adjust=True)
aapl['sym'] = "AAPL"
aapl.set_index('sym', append=True, inplace=True)
aapl['return'] = percent_return(aapl.Close, offset=-1)

msft = yf.download("MSFT", period="10y", interval="1d", auto_adjust=True)
msft['sym'] = "MSFT"
msft.set_index('sym', append=True, inplace=True)
msft['return'] = percent_return(msft.Close, offset=-1)

goog = yf.download("GOOG", period="10y", interval="1d", auto_adjust=True)
goog['sym'] = "GOOG"
goog.set_index('sym', append=True, inplace=True)
goog['return'] = percent_return(goog.Close, offset=-1)

X = pd.concat([aapl, msft, goog], axis=0).sort_index()
y = X['return']
X = X.drop(columns=['return'])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, shuffle=False)

tt = TuneTA(n_jobs=4, verbose=True)

tt.fit(X_train, y_train,
    indicators=['tta.RSI', 'tta.MACD', 'tta.SMA', 'tta.CMO'],
    ranges=[(2, 30), (31, 60)],
    trials=300,
    early_stop=50,
)


Running the code above raises the error below; I am not sure of the reason. Maybe some bugs exist when using Colab.

The above exception was the direct cause of the following exception:

ValueError Traceback (most recent call last)
in ()
38 ranges=[(2, 30), (31, 60)],
39 trials=300,
---> 40 early_stop=50,
41 )
42

2 frames
/usr/local/lib/python3.7/dist-packages/multiprocess/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):

ValueError: ('Lengths must match to compare', (5285,), (2,))

The second one is in tune_all.py.


from tuneta.tune_ta import TuneTA
import pandas as pd
from pandas_ta import percent_return
from sklearn.model_selection import train_test_split
import yfinance as yf

X = yf.download("SPY", period="10y", interval="1d", auto_adjust=True)
y = percent_return(X.Close, offset=-1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, shuffle=False)

tt = TuneTA(n_jobs=6, verbose=True)

tt.fit(X_train, y_train,
    indicators=['all'],
    ranges=[(2, 30)],
    trials=500,
    early_stop=100,
)


Running the code above raises the error below; I am not sure of the reason for this one either. Maybe some bugs exist when using Colab.

The above exception was the direct cause of the following exception:

AttributeError Traceback (most recent call last)
in ()
20 ranges=[(2, 30)],
21 trials=500,
---> 22 early_stop=100,
23 )
24

2 frames
/usr/local/lib/python3.7/dist-packages/multiprocess/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):

AttributeError: 'KMeans' object has no attribute 'elbow_value_'

Thanks for your contribution again.

Support for Custom features & Multiple tickers

Hi, awesome job! This repo really fits my needs.

Based on my initial understanding, I would like to ask two questions.

  1. From the given example, I understand the dataframe format must be DATE + OHLCV.

However, the OHLCV columns are just the inputs to the technical indicators. If users are able to make some custom features themselves (also generated from OHLCV, but not library built-in technical indicators) and just want to see how these custom features correlate with a target variable, is this possible? Please give a short code example if it can already be done.

  2. From the given example, I understand TuneTA optimizes correlation one ticker at a time.

Thus, if users want to calculate correlations over multiple tickers, they can only execute the example multiple times with data for different tickers, but the feature sets suggested by TuneTA may not be the same.

As a result, it would be nice if TuneTA could receive a dataframe in the format below, so the importance of a feature can be evaluated not only over multiple time periods but also over multiple tickers. Also, the suggested features would be the same, so users could use the data immediately after running TuneTA once.

Finally, I'm not sure whether the idea is viable. Please correct me if my understanding is wrong.

date        ticker  feature_a  feature_b  feature_c
2019-01-05  A       ...        ...        ...
2019-01-05  AAL     ...        ...        ...
2019-01-05  AAPL    ...        ...        ...
2019-01-06  A       ...        ...        ...
2019-01-06  AAL     ...        ...        ...
2019-01-06  AAPL    ...        ...        ...
2019-01-07  A       ...        ...        ...
2019-01-07  AAL     ...        ...        ...
2019-01-07  AAPL    ...        ...        ...
...         ...     ...        ...        ...
2020-01-07  A       ...        ...        ...
2020-01-07  AAL     ...        ...        ...
2020-01-07  AAPL    ...        ...        ...

Thanks a lot!
