Comments (9)
Probably something to do with the latest NumPy changes. Can you provide some more information about the line you use to call MCS, including the data types and shapes?
from arch.
I used

from arch.bootstrap import MCS  # import added for completeness

mcs_mse = MCS(error_mse, size=0.05, method='max')
mcs_mae = MCS(error_mae, size=0.05, method='max')
mcs_mape = MCS(error_mape, size=0.05, method='max')
mcs_smape = MCS(error_smape, size=0.05, method='max')
mcs_qlike = MCS(error_qlike, size=0.05, method='max')
mcs_mae.compute()
mcs_mse.compute()
mcs_qlike.compute()
mcs_mape.compute()
mcs_smape.compute()

to call MCS.
Here is the information about my error functions:
qlike :[[0.42840797 0.39531405 0.49885183]] type :<class 'numpy.ndarray'> shape :(1, 3)
mse :[[0.09699626 0.08983257 0.52129932]] type :<class 'numpy.ndarray'> shape :(1, 3)
mae :[[0.27644839 0.26637054 0.63971759]] type :<class 'numpy.ndarray'> shape :(1, 3)
mape :[[0.80116911 0.6866475 0.54970106]] type :<class 'numpy.ndarray'> shape :(1, 3)
smape :[[65.99426035 62.52946571 82.58094636]] type :<class 'numpy.ndarray'> shape :(1, 3)
Not all of them fail every time, and when the same program is run twice, the error function that fails is not necessarily the same.
Are you running on the same computer every time, or possibly in a cloud install? This is a bit strange. Could you paste the result of

import pandas as pd
pd.show_versions()

from a run that failed?
Something may be wrong in your understanding of MCS. If you want to compare loss functions, your input should be T by m, where m is the number of models and T is the sample size. For example, to use MCS with QLIKE losses, you compute the loss for each time period for each model.
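To illustrate the T-by-m input described above, here is a minimal sketch. The variable names and the particular QLIKE definition (log f + r/f) are illustrative assumptions, not taken from the thread, and the `arch` call is guarded so the sketch also runs where `arch` is not installed.

```python
import numpy as np

rng = np.random.default_rng(0)

T, m = 250, 3                                      # T time periods, m competing models
realized = rng.chisquare(df=2, size=T)             # realized-variance proxy per period
forecasts = rng.chisquare(df=2, size=(T, m)) + 0.1 # each model's variance forecasts

# QLIKE loss per period and per model (one common definition): L_t = log(f_t) + r_t / f_t
losses = np.log(forecasts) + realized[:, None] / forecasts
assert losses.shape == (T, m)  # rows are time periods, columns are models

# Pass the full (T, m) loss matrix to MCS -- not one aggregated loss per model.
try:
    from arch.bootstrap import MCS
    mcs = MCS(losses, size=0.05, method='max')
    mcs.compute()
    print(mcs.pvalues)  # p-values for the models remaining in the confidence set
except ImportError:
    pass  # arch not installed; the (T, m) loss construction above is the point
```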
Are you running on the same computer every time, or possibly in a cloud install? This is a bit strange. Could you paste the result of

import pandas as pd
pd.show_versions()

from a run that failed?
Yes, I run it on the same computer all times.
Here is the output of pd.show_versions():
INSTALLED VERSIONS
------------------
commit : 2e218d10984e9919f0296931d92ea851c6a6faf5
python : 3.10.5.final.0
python-bits : 64
OS : Darwin
OS-release : 22.4.0
Version : Darwin Kernel Version 22.4.0: Mon Mar 6 21:00:41 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8103
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 1.5.3
numpy : 1.24.3
pytz : 2022.7.1
dateutil : 2.8.2
setuptools : 57.0.0
pip : 23.1.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : 4.11.2
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : 3.6.3
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.0
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None
Something may be wrong in your understanding of MCS. If you want to compare loss functions, your input should be T by m, where m is the number of models and T is the sample size. For example, to use MCS with QLIKE losses, you compute the loss for each time period for each model.
I use MCS to compare three time series models. I calculate the loss functions of each model over every 22-day window as one sample, obtain several samples over a longer interval, and then use these samples for the comparison.
This problem also occurs when the input shape is (24, 3) or other sizes.
When it fails, will it always fail with the same data?
I tested my models with the same data, but because two of the models are somewhat random, the computed loss values are not the same each run.
I ran the case twice more where the shape of the input loss was (1, 3). Even though the input loss values are not exactly the same both times, the error occurs in the same place.
Here is the data information of where the error occurred this time.
qlike :[[0.38443391 0.56317765 0.49531343]] type :<class 'numpy.ndarray'> shape :(1, 3)
mse :[[0.10918251 0.14925831 0.53674966]] type :<class 'numpy.ndarray'> shape :(1, 3)
mae :[[0.27002907 0.31174622 0.49793814]] type :<class 'numpy.ndarray'> shape :(1, 3)
mape :[[0.45228915 0.71662962 0.4635835 ]] type :<class 'numpy.ndarray'> shape :(1, 3)
smape :[[56.47898975 64.50894961 65.16465342]] type :<class 'numpy.ndarray'> shape :(1, 3)
qlike :[[0.38443391 0.39939706 0.2619653 ]] type :<class 'numpy.ndarray'> shape :(1, 3)
mse :[[0.10918251 0.12636466 0.35217552]] type :<class 'numpy.ndarray'> shape :(1, 3)
mae :[[0.27002907 0.27632961 0.35660937]] type :<class 'numpy.ndarray'> shape :(1, 3)
mape :[[ 0.45228915 0.43890637 16.10352968]] type :<class 'numpy.ndarray'> shape :(1, 3)
smape :[[56.47898975 56.12104852 59.04854384]] type :<class 'numpy.ndarray'> shape :(1, 3)
They succeeded with mse and mae, but there was an error with qlike.
I found the issue. This was happening because there were ties when removing models. For what it is worth, it is never valid to use MCS with a single loss observation as you did above. MCS will now warn in this case and related cases.
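A NumPy-only sketch (illustrative numbers, not the library's internals) of why a single loss observation per model cannot work: with T = 1, every bootstrap resample of the rows reproduces the same single row, so the bootstrapped loss differentials have zero variance and the test statistic is degenerate.

```python
import numpy as np

rng = np.random.default_rng(1)

losses = np.array([[0.0969, 0.0898, 0.5212]])  # shape (1, 3): one loss per model
T = losses.shape[0]

# Resampling T rows with replacement when T == 1 always returns the same row,
# so every bootstrap replication of the mean loss is identical.
reps = np.array([
    losses[rng.integers(0, T, size=T)].mean(axis=0)
    for _ in range(100)
])
assert np.ptp(reps, axis=0).max() == 0.0  # zero bootstrap variation in every column
```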