Comments (4)
Hi!
Thanks for the detailed issue.
The way you evaluate it with a for loop may not be correct. You can instead set prediction_length to 1 and run the model over the entire test set. It will then be evaluated for one-step prediction at each timestep, i.e. it
predicts timestep T using the history up to T-1, then
predicts timestep T+1 using the history up to T (not using the previous prediction, etc.).
Can you try that?
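The rolling one-step evaluation described above can be sketched in plain Python. The predict_one_step function here is a hypothetical stand-in for a model call (e.g. a lag-llama predictor with prediction_length=1); it is a naive last-value forecast used only to illustrate the loop structure, where the history always contains ground truth, never earlier predictions.

```python
def predict_one_step(history):
    """Placeholder model: forecast the next value as the last observed one.

    A real setup would call the model's predictor here with
    prediction_length=1; this naive forecast just keeps the sketch runnable.
    """
    return history[-1]


def rolling_one_step_eval(series, start):
    """For each timestep t >= start, predict y[t] from the true history y[:t].

    The history passed to the model is always ground truth up to t-1,
    matching the one-step evaluation described above.
    """
    errors = []
    for t in range(start, len(series)):
        forecast = predict_one_step(series[:t])  # history up to t-1 only
        errors.append(abs(forecast - series[t]))
    return sum(errors) / len(errors)  # mean absolute error over the test range


series = [1.0, 2.0, 2.5, 3.0, 2.0]
mae = rolling_one_step_eval(series, start=2)
```

With the toy series above, the naive forecaster's errors are 0.5, 0.5, and 1.0, so the mean absolute error comes out to 2/3.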
from lag-llama.
Thanks for your response!
I tried a similar approach where I use the context length as a sliding window at each timestamp, and it gave much better results. With this approach, it would not consider the context length as the number of lags for each timestamp, but rather the entire history up to the current timestamp, right?
Can you elaborate on what you mean by "the context length as a sliding window at each timestamp"?
As for your question: the maximum lag taken from the history is the value 1093 timesteps behind the timestep to be predicted. Lags can reach beyond the context length; the context length is applied to each token independently.
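The point that lag features can reach beyond the context window can be illustrated with a small sketch. The lag set below is made up for illustration (lag-llama derives its lags from the data frequency; only the maximum lag of 1093 comes from the comment above), and CONTEXT_LENGTH is an arbitrary example value.

```python
LAGS = [1, 2, 7, 30, 365, 1093]  # hypothetical lag indices; 1093 is the max lag noted above
CONTEXT_LENGTH = 32              # example context window, in timesteps

def lag_features(series, t, lags):
    """Features used to predict timestep t: the values at t - lag for each lag.

    Requires t >= max(lags), otherwise some lags would fall before
    the start of the series.
    """
    return [series[t - lag] for lag in lags]


series = list(range(2000))  # toy series where each value equals its index
feats = lag_features(series, t=1500, lags=LAGS)
# The oldest value used sits at index 1500 - 1093 = 407, far outside the
# most recent CONTEXT_LENGTH timesteps, even though the context window
# applied to each token is only 32 timesteps wide.
```

This is why the maximum lag, not the context length alone, determines how far back in the history each prediction can look.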
Thank you @ashok-arjun! I solved the issue by using the context length as the number of lags for each prediction. This ensured consistent use of historical data across models. I had earlier tried setting prediction_length as you suggested, but that only works for a single sample, whereas I needed to compare against several ground truths; the approach above lets me compare sample by sample over my test data.