Comments (10)
Also, I do not fully understand how the model produces predictions for multiple future time points autoregressively at inference time.
Say I want to make 3 future predictions with 100 trajectories; does it work as follows?
for _ in range(100):
- Get the t-distribution parameters (Parameter_1) for the first future day.
- Sample one value (Data_1) from the t-distribution with Parameter_1.
- Get Parameter_2 by including Data_1 in the context.
- Sample one value (Data_2) from the t-distribution with Parameter_2.
- Get Parameter_3 by including Data_2 in the context.
- Sample one value (Data_3) from the t-distribution with Parameter_3.
- trajectories.append([Data_1, Data_2, Data_3])
Additionally, it would be great if you could give me a hint on where I can find the detailed code for autoregressive prediction. Thanks!
The code for autoregressive prediction is here:
lag-llama/lag_llama/gluon/lightning_module.py
Lines 229 to 261 in 7454088
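Since exact line numbers shift across commits, here is a minimal sketch of the loop those lines implement, assuming a hypothetical model(window) interface that returns the Student's t parameters (df, loc, scale) for the next step; the real module works on batches of GluonTS tensors rather than a single 1-D series:

```python
import torch
from torch.distributions import StudentT

@torch.no_grad()
def sample_trajectories(model, context, horizon=3, num_trajectories=100):
    """context: 1-D tensor of past values. Returns (num_trajectories, horizon)."""
    trajectories = []
    for _ in range(num_trajectories):
        window = context.clone()
        samples = []
        for _ in range(horizon):
            # Get the Student's t parameters for the next step, conditioned
            # on everything observed or sampled so far.
            df, loc, scale = model(window)  # hypothetical interface
            # Draw a single sample from that predictive distribution.
            next_value = StudentT(df, loc, scale).sample()
            samples.append(next_value)
            # Feed the sample back in so the next step conditions on it.
            window = torch.cat([window, next_value.reshape(1)])
        trajectories.append(torch.stack(samples))
    return torch.stack(trajectories)
```

In practice the trajectories are drawn in parallel along the batch dimension rather than in a Python loop, but the conditioning logic is the same.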
Hello, this is Arthur. Thank you for your great work! I would like to ask whether it is possible to get the specific lag indices you use during the pre-training and zero-shot phases.
The Colab tutorial notebook indicates that the context length is set to 32 and that the maximum potential lag index is 1092. However, the exact indices used to tokenize the 32 historical time points remain unclear to me. Do you employ all 1092 lags, or only a specific subset?
Thank you!
Hi @SpeeeedLee.
The 32 historical time series points are consecutive, and they are sampled immediately before the timestep to be predicted.
The lags, however, may reach even beyond this 32-length context, but only sparsely, at the positions denoted by the lag indices.
The figure below might be useful to clarify the difference.
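In code, the idea looks roughly like this; lagged_tokens below is my own illustration of the gathering step, not the repository's implementation:

```python
import numpy as np

def lagged_tokens(history, context_length, lags):
    """Gather lagged values: returns shape (context_length, len(lags))."""
    T = len(history)
    assert T >= context_length + max(lags), "history too short for the largest lag"
    # Indices of the context window at the end of the history.
    positions = np.arange(T - context_length, T)
    # For every context position t and every lag l, pick history[t - l],
    # so features can reach far beyond the 32-point window itself.
    return np.stack([history[positions - lag] for lag in lags], axis=-1)

series = np.random.randn(1200)                    # toy series
tokens = lagged_tokens(series, context_length=32,
                       lags=[0, 7, 364, 1092])    # a few of the lag indices
print(tokens.shape)                               # (32, 4)
```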
As for the indices of the lags: in our experiments, we take the lags of certain frequencies, up to a certain maximum length.
The frequencies are listed here:
lag-llama/lag_llama/gluon/estimator.py
Line 137 in 35f62a9
The corresponding code to sample lags is here. We use the get_lags_for_frequency function of GluonTS in our code:
lag-llama/lag_llama/gluon/estimator.py
Lines 158 to 161 in 35f62a9
To give an example, the lags for the "D" (daily) frequency look like this:
[0, 7, 12, 13, 14, 19, 20, 21, 26, 27, 28, 29, 30, 55, 83, 362, 363, 364, 726, 727, 728, 1090, 1091, 1092]
And the actual lag indices combined from all these frequencies are:
[0, 7, 8, 10, 11, 12, 13, 14, 19, 20, 21, 22, 23, 24, 26, 27, 28, 29, 30, 34, 35, 36, 46, 47, 48, 50, 51, 52, 55, 57, 58, 59, 60, 61, 70, 71, 72, 83, 94, 95, 96, 102, 103, 104, 117, 118, 119, 120, 121, 142, 143, 144, 154, 155, 156, 166, 167, 168, 177, 178, 179, 180, 181, 334, 335, 336, 362, 363, 364, 502, 503, 504, 670, 671, 672, 718, 719, 720, 726, 727, 728, 1090, 1091, 1092]
For the code GluonTS uses to generate these lag indices, you can refer to the source code of the get_lags_for_frequency function.
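As a quick way to inspect these values yourself, here is a small sketch using that GluonTS helper. Note that its output is 1-indexed (lag 1 is the previous step), so the 0-indexed lists above correspond to these values shifted down by one; the frequency list here is my assumption, the exact one being at the estimator.py pointer above:

```python
from itertools import chain
from gluonts.time_feature import get_lags_for_frequency

# Lags for daily data, as returned by GluonTS (1-indexed).
daily_lags = get_lags_for_frequency(freq_str="D")
print(daily_lags)

# Union over several frequencies; this particular frequency list is an
# assumption, not necessarily the one used in the repository.
freqs = ["Q", "M", "W", "D", "H", "T", "S"]
all_lags = sorted(set(chain.from_iterable(
    get_lags_for_frequency(freq_str=f) for f in freqs
)))
print(all_lags)
```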
Feel free to follow up if you need further clarification, or close the issue if this answers your questions. Thanks!