laiguokun / LSTNet
License: MIT License
Dear authors,
I want to run it on the CPU, so I removed "--gpu 3" from the *.sh scripts, but I get the following error:
16619.36172750017
number of parameters: 325871
begin training
TypeError: slice indices must be integers or None or have an index method,
at
File "/Users/baohuaw/PycharmProjects/LSTNet/models/LSTNet.py", line 52, in forward
s = c[:,:, -self.pt * self.skip:].contiguous();
Can you please help?
Thanks.
File "/media/yaoshuilin/common/run/LSTNet-master/models/LSTNet.py", line 53, in forward
s = s.view(batch_size, self.hidC, self.pt, self.skip);
TypeError: view(): argument 'size' must be tuple of ints, but found element of type float at pos 3
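Both tracebacks above share one root cause: `self.pt = (self.P - self.Ck) / self.skip` is a Python float, because `/` in Python 3 always performs true division, and floats are rejected both as slice indices and as `view()` sizes. A minimal sketch of the arithmetic, assuming the default hyperparameters (window 24 * 7, CNN kernel 6, skip 24); casting once, before any use, fixes both errors:

```python
P, Ck, skip = 24 * 7, 6, 24     # default --window, --CNN_kernel, --skip
pt = (P - Ck) / skip            # Python 3 true division -> 6.75 (a float)
pt = int((P - Ck) / skip)       # cast to int before slicing or view()
print(pt)                       # 6
```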
Hello, do you also have the code used to preprocess the dataset? If so, could you upload it? Thank you very much!
Hello, when I run LSTNet with python main.py --gpu 0 --horizon 24 --data data/electricity.txt --save save/elec.pt --output_fun Linear, I get the following error:
Traceback (most recent call last):
File "main.py", line 150, in
val_loss, val_rae, val_corr = evaluate(Data, Data.valid[0], Data.valid[1], model, evaluateL2, evaluateL1, args.batch_size);
File "main.py", line 36, in evaluate
rae = (total_loss_l1/n_samples)/data.rae
RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.FloatTensor for argument #2 'other'
Have you ever had a similar error? How can I solve this problem?
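A likely cause: data.rae is built on the CPU in utils.py while total_loss_l1 accumulates on the GPU, so the division mixes torch.cuda.FloatTensor with torch.FloatTensor. One sketch of a fix, assuming both operands are tensors (the stand-in values below are illustrative, not taken from the repo):

```python
import torch

total_loss_l1 = torch.tensor(12.0)   # stand-in for the GPU accumulator
n_samples = 4
rae_cpu = torch.tensor(3.0)          # stand-in for data.rae, created on the CPU
# Move one operand onto the other's device before dividing:
rae = (total_loss_l1 / n_samples) / rae_cpu.to(total_loss_l1.device)
print(rae.item())                    # 1.0
```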
Hello Prof. Lai,
At line 125 in utils.py, inside the _normalized() function, it seems that normalization over the whole dataset is performed for the train, valid, and test sets alike.
# normalized by the maximum value of each row(sensor).
if normalize == 2:
for i in range(self.m):
self.scale[i] = np.max(np.abs(self.raw_data[:, i])) # <- This line
self.dat[:, i] = self.raw_data[:, i] / np.max(np.abs(self.raw_data[:, i]))
I think it is not proper to use information from the test data during training.
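For reference, a hedged sketch of leakage-free normalization: compute each column's scale from the training rows only, then apply it to every split. The function name and train_frac below are illustrative, not the repo's API:

```python
import numpy as np

def normalize_by_train_max(raw, train_frac=0.6):
    """Scale each column (sensor) by the max |value| seen in the TRAIN rows only."""
    n_train = int(raw.shape[0] * train_frac)
    scale = np.max(np.abs(raw[:n_train]), axis=0)
    scale[scale == 0] = 1.0            # guard against all-zero sensors
    return raw / scale, scale
```

With this, valid/test rows may exceed 1.0 in scaled units, but no test-set statistic leaks into training.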
Traceback (most recent call last):
File "main.py", line 149, in
train_loss = train(Data, Data.train[0], Data.train[1], model, criterion, optim, args.batch_size)
File "main.py", line 60, in train
total_loss += loss.data[0];
IndexError: invalid index of a 0-dim tensor. Use tensor.item() in Python or tensor.item&lt;T&gt;() in C++ to convert a 0-dim tensor to a number
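In PyTorch 0.4+, a scalar loss is a 0-dim tensor, so loss.data[0] no longer indexes it; .item() is the replacement, exactly as the error message suggests. A minimal sketch (the stand-in loss value is illustrative):

```python
import torch

loss = torch.tensor(0.5)     # stand-in for the criterion's scalar output
total_loss = 0.0
total_loss += loss.item()    # replaces the pre-0.4 idiom loss.data[0]
print(total_loss)            # 0.5
```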
When running the code, the following error message gets displayed:
main.py: error: argument --output_fun: expected one argument
Why is the loss divided by n_samples?
total_loss / n_samples
In the paper, the L1 loss is the sum over all time steps t and variables i:
| Y(t,i) - Y'(t-h, i) |, accumulated over t and i.
Why are RSE and RAE divided not only by the number of samples but also by the standard deviation and the mean absolute error of the ground truth? Could someone please explain?
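To answer in code: dividing by the ground truth's own spread makes the metrics scale-free, so a score of 1.0 means "no better than always predicting the mean". A hedged numpy sketch of the two definitions as commonly stated (not copied from the repo):

```python
import numpy as np

def rse(y_true, y_pred):
    # Root Relative Squared Error: error relative to the spread of the truth.
    num = np.sqrt(np.sum((y_true - y_pred) ** 2))
    den = np.sqrt(np.sum((y_true - y_true.mean()) ** 2))
    return num / den

def rae(y_true, y_pred):
    # Relative Absolute Error: same idea with absolute deviations.
    num = np.sum(np.abs(y_true - y_pred))
    den = np.sum(np.abs(y_true - y_true.mean()))
    return num / den
```

Predicting the mean of y_true gives exactly 1.0 for both, which is why the raw sums are normalized this way.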
In the "Convolutional Component" section of the paper, the authors propose that, since the convolution makes the output vector h_k shorter than T, the input matrix X must be zero-padded on the left so that the output length equals T. However, I don't see this implemented anywhere in the code. I'd appreciate an explanation, thanks!
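For context, the length arithmetic behind the question, assuming the default window 24 * 7 and kernel size 6: a "valid" convolution indeed shrinks the output (matching the 163-length output mentioned in another issue here), so the code appears to keep the shorter output rather than pad:

```python
T, K = 24 * 7, 6                 # window length and CNN kernel size
out_no_pad = T - K + 1           # 'valid' convolution output: 163 < T
pad_left = K - 1                 # zeros needed on the left to restore length T
out_padded = (T + pad_left) - K + 1
print(out_no_pad, out_padded)    # 163 168
```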
Could you explain how you converted the 370-customer electricity dataset from the paper into the 321-customer version, and what date range it corresponds to?
I was looking at the LSTNet.py code and I found a couple of issues:
1 - There's a floating-point issue, mentioned in a now-closed issue, whose suggested fix is to cast the variable (presumably pt) to an int. But that only works if pt has no fractional part: if it does, int(-self.pt * self.skip) will not equal int(-self.pt) * self.skip.
Am I missing anything here?
2 - The whole calculation of pt seems odd to me. With the default values, the window is 24 * 7 = 168 and the skip is 24. The CNN layer shrinks 168 to 163, so we are in a transformed space and can no longer treat the skip as 24; it should be the skip's image under the CNN, since information that spanned 24 steps now spans a shorter length. What the transformation does preserve is how many points you get by skipping 24: in the original space that was 24 * 7 / 24 = 7 points, and those 7 points still exist, but they are now separated by less than 24 "hours".
Isn't that right? If so, the whole calculation is different.
Thanks!
Thanks!
Fadi Badine
If horizon is set to {3, 6, 12, 24}, should the output tensor and the training Y tensor be changed to (batch_size, horizon, feature_dim)?
But no matter how I change the value of horizon, the output is [batch_size, feature_dim].
Should I adjust the output layer?
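The model as shipped predicts a single horizon per trained model (the shell scripts train one model per horizon value). If you want all horizons from one forward pass, one sketch is to widen the output layer to horizon * m and reshape; the names and sizes below are illustrative, not the repo's:

```python
import torch
import torch.nn as nn

batch, hidR, horizon, m = 4, 50, 24, 8   # illustrative sizes
head = nn.Linear(hidR, horizon * m)      # widened output layer
r = torch.zeros(batch, hidR)             # stand-in for the final RNN state
out = head(r).view(batch, horizon, m)    # -> [batch, horizon, feature_dim]
print(tuple(out.shape))                  # (4, 24, 8)
```

The training targets would then need the matching shape (batch_size, horizon, feature_dim).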
Hi:
I can't find the code for the "Temporal Attention Layer" in LSTNet.py. Where is it?
Thanks.
TypeError: view(): argument 'size' must be tuple of ints, but found element of type float at pos 3
When I change self.pt = (self.P - self.Ck) / self.skip to self.pt = int((self.P - self.Ck) / self.skip), it works well.
Excuse me, I found that your data files all look single-variable, but the paper says this work targets multivariate time series. Is there other code, or have I misunderstood this code?
Are the hyperparameter choices in the bash scripts the best values you found through hyperparameter tuning?
Sorry if this might be a simple question.
All my .sh scripts finished their runs and training just fine, but I can't find the predictions saved anywhere.
How do I generate the actual forecasts? I assume the horizon parameter sets how many steps ahead to predict, but how do I generate and actually get at the forecast results?
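main.py only prints metrics; the file written via --save is the trained model, so forecasts can be obtained by reloading it and running a forward pass yourself. A hedged sketch of the round-trip (the repo pickles the whole model object; a state_dict round-trip with a stand-in module is shown here as the more portable route, and the sizes are illustrative):

```python
import os
import tempfile
import torch
import torch.nn as nn

model = nn.Linear(8, 3)                    # stand-in for the trained LSTNet
path = os.path.join(tempfile.mkdtemp(), 'elec_demo.pt')
torch.save(model.state_dict(), path)       # roughly what --save save/elec.pt does
loaded = nn.Linear(8, 3)
loaded.load_state_dict(torch.load(path))
loaded.eval()
with torch.no_grad():
    y_hat = loaded(torch.zeros(2, 8))      # one input batch -> [batch, n_variables]
print(tuple(y_hat.shape))                  # (2, 3)
```

Each row of y_hat is the prediction for one window, horizon steps ahead.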
Hello everyone,
when I run the code on my CPU-only machine:
python main.py --horizon 24 --data data/electricity.txt --save save/elec.pt --output_fun Linear
Then it tells me:
Traceback (most recent call last):
File "main.py", line 149, in <module>
train_loss = train(Data, Data.train[0], Data.train[1], model, criterion, optim, args.batch_size)
File "main.py", line 55, in train
output = model(X);
File "/home/jiayi/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/jiayi/LSTNet/models/LSTNet.py", line 54, in forward
s = s.view(batch_size, self.hidC, self.pt, self.skip);
RuntimeError: invalid argument 2: size '[128 x 100 x 6 x 24]' is invalid for input with 2073600 elements at /pytorch/aten/src/TH/THStorage.c:41
It puzzled me for a whole day... Has anyone met this error? Could you share a solution with me?
Thank you anyway.
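A hedged reading of the numbers in that error: 2,073,600 / (128 × 100) = 162, which is the slice length taken with the float pt = 6.75 (6.75 × 24 = 162), while the view expects 6 × 24 = 144. Casting pt to int before the slice, as discussed in the other pt issues here, makes the two agree:

```python
batch_size, hidC, skip, P, Ck = 128, 100, 24, 24 * 7, 6   # values from the error
pt = (P - Ck) / skip                         # 6.75 (float in Python 3)
elements = 2073600
assert elements // (batch_size * hidC) == int(pt * skip)  # slice length was 162
pt = int(pt)                                 # 6 -> slice length 6 * 24 = 144
print(pt * skip)                             # 144, matching the view's size
```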