nnzhan / MTGNN
License: MIT License
class nconv(nn.Module):
    def __init__(self):
        super(nconv, self).__init__()

    def forward(self, x, A):
        # aggregates along each column of A
        x = torch.einsum('ncvl,vw->ncwl', (x, A))
        return x.contiguous()

class mixprop(nn.Module):
    def __init__(self, c_in, c_out, gdep, dropout, alpha):
        super(mixprop, self).__init__()
        self.nconv = nconv()
        self.mlp = linear((gdep + 1) * c_in, c_out)
        self.gdep = gdep
        self.dropout = dropout
        self.alpha = alpha

    def forward(self, x, adj):
        adj = adj + torch.eye(adj.size(0)).to(x.device)
        d = adj.sum(1)  # Should this be sum(0), since the columns represent a node's neighbors?
        h = x
        out = [h]
        a = adj / d.view(-1, 1)
        for i in range(self.gdep):
            h = self.alpha * x + (1 - self.alpha) * self.nconv(h, a)
            out.append(h)
        ho = torch.cat(out, dim=1)
        ho = self.mlp(ho)
        return ho
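For reference, here is a tiny numeric check of what the contraction does (my own sketch, not code from the repository): with 'ncvl,vw->ncwl', output node w sums the inputs over nodes v weighted by column w of A, which is what the comments above about sum(1) versus sum(0) are getting at.

import torch

# Illustrative only: 2 nodes, batch/channel/length of 1, single directed edge 0 -> 1.
A = torch.tensor([[0.0, 1.0],
                  [0.0, 0.0]])           # row = source, column = target
x = torch.zeros(1, 1, 2, 1)
x[0, 0, 0, 0] = 5.0                      # put a signal only on node 0

y = torch.einsum('ncvl,vw->ncwl', (x, A))
print(y[0, 0, :, 0])                     # tensor([0., 5.]): node 1 receives node 0's value,
                                         # i.e. node w aggregates over column w of A

Whether adj.sum(1) or adj.sum(0) is then the right degree depends on whether rows or columns of the (asymmetric) learned adjacency are treated as the source nodes.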
The normalization-related part of the code is as follows:
def _normalized(self, normalize):
    if (normalize == 0):
        self.dat = self.rawdat
    # normalized by the maximum value of the entire matrix.
    if (normalize == 1):
        self.dat = self.rawdat / np.max(self.rawdat)
    # normalized by the maximum value of each column (sensor).
    if (normalize == 2):
        for i in range(self.m):
            self.scale[i] = np.max(np.abs(self.rawdat[:, i]))
            self.dat[:, i] = self.rawdat[:, i] / np.max(np.abs(self.rawdat[:, i]))
The default is the normalize = 2 case, and I have a question about it:
doesn't normalizing with all of the data leak test-set information into the training set?
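For comparison, a minimal sketch of the train-only alternative the question implies; the split ratio and the standalone function are my assumptions, not the repository's code:

import numpy as np

# Hypothetical variant of _normalized (normalize == 2): fit the per-sensor
# scale on the training portion only, then apply it to the whole series.
def normalize_train_only(rawdat, train_ratio=0.6):
    n_train = int(rawdat.shape[0] * train_ratio)
    scale = np.max(np.abs(rawdat[:n_train, :]), axis=0)   # per-sensor max from the train split
    scale[scale == 0] = 1.0                               # guard against all-zero sensors
    return rawdat / scale, scale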
Dear Author, I admire your project, but when I was running your code I met an error that I can't solve. I built the environment following your instructions and tried to run the traffic dataset. The traceback is:
Traceback (most recent call last):
  File "train_single_step.py", line 218, in <module>
    val_acc, val_rae, val_corr, test_acc, test_rae, test_corr = main()
  File "train_single_step.py", line 178, in main
    train_loss = train(Data, Data.train[0], Data.train[1], model, criterion, optim, args.batch_size)
  File "train_single_step.py", line 77, in train
    tx = X[:, :, id, :]
IndexError: tensors used as indices must be long, byte or bool tensors.
Do you know how to debug it? Looking forward to your reply. Thank you very much!
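A hedged sketch of one possible fix (not necessarily the author's intended one): the message means the index tensor id is floating point, so casting it to long before the advanced indexing at line 77 makes the lookup valid.

import torch

X = torch.randn(4, 2, 137, 24)            # (batch, channels, nodes, window); shapes illustrative
id = torch.randperm(137)[:10].float()     # a float-typed index tensor reproduces the IndexError
id = id.long()                            # casting to long (or building it with torch.LongTensor) fixes it
tx = X[:, :, id, :]
print(tx.shape)                           # torch.Size([4, 2, 10, 24])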
When the data was prepared and I tried to run train_multi_step with the corresponding parameters modified, an error occurred:
MTGNN/layer.py", line 187, in forward s1,t1 = adj.topk(self.k,1)
RuntimeError: invalid argument 5: k not in range for dimension at /tmp/pip-req-build 4baxydiv/aten/src/THC/generic/THCTensorTopK.cu:23
How to fix it?
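The topk failure usually means k is larger than the dimension being selected from; since k here is the learned-subgraph size, my guess (an assumption, not a confirmed answer) is that --subgraph_size must not exceed --num_nodes for the new dataset. A tiny illustration of the in-range condition:

import torch

num_nodes, subgraph_size = 15, 20          # illustrative: k larger than the node dimension would fail
adj = torch.rand(num_nodes, num_nodes)

k = min(subgraph_size, adj.size(1))        # keep k within range before topk
s1, t1 = adj.topk(k, dim=1)
print(s1.shape)                            # torch.Size([15, 15])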
Line 121 in 12869c7
Dear author,
As shown in the link above, I notice that you use the transpose of the adjacency matrix to compute a second GCN result.
Why is this step necessary, and what is the difference between using an undirected graph in a single GCN and using two GCNs with a directed acyclic graph (DAG) and its transpose?
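For context, a minimal illustration of the pattern being asked about (my own sketch, not the code in net.py): with a directed adjacency, aggregating over A and over its transpose gathers messages along the two edge directions separately.

import torch

A = torch.tensor([[0.0, 1.0],
                  [0.0, 0.0]])             # single directed edge 0 -> 1
x = torch.tensor([[1.0], [2.0]])           # one feature per node

out_forward = A @ x                        # node 0 aggregates from its out-neighbour (node 1)
out_backward = A.t() @ x                   # node 1 aggregates from its in-neighbour (node 0)
print(out_forward.squeeze(), out_backward.squeeze())   # tensor([2., 0.]) tensor([0., 1.])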
I want to know how the adjacency matrix for multi-step time prediction is created, and why. Is it created by the same method as in single-step prediction?
Hi, I appreciate your work and I'm following it. My question is: where is the prediction code?
How do I solve this problem?
Hello, I configured the environment as required by the program, but the program still can't run.
begin training
Traceback (most recent call last):
  File "train_single_step.py", line 218, in <module>
    val_acc, val_rae, val_corr, test_acc, test_rae, test_corr = main()
  File "train_single_step.py", line 178, in main
    train_loss = train(Data, Data.train[0], Data.train[1], model, criterion, optim, args.batch_size)
  File "train_single_step.py", line 77, in train
    tx = X[:, :, id, :]
IndexError: tensors used as indices must be long, byte or bool tensors
I use the default parameters for single-step forecasting on the Solar dataset and get an average RSE of 0.216 over 10 runs (best RSE 0.20). There is a big gap between my result and the performance in the paper. Can you help me?
Is engine.py used anywhere?
First, thank you for this wonderful model.
I wonder if there is a way to add extra features?
For example, given 10 features, but only predicting 2 of them.
Is there any significance to this line of code, whose loop runs for only one iteration (and just returns 0)?
Line 31 in ec47c8a
Is there a version where the loop range is > 1?
I want to interpret the results by looking at the learned graph adjacency matrix. What is the right way to do that?
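A hedged sketch of one way to inspect it, assuming the graph-learning layer is exposed as model.gc as it appears to be in net.py (treat the attribute and variable names as assumptions):

import torch
import matplotlib.pyplot as plt

# Illustrative only: model, num_nodes and device are assumed to come from the training script.
model.eval()
with torch.no_grad():
    idx = torch.arange(num_nodes).to(device)
    adp = model.gc(idx)                    # learned (sparsified) adjacency, shape (N, N)

plt.imshow(adp.cpu().numpy(), cmap='viridis')
plt.colorbar()
plt.title('Learned adjacency matrix')
plt.show()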
Will there be data provided to run the code?
Hi, I just want to confirm: is the current gtnet module in net.py the Uni-directed-A version of the model for multi-step forecasting, as stated in the paper?
Hi, in MTGNN, inappropriate dependency version constraints can introduce risks.
Below are the dependencies and version constraints that the project is using:
matplotlib==3.1.1
numpy==1.17.4
pandas==0.25.3
scipy==1.4.1
torch==1.2.0
scikit_learn==0.23.1
The == version constraint introduces a risk of dependency conflicts because the allowed dependency scope is too strict.
A constraint with no upper bound (or *) introduces a risk of missing-API errors, because the latest version of a dependency may remove some APIs.
After further analysis, in this project:
The version constraint of the pandas dependency can be changed to >=0.25.0,<=1.4.2.
The version constraint of the scipy dependency can be changed to >=0.12.0,<=1.7.3.
The above suggestions reduce dependency conflicts as much as possible while allowing the newest versions that do not cause call errors in the project.
The project currently invokes all of the following methods:
pandas.read_hdf
scipy.sparse.identity scipy.sparse.coo_matrix scipy.sparse.eye scipy.sparse.csr_matrix scipy.sparse.diags
predict.data.cpu.numpy.mean torch.eye list torch.sqrt calculate_normalized_laplacian predefined_A.to.to train_time.append torch.load.to os.path.join open numpy.mean self.tconv masked_mse torch.load.zero_grad torch.arange dv.view torch.autograd.Variable masked_mape self.gc generate_train_val_test torch.nn.L1Loss.to masked_mae RuntimeError acc.append self.skip_convs.append self.dilated_inception.super.__init__ self.end_conv_2 mean_g.Ytest.mean_p.predict.mean float numpy.power dy_nconv testx.transpose.transpose numpy.zeros criterion torch.isnan range sum print self.residual_convs.append scipy.sparse.csr_matrix self.emb2 self.nconv self.model i.self.tconv numpy.isnan torch.LongTensor numpy.sort torch.rand_like scipy.sparse.coo_matrix.dot self.loss F.dropout rae.append train_rmse.append num_nodes.time_ind.np.tile.transpose DataLoaderM real.pred.masked_mae.item numpy.float32.adj.d_mat.dot.astype.todense scipy.sparse.identity self.lin2 real.pred.masked_mape.item scale.Y.scale.output.evaluateL2.item d.np.power.flatten nn.functional.pad self.gc.transpose abs idx.size.idx.size.torch.zeros.to.scatter_ real.predict.util.masked_mape.item numpy.argmin trainer.Trainer.eval numpy.sqrt torch.Tensor self.start_conv testy.transpose.transpose self.scaler.inverse_transform torch.where.float nn.ModuleList self._normalized torch.sigmoid self.model.to numpy.repeat d_mat_inv_sqrt.d_mat_inv_sqrt.adj.dot.transpose.dot.astype util.masked_rmse str engine.model.state_dict torch.load.eval trainer.Trainer.model numpy.stack locals train torch.nn.functional.relu.size p.nelement valid_rmse.append torch.tanh.transpose nnodes.nnodes.torch.randn.to rowsum.np.power.flatten torch.optim.Adagrad self.num_nodes.torch.arange.to self.norm.append numpy.float32.L.astype.todense scipy.sparse.coo_matrix.sum device.nnodes.nnodes.torch.randn.to.nn.Parameter.to torch.tanh mixprop self.mlp1 numpy.array.append torch.cat.size self.graph_constructor.super.__init__ idx.size linear data.std math.sqrt data_list.append adj.d_mat.dot.astype self._split valid_loss.append self.tconv.append d_mat_inv_sqrt.adj.dot.transpose evaluateL2 self.linear.super.__init__ self.lin1 torch.load.train vcorr.append ValueError y.torch.Tensor.to log.format self.gconv2.append index.correlation.mean real.predict.util.masked_rmse.item self.prop.super.__init__ masked_rmse vmape.append torch.load enumerate self.scale.torch.from_numpy.float li.split.split numpy.expand_dims load_pickle i.self.residual_convs torch.softmax trainer.Trainer train_mape.append load_adj format scipy.sparse.diags.dot dataloader.torch.Tensor.to F.relu model torch.load.parameters real.pred.masked_rmse.item trainy.transpose.transpose corr.append numpy.arange test.data.cpu.numpy.mean torch.squeeze.size scipy.sparse.linalg.eigsh self._makeOptimizer torch.nn.MSELoss dilated_inception self.gconv1.append val_time.append self.LayerNorm.super.__init__ trainer.Optim min adj.size.torch.eye.to generate_graph_seq2seq_io_data self.model.train torch.nn.init.zeros_ scipy.sparse.eye vrmse.append numpy.std torch.nn.ModuleList self.filter_convs.append self.gate_convs.append torch.nn.L1Loss self.dy_nconv.super.__init__ numpy.stack.append self.mixprop.super.__init__ realy.transpose.size i.self.filter_convs argparse.ArgumentParser torch.nn.functional.relu numpy.abs torch.cat torch.zeros_like self.mlp2 adj.torch.rand_like.adj.topk self.test.size StandardScaler torch.nn.Parameter torch.nn.functional.relu.topk trainer.Trainer.train i.self.norm torch.tensor self.end_conv_2.size self.graph_directed.super.__init__ 
numpy.maximum.reduce nconv numpy.max criterion.backward scale.Y.scale.output.evaluateL1.item torch.squeeze.unsqueeze numpy.ones adj.sum.np.array.flatten main self.optimizer.step scipy.sparse.csr_matrix.astype evaluate torch.save self.optimizer.zero_grad util.masked_mape i.self.skip_convs numpy.isinf idx.size.idx.size.torch.zeros.to.fill_ net.gtnet argparse.ArgumentParser.parse_args torch.mm torch.optim.Adadelta pandas.read_hdf numpy.loadtxt self.model.parameters predict.data.cpu.numpy nn.Conv2d X.to.to self.loss.item output.transpose.transpose data.get_batches self.end_conv_1 Y.to.to _wrapper self.mlp vrae.append vmae.append torch.mean tuple torch.from_numpy torch.where i.self.gate_convs predict.data.cpu.numpy.std self.reset_parameters X.transpose.transpose torch.nn.functional.relu.sum outputs.append out.append numpy.tile self.scale.to torch.nn.init.ones_ metric criterion.item scipy.sparse.diags load_dataset.shuffle scipy.sparse.coo_matrix realy.transpose.transpose test.data.cpu.numpy s1.fill_ torch.zeros df.index.values.astype self.model.eval torch.squeeze adj.sum.view torch.optim.Adam self.graph_global.super.__init__ self.skip0 valid_mape.append torch.nn.functional.layer_norm torch.nn.functional.relu.transpose numpy.concatenate numpy.array.std torch.set_num_threads trainer.Optim.step max torch.device d_mat_inv_sqrt.d_mat_inv_sqrt.adj.dot.transpose.dot.tocoo self.loss.backward net.gtnet.parameters numpy.random.permutation self.dilated_1D.super.__init__ preds.transpose.squeeze graph_constructor LayerNorm scaler.inverse_transform numpy.array self.skipE torch.unsqueeze self.dy_mixprop.super.__init__ test.data.cpu.numpy.std vacc.append numpy.float32.d_mat_inv_sqrt.d_mat_inv_sqrt.adj.dot.transpose.dot.astype.todense value.lower nn.functional.pad.size torch.optim.SGD super train_loss.append self._batchify predict.data.cpu numpy.timedelta64 self.register_parameter time.time self.scale.expand load_dataset round self.nconv.super.__init__ self.gtnet.super.__init__ torch.randn test.data.cpu DataLoaderS torch.nn.Linear torch.cat.append numpy.sort.reshape preds.transpose.transpose torch.nn.utils.clip_grad_norm_ idx.size.idx.size.torch.zeros.to d_mat_inv_sqrt.adj.dot.transpose.dot numpy.savez_compressed len id.torch.tensor.to normal_std torch.abs torch.nn.Embedding self.emb1 i.self.gconv2 torch.randperm torch.einsum torch.no_grad torch.nn.MSELoss.to StandardScaler.transform pickle.load evaluateL1 torch.cat.contiguous load_dataset.get_iterator isinstance engine.model.load_state_dict int li.split.strip data.mean trainx.transpose.transpose his_loss.append torch.nn.Conv2d self.graph_undirected.super.__init__ numpy.load i.self.gconv1 x.torch.Tensor.to argparse.ArgumentParser.add_argument data.scale.expand
@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.
model = torch.jit.script(gtnet(........
I get the following error:

RuntimeError:
Expected integer literal for index. ModuleList/Sequential indexing is only supported with integer literals. Enumeration is supported, e.g. 'for index, v in enumerate(self): ...':
  File "D:\pyyj\MTGNN\layer.py", line 146
        x = []
        for i in range(len(self.kernel_set)):
            x.append(self.tconv[i](input))
            ~~~~~~~~~~~~~ <--- HERE
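A hedged sketch of the workaround the error message itself suggests (iterating the ModuleList directly instead of indexing it with a loop variable); this is an illustration with a stand-in module, not the authors' fix:

import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Minimal stand-in for dilated_inception showing a TorchScript-friendly loop."""
    def __init__(self):
        super().__init__()
        self.tconv = nn.ModuleList([nn.Conv2d(3, 8, (1, k)) for k in (2, 3, 6, 7)])

    def forward(self, x):
        outs = []
        for conv in self.tconv:            # iterate the ModuleList directly; scriptable
            outs.append(conv(x))
        min_len = outs[-1].size(3)         # crop to the shortest output before concatenating
        outs = [o[:, :, :, -min_len:] for o in outs]
        return torch.cat(outs, dim=1)

scripted = torch.jit.script(DilatedBlock())        # no ModuleList-indexing error
print(scripted(torch.randn(2, 3, 5, 12)).shape)    # torch.Size([2, 32, 5, 6])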
Hello, I have a question. In the paper's appendix it says "Following [13], we split these two datasets into a training set (70%), validation set (20%), and test set (10%) in chronological order." But actually DCRNN and this code use a training set (70%), validation set (10%), and test set (20%). I wonder if this is a typo. Thanks!
Hi!
I want to know how you implement the baselines. For example, I want to know if you use time_in_day only (w/o day_in_week) for GMAN and MRA-BGCN. Can you provide the codes for the baseline model?
Thanks!
I'm attempting to change the code for multi-GPU training, but I haven't found a good way to do it yet.
Hi!
Thank you for providing the source codes and datasets for the great work.
However, I cannot find the node locations in the dataset (e.g., longitude and latitude for each node)...
Can you tell me where to get the information? (I want to draw Fig. 6 (c) myself.)
Thanks!
Hi! I like your great work! But I have one question when reading your source code:
Line 14 in d9d4235
Should the above code be changed to x = torch.einsum('ncvl,wv->ncwl', (x, A))
so that it is consistent with the definition at
Line 45 in d9d4235
Thanks!
Dear authors,
I am currently trying to reproduce the experiments of your fascinating work. Performance on a dataset such as Electricity is fine, but I fail to reach the reported performance on the Traffic dataset in the single-step forecasting scenario with the settings suggested in the paper. Concretely, the RRSE of Traffic at horizon 3 is
Looking forward to your reply, thanks a lot!
Best,
JiaWei
Thanks for releasing the code! My question is about the results reported in the paper: are they from the validation set or the test set? Thanks!
I use the default parameters for multi-step forecasting on the METR dataset, but the MAE rises to 12 from the sixth step onward.
I am struggling to run the single-step training code on the exchange-rate data and am getting a CUDA error.
Here is the exception I get when I attempt to replicate the process detailed in the readme file.
I am using a Windows 10 notebook. I believe I have installed all of the dependencies listed in the repository, but I might be wrong.
@ntubiolin @nnzhan Hello, I had the same problem:
  File "MTGNN-master/layer.py", line 14, in forward
    x = torch.einsum('ncvl,vw->ncwl', (x, A))
    return torch._C._VariableFunctions.einsum(equation, operands)
RuntimeError: size of dimension does not match previous size, operand 1, dim 1
I printed out the shapes of x and A:
torch.Size([64, 32, 206, 13]) torch.Size([207, 207])
I'm using the METR-LA dataset.
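A guess at what the shapes imply (an assumption, not a confirmed diagnosis): the batch carries 206 nodes while the adjacency is 207 x 207, so the --num_nodes setting and the dataset/adjacency probably disagree (METR-LA has 207 sensors). A small self-contained check that makes the mismatch explicit:

import torch

def check_node_dims(x: torch.Tensor, A: torch.Tensor) -> None:
    """Hypothetical sanity check: the batch's node dimension must match the adjacency."""
    if x.size(2) != A.size(0):
        raise ValueError(
            "num_nodes mismatch: batch has {} nodes but adjacency is {}x{}; "
            "check --num_nodes against the dataset".format(x.size(2), A.size(0), A.size(1))
        )

check_node_dims(torch.randn(64, 32, 206, 13), torch.rand(207, 207))   # raises ValueError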
Hi there,
I am implementing this using the METR-LA data.
The first step to generate the training data seems to output a set of three .npz files, but nothing else. The input for the training routine, however, is "traffic.txt". I can't see where this .txt file is created, or figure out what it should contain.
Can you help please?
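As far as I can tell (my reading of util.py, not an authoritative answer), the .npz files feed the multi-step script, while traffic.txt is one of the separate single-step benchmark files that DataLoaderS reads as a plain comma-separated T x N matrix via np.loadtxt. Purely as an illustration of that expected format, one could write such a file from an HDF5 series like this (the path and filenames are assumptions):

import numpy as np
import pandas as pd

# Illustrative only: save a (timesteps x sensors) matrix in the comma-separated
# plain-text layout that DataLoaderS expects, one row per timestep.
df = pd.read_hdf("data/metr-la.h5")                # assumed path; shape (T, num_sensors)
np.savetxt("data/metr-la.txt", df.values, delimiter=",")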
Hi, I noticed that you normalize the input time series by dividing by the maximum value in util.py/DataLoaderS._normalized
and scale it back when computing the loss. Could you please explain why you do this?
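For reference, a minimal sketch of the scale-back step being asked about (the variable names mirror the single-step training loop but are assumptions here): the per-sensor maxima are multiplied back so the loss is measured in the original units of each sensor.

import torch

# Illustrative only: targets were divided by per-sensor maxima during loading,
# so outputs and targets are rescaled before computing the loss.
num_sensors = 3
scale = torch.tensor([10.0, 5.0, 2.0])     # per-sensor maxima (data.scale)
output = torch.rand(8, num_sensors)        # model output in normalized units
Y = torch.rand(8, num_sensors)             # normalized targets

criterion = torch.nn.L1Loss()
loss = criterion(output * scale.expand(8, num_sensors), Y * scale.expand(8, num_sensors))
print(loss.item())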
adj_mx.pkl and adj_mx_bay.pkl