mianzhang / dialogue_gcn
PyTorch implementation of the paper "DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation".
Hi, I'm running my own dataset with your code and getting only 0.3 macro-F1. What do you think the reasons could be? The dataset is Chinese, has 7 classes, and the text features are 768-dimensional tensors extracted with BERT.
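For context on why the number can look so low: macro-F1 averages the per-class F1 scores with equal weight, so with 7 classes a few rarely predicted classes drag the score down hard even if overall accuracy is decent. A minimal pure-Python sketch of the metric (the labels below are hypothetical, not from the dataset):

```python
def macro_f1(y_true, y_pred):
    """Macro-F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical predictions over 3 emotion classes
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(round(macro_f1(y_true, y_pred), 3))  # 0.656
```

Comparing this against the per-class confusion counts on your data will show whether the 0.3 comes from all classes uniformly or from a few classes the model never predicts.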
/dgcn/model/functions.py
Line 17: edge_ind.append(edge_perms(lengths[j].cpu().item(), wp, wf))
Line 24: perms = edge_perms(cur_len, wp, wf)
Maybe the perms in line 24 can be assigned directly from the edge_ind computed in line 17? Is there any difference?
As the title says, may I know the difference between this implementation and the original one?
https://github.com/declare-lab/conv-emotion/tree/master/DialogueGCN
Thanks.
I have downloaded IEMOCAP but found no .pkl file for utils to load with load_pkl.
Is there a necessary preprocessing step missing?
This is my first time using this dataset.
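For anyone hitting the same wall: a .pkl file is usually produced by a preprocessing step that serializes the parsed dataset with Python's pickle module, so load_pkl can only work after that step has run. A minimal sketch of such a save/load pair (save_pkl and the data layout below are assumptions for illustration, not the repo's actual preprocessing script):

```python
import os
import pickle
import tempfile

def save_pkl(obj, path):
    """Serialize a preprocessed dataset (e.g. a list of dialogues) to disk."""
    with open(path, "wb") as f:
        pickle.dump(obj, f)

def load_pkl(path):
    """Load a previously pickled dataset."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical preprocessed sample: one dialogue with utterance
# features, speaker ids, and emotion labels.
data = [{"utterances": [[0.1] * 768], "speakers": ["A"], "labels": [3]}]
path = os.path.join(tempfile.gettempdir(), "iemocap_sample.pkl")
save_pkl(data, path)
print(load_pkl(path) == data)  # True
```

So the thing to look for is the repo's preprocessing entry point that converts the raw IEMOCAP download into the expected pickle before training.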
I am trying to use DialogueGCN to establish a baseline for my model, but I cannot run the code.
While executing ./scripts/iemocap.sh train, I get:
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:1 Long tensor
Error log:
Traceback (most recent call last):
File "train.py", line 93, in
main(args)
File "train.py", line 35, in main
ret = coach.train()
File "/mnt/berry/home/prakhar/dialoguegcn/conv-emotion/DialogueGCN-mianzhang/dgcn/Coach.py", line 41, in train
self.train_epoch(epoch)
File "/mnt/berry/home/prakhar/dialoguegcn/conv-emotion/DialogueGCN-mianzhang/dgcn/Coach.py", line 74, in train_epoch
nll = self.model.get_loss(data)
File "/mnt/berry/home/prakhar/dialoguegcn/conv-emotion/DialogueGCN-mianzhang/dgcn/model/DialogueGCN.py", line 59, in get_loss
graph_out, features = self.get_rep(data)
File "/mnt/berry/home/prakhar/dialoguegcn/conv-emotion/DialogueGCN-mianzhang/dgcn/model/DialogueGCN.py", line 43, in get_rep
node_features = self.rnn(data["text_len_tensor"], data["text_tensor"]) # [batch_size, mx_len, D_g]
File "/mnt/berry/home/prakhar/dialoguegcn/conv-emotion/DialogueGCN-mianzhang/cudagcn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/berry/home/prakhar/dialoguegcn/conv-emotion/DialogueGCN-mianzhang/dgcn/model/SeqContext.py", line 20, in forward
packed = pack_padded_sequence(
File "/mnt/berry/home/prakhar/dialoguegcn/conv-emotion/DialogueGCN-mianzhang/cudagcn/lib/python3.8/site-packages/torch/nn/utils/rnn.py", line 249, in pack_padded_sequence
_VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:1 Long tensor
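This error comes from a PyTorch behavior change: newer versions of torch.nn.utils.rnn.pack_padded_sequence require the lengths argument to be a 1D int64 tensor on the CPU, even when the input data is on a CUDA device. A likely fix is to call .cpu() on the lengths tensor before packing in SeqContext.py; the sketch below reproduces the pattern with made-up shapes (the variable names mirror the traceback but the data is synthetic):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Padded batch: 2 sequences, max length 3, feature dim 4;
# true lengths are 3 and 2. In the repo this tensor may arrive
# on a CUDA device (e.g. cuda:1), which triggers the RuntimeError.
batch = torch.randn(2, 3, 4)
text_len_tensor = torch.tensor([3, 2])

# The fix: move the lengths to CPU before packing. Only the
# lengths must be on CPU; the packed data itself can stay on GPU.
packed = pack_padded_sequence(
    batch, text_len_tensor.cpu(), batch_first=True, enforce_sorted=False
)
padded, lengths = pad_packed_sequence(packed, batch_first=True)
print(tuple(padded.shape))  # (2, 3, 4)
```

An equivalent alternative is to keep the lengths tensor on CPU from the start when building the batch, so no transfer happens inside the forward pass.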