
pytorch_rvae's People

Contributors

kefirski


pytorch_rvae's Issues

How to create MSCOCO dataset

Hi,
I got the MSCOCO captions_train2014.json and captions_val2014.json. As described in the paper, there are 82,783 train samples and 40,504 val samples, and every sample contains 5 captions. If I omit one caption and combine the other four into two paraphrase pairs, there will be about 2 * (82,783 + 40,504) = 246,574 pairs. How can I get the 320k paraphrase pairs?
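
A quick sanity check of the arithmetic above (both pairing schemes are hypothetical; the paper's actual construction may differ):

    n_images = 82783 + 40504                 # MSCOCO 2014 train + val images, 5 captions each

    pairs_two_per_image = 2 * n_images       # drop 1 caption, pair the remaining 4 -> 246,574
    pairs_all_combos = 10 * n_images         # all C(5, 2) caption pairings         -> 1,232,870

    print(pairs_two_per_image, pairs_all_combos)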

train.py memory problem

Is there a way to use a word embedding generated with something else (like gensim, for example)?
This implementation dies after a while on my relatively large dataset (with 32 GB of memory).
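
One way to sidestep train_word_embeddings.py, assuming the rest of the pipeline only needs a (vocab_size, embedding_dim) NumPy matrix aligned with the BatchLoader vocabulary (the idx_to_word attribute and the word_embeddings.npy output path below are assumptions, not verified against this repo), is to export pretrained gensim vectors directly. A minimal sketch:

    import numpy as np
    from gensim.models import KeyedVectors
    from utils.batch_loader import BatchLoader

    # Hypothetical sketch: assumes the vocabulary is exposed as batch_loader.idx_to_word
    # and that downstream code only needs a (vocab_size, embedding_dim) matrix on disk.
    batch_loader = BatchLoader('')
    kv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)  # example path

    embeddings = np.zeros((len(batch_loader.idx_to_word), kv.vector_size), dtype=np.float32)
    for i, word in enumerate(batch_loader.idx_to_word):
        if word in kv:                       # out-of-vocabulary words keep a zero vector
            embeddings[i] = kv[word]

    np.save('data/word_embeddings.npy', embeddings)   # output path is an assumption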

RuntimeError: bool value of Variable objects containing non-empty torch.cuda.FloatTensor is ambiguous

mldl@ub1604:~/ub16_prj/pytorch_RVAE$ python3 train_word_embeddings.py
preprocessed data was found and loaded
Traceback (most recent call last):
  File "train_word_embeddings.py", line 47, in <module>
    out = neg_loss(input, target, args.num_sample).mean()
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mldl/ub16_prj/pytorch_RVAE/selfModules/neg.py", line 38, in forward
    assert parameters_allocation_check(self), \
  File "/home/mldl/ub16_prj/pytorch_RVAE/utils/functional.py", line 15, in parameters_allocation_check
    return fold(f_and, parameters, True) or not fold(f_or, parameters, False)
  File "/home/mldl/ub16_prj/pytorch_RVAE/utils/functional.py", line 2, in fold
    return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
  File "/home/mldl/ub16_prj/pytorch_RVAE/utils/functional.py", line 2, in fold
    return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
  File "/home/mldl/ub16_prj/pytorch_RVAE/utils/functional.py", line 6, in f_and
    return x and y
  File "/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py", line 123, in __bool__
    torch.typename(self.data) + " is ambiguous")
RuntimeError: bool value of Variable objects containing non-empty torch.cuda.FloatTensor is ambiguous
mldl@ub1604:~/ub16_prj/pytorch_RVAE$

KLD in Loss function

Hi, would you tell me why the KLD calculation only involves what seems to be the encoder (the approximate posterior)? Isn't it calculated based on two distributions?

        mu = self.context_to_mu(context)
        logvar = self.context_to_logvar(context)  # parameters of the approximate posterior q(z|x)
        std = t.exp(0.5 * logvar)
        z = Variable(t.randn([batch_size, self.params.latent_variable_size]))
        if use_cuda:
            z = z.cuda()
        z = z * std + mu  # reparameterization trick: z ~ N(mu, std^2)
        kld = (-0.5 * t.sum(logvar - t.pow(mu, 2) - t.exp(logvar) + 1, 1)).mean().squeeze()
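
For reference, only the encoder outputs appear because the prior is fixed to a standard normal, p(z) = N(0, I), so the KL divergence between the approximate posterior q(z|x) = N(mu, diag(sigma^2)) and that prior has a closed form that depends only on mu and logvar; this is exactly what the kld line above computes:

    D_{KL}\big(q(z \mid x) \,\|\, p(z)\big)
        = -\frac{1}{2} \sum_{j} \left( 1 + \log \sigma_j^2 - \mu_j^2 - \sigma_j^2 \right)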

Error while running train.py

Traceback (most recent call last):
  File "train.py", line 59, in <module>
    cross_entropy, kld, coef = train_step(iteration, args.batch_size, args.use_cuda, args.dropout)
  File "/home/tonygrey/pytorch_RVAE/model/rvae.py", line 113, in train
    loss.backward()
  File "/home/tonygrey/miniconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/tonygrey/miniconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
RuntimeError: invalid argument 1: the number of sizes provided must be greater or equal to the number of dimensions in the tensor at /opt/conda/conda-bld/pytorch_1518243271935/work/torch/lib/THC/generic/THCTensor.c:326

Loss Function

Hi,

Would you tell me why you added the scalar '79' in front of the cross entropy (rvae.py, line 110)?
loss = 79 * cross_entropy + kld_coef(i) * kld

best

Got a RuntimeError when running train.py

Hi,
I am very interested in this code, but when I run train.py I get a runtime error, listed below:

preprocessed data was found and loaded
Traceback (most recent call last):
  File "train.py", line 59, in <module>
    cross_entropy, kld, coef = train_step(iteration, args.batch_size, args.use_cuda, args.dropout)
  File "/shared/data/mengqu2/projects/rvae/model/rvae.py", line 113, in train
    loss.backward()
  File "/home/mengqu2/.local/lib/python3.5/site-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/mengqu2/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
RuntimeError: invalid argument 1: the number of sizes provided must be greater or equal to the number of dimensions in the tensor at /pytorch/torch/lib/THC/generic/THCTensor.c:309

Could you help fix the problem?
Thank you so much!

RuntimeError: dimension out of range (expected to be in range of [-2, 1], but got 2)

Hi! I just ran python train.py and received this traceback:

Traceback (most recent call last):
  File "train.py", line 59, in <module>
    cross_entropy, kld, coef = train_step(iteration, args.batch_size, args.use_cuda, args.dropout)
  File "/home/****/pytorch-experiments/insane_playgorund/pytorch_RVAE/model/rvae.py", line 104, in train
    z=None)
  File "/home/****/.virtualenvs/pytorch-env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/****/pytorch-experiments/insane_playgorund/pytorch_RVAE/model/rvae.py", line 64, in forward
    encoder_input = self.embedding(encoder_word_input, encoder_character_input)
  File "/home/****/.virtualenvs/pytorch-env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/****/pytorch-experiments/insane_playgorund/pytorch_RVAE/selfModules/embedding.py", line 47, in forward
    character_input = self.TDNN(character_input)
  File "/home/****/.virtualenvs/pytorch-env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/****/pytorch-experiments/insane_playgorund/pytorch_RVAE/selfModules/tdnn.py", line 42, in forward
    xs = [x.max(2)[0].squeeze(2) for x in xs]
  File "/home/****/pytorch-experiments/insane_playgorund/pytorch_RVAE/selfModules/tdnn.py", line 42, in <listcomp>
    xs = [x.max(2)[0].squeeze(2) for x in xs]
  File "/home/****/..virtualenvs/pytorch-env/lib/python3.5/site-packages/torch/autograd/variable.py", line 750, in squeeze
    return Squeeze.apply(self, dim)
  File "/home/****/.virtualenvs/pytorch-env/lib/python3.5/site-packages/torch/autograd/_functions/tensor.py", line 378, in forward
    result = input.squeeze(dim)

A quick glance can't locate the source of the problem.
Python 3.5 and PyTorch 0.2.0_3.
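
This looks like the PyTorch 0.2.0 behaviour change for reduction ops: max(dim) no longer keeps the reduced dimension (keepdim now defaults to False), so x.max(2)[0] is already missing dimension 2 and the following squeeze(2) is out of range. A minimal sketch of how line 42 of selfModules/tdnn.py could be adapted for PyTorch >= 0.2:

    # written for PyTorch 0.1.x, where max(dim) kept the reduced dimension:
    #   xs = [x.max(2)[0].squeeze(2) for x in xs]
    # on PyTorch >= 0.2 the reduced dimension is already gone, so drop the squeeze:
    xs = [x.max(2)[0] for x in xs]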

allow_pickle=False problem in train_word_embeddings.py

Hi, I really like this code. When I try to train word embeddings, it shows the error below. Has anyone had the same problem and solved it? I changed the CUDA default to False since I am running on macOS (10.14). I run Python 3.6.1 on conda with the following modules:
certifi==2019.3.9
cffi==1.12.3
mkl-fft==1.0.10
mkl-random==1.0.2
numpy==1.16.3
olefile==0.46
Pillow==4.2.1
pycparser==2.19
six==1.12.0
torch==1.0.1.post2
torchvision==0.2.2

Traceback (most recent call last):
  File "train_word_embeddings.py", line 25, in <module>
    batch_loader = BatchLoader('')
  File "/Users/davis/Downloads/Projects/side-projects/pytorch_RVAE/utils/batch_loader.py", line 104, in __init__
    self.tensor_files)
  File "/Users/davis/Downloads/Projects/side-projects/pytorch_RVAE/utils/batch_loader.py", line 224, in load_preprocessed
    for input_type in tensor_files]
  File "/Users/davis/Downloads/Projects/side-projects/pytorch_RVAE/utils/batch_loader.py", line 224, in <listcomp>
    for input_type in tensor_files]
  File "/Users/davis/Downloads/Projects/side-projects/pytorch_RVAE/utils/batch_loader.py", line 223, in <listcomp>
    [self.word_tensor, self.character_tensor] = [np.array([np.load(target) for target in input_type])
  File "/Users/davis/miniconda3/envs/pytorchRVAE/lib/python3.6/site-packages/numpy/lib/npyio.py", line 447, in load
    pickle_kwargs=pickle_kwargs)
  File "/Users/davis/miniconda3/envs/pytorchRVAE/lib/python3.6/site-packages/numpy/lib/format.py", line 692, in read_array
    raise ValueError("Object arrays cannot be loaded when "
ValueError: Object arrays cannot be loaded when allow_pickle=False
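
NumPy 1.16.3 switched the default of np.load to allow_pickle=False, and the preprocessed tensors here are stored as object arrays, so they fail to load with the new default. Assuming you trust the pickled files (unpickling untrusted data is unsafe), one workaround is to pass allow_pickle=True at the np.load call in utils/batch_loader.py; alternatively, pin numpy below 1.16.3. A sketch of the change around line 223:

    [self.word_tensor, self.character_tensor] = [np.array([np.load(target, allow_pickle=True)
                                                           for target in input_type])
                                                 for input_type in tensor_files]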

Runtime error during 'python train.py'

I successfully ran the word embedding training, but during training I get this runtime error. Any suggestions?

File "train.py", line 59, in
cross_entropy, kld, coef = train_step(iteration, args.batch_size, args.use_cuda, args.dropout)
File "/pytorch_RVAE/model/rvae.py", line 104, in train
z=None)
File "
/pytorch/torch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in call
result = self.forward(*input, **kwargs)
File "/pytorch_RVAE/model/rvae.py", line 64, in forward
encoder_input = self.embedding(encoder_word_input, encoder_character_input)
File "/
/pytorch/torch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in call
result = self.forward(*input, **kwargs)
File "/pytorch_RVAE/selfModules/embedding.py", line 47, in forward
character_input = self.TDNN(character_input)
File "
/pytorch/torch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in call
result = self.forward(*input, **kwargs)
File "~/pytorch_RVAE/selfModules/tdnn.py", line 42, in forward
xs = [x.max(2)[0].squeeze(2) for x in xs]
RuntimeError: dimension out of range (expected to be in range of [-2, 1], but got 2)

RuntimeError: bool value of Variable objects containing non-empty torch.FloatTensor is ambiguous

Hi, have you got any idea why I'm getting this error?

(py35_pytorch) ajay@ajay-h8-1170uk:~/PythonProjects/pytorch_RVAE-master$ python3 train_word_embeddings.py
preprocessed data was found and loaded
Traceback (most recent call last):
  File "train_word_embeddings.py", line 47, in <module>
    out = neg_loss(input, target, args.num_sample).mean()
  File "/home/ajay/anaconda3/envs/py35_pytorch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ajay/PythonProjects/pytorch_RVAE-master/selfModules/neg.py", line 38, in forward
    assert parameters_allocation_check(self), \
  File "/home/ajay/PythonProjects/pytorch_RVAE-master/utils/functional.py", line 16, in parameters_allocation_check
    return fold(f_and, parameters, True) or not fold(f_or, parameters, False)
  File "/home/ajay/PythonProjects/pytorch_RVAE-master/utils/functional.py", line 2, in fold
    return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
  File "/home/ajay/PythonProjects/pytorch_RVAE-master/utils/functional.py", line 2, in fold
    return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
  File "/home/ajay/PythonProjects/pytorch_RVAE-master/utils/functional.py", line 6, in f_and
    z = x and y
  File "/home/ajay/anaconda3/envs/py35_pytorch/lib/python3.5/site-packages/torch/autograd/variable.py", line 123, in __bool__
    torch.typename(self.data) + " is ambiguous")
RuntimeError: bool value of Variable objects containing non-empty torch.FloatTensor is ambiguous

So I guess there's a type problem: a Python bool needs to be converted to a torch float?
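
The assertion fails before any float conversion could matter: fold is applied directly to the module's parameters, so f_and ends up calling bool() on a multi-element tensor, which PyTorch refuses to evaluate. A possible workaround, assuming the check is only meant to verify that all parameters live on the same device, is to fold over the per-parameter is_cuda flags instead (hypothetical replacement for parameters_allocation_check in utils/functional.py):

    def parameters_allocation_check(module):
        # plain Python bools instead of tensors, so the boolean logic is unambiguous
        flags = [p.is_cuda for p in module.parameters()]
        return all(flags) or not any(flags)   # everything on GPU, or everything on CPU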

Scaling of CE loss

Hey! I was wondering why the CE loss is multiplied by the value 79 here. Is there any intuition behind this?

Thanks for your work :)

About the loss formula

In rvae.py: "loss = 79 * cross_entropy + kld_coef(i) * kld".
Could you please explain why it is multiplied by 79, and what the meaning of kld_coef(i) is?
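
Neither constant is documented in the repository, so the following is only a plausible reading: if cross_entropy is averaged over every token, multiplying it by (roughly) the maximum sequence length turns it back into a per-sentence sum, putting it on the same scale as the per-sentence KL term; kld_coef(i) is a KL-annealing weight in the spirit of Bowman et al., "Generating Sentences from a Continuous Space", which ramps from 0 to 1 over training so the decoder is not pushed to ignore the latent code early on. A sketch with hypothetical names and constants:

    import math

    def kld_coef(i, midpoint=3500, scale=1000.0):
        # hypothetical annealing schedule: ~0 for small i, ~1 once i >> midpoint
        return (math.tanh((i - midpoint) / scale) + 1) / 2

    def elbo_loss(cross_entropy, kld, i, max_seq_len=79):
        # cross_entropy assumed to be a per-token average; rescale to a per-sentence sum
        return max_seq_len * cross_entropy + kld_coef(i) * kld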

Error while running train_word_embeddings.py

File "/pytorch_RVAE-master/utils/functional.py", line 2, in fold
return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
File "/pytorch_RVAE-master/utils/functional.py", line 2, in fold
return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
File "/pytorch_RVAE-master/utils/functional.py", line 6, in f_and
return (x and y)
RuntimeError: bool value of Tensor with more than one value is ambiguous
