
ssmba's People

Contributors

nng555


ssmba's Issues

Code with roberta not working?

Hi, thanks for making this amazing code available!
But I have a question about the implementation.

When I use bert-base or albert-base-v2, I do not have any problems.
However, when I use the roberta-base model (the same as in your paper), no augmented data is produced, unfortunately.

What am I missing now?
Thanks!

Question about tokenization/masking schemes

Hi, thanks for making your code available! Had some questions about implementation details in your paper and didn't see a contact email there, so I hope it's okay if I open an issue here.

I'm looking at applying SSMBA in the context of a sequence tagging task, where maintaining tokenization is pretty important for the label-preservation setup (and, to a lesser extent, the other two setups). I'm wondering about your masking and detokenization process, and whether you impose any constraints there. Since *BERT models use subword tokenization, it seems like randomly masking out tokens could lead to only part of a word being masked out. More generally, it also seems like masking out suffixes/prefixes would yield pretty similar data to what was originally there, or perhaps introduce another word. Are there any checks to make sure that masking and filling a proper subword results in another proper subword that fits back in with the rest of the word? Alternatively, are there ways to ensure only entire words are masked out, so that full new words can be inserted?

Thanks!
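As a side note on the whole-word option raised above: with a Hugging Face "fast" tokenizer, the word alignment returned by word_ids() makes it straightforward to mask complete words rather than individual subword pieces. The sketch below is only illustrative and is not taken from the ssmba codebase; whole_word_mask and its arguments are hypothetical names.

import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def whole_word_mask(sentence, mask_prob=0.25, seed=None):
    """Mask whole words so a masked span never covers only part of a word."""
    rng = random.Random(seed)
    enc = tokenizer(sentence)
    word_ids = enc.word_ids()  # subword token -> source word index (None for specials)

    # Choose words (not subwords) to mask.
    words = {w for w in word_ids if w is not None}
    masked_words = {w for w in words if rng.random() < mask_prob}

    masked = [
        tokenizer.mask_token_id if w in masked_words else tok
        for tok, w in zip(enc["input_ids"], word_ids)
    ]
    return tokenizer.decode(masked)

print(whole_word_mask("I burst through the cabin doors", seed=0))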

Question about code

Hi, I would like to use SSMBA to perform data augmentation and have been looking over the code to make sense of it. I noticed that the hf_masked_encode function returns the corrupted tokens and a binary mask indicating whether each token has been corrupted. That binary mask tensor is then passed as the second argument to hf_reconstruction_prob_tok, but I'm having a hard time seeing how it can be the target_tokens. I assume target_tokens should be the original tokens from before the corruption step, so it isn't clear to me why the binary mask is passed instead.

I also noticed that you create a variable named mask_targets in ssmba/utils.py (line 38 in 2e98bcc):

mask_targets[mask] = tokens[mask == 1]

but it is never used afterwards.
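To make the corrupt-and-reconstruct pattern discussed above concrete, here is a minimal sketch written from scratch rather than taken from ssmba/utils.py: the targets a reconstruction step would normally consume are the original token ids at the corrupted positions, while the binary mask only records which positions were changed.

import torch

def corrupt(tokens, mask_token_id, noise_prob=0.25):
    """Corrupt a 1-D tensor of token ids by masking positions at random."""
    mask = (torch.rand(tokens.shape) < noise_prob).long()
    corrupted = tokens.clone()
    corrupted[mask == 1] = mask_token_id
    targets = tokens[mask == 1]  # original ids at the corrupted positions
    return corrupted, mask, targets

tokens = torch.tensor([101, 1045, 6532, 2083, 1996, 6644, 4303, 102])
corrupted, mask, targets = corrupt(tokens, mask_token_id=103)
print(corrupted, mask, targets, sep="\n")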

TypeError: Can't convert 'XXX' to PyBool

Hi,

I ran into an issue with this line in utils.py:

next_len = len(tokenizer.encode(*next_sents))

When the list contains three elements, like the following example:

next_sents = ['Beatriz Haddad Maia played on 2 April 2012', 'in Ribeirão Preto, Brazil', 'on a hard surface.']

I get the error below. It seems the tokenizer cannot handle the last element.

Traceback (most recent call last):
  File "/home/qbao775/.pycharm_helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
    exec(exp, global_vars, local_vars)
  File "<input>", line 1, in <module>
  File "/data/qbao775/ssmba/venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2028, in encode
    encoded_inputs = self.encode_plus(
  File "/data/qbao775/ssmba/venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2344, in encode_plus
    return self._encode_plus(
  File "/data/qbao775/ssmba/venv/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 458, in _encode_plus
    batched_output = self._batch_encode_plus(
  File "/data/qbao775/ssmba/venv/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 385, in _batch_encode_plus
    encodings = self._tokenizer.encode_batch(
TypeError: Can't convert 'on a hard surface.' to PyBool

Does anyone know how to solve that issue? Thank you so much.
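A likely cause, based on the Hugging Face tokenizer signature rather than anything in ssmba itself: encode accepts at most two positional text arguments (text and text_pair), so a third positional string falls into add_special_tokens, which expects a bool, hence the PyBool error. One hedged workaround, assuming the extra sentences can simply be merged before encoding:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
next_sents = ['Beatriz Haddad Maia played on 2 April 2012',
              'in Ribeirão Preto, Brazil',
              'on a hard surface.']

# encode(text, text_pair) takes only two text arguments, so collapse any
# extra sentences into the "pair" slot before unpacking.
if len(next_sents) > 2:
    next_sents = [next_sents[0], " ".join(next_sents[1:])]

next_len = len(tokenizer.encode(*next_sents))
print(next_len)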

Quick start - basic CMD fails

Hey.
I created a sample.txt file with the short text "I burst through the cabin doors" and tried to run the basic command:

(venv) (base) [ssmba]$ cat sample.txt 
I burst through the cabin doors
(venv) (base) [ssmba]$ python ssmba.py --model bert-base-uncased --in-file sample.txt --output-prefix ssmba_out --noise-prob 0.25 --num-samples 8 --topk 10
/../ssmba/venv/lib/python3.6/site-packages/transformers/modeling_auto.py:837: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
  FutureWarning,
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
/data/users/yonatab/ssmba/venv/lib/python3.6/site-packages/transformers/modeling_bert.py:1152: FutureWarning: The `masked_lm_labels` argument is deprecated and will be removed in a future version, use `labels` instead.
  FutureWarning,
Traceback (most recent call last):
  File "ssmba.py", line 259, in <module>
    gen_neighborhood(args)
  File "ssmba.py", line 155, in gen_neighborhood
    rec, rec_masks = hf_reconstruction_prob_tok(toks, masks, tokenizer, r_model, softmax_mask, reconstruct=True, topk=args.topk)
  File "/../ssmba/utils.py", line 105, in hf_reconstruction_prob_tok
    l[softmax_mask] = float('-inf')
RuntimeError: Output 0 of UnbindBackward is a view and is being modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.

I've installed the latest torch and transformers for cuda 10.1:

>>> torch.__version__
'1.7.0+cu101'
>>> transformers.__version__
'3.5.1'
>>> 

What am I missing? Thanks
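For context, the failing line is the in-place write l[softmax_mask] = float('-inf') on a tensor view, which newer PyTorch versions reject. An out-of-place rewrite along the following lines may work; this is an assumption based on the traceback, not a patch from the repository.

import torch

l = torch.randn(10)                              # stand-in for the logits view `l`
softmax_mask = torch.zeros(10, dtype=torch.bool)
softmax_mask[[0, 3]] = True                      # positions to exclude from the softmax

# l[softmax_mask] = float('-inf')                # in-place write to a view: RuntimeError
l = l.masked_fill(softmax_mask, float('-inf'))   # out-of-place replacement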
