
phoner_covid19's Introduction

COVID-19 Named Entity Recognition for Vietnamese

PhoNER_COVID19 is a dataset for recognizing COVID-19 related named entities in Vietnamese, consisting of 35K entities over 10K sentences. We define 10 entity types with the aim of extracting key information related to COVID-19 patients, which is especially useful in downstream applications. In general, these entity types can be used not only in the context of the COVID-19 pandemic but also in other future epidemics:

(Figure: entity types)
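For reference, below is a minimal sketch of loading one data split, assuming the common CoNLL-style layout of one token and its BIO label per line, with a blank line between sentences. The file path and the exact column separator are assumptions; adjust them to the word- or syllable-level split you actually download.

    # Minimal loader sketch for a BIO-formatted split of PhoNER_COVID19.
    # Assumptions: one "token label" pair per whitespace-separated line,
    # blank lines separate sentences, and the path used below is a placeholder.
    from typing import List, Tuple

    def read_bio_file(path: str) -> List[Tuple[List[str], List[str]]]:
        sentences = []
        tokens, labels = [], []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:  # a blank line closes the current sentence
                    if tokens:
                        sentences.append((tokens, labels))
                        tokens, labels = [], []
                    continue
                parts = line.split()
                tokens.append(parts[0])   # surface token
                labels.append(parts[-1])  # BIO tag
            if tokens:  # flush the last sentence if there is no trailing blank line
                sentences.append((tokens, labels))
        return sentences

    # Hypothetical usage:
    # train = read_bio_file("data/word/train_word.conll")
    # print(train[0])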

The construction of PhoNER_COVID19 is detailed in our NAACL 2021 paper:

@inproceedings{PhoNER_COVID19,
title     = {{COVID-19 Named Entity Recognition for Vietnamese}},
author    = {Thinh Hung Truong and Mai Hoang Dao and Dat Quoc Nguyen},
booktitle = {Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year      = {2021}
}  

By downloading the PhoNER_COVID19 dataset, USER agrees:

  • to use PhoNER_COVID19 for research or educational purposes only.
  • to not distribute PhoNER_COVID19 or part of PhoNER_COVID19 in any original or modified form.
  • and to cite our NAACL paper above whenever PhoNER_COVID19 is employed to help produce published results.

Copyright (c) 2021 VinAI Research

THE DATA IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE DATA OR THE USE OR OTHER DEALINGS IN THE
DATA.


phoner_covid19's Issues

test_predictions.txt file of the models in the paper

I'm doing a small NLP project for my studies at university.
My plan is to reproduce the results reported in the paper and then do some further error analysis on the incorrect cases. However, I couldn't reproduce the exact training setup used in the paper.
I got slightly different results across models, and I would like to compare the outputs of the original models with those from my own training.
How can I get the test_predictions.txt files of the models in the paper? Thank you.

PhoBERT (phobert-large) model suddenly outputs all "O" labels after some training epochs

I'm trying to replicate the results on the dataset, but my Colab Pro instance cannot fit a batch_size of 32 (as in the paper), so I decreased the batch_size to 4.
I only train the PhoBERT models on the dataset. The results of the phobert-base model are quite impressive.
However, the phobert-large model starts training at a lower eval_f1 score than phobert-base, and after some epochs it suddenly outputs all "O" labels.
Here are my training details:

!python /content/transformers/examples/legacy/token-classification/run_ner.py \
--task_type NER \
--overwrite_output_dir True \
--data_dir '/content/' \
--labels '/content/labels.txt' \
--model_name_or_path 'vinai/phobert-large' \
--output_dir '/content/drive/MyDrive/Môn học/NLP materials/Do an/test-ner/phobert-large' \
--learning_rate 5e-5 \
--max_seq_length  128 \
--num_train_epochs 30 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--save_total_limit=100 \
--seed 1 \
--do_train \
--do_eval \
--do_predict \
--logging_strategy epoch \
--evaluation_strategy epoch \
--save_strategy epoch \
--lr_scheduler_type 'constant' \
--metric_for_best_model 'eval_f1' \
--logging_dir '/content/drive/MyDrive/Môn học/NLP materials/Do an/test-ner/phobert-large'  \
--load_best_model_at_end True


Before using the run_ner.py script provided by the Hugging Face team, I manually encoded the labels onto the split (sub-word) tokens.
The results were the same. I thought the problem was caused by some mistake in my encoding, but that doesn't seem to be the case.

Is the small batch_size causing this, or is it something else?
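One way to keep the paper's effective batch size of 32 within that memory limit (a sketch, not something tried above) is gradient accumulation: a per_device_train_batch_size of 4 with gradients accumulated over 8 steps. With run_ner.py this can be passed as --gradient_accumulation_steps 8; the equivalent via the TrainingArguments API looks like this:

    # Sketch: emulate the paper's batch size of 32 on limited GPU memory.
    # Only the batch-size-related arguments are shown; the output path is a placeholder.
    from transformers import TrainingArguments

    training_args = TrainingArguments(
        output_dir="test-ner/phobert-large",  # placeholder path
        per_device_train_batch_size=4,        # what fits on the Colab GPU
        gradient_accumulation_steps=8,        # 4 * 8 = 32 effective batch size
        learning_rate=5e-5,
        num_train_epochs=30,
    )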

ValueError: word_ids() is not available when using Python-based tokenizers

Hello,
Thanks for publishing the dataset.

I'm trying to replicate the results reported in the paper.
I had some trouble encoding labels from the original word tokens onto the tokenized (sub-word) tokens. The input_ids sequence is usually longer than the word sequence, since each word can be split into multiple sub-word tokens. For example, the word "Bệnh" is split into 'Be@@', '̣@@', '', 'nh', which also makes the corresponding label sequence longer.

The problem can be solved easily if the tokenizer output provides the word_ids() method, which indicates which word each input_id comes from. Most fast tokenizers in Hugging Face support this.

This function is taken from run_ner.py in the example code of the Hugging Face repo:

    def tokenize_and_align_labels(examples):
        tokenized_inputs = tokenizer(
            examples[text_column_name],
            padding=padding,
            truncation=True,
            max_length=data_args.max_seq_length,
            # We use this argument because the texts in our dataset are lists of words (with a label for each word).
            is_split_into_words=True,
        )
        labels = []
        for i, label in enumerate(examples[label_column_name]):
            word_ids = tokenized_inputs.word_ids(batch_index=i)
            previous_word_idx = None
            label_ids = []
            for word_idx in word_ids:
                # Special tokens have a word id that is None. We set the label to -100 so they are automatically
                # ignored in the loss function.
                if word_idx is None:
                    label_ids.append(-100)
                # We set the label for the first token of each word.
                elif word_idx != previous_word_idx:
                    label_ids.append(label_to_id[label[word_idx]])
                # For the other tokens in a word, we set the label to either the current label or -100, depending on
                # the label_all_tokens flag.
                else:
                    if data_args.label_all_tokens:
                        label_ids.append(b_to_i_label[label_to_id[label[word_idx]]])
                    else:
                        label_ids.append(-100)
                previous_word_idx = word_idx

            labels.append(label_ids)
        tokenized_inputs["labels"] = labels
        return tokenized_inputs

When I use the PhoBERT tokenizer, I get an error like this:

 0%|          | 0/6 [00:00<?, ?ba/s]
Traceback (most recent call last):
  File "C:\Users\quang\AppData\Local\Programs\Python\Python39\lib\site-packages\IPython\core\interactiveshell.py", line 3441, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-7-66d85e73c564>", line 101, in <module>
    train_tokenized_datasets = train_dataset.map(tokenize_and_align_labels, batched=True)
  File "C:\Users\quang\PycharmProjects\DeepGamingAI_FPS\venv\lib\site-packages\datasets\arrow_dataset.py", line 2035, in map
    return self._map_single(
  File "C:\Users\quang\PycharmProjects\DeepGamingAI_FPS\venv\lib\site-packages\datasets\arrow_dataset.py", line 521, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "C:\Users\quang\PycharmProjects\DeepGamingAI_FPS\venv\lib\site-packages\datasets\arrow_dataset.py", line 488, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "C:\Users\quang\PycharmProjects\DeepGamingAI_FPS\venv\lib\site-packages\datasets\fingerprint.py", line 406, in wrapper
    out = func(self, *args, **kwargs)
  File "C:\Users\quang\PycharmProjects\DeepGamingAI_FPS\venv\lib\site-packages\datasets\arrow_dataset.py", line 2403, in _map_single
    batch = apply_function_on_filtered_inputs(
  File "C:\Users\quang\PycharmProjects\DeepGamingAI_FPS\venv\lib\site-packages\datasets\arrow_dataset.py", line 2290, in apply_function_on_filtered_inputs
    function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
  File "C:\Users\quang\PycharmProjects\DeepGamingAI_FPS\venv\lib\site-packages\datasets\arrow_dataset.py", line 1990, in decorated
    result = f(decorated_item, *args, **kwargs)
  File "<ipython-input-7-66d85e73c564>", line 75, in tokenize_and_align_labels
    word_ids = tokenized_inputs.word_ids(batch_index=i)
  File "C:\Users\quang\PycharmProjects\DeepGamingAI_FPS\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 353, in word_ids
    raise ValueError("word_ids() is not available when using Python-based tokenizers")
ValueError: word_ids() is not available when using Python-based tokenizers

Can you suggest a solution to this problem?
Thanks for reading!
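One possible workaround (a sketch following the -100 masking convention in the run_ner.py snippet above, not an official solution) is to skip word_ids() entirely and build the word-to-sub-token alignment by hand, tokenizing each word separately with the slow PhoBERT tokenizer:

    # Sketch: manual label alignment without word_ids(), for Python-based (slow)
    # tokenizers such as the PhoBERT tokenizer. Labels follow the run_ner.py
    # convention: the first sub-token gets the label, the rest (and special tokens) get -100.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

    def tokenize_and_align_labels_manual(words, labels, label_to_id, max_length=128):
        input_ids = [tokenizer.cls_token_id]
        label_ids = [-100]  # special tokens are ignored in the loss
        for word, label in zip(words, labels):
            sub_tokens = tokenizer.tokenize(word) or [tokenizer.unk_token]
            sub_ids = tokenizer.convert_tokens_to_ids(sub_tokens)
            input_ids.extend(sub_ids)
            # label only the first sub-token of each word; mask the rest with -100
            label_ids.extend([label_to_id[label]] + [-100] * (len(sub_ids) - 1))
        # truncate to max_length and close with the separator token
        input_ids = input_ids[: max_length - 1] + [tokenizer.sep_token_id]
        label_ids = label_ids[: max_length - 1] + [-100]
        return {"input_ids": input_ids,
                "attention_mask": [1] * len(input_ids),
                "labels": label_ids}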
