aehrc / cxrmate

CXRMate: Longitudinal Data and a Semantic Similarity Reward for Chest X-Ray Report Generation

Home Page: https://huggingface.co/aehrc/cxrmate

License: Apache License 2.0

Python 100.00%
chest-x-ray-report-generation chest-xray-images image-captioning medical-imaging multimodal-learning radiology-reports

cxrmate's People

Contributors

anicolson

cxrmate's Issues

error when running cxrmate.ipynb

Hi, I am trying to run through the example code in cxrmate.ipynb. When I get to this line:

outputs = encoder_decoder.generate(
    pixel_values=images.to(device),
    decoder_input_ids=prompt['input_ids'],
    special_token_ids=[
        tokenizer.additional_special_tokens_ids[
            tokenizer.additional_special_tokens.index('[PMT-SEP]')
        ],
        tokenizer.bos_token_id,
        tokenizer.sep_token_id,
    ],  
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    mask_token_id=tokenizer.pad_token_id,
    return_dict_in_generate=True,
    use_cache=True,
    max_length=256 + prompt['input_ids'].shape[1],
    num_beams=4,
)

I get the following error:

Traceback (most recent call last):

  Cell In[11], line 1
    outputs = encoder_decoder.generate(

  File ~\AppData\Local\anaconda3\lib\site-packages\torch\utils\_contextlib.py:115 in decorate_context
    return func(*args, **kwargs)

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\generation\utils.py:1593 in generate
    model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\generation\utils.py:742 in _prepare_encoder_decoder_kwargs_for_generation
    model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)

  File ~\AppData\Local\anaconda3\lib\site-packages\torch\nn\modules\module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)

  File ~\.cache\huggingface\modules\transformers_modules\aehrc\cxrmate\1f014633b98564f21316b32e167b5796381690d8\modelling_longitudinal.py:91 in forward
    return ModelOutputWithProjectionEmbedding(

  File ~\AppData\Local\anaconda3\lib\site-packages\transformers\utils\generic.py:325 in __init__
    raise TypeError(

TypeError: transformers_modules.aehrc.cxrmate.1f014633b98564f21316b32e167b5796381690d8.modelling_longitudinal.ModelOutputWithProjectionEmbedding is not a dataclasss. This is a subclass of ModelOutput and so must use the @dataclass decorator.
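This error comes from a check in newer transformers releases: every subclass of `ModelOutput` must be decorated with `@dataclass`, and the cached remote `modelling_longitudinal.py` defines `ModelOutputWithProjectionEmbedding` without it. A minimal stand-in (not the actual transformers code; field name illustrative) showing why the undecorated subclass fails and the decorated one works:

```python
from dataclasses import dataclass, is_dataclass
from typing import Optional

class ModelOutputLike:
    """Stand-in for transformers.utils.ModelOutput. Newer transformers
    versions raise a TypeError like the one above when a subclass is
    instantiated without having been decorated with @dataclass."""
    def __init__(self):
        if type(self) is not ModelOutputLike and not is_dataclass(self):
            raise TypeError(
                f"{type(self).__name__} is a subclass of ModelOutput "
                "and so must use the @dataclass decorator."
            )

class Broken(ModelOutputLike):
    # No @dataclass: inherits the base __init__, which raises TypeError.
    projected_last_hidden_state: Optional[object] = None

@dataclass
class Fixed(ModelOutputLike):
    # @dataclass generates its own __init__, so instantiation succeeds.
    projected_last_hidden_state: Optional[object] = None
```

A local workaround, until the code on the Hub is updated, is to add the missing `@dataclass` decorator to the cached module under `~/.cache/huggingface/modules`, or to pin an older transformers version that does not enforce the check.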

Unable to reproduce the result in ./examples/cxrmate.ipynb

I downloaded the weights from https://huggingface.co/aehrc/cxrmate and ran the notebooks in ./examples. The output of cxrmate-multi-tf.ipynb matches the notebook, but when I run cxrmate.ipynb the result is different from the one shown in the notebook.
First study result:
Findings: Frontal and lateral views of the chest were obtained. A large bore dual lumen central venous catheter terminates within the right internal jugular central venous catheter terminates within the right atrium. The lungs are unchanged. The heart size and right internal jugular central venous catheter ends in unchanged. The heart size is unchanged. The heart size is unchanged. The heart size is unchanged. The aorta remains mildly tortuous and the upper thoracic aorta remains mildly tortuous and tortuous and hilar contours are unchanged. The cardiac silhouette size is unchanged. The aorta remains mildly tortuous and tortuous and tortuous and tortuous and tortuous and hilar contours are unchanged. The aorta is unchanged. Increased interstitial abnormality is unchanged. The cardiac silhouette size is unchanged. The aorta is unchanged. The cardiac silhouette size is unchanged. There is unchanged. The aorta is unchanged. The aorta is unchanged. The cardiac silhouette size of the osseous structures are within normal. The aorta is unchanged. The cardiac silhouette is unchanged. The aorta is unchanged. The aorta is unchanged with atherosclerotic calcifications are unchanged. The aorta is unchanged. The aorta is tortuous and the osseous structures are within normal. The aorta calcified and tortuous atherosclerotic calcifications are within normal. The aorta is tortuous and tortuous and tortuous and tortuous. The aorta is calcified tortuous atherosclerotic calcifications are diffusely calcified thoracic aorta calcified and tortuous. The aorta
Impression:

Second study result:
Findings: PA and the lungs are within normal. There is moderately tortuous and tortuous and the heart size is moderately tortuous and the lungs are within normal. There is moderately tortuous. There is moderately tortuous and the aortic knob is moderately tortuous. There is moderately tortuous. There is moderately tortuous and tortuous and the aorta is moderately tortuous. There is unchanged. There is a large bore central pulmonary vascularity is moderately tortuous and the aortic knob and tortuous and the aorta is unchanged. There is unchanged. There is unchanged. There is moderately tortuous. There is moderately tortuous. There is moderately tortuous. There is moderately tortuous. The cardiac silhouette is unchanged. Low lung volumes are within normal. The cardiac silhouette is moderately tortuous. There is unchanged. The cardiac silhouette is moderately tortuous. There is moderately tortuous. The cardiac silhouette is moderately tortuous. There is unchanged. There is unchanged. The cardiac silhouette is moderately tortuous and tortuous. Low lung volumes are within normal. There is unchanged. The cardiac silhouette is moderately tortuous. The cardiac silhouette is unchanged. The cardiac silhouette is unchanged. There is moderately tortuous and tortuous. There is unchanged. There is unchanged. There is moderately tortuous. The cardiac silhouette is moderately tortuous and tortuous and the knob calcifications are within normal. There is moderately tortuous and the atherosclerotic calcifications are within normal.
Impression:

Findings: The heart remains moderately enlarged. Low lung volumes are low. Low lung volumes are low. Low lung volumes are low. Low lung volumes are relatively low. Low lung volumes are low. Low lung volumes are low. Low lung volumes are low. Low lung volumes are low. Low lung volumes are low, and there is low, and there is low, and there is grossly clear. Low lung volumes are low, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, and there is low, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however, however,
Impression:

I wonder why. Also, when I pip install -r requirements.txt, the installed transformers version is 4.42.1, and with 4.42 it reports:
Traceback (most recent call last):
File "/tmp/cxrmate/examples/cxrmate-multi-tf.py", line 59, in
outputs = encoder_decoder.generate(
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/transformers/generation/utils.py", line 1597, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
File "/root/miniconda3/lib/python3.8/site-packages/transformers/generation/utils.py", line 523, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
TypeError: forward() got an unexpected keyword argument 'output_attentions'

Changing the transformers version to 4.30.1 makes the error go away.
I would like to know your Torch and Transformers versions; following requirements.txt directly downloads the latest versions.
I hope to receive your reply! Thank you very much!
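Until the remote code on the Hub is updated, pinning known-good versions is a workaround. The transformers pin below is the one this report found to work; the torch pin is only an illustration and is not confirmed by the report:

```text
# requirements.txt (pinned)
transformers==4.30.1   # 4.42.x raises errors with the remote cxrmate code
torch==2.0.1           # illustrative pin only; use the version you validated
```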

Request: notebook file for training cxrmate

This repo looks very interesting; I would like to try using it to train on a new large X-ray dataset. However, the only training tutorial available is on the main GitHub page, and it requires command-line usage of the dlhpcstarter package. Could you please create an .ipynb file showing how to run training within a Python IDE, similar to the inference-only examples found in cxrmate/examples? Thanks!

How to find the prior images of the current image?

Hi! Thanks for your contribution. It is an excellent piece of work!

I would like to know how the prior images of the current image are identified in this paper.
Upon careful examination of the MIMIC-CXR-v2 dataset, I have observed that its documentation states, "These study identifiers are completely random, and their order has no implications for the chronological order of the actual studies." Additionally, the time information in the reports is masked (e.g., "In comparison with the study of ___"). Therefore, I am uncertain about how to identify the prior images corresponding to a given current image.

Thank you very much for your time and consideration. I eagerly look forward to your response.
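For context, one common way to recover study order in MIMIC-CXR-JPG (I cannot confirm it is exactly this paper's procedure) is via the StudyDate and StudyTime columns of mimic-cxr-2.0.0-metadata.csv: the de-identified dates are shifted per patient but preserve relative order within a subject, so sorting by them recovers the chronological study sequence and hence each study's priors. A sketch:

```python
from collections import defaultdict

def prior_studies(rows):
    """Order each subject's studies chronologically and list priors.

    `rows` are dicts with the subject_id, study_id, StudyDate and StudyTime
    fields of mimic-cxr-2.0.0-metadata.csv. Returns a mapping
    (subject_id, study_id) -> [prior study_ids, oldest first].
    """
    by_subject = defaultdict(dict)
    for r in rows:
        # All images of a study share one timestamp; keep one per study.
        by_subject[r["subject_id"]][r["study_id"]] = (
            int(r["StudyDate"]), float(r["StudyTime"])
        )
    priors = {}
    for subject, studies in by_subject.items():
        ordered = sorted(studies, key=studies.get)
        for i, study in enumerate(ordered):
            priors[(subject, study)] = ordered[:i]
    return priors
```

The numeric conversion matters because StudyTime values are not zero-padded, so plain string comparison can mis-order early-morning studies.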

Training consultation

When I train with Self-Critical Sequence Training (SCST) with the CXR-BERT reward, I set
devices: 2
mbatch_size: 16
num_workers: 32
but encountered the following error:
'''
(venv) [root@3dc54336e478 home]# dlhpcstarter -t mimic_cxr -c config/train/longitudinal_gen_prompt_cxr-bert.yaml --stages_module tools.stages --train
Seed set to 0
PTL no. devices: 2.
PTL no. nodes: 1.
/usr/local/lib/python3.8/site-packages/lightning/fabric/connector.py:571: precision=16 is supported for historical reasons but its usage is discouraged. Please set your precision to 16-mixed instead!
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
Description, Special token, Index
bos_token, [BOS], 1
eos_token, [EOS], 2
unk_token, [UNK], 0
sep_token, [SEP], 3
pad_token, [PAD], 4
cls_token, [BOS], 1
mask_token, [MASK], 5
additional_special_token, [NF], 6
additional_special_token, [NI], 7
additional_special_token, [PMT], 8
additional_special_token, [PMT-SEP], 9
additional_special_token, [NPF], 10
additional_special_token, [NPI], 11
/home/modules/transformers/longitudinal_model/modelling_longitudinal.py:155: UserWarning: The encoder-to-decoder model was not warm-started before applying low-rank approximation.
warnings.warn('The encoder-to-decoder model was not warm-started before applying low-rank approximation.')
trainable params: 147,456 || all params: 80,916,528 || trainable%: 0.1822
/usr/local/lib/python3.8/site-packages/transformers/models/convnext/feature_extraction_convnext.py:28: FutureWarning: The class ConvNextFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use ConvNextImageProcessor instead.
warnings.warn(
Warm-starting using: /home/experiments/cxrmate/longitudinal_gt_prompt_tf/trial_0/epoch=19-step=78380-val_report_chexbert_f1_macro=0.371041.ckpt.
/usr/local/lib/python3.8/site-packages/dlhpcstarter/utils.py:347: UserWarning: The "last" checkpoint does not exist, starting training from epoch 0.
warnings.warn('The "last" checkpoint does not exist, starting training from epoch 0.')
You are using a CUDA device ('Z100L') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
[rank: 0] Seed set to 0
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
[rank: 1] Seed set to 0
PTL no. devices: 2.
PTL no. nodes: 1.
Description, Special token, Index
bos_token, [BOS], 1
eos_token, [EOS], 2
unk_token, [UNK], 0
sep_token, [SEP], 3
pad_token, [PAD], 4
cls_token, [BOS], 1
mask_token, [MASK], 5
additional_special_token, [NF], 6
additional_special_token, [NI], 7
additional_special_token, [PMT], 8
additional_special_token, [PMT-SEP], 9
additional_special_token, [NPF], 10
additional_special_token, [NPI], 11
/home/modules/transformers/longitudinal_model/modelling_longitudinal.py:155: UserWarning: The encoder-to-decoder model was not warm-started before applying low-rank approximation.
warnings.warn('The encoder-to-decoder model was not warm-started before applying low-rank approximation.')
trainable params: 147,456 || all params: 80,916,528 || trainable%: 0.1822
/usr/local/lib/python3.8/site-packages/transformers/models/convnext/feature_extraction_convnext.py:28: FutureWarning: The class ConvNextFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use ConvNextImageProcessor instead.
warnings.warn(
Warm-starting using: /home/experiments/cxrmate/longitudinal_gt_prompt_tf/trial_0/epoch=19-step=78380-val_report_chexbert_f1_macro=0.371041.ckpt.
/usr/local/lib/python3.8/site-packages/dlhpcstarter/utils.py:347: UserWarning: The "last" checkpoint does not exist, starting training from epoch 0.
warnings.warn('The "last" checkpoint does not exist, starting training from epoch 0.')
[rank: 1] Seed set to 0
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0711 11:46:15.886372 31375 ProcessGroupNCCL.cpp:686] [Rank 1] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=226348544
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0711 11:46:15.892076 31223 ProcessGroupNCCL.cpp:686] [Rank 0] ProcessGroupNCCL initialization options:NCCL_ASYNC_ERROR_HANDLING: 1, NCCL_DESYNC_DEBUG: 0, NCCL_ENABLE_TIMING: 0, NCCL_BLOCKING_WAIT: 0, TIMEOUT(ms): 1800000, USE_HIGH_PRIORITY_STREAM: 0, TORCH_DISTRIBUTED_DEBUG: OFF, NCCL_DEBUG: OFF, ID=229697888

distributed_backend=nccl
All distributed processes registered. Starting with 2 processes

I0711 11:46:16.570466 31223 ProcessGroupNCCL.cpp:1340] NCCL_DEBUG: N/A
/usr/local/lib/python3.8/site-packages/lightning/pytorch/callbacks/model_checkpoint.py:652: Checkpoint directory /home/experiments/mimic_cxr/longitudinal_gen_prompt_cxr-bert/trial_0 exists and is not empty.
/home/data/prompt.py:186: UserWarning: The number of examples is not divisible by the world size. Adding extra studies to account for this. This needs to be accounted for outside of the dataset.
warnings.warn('The number of examples is not divisible by the world size. '
Traceback (most recent call last):
File "/usr/local/bin/dlhpcstarter", line 8, in
sys.exit(main())
File "/usr/local/lib/python3.8/site-packages/dlhpcstarter/main.py", line 126, in main
submit(args, cmd_line_args, stages_fnc)
File "/usr/local/lib/python3.8/site-packages/dlhpcstarter/main.py", line 21, in submit
stages_fnc(args)
File "/home/tools/stages.py", line 85, in stages
trainer.fit(model, ckpt_path=ckpt_path)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 543, in fit
call._call_and_handle_interrupt(
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/call.py", line 43, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/strategies/launchers/subprocess_script.py", line 105, in launch
return function(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 579, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 948, in _run
call._call_setup_hook(self) # allow user to set up LightningModule in accelerator environment
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/call.py", line 96, in _call_setup_hook
_call_lightning_module_hook(trainer, "setup", stage=fn)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/call.py", line 159, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/modules/lightning_modules/longitudinal/scst/gen_prompt.py", line 66, in setup
self.train_set = PreviousReportSubset(
File "/home/data/prompt.py", line 73, in init
self.allocate_subjects_to_rank(shuffle_subjects=False)
File "/home/data/prompt.py", line 212, in allocate_subjects_to_rank
assert len(set(self.examples)) == self.df.study_id.nunique() and
AssertionError
I0711 11:46:24.351401 31223 ProcessGroupNCCL.cpp:874] [Rank 0] Destroyed 1communicators on CUDA device 0
/home/data/prompt.py:186: UserWarning: The number of examples is not divisible by the world size. Adding extra studies to account for this. This needs to be accounted for outside of the dataset.
warnings.warn('The number of examples is not divisible by the world size. '
Traceback (most recent call last):
File "/usr/local/bin/dlhpcstarter", line 8, in
sys.exit(main())
File "/usr/local/lib/python3.8/site-packages/dlhpcstarter/main.py", line 126, in main
submit(args, cmd_line_args, stages_fnc)
File "/usr/local/lib/python3.8/site-packages/dlhpcstarter/main.py", line 21, in submit
stages_fnc(args)
File "/home/tools/stages.py", line 85, in stages
trainer.fit(model, ckpt_path=ckpt_path)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 543, in fit
call._call_and_handle_interrupt(
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/call.py", line 43, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/strategies/launchers/subprocess_script.py", line 105, in launch
return function(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 579, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 948, in _run
call._call_setup_hook(self) # allow user to set up LightningModule in accelerator environment
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/call.py", line 96, in _call_setup_hook
_call_lightning_module_hook(trainer, "setup", stage=fn)
File "/usr/local/lib/python3.8/site-packages/lightning/pytorch/trainer/call.py", line 159, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/modules/lightning_modules/longitudinal/scst/gen_prompt.py", line 66, in setup
self.train_set = PreviousReportSubset(
File "/home/data/prompt.py", line 73, in init
self.allocate_subjects_to_rank(shuffle_subjects=False)
File "/home/data/prompt.py", line 212, in allocate_subjects_to_rank
assert len(set(self.examples)) == self.df.study_id.nunique() and
AssertionError
I0711 11:46:25.112917 31375 ProcessGroupNCCL.cpp:874] [Rank 1] Destroyed 1communicators on CUDA device 1
'''
I want to ask how you set the parameters during training. I saw that your paper used 4×16GB NVIDIA Tesla P100 GPUs; I used 2×32GB NVIDIA V100 GPUs. When I set devices: 1 and mbatch_size: 1 there is no error, but training is too slow. I look forward to your answer, thank you very much!

Printing out accuracy metrics during training

I'm trying to recreate the training on a subset of the mimic-cxr-jpeg dataset and my output during training looks like this:

Epoch 10: 100%
PTBTokenizer tokenized 2958 tokens at 43535.30 tokens per second.
PTBTokenizer tokenized 4446 tokens at 62878.10 tokens per second.
{'testlen': 2425, 'reflen': 3839, 'guess': [2425, 2365, 2305, 2245], 'correct': [1180, 456, 223, 112]}
ratio: 0.6316749153423726

Since the CheXbert F1 metric decides when the final version of the model is saved during training, is there a way to print this F1 metric after each epoch? That would give an idea of whether the model is still improving, or whether many epochs have gone by without any progress on validation F1. It could also be interesting to print other validation-set metrics such as BLEU and METEOR after each epoch.

Also, what is the "ratio" currently being printed in the logs here? Thanks!
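On the "ratio" question: that line comes from the BLEU scorer, and it appears to be testlen/reflen (here 2425/3839 ≈ 0.632), the candidate-to-reference length ratio used by BLEU's brevity penalty. As for per-epoch F1: the quantity behind val_report_chexbert_f1_macro is a macro-averaged F1 over the CheXbert observation labels. A minimal pure-Python sketch of that computation (label layout assumed; this is not the repo's code):

```python
def macro_f1(preds, refs):
    """Macro-averaged F1 over binary label vectors, e.g. the 14 CheXbert
    observation labels per report. preds/refs are lists of equal-length
    0/1 lists; returns the unweighted mean of the per-label F1 scores."""
    n_labels = len(refs[0])
    f1s = []
    for j in range(n_labels):
        tp = sum(p[j] and r[j] for p, r in zip(preds, refs))
        fp = sum(p[j] and not r[j] for p, r in zip(preds, refs))
        fn = sum(not p[j] and r[j] for p, r in zip(preds, refs))
        # F1 = 2TP / (2TP + FP + FN); define F1 = 0 when the label never occurs.
        f1s.append(2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0)
    return sum(f1s) / n_labels
```

Printing a value like this at the end of each validation epoch would show whether validation F1 is still improving.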

Multi-card training problem

Hi!
When I run "dlhpcstarter -t cxrmate -c config/train/longitudinal_gt_prompt_tf.yaml --stages_module tools.stages --train", the speed is very slow. How should I set the training parameters and data batches to speed up the training process?
Only 4013MiB / 32768MiB of memory is used on a single GPU, and my setup is 2×32GB GPUs.
Thank you very much!
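With only ~4 GiB of 32 GiB in use, the usual levers are a larger micro-batch and more dataloader workers. A sketch using the config keys that appear elsewhere in this thread (values are illustrative only; the supported keys and safe values depend on the repo's YAML configs):

```yaml
# Throughput-related keys in the training YAML (illustrative values)
devices: 2        # use both 32 GB GPUs via distributed training
mbatch_size: 16   # raise until GPU memory is close to full
num_workers: 8    # dataloader workers to keep the GPUs fed
```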

Author's name in path causes error in tutorial code

Hi, I tried to download this and run the first example here on GitHub:

dlhpcstarter -t cxrmate_hf -c config/test_huggingface/longitudinal_gen_prompt_cxr-bert.yaml --stages_module tools.stages --test

But I get an error right away that suggests the code author's name is still hardcoded in a path somewhere. Does that need to be fixed? What is the paths.yaml file it's referring to? Thanks!

(base) C:\Users\myusername\Desktop\cxrmate>dlhpcstarter -t cxrmate_hf -c config/test_huggingface/longitudinal_gen_prompt_cxr-bert.yaml --stages_module tools.stages --test
Traceback (most recent call last):
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\myusername\AppData\Local\anaconda3\Scripts\dlhpcstarter.exe\__main__.py", line 7, in <module>
    sys.exit(main())
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\dlhpcstarter\__main__.py", line 49, in main
    load_config_and_update_args(args=args, print_args=True)
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\dlhpcstarter\utils.py", line 79, in load_config_and_update_args
    config = compose(config_name=args.config_name)
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\compose.py", line 38, in compose
    cfg = gh.hydra.compose_config(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\hydra.py", line 594, in compose_config
    cfg = self.config_loader.load_configuration(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\config_loader_impl.py", line 142, in load_configuration
    return self._load_configuration_impl(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\config_loader_impl.py", line 253, in _load_configuration_impl
    defaults_list = create_defaults_list(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 745, in create_defaults_list
    defaults, tree = _create_defaults_list(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 715, in _create_defaults_list
    defaults_tree = _create_defaults_tree(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 356, in _create_defaults_tree
    ret = _create_defaults_tree_impl(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 457, in _create_defaults_tree_impl
    return _expand_virtual_root(repo, root, overrides, skip_missing)
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 280, in _expand_virtual_root
    subtree = _create_defaults_tree_impl(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 573, in _create_defaults_tree_impl
    add_child(children, new_root)
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 520, in add_child
    subtree_ = _create_defaults_tree_impl(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 573, in _create_defaults_tree_impl
    add_child(children, new_root)
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 520, in add_child
    subtree_ = _create_defaults_tree_impl(
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 488, in _create_defaults_tree_impl
    config_not_found_error(repo=repo, tree=root)
  File "C:\Users\myusername\AppData\Local\anaconda3\lib\site-packages\hydra\_internal\defaults_list.py", line 799, in config_not_found_error
    raise MissingConfigException(
hydra.errors.MissingConfigException: In 'single_tf': Could not load '/home/anicolson/config/paths.yaml'.

Model architecture adjustment problem

Hi! Thanks for your contribution. It is an excellent piece of work!

Your idea is great, and I want to test it on my own task. However, my corpus is in Chinese; do I need to adjust the tokenizer and the pre-trained BERT model? Will it work?

Thank you very much for your time and consideration. I eagerly look forward to your response.
