dmis-lab / biosyn
ACL'2020: Biomedical Entity Representations with Synonym Marginalization
Home Page: https://arxiv.org/abs/2005.00239
License: MIT License
Dear Respected Sir,
I need your preprocessing script. Would you please provide it to me? I would be thankful.
Regards,
Haseeb Younas
Thank you for your work. It's been very useful to me.
When I access the data (NCBI-Disease, BC5CDR-Disease, BC5CDR-Chemical), the web page prompts "no preview, the file is in the owner's recycle bin". Could you provide another way to download the data?
If there are any misspelled words in the input, the model is unable to predict the correct ones; moreover, it may predict some random CUI keys.
To address this for our medical terms, can we utilize a high-quality spellchecker? Please suggest some.
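As a lightweight option, fuzzy matching against the dictionary's surface forms can catch simple misspellings before querying the model. A minimal sketch using only the standard library — the dictionary entries here are illustrative, not the real BioSyn dictionary:

```python
import difflib

# Illustrative surface-form -> CUI mapping (not the actual BioSyn dictionary).
dictionary = {
    "ataxia telangiectasia": "D001260",
    "breast cancer": "D001943",
    "ovarian cancer": "D010051",
}

def correct_mention(mention: str, cutoff: float = 0.8) -> str:
    """Return the closest known surface form, or the mention unchanged."""
    matches = difflib.get_close_matches(mention.lower(), dictionary, n=1, cutoff=cutoff)
    return matches[0] if matches else mention

print(correct_mention("ovaria cancer"))  # closest dictionary form: "ovarian cancer"
```

A dedicated spellchecker trained on biomedical text would handle harder cases, but this kind of edit-distance pass is often enough for single-character typos.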
Hi, thank you so much for your work!
I have a question about the composite predictions from the model. In your README example, the model predicts multiple identifiers, "D001260|208900", in the top 5. Does this mean that the model found both terms to be similar to the mention text based on probabilities, or is it providing alternate possible identifiers? I just want to understand what predicting multiple identifiers implies in BioSyn predictions. Thanks!
Hello
I am opening this issue to get access to your dictionary pre-processing code.
Regards
Shyama
Dear authors,
Could you please share the preprocessing scripts and MedDRA dictionaries for the TAC2017ADR dump? Thank you in advance!
Hi there, I'm following the NCBI-disease preprocessing procedure to preprocess the dataset from scratch. I would be happy if you could help me fine-tune your model on the SNOMED CT database [link].
Thanks,
Milad
May I ask you to provide more information about how to train/evaluate BioSyn on the MedMentions dataset, please?
Hi,
Thanks to your Git repo, I was able to make 10 runs on the NCBI corpus with BioSyn.
I only did a single parsing of the data, then 10 independent preprocessing + training + prediction.
My results (with your evaluation script) are Acc1=89.89 with a standard deviation of 0.64 (the variability of the results coming almost exclusively from the preprocessing, from Ab3P I guess).
I understand from your article that your Acc1=91.1 result is obtained on a single run, right?
If so, it seems consistent.
If not, would you have an idea where the small difference could come from? (In that case, I could provide my bash commands, but they are basically a copy of the ones in your README, without any modification of the options.)
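For reference, the mean and standard deviation over the 10 runs were computed as a sample statistic; this can be reproduced with the standard library. The per-run values below are placeholders chosen so their mean matches the reported 89.89, for illustration only:

```python
import statistics

# Placeholder per-run Acc@1 values for 10 independent runs (illustrative only;
# chosen so the mean matches the 89.89 reported above).
acc1_runs = [89.2, 90.1, 89.5, 90.5, 89.9, 89.3, 90.7, 89.6, 90.0, 90.1]

mean = statistics.mean(acc1_runs)
std = statistics.stdev(acc1_runs)  # sample (n-1) standard deviation
print(f"Acc@1 = {mean:.2f} +/- {std:.2f}")
```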
Kind regards
Hi
Thank you for your work!
Would it be possible to include information about the original filenames of the evaluation dataset in the predictions_eval.json file? I.e., to associate a given list of mentions in the predictions file with the original filename (through a key or something)?
I have been trying to compare BioSyn with BERN, but currently only the biosyn-ncbi-disease pre-trained model is available for BioSyn. Although I can train on the other datasets, bc5cdr-disease and bc5cdr-chemical (as you have mentioned and shared the scripts), are there any other pre-trained versions reported in the paper that could be shared?
In the following example, do "SpecificDisease", "Modifier" and "CompositeMention" play any role in the algorithm?
10441573||444|459||SpecificDisease||ovarian cancers||D010051
10441573||492|498||Modifier||cancer||D009369
10441573||655|683||CompositeMention||breast cancer|ovarian cancer||D001943|D010051
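For context on how these annotation lines decompose, the fields are separated by `||`, while the character span and composite IDs use a single `|` internally. A small parsing sketch (not the repository's own preprocessing code):

```python
def parse_annotation(line: str) -> dict:
    """Split one NCBI-disease-style annotation line into its fields."""
    pmid, span, mention_type, mention, cui = line.split("||")
    start, end = (int(x) for x in span.split("|"))
    return {
        "pmid": pmid,
        "start": start,
        "end": end,
        "type": mention_type,    # e.g. SpecificDisease, Modifier, CompositeMention
        "mention": mention,      # composite mentions keep "|" between sub-mentions
        "cuis": cui.split("|"),  # composite mentions carry multiple identifiers
    }

ann = parse_annotation("10441573||444|459||SpecificDisease||ovarian cancers||D010051")
print(ann["type"], ann["cuis"])  # SpecificDisease ['D010051']
```

Note that for the CompositeMention line both the mention text and the ID field split into two parts, one per sub-mention.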
Is it possible to obtain a relevance score for each prediction during inference?
If I understand correctly, we have a dictionary which maps "aliases"/"synonyms" to a list of corresponding CUIs.
The input of the algorithm is a string (a mention) and the output is a set of items from the dictionary:
{
"mention": "ataxia telangiectasia",
"predictions": [
{"name": "ataxia telangiectasia", "id": "D001260|208900"},
{"name": "ataxia telangiectasia syndrome", "id": "D001260|208900"},
{"name": "ataxia telangiectasia variant", "id": "C566865"},
{"name": "syndrome ataxia telangiectasia", "id": "D001260|208900"},
{"name": "telangiectasia", "id": "D013684"}
]
}
Since our target is to map the mention to a CUI, I'm wondering if there is any functionality that maps the above output to a single CUI?
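Absent a built-in resolver, one simple heuristic is to take the top-ranked prediction and split its composite ID on `|`. A sketch under the assumption that predictions are sorted best-first; the function name is hypothetical, not part of the BioSyn API:

```python
# Output structure as shown in the example above (truncated to three entries).
output = {
    "mention": "ataxia telangiectasia",
    "predictions": [
        {"name": "ataxia telangiectasia", "id": "D001260|208900"},
        {"name": "ataxia telangiectasia syndrome", "id": "D001260|208900"},
        {"name": "telangiectasia", "id": "D013684"},
    ],
}

def top_cuis(result: dict) -> list:
    """Return the CUI(s) of the highest-ranked prediction.

    A composite ID such as "D001260|208900" is split into its parts;
    whether to keep both or pick one is a task-specific decision.
    """
    best = result["predictions"][0]  # assumed sorted by similarity score
    return best["id"].split("|")

print(top_cuis(output))  # ['D001260', '208900']
```

This still leaves a composite entry ambiguous (here, a MeSH ID and an OMIM ID for the same concept), so a final single-CUI choice would need an extra rule, e.g. preferring one vocabulary.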
Hi,
I would like to reproduce the BioSyn results on the NCBI Disease Corpus (and to do an ablation study).
I was able to use your core method (+lowercasing) on this corpus (and some others), but without the resolution of composite mentions and acronyms I only get around 0.801 top-1 accuracy against your published 0.911.
Could you please send me a pre-processing script?
Kind regards,
Arnaud
Hello there. After reading the related paper, I have a question about the loss calculation. Formula 7 in the paper defines the marginal probability of the positive synonyms of a mention m. What if none of the top-k synonyms satisfy
EQUAL(m, n) = 1?
Then the marginal probability would be zero, and in Formula 8, log 0 would be infinite, which seems problematic.
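One common workaround for this degenerate case is to clamp the marginal probability away from zero before taking the log. A sketch of the idea only, not the authors' implementation:

```python
import math

def marginal_nll(probs, positives, eps=1e-12):
    """Negative log of the summed probability of positive candidates.

    probs:     candidate probabilities for one mention (summing to <= 1)
    positives: indices i with EQUAL(m, n_i) = 1 among the top-k candidates

    If no positive candidate was retrieved, the marginal is 0 and log(0)
    would be -inf, so the sum is clamped to at least `eps`, giving a
    large but finite loss.
    """
    marginal = sum(probs[i] for i in positives)
    return -math.log(max(marginal, eps))

# No positive among the top-k: finite (large) loss instead of infinity.
print(marginal_nll([0.5, 0.3, 0.2], positives=[]))
```

An alternative is to drop mentions whose top-k candidates contain no positive from the batch, which has the same effect of keeping the loss finite.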
Looking forward to your reply!