Comments (1)
I am interested in evaluating performance on the OntoNotes dataset using the provided trained BERT-base model (I am not training from scratch). To evaluate, I first need to run

```
./setup_training.sh <ontonotes/path/ontonotes-release-5.0> $data_dir
```

However, I am unsure how to specify `<ontonotes/path/ontonotes-release-5.0>` and `$data_dir`. What is the difference between the two? I have set `data_dir` to `.` and the `<ontonotes/path/ontonotes-release-5.0>` argument to the full path where I have stored ontonotes-release-5.0, but I am not sure this is correct. (I also tried making `data_dir` the path to the directory containing ontonotes-release-5.0 and passing just `ontonotes-release-5.0` as the first argument, but that didn't help.) Could someone provide an example of how to correctly specify these paths?
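For concreteness, here is a placeholder invocation. The paths are hypothetical, and treating `$data_dir` as a separate output directory (rather than the directory containing OntoNotes) is an assumption based on the script's usage line:

```shell
# Placeholder paths -- substitute your own.
# Arg 1: the root of the unpacked OntoNotes 5.0 release.
# Arg 2: data_dir, assumed here to be a separate output directory
#        where the processed training files get written.
ONTONOTES="$HOME/corpora/ontonotes-release-5.0"
data_dir="$HOME/coref_data"
mkdir -p "$data_dir"
# echo prints the command for inspection; drop "echo" to actually run it.
echo ./setup_training.sh "$ONTONOTES" "$data_dir"
```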
I tried running

```
GPU=0 python evaluate.py 'bert_base'
```

and found that it evaluates on 0 examples. I assume that is because there is an issue with the data paths. I would appreciate any help getting evaluation to work!

Update: I see that minimize_partition() in `minimize.py` currently writes 0 documents, which explains why we are evaluating on 0 examples. I am not sure whether this is the data-path issue I described above or something else altogether.
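As a quick sanity check on the `minimize.py` output (a sketch; the `*.jsonlines` naming and location are assumptions about where the minimized documents end up), the per-file document count is just the line count, since each minimized document is one JSON object per line:

```shell
# Each minimized document is one JSON object per line, so the line
# count of a .jsonlines file equals the number of documents written.
# data_dir is a placeholder -- point it at your actual output directory.
data_dir="${data_dir:-$HOME/coref_data}"
for f in "$data_dir"/*.jsonlines; do
  # If the glob matched nothing, $f is the literal pattern; skip it.
  [ -e "$f" ] || { echo "no .jsonlines files found in $data_dir"; break; }
  printf '%s: %s documents\n' "$f" "$(wc -l < "$f")"
done
```

If every file reports 0 documents, the conversion step upstream of `minimize.py` (not the evaluation itself) is the thing to debug.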
Hi there, can I ask how you managed to produce the temp files? @preethiseshadri518