JAMR Parser and Generator
License: BSD 2-Clause "Simplified" License
./tokenize-anything.sh < ~/Documents/ZeroShot/sen/AFP_ENG_20030417.0764 > out_file
Invalid range "\x{0CE6}-\x{0BEF}" in transliteration operator at ./support/quote-norm.pl line 149
I ran into this issue. Could you please suggest how to solve it? Cheers. @jflanigan
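For reference (my own check, not part of JAMR): the range in that error is inverted, since U+0CE6 (KANNADA DIGIT ZERO) is a higher code point than U+0BEF (TAMIL DIGIT NINE), and Perl's tr/// rejects a range whose start is above its end. A quick Python sketch confirming the ordering:

```python
# The two code points from the quote-norm.pl error message.
# U+0CE6 is KANNADA DIGIT ZERO; U+0BEF is TAMIL DIGIT NINE,
# so the tr/// range runs backwards and Perl rejects it.
start, end = 0x0CE6, 0x0BEF
print(f"U+{start:04X} > U+{end:04X}: {start > end}")
```

Reordering or correcting the endpoints in quote-norm.pl (lower code point first) is the kind of fix Perl expects, though which digits the script meant to normalize is for the maintainers to confirm.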
I wanted to train JAMR on the recently released LDC dataset, but ran into issues while trying. PREPROCESS.sh and TRAIN.sh both failed silently after generating a few preprocessed files. I am running macOS 10.12.5.
To combine the dataset's .txt files into three unified training, dev, and test files, I simply modified the make_splits.sh file in jamr/scripts/preprocessing/LDC2015E86 to fix the paths and add the new files, then ran my modified version. This caused some problems.
Preprocessing kept breaking silently, so I opened the script and ran each command individually (I may have done the same with each script inside the preprocessing step). Eventually, I found the error that was killing preprocessing: character-encoding issues introduced by make_splits.sh.
To fix this, I found and replaced the following characters in dev.txt, training.txt, and test.txt:
\x{93} -> "
\x{85} -> .
\x{92} -> '
’ -> '
To avoid this, I would recommend against concatenating the text files with bash cat as is done in make_splits.sh. Use a language that handles the encoding better, such as python3, or do the concatenation by hand.
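For anyone facing the same cleanup, the manual find-and-replace above can be scripted. The following Python 3 sketch uses my mapping from above; the file names and the commented rewrite loop are illustrative, so adjust them to your layout:

```python
# Map stray Windows-1252 bytes and curly quotes to ASCII equivalents,
# matching the manual replacements described above.
replacements = {
    "\x93": '"',    # stray left double quote
    "\x85": ".",    # stray ellipsis byte
    "\x92": "'",    # stray right single quote
    "\u2019": "'",  # curly apostrophe
}

def clean(text: str) -> str:
    """Apply every replacement to one string."""
    for bad, good in replacements.items():
        text = text.replace(bad, good)
    return text

# Illustrative rewrite loop for the three split files:
# for name in ["training.txt", "dev.txt", "test.txt"]:
#     with open(name, encoding="utf-8", errors="replace") as f:
#         text = f.read()
#     with open(name, "w", encoding="utf-8") as f:
#         f.write(clean(text))
```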
Despite this fix, I believe PREPROCESS.sh still wouldn't run straight through, but I successfully ran each inner command consecutively and completed preprocessing. I then commented the preprocessing step out of TRAIN.sh, because it took 3 hours on the large new dataset, and ran TRAIN.sh.
There I encountered the final issue: error: command 'wf' not found during jamr/scripts/training/cmd.conceptTable.train. wf is a small shell script in the same directory, so I fixed the error by changing wf to ./wf in cmd.conceptTable.train.
Hope this helps someone. If you have trained JAMR on the new LDC dataset, I would love to compare smatch scores. I received lackluster results in parsing after training, and I'd like to know if others experience the same.
P.S. With 16GB RAM and quad i7 at 3.4GHz, it took about 20 hours to train 10 iterations.
Using the splits given in the dataset:
----- Evaluation on Dev: Smatch (all stages) -----
Precision: 0.708
Recall: 0.651
Document F-score: 0.678
----- Evaluation on Dev: Smatch (gold concept ID) -----
Precision: 0.805
Recall: 0.718
Document F-score: 0.759
----- Evaluation on Dev: Smatch (oracle) -----
Precision: 0.871
Recall: 0.833
Document F-score: 0.851
----- Evaluation on Dev: Spans -----
Precision: 0.764265094281678
Recall: 0.799165061014772
F1: 0.7813255470785846
----- Evaluation on Test: Smatch (all stages) -----
Precision: 0.700
Recall: 0.643
Document F-score: 0.670
----- Evaluation on Test: Smatch (gold concept ID) -----
Precision: 0.795
Recall: 0.705
Document F-score: 0.747
----- Evaluation on Test: Smatch (oracle) -----
Precision: 0.870
Recall: 0.832
Document F-score: 0.851
----- Evaluation on Test: Spans -----
Precision: 0.7584370512206797
Recall: 0.7950197578874741
F1: 0.7762976573265962
The following link in setup
is unfortunately dead:
wget http://demo.clab.cs.cmu.edu/cdec/cdec-2014-10-12.tar.gz
I ended up following the instructions to download it here: https://github.com/redpony/cdec.
After the compilation and split, I tried to run PREPROCESS.sh and got the above error.
Does anyone have an idea what the problem might be?
I encounter this warning when parsing. Does anyone know what it means? How much will it hurt performance?
One of my students ran some test sentences through JAMR, and we noticed that multiword named entities (possibly just out-of-vocabulary ones) were represented backwards. E.g., with "John Smith" as the input, the resulting AMR was
(p / person
:name (n / name
:op1 "Smith"
:op2 "John"))
Is this due to a bug in the NER heuristics?
When parsing sentences containing the token '24/7', the JAMR parser returned AMRs with things like:
:ARG1 24/7
This results in errors when attempting to further process these AMRs; for example, running smatch returns the error:
Traceback (most recent call last):
File "smatch.py", line 927, in
main(args)
File "smatch.py", line 827, in main
amr1.rename_node(prefix1)
AttributeError: 'NoneType' object has no attribute 'rename_node'
This can be avoided by adding quotation marks in the parser output to treat the problematic token as a string, e.g.
:ARG1 "24/7"
This appears to be the approach used in the gold AMR data.
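As a stopgap, the quoting can be done in post-processing before running smatch. This is a rough Python sketch of my own (the regex and the function name quote_slash_constants are mine, not part of JAMR or smatch), which wraps unquoted role values containing a slash:

```python
import re

# Wrap unquoted constants containing "/" (e.g. :ARG1 24/7) in quotes so
# smatch can parse them. Values that are already quoted are left alone.
PATTERN = re.compile(r'(:[A-Za-z0-9-]+\s+)(\S*/\S*)')

def quote_slash_constants(line: str) -> str:
    def repl(m):
        role, value = m.group(1), m.group(2)
        if value.startswith('"'):
            return m.group(0)  # already quoted
        return f'{role}"{value}"'
    return PATTERN.sub(repl, line)

print(quote_slash_constants(":ARG1 24/7"))  # :ARG1 "24/7"
```

Note that the "(v / concept)" syntax is untouched, because those slashes are surrounded by whitespace, which this pattern does not match.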
While trying to train, I came across this error:
panic: swash_fetch got swatch of unexpected bit width, slen=1024, needents=64 at /home/tempuser/AMRParsing/jamr/tools/cdec/corpus/support/quote-norm.pl line 149, line 1.
Hello,
I would like to use the JAMR parser on Mac OS X.
- Is it possible?
- What modifications are necessary?
Best regards,
Bernard.
The original JAMR was installed on the server in 2015 or 2016, so some packages were broken or outdated. The default setup script in the JAMR repo points to a version that is no longer available.
I have installed JAMR again on my MacBook (Mojave 10.14.6), uploaded a new version of JAMR, and share the details below.
First, please ensure the following environment:
Java 8 (download from the Oracle website)
sbt 1.0.2, Scala 2.11.8 (use SDKMAN! to install, e.g. sdk install scala 2.11.8)
Then, modify these files:
jamr/project/build.properties:
sbt.version=1.0.2
jamr/build.sbt:
// import AssemblyKeys._
// assemblySettings
name := "jamr"
version := "0.1-SNAPSHOT"
organization := "edu.cmu.lti.nlp"
scalaVersion := "2.11.8"
crossScalaVersions := Seq("2.11.8","2.11.12", "2.12.4","2.10.6")
libraryDependencies ++= Seq(
"com.jsuereth" %% "scala-arm" % "2.0",
"edu.stanford.nlp" % "stanford-corenlp" % "3.4.1",
"edu.stanford.nlp" % "stanford-corenlp" % "3.4.1" classifier "models",
"org.scala-lang.modules" %% "scala-parser-combinators" % "1.1.2",
"org.scala-lang.modules" %% "scala-pickling" % "0.10.1"
// "org.scala-lang" % "scala-swing" % "2.10.3"
)
jamr/project/plugins.sbt:
resolvers += "Sonatype releases" at "https://oss.sonatype.org/content/repositories/releases"
resolvers += "Sonatype snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/"
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")
// addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.6.0")
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "5.2.4")
PS: the sbt-idea plugin has been unsupported since it was merged into the official Scala plugin about two years ago, and project files generated by outdated versions of the plugin are not compatible with IntelliJ IDEA starting with version 14, if I recall correctly.
Finally, update the scala version in the CLASSPATH env variable in scripts/config.sh
(Thanks @danielhers and @zhangzx-sjtu for pointing this out.)
export CLASSPATH=".:${JAMR_HOME}/target/scala-2.11/jamr-assembly-0.1-SNAPSHOT.jar"
After doing the above, install JAMR with:
. scripts/config.sh
./setup
If the problem arises, try the 2.11.12 version and modify the corresponding Scala-version lines in jamr/build.sbt and scripts/config.sh before running ./setup in the JAMR directory. Thanks @LeonardoEmili for a good summary (#43 (comment)) of a successful setup and for creating a PR on this issue.
This issue originated from an issue in the CoNLL 2019 shared task, where @danielhers gave many suggestions.
Hi, this is more of a question than an issue. The aligner gives me the message "WARNING ADDING ANOTHER SPAN TO NODE" for a couple of sentences. What does it mean?
Regards,
Marco
Hi, I was trying to run scripts/PARSE.sh < input_file > output_file 2> output_file.err with just one test sentence in the input file, but the output file was empty, and the log file contained the following:
### Tokenizing input ###
Unicode character 0xfdd3 is illegal at /home/nahgnaw/jamr/tools/cdec/corpus/support/quote-norm.pl line 56.
### Running NER system ###
~/jamr/tools/IllinoisNerExtended ~/jamr
Adding feature: Forms
Adding feature: Capitalization
Adding feature: WordTypeInformation
Adding feature: Affixes
Adding feature: PreviousTag1
Adding feature: PreviousTag2
Adding feature: PreviousTagPatternLevel1
Adding feature: PreviousTagPatternLevel2
Adding feature: PrevTagsForContext
Adding feature: PredictionsLevel1
Adding feature: GazetteersFeatures
Adding feature: BrownClusterPaths
Loading gazetteers....
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
loading gazetteer:....ner-ext/KnownLists/WikiPeople.lst
loading gazetteer:....ner-ext/KnownLists/ordinalNumber.txt
loading gazetteer:....ner-ext/KnownLists/WikiSongs.lst
loading gazetteer:....ner-ext/KnownLists/WikiManMadeObjectNames.lst
loading gazetteer:....ner-ext/KnownLists/WikiArtWorkRedirects.lst
loading gazetteer:....ner-ext/KnownLists/known_name.lst
loading gazetteer:....ner-ext/KnownLists/Occupations.txt
loading gazetteer:....ner-ext/KnownLists/WikiLocations.lst
loading gazetteer:....ner-ext/KnownLists/known_state.lst
loading gazetteer:....ner-ext/KnownLists/WikiCompetitionsBattlesEventsRedirects.lst
loading gazetteer:....ner-ext/KnownLists/WikiOrganizationsRedirects.lst
loading gazetteer:....ner-ext/KnownLists/known_nationalities.lst
loading gazetteer:....ner-ext/KnownLists/WikiManMadeObjectNamesRedirects.lst
loading gazetteer:....ner-ext/KnownLists/WikiSongsRedirects.lst
loading gazetteer:....ner-ext/KnownLists/cardinalNumber.txt
loading gazetteer:....ner-ext/KnownLists/currencyFinal.txt
loading gazetteer:....ner-ext/KnownLists/known_names.big.lst
loading gazetteer:....ner-ext/KnownLists/known_jobs.lst
loading gazetteer:....ner-ext/KnownLists/known_title.lst
loading gazetteer:....ner-ext/KnownLists/WikiFilmsRedirects.lst
loading gazetteer:....ner-ext/KnownLists/temporal_words.txt
loading gazetteer:....ner-ext/KnownLists/measurments.txt
loading gazetteer:....ner-ext/KnownLists/known_place.lst
loading gazetteer:....ner-ext/KnownLists/known_country.lst
loading gazetteer:....ner-ext/KnownLists/known_corporations.lst
loading gazetteer:....ner-ext/KnownLists/WikiOrganizations.lst
loading gazetteer:....ner-ext/KnownLists/VincentNgPeopleTitles.txt
loading gazetteer:....ner-ext/KnownLists/WikiFilms.lst
loading gazetteer:....ner-ext/KnownLists/WikiLocationsRedirects.lst
loading gazetteer:....ner-ext/KnownLists/WikiArtWork.lst
loading gazetteer:....ner-ext/KnownLists/WikiPeopleRedirects.lst
loading gazetteer:....ner-ext/KnownLists/WikiCompetitionsBattlesEvents.lst
loading gazetteer:....ner-ext/KnownLists/KnownNationalities.txt
found 33 gazetteers
1288301 words added
95262 words added
85963 words added
Working parameters are:
inferenceMethod=GREEDY
beamSize=5
thresholdPrediction=false
predictionConfidenceThreshold=-1.0
labelTypes
PER ORG LOC MISC
logging=false
debuggingLogPath=null
forceNewSentenceOnLineBreaks=true
keepOriginalFileTokenizationAndSentenceSplitting=false
taggingScheme=BILOU
tokenizationScheme=DualTokenizationScheme
pathToModelFile=data/Models/CoNLL/finalSystemBILOU.model
Brown clusters resource:
-Path: brown-clusters/brown-english-wikitext.case-intact.txt-c1000-freq10-v3.txt
-WordThres=5
-IsLowercased=false
Brown clusters resource:
-Path: brown-clusters/brownBllipClusters
-WordThres=5
-IsLowercased=false
Brown clusters resource:
-Path: brown-clusters/brown-rcv1.clean.tokenized-CoNLL03.txt-c1000-freq1.txt
-WordThres=5
-IsLowercased=false
Tagging file: /tmp/jamr-25472.snt.tmp
Reading model file : data/Models/CoNLL/finalSystemBILOU.model.level1
Reading model file : data/Models/CoNLL/finalSystemBILOU.model.level2
Extracting features for level 2 inference
Done - Extracting features for level 2 inference
~/jamr
Was it because of the unicode error, or is there something else I'm missing?
Thanks!
I'm trying to run jamr on provided models, however I get the following error:
### Running JAMR ###
Stage1 features = List(bias, corpusIndicator, length, corpusLength, conceptGivenPhrase, count, phraseGivenConcept, phraseConceptPair, phrase, firstMatch, numberIndicator, sentenceMatch, andList, pos, posEvent, phraseConceptPairPOS, badConcept)
Exception in thread "main" java.io.FileNotFoundException: /home/reza/Documents/jamr-Semeval-2016/models/Semeval-2016_LDC2014T12/wordCounts.train (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at scala.io.Source$.fromFile(Source.scala:90)
at scala.io.Source$.fromFile(Source.scala:75)
at scala.io.Source$.fromFile(Source.scala:53)
at edu.cmu.lti.nlp.amr.ConceptInvoke.package$.Decoder(package.scala:25)
at edu.cmu.lti.nlp.amr.AMRParser$.main(AMRParser.scala:113)
at edu.cmu.lti.nlp.amr.AMRParser.main(AMRParser.scala)
Any help on how to get past this would be much appreciated.
I cannot find the output of PARSE.sh. Where can I find it?
I believe I have followed all the steps in the readme successfully. I am now trying the last step,
scripts/PARSE.sh < input_file > output_file 2> output_file.err
I have created an input file with a few sentences on separate lines, with no blank lines.
When I run the script, it gets to ### Running JAMR ### and then fails with the error I have pasted below:
Reading weights
done
Sentence: This is a very short test.
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
at edu.cmu.lti.nlp.amr.AMRParser$$anonfun$main$3.apply(AMRParser.scala:358)
at edu.cmu.lti.nlp.amr.AMRParser$$anonfun$main$3.apply(AMRParser.scala:211)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at edu.cmu.lti.nlp.amr.AMRParser$.main(AMRParser.scala:211)
at edu.cmu.lti.nlp.amr.AMRParser.main(AMRParser.scala)
When I parse sentences with JAMR, I sometimes get minus ("-") concepts which can have relations.
For example:
# ::snt There is no basketball player on the court floor and no one is grabbing the ball
# ::tok There is no basketball player on the court floor and no one is grabbing the ball
# ::alignments 15-16|0.1.1 13-14|0.1 10-11|0.0.1 9-10|0 8-9|0.0 7-8|0.0.0 4-5|0.1.0.0+0.1.0 3-4|0.1.0.0.0 2-3|0.0.1.0 ::annotator JAMR dev v0.3 ::date 2019-01-16T14:53:26.909
(a / and
:op1 (f / floor
:mod (c / court)
:mod (- / -
:domain-of -))
:op2 (g / grab-01
:ARG0 (t / thing
:ARG0-of (p / play-12
:ARG1 (b2 / basketball)))
:ARG1 (b / ball)))
Also, smatch complains about the - / - concepts.
Hi
I would like to use JAMR as a library in Java Maven code. I have set up all the required paths and models. Is there any way to avoid calling the scripts, call JAMR directly from code, and get the result as an object?
Thanks in advance
When I run ./setup, I get this error
--2018-06-13 22:35:32-- https://github.com/jflanigan/jamr/releases/download/JAMR_v0.2/IllinoisNerExtended-2.7.tgz0
Resolving github.com (github.com)... 192.30.255.112, 192.30.255.113
Connecting to github.com (github.com)|192.30.255.112|:443... connected.
OpenSSL: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
Unable to establish SSL connection.
I tried including --no-check-certificate in the wget command for the Illinois tagger, but no dice. Any idea how to fix this?
Thanks!
Hi guys
I'm trying to parse a file of ~500k lines, and I always get the following error:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at edu.illinois.cs.cogcomp.LbjNer.LbjTagger.NEWord.addTokenToSentence(NEWord.java:156)
at edu.illinois.cs.cogcomp.LbjNer.ParsingProcessingData.PlainTextReader.parseText(PlainTextReader.java:33)
at edu.illinois.cs.cogcomp.LbjNer.ParsingProcessingData.PlainTextReader.parsePlainTextFile(PlainTextReader.java:24)
at edu.illinois.cs.cogcomp.LbjNer.LbjTagger.NETagPlain.tagData(NETagPlain.java:38)
at edu.illinois.cs.cogcomp.LbjNer.LbjTagger.NerTagger.main(NerTagger.java:21)
Is there a way to avoid the OOM issue without allocating more memory to the JVM?
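One workaround to consider (a sketch of my own, not something JAMR provides): the stack trace suggests the NER tagger loads the whole input at once, so splitting the 500k-line file into fixed-size chunks and running scripts/PARSE.sh on each chunk separately may keep the heap bounded. The chunk size and naming scheme below are arbitrary:

```python
# Split a large input file into fixed-size chunks (one output file per
# chunk) so each chunk can be fed to scripts/PARSE.sh separately.
def split_file(path: str, lines_per_chunk: int = 10000) -> list:
    def flush(idx: int, lines: list) -> str:
        name = f"{path}.part{idx:04d}"  # e.g. input.txt.part0000
        with open(name, "w", encoding="utf-8") as out:
            out.writelines(lines)
        return name

    names, chunk = [], []
    with open(path, encoding="utf-8") as src:
        for line in src:
            chunk.append(line)
            if len(chunk) == lines_per_chunk:
                names.append(flush(len(names), chunk))
                chunk = []
    if chunk:  # write the final, partial chunk
        names.append(flush(len(names), chunk))
    return names
```

Each .partNNNN file can then go through PARSE.sh in turn, and the outputs concatenated afterwards.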
Also, is it possible to get alignments between a text file and the resulting parsed AMR file without running the align script, especially since the output of JAMR isn't in the format that the align script expects?
Thanks
Kris
Hi Jeff,
According to https://github.com/jflanigan/jamr/blob/Semeval-2016/docs/Hand_Alignments.md, hand alignments are annotated for LDC2013E117. We would like to work on the alignment problem, but we don't have access to the LDC2013E117 data; we do have LDC2014T12. Looking at scripts/hand_alignments/LDC2013E117/snt.ids, it seems the alignments were created on the proxy portion, so I'm wondering if it's possible to adapt the LDC2013E117 alignments to LDC2014T12.
I tried replacing tar -xzOf "$JAMR_HOME/data/LDC2013E117.tgz" ./LDC2013E117_DEFT_Phase_1_AMR_Annotation_R3/data/deft-amr-release-r3-proxy.txt
with cat $JAMR_HOME/data/amr_anno_1.0/data/unsplit/amr-release-1.0-proxy.txt,
but many of the patches in scripts/hand_alignments/LDC2013E117/patch.hand_align were rejected.
Is there anything else I should pay attention to in order to get this working on LDC2014T12? Thanks!
Regards,
Hi,
In the README it says there should be a parser model trained on the Little Prince data, with a corresponding config file. Is it correct that it isn't there, and if so, could you please share it? It would be very helpful!
Many thanks.
Hi,
Due to the recent SBT move to HTTPS, lots of downloads fail. Any plan to cope with this?
Also, the Scala compiler is not found when running ./compile. I have modified parts of build.sbt to deal with the first point.
unresolved dependency: org.scala-lang#scala-compiler;2.11.3: not found
at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:217)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:126)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:125)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:103)
at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:48)
at sbt.IvySbt$$anon$3.call(Ivy.scala:57)
at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:98)
at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:81)
at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:102)
at xsbt.boot.Using$.withResource(Using.scala:11)
at xsbt.boot.Using$.apply(Using.scala:10)
at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:62)
at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:52)
at xsbt.boot.Locks$.apply0(Locks.scala:31)
at xsbt.boot.Locks$.apply(Locks.scala:28)
at sbt.IvySbt.withDefaultLogger(Ivy.scala:57)
at sbt.IvySbt.withIvy(Ivy.scala:98)
at sbt.IvySbt.withIvy(Ivy.scala:94)
at sbt.IvySbt$Module.withModule(Ivy.scala:115)
at sbt.IvyActions$.update(IvyActions.scala:125)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1223)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1221)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1244)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1242)
at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:35)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1246)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1241)
at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:45)
at sbt.Classpaths$.cachedUpdate(Defaults.scala:1249)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1214)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1192)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:42)
at sbt.std.Transform$$anon$4.work(System.scala:64)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.Execute.work(Execute.scala:244)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Thanks in advance.
I'm trying to parse a given text using the following command.
scripts/PARSE.sh < ../text.in > ../text.out 2> output_file.err
The model that I was trying to use was LDC2014T12. But I get the following error.
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
at edu.cmu.lti.nlp.amr.AMRParser$$anonfun$main$3.apply(AMRParser.scala:307)
at edu.cmu.lti.nlp.amr.AMRParser$$anonfun$main$3.apply(AMRParser.scala:192)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at edu.cmu.lti.nlp.amr.AMRParser$.main(AMRParser.scala:192)
at edu.cmu.lti.nlp.amr.AMRParser.main(AMRParser.scala)
I tried using the other models provided, but the same error occurred.
I also tried scripts/EVAL.sh; it gave the same error.
Any help?
Thanks!
Hi,
I'm having problems running the setup script.
It fails and reports these unresolved dependencies:
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: com.eed3si9n#sbt-assembly;0.10.2: not found
[warn] :: com.github.mpeltonen#sbt-idea;1.5.2: not found
[warn] :: com.typesafe.sbteclipse#sbteclipse-plugin;2.4.0: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
I'm running Scala 2.12.3 and sbt 1.0.2 on macOS Sierra 10.12.6.
Any clue on how to fix this? I'm not really familiar with Scala.
Thanks
While running ./setup, the connection to github-cloud.s3.amazonaws.com was refused, like this:
Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 52.216.81.88
Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|52.216.81.88|:443... failed: Connection refused.
If I manually visit github-cloud.s3.amazonaws.com in a browser, it says:
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>FD0C2818226FFC6E</RequestId><HostId>xgNMiNLCyhvwoiva6l30OhOtKUONyQPykNljIVGQcjJkL9rfe45gmo7lCXOFWd/kpJGbnSsWUcc=</HostId></Error>
Is this normal?
On a small test sentence, JAMR ran fine, but on a harder document it produced lots of array-out-of-bounds errors. Is this serious? If the syntactic dependency parse fails, does the AMR parser always return an empty semantic graph?
This was running scripts/PARSE.sh < LICENSE.txt > LICENSE.out
Hi, thanks for this tool.
I got this error while running the aligner:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException
at edu.cmu.lti.nlp.amr.CorpusTool$$anonfun$main$1.apply(CorpusTool.scala:48)
at edu.cmu.lti.nlp.amr.CorpusTool$$anonfun$main$1.apply(CorpusTool.scala:43)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at edu.cmu.lti.nlp.amr.CorpusTool$.main(CorpusTool.scala:43)
at edu.cmu.lti.nlp.amr.CorpusTool.main(CorpusTool.scala)
(I removed the last line of ALIGN.sh to keep the /tmp files.) Looking at the /tmp/*.tok file, it seems the last AMR graph is not printed (while its ::snt and ::id are printed).
I tried removing the last sentence and running it again, it didn't help. Same behavior.
I tried hacking it doing the tokenization by myself; it didn't help.
I managed to run it in the first sentence of the same dataset without problems.
I managed to run it with other AMR datasets.
Any ideas?
thank you,
Miguel
ERROR MESSAGE:
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: com.eed3si9n#sbt-assembly;0.10.2: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] Note: Some unresolved dependencies have extra attributes. Check that these dependencies exist with the requested attributes.
[warn] com.eed3si9n:sbt-assembly:0.10.2 (sbtVersion=0.13, scalaVersion=2.10)
[warn]
sbt.ResolveException: unresolved dependency: com.eed3si9n#sbt-assembly;0.10.2: not found
at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:217)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:126)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:125)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:103)
at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:48)
at sbt.IvySbt$$anon$3.call(Ivy.scala:57)
at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:98)
at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:81)
at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:102)
at xsbt.boot.Using$.withResource(Using.scala:11)
at xsbt.boot.Using$.apply(Using.scala:10)
at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:62)
at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:52)
at xsbt.boot.Locks$.apply0(Locks.scala:31)
at xsbt.boot.Locks$.apply(Locks.scala:28)
at sbt.IvySbt.withDefaultLogger(Ivy.scala:57)
at sbt.IvySbt.withIvy(Ivy.scala:98)
at sbt.IvySbt.withIvy(Ivy.scala:94)
at sbt.IvySbt$Module.withModule(Ivy.scala:115)
at sbt.IvyActions$.update(IvyActions.scala:125)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1223)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1221)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1244)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1242)
at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:35)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1246)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1241)
at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:45)
at sbt.Classpaths$.cachedUpdate(Defaults.scala:1249)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1214)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1192)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:42)
at sbt.std.Transform$$anon$4.work(System.scala:64)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.Execute.work(Execute.scala:244)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
error sbt.ResolveException: unresolved dependency: com.eed3si9n#sbt-assembly;0.10.2: not found
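One thing that may be worth trying (an assumption, not confirmed in this thread: old sbt plugins like sbt-assembly were published to an Ivy-style plugin repository whose canonical URL has since moved to HTTPS) is adding an explicit resolver in project/plugins.sbt ahead of the addSbtPlugin line:

```scala
// project/plugins.sbt -- hypothetical addition, not the repo's actual file
resolvers += Resolver.url(
  "sbt-plugin-releases",
  url("https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/")
)(Resolver.ivyStylePatterns)

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.10.2")
```

If the plugin still fails to resolve, the .sbt/boot update.log mentioned below usually names the exact URLs sbt attempted.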
Please let me know if you need any more information.
Thanks!
I got this result from the parser, which seems like a bug. The sentence was taken from the DUC 2004 dataset.
# ::snt There were no boos.
# ::tok There were no boos .
# ::alignments 2-3|0 ::annotator JAMR dev v0.3 ::date 2017-11-19T19:54:42.228
# ::node 0 - 2-3
# ::root 0 -
(- / -)
I have successfully installed the JAMR parser in Google Colab and I am using the LDC2014T12 pretrained model. I was running some sample sentences to understand where the AMR parser fails. When I try sentences with words like "doctor", "sandwich", "medicine", etc., I get an empty graph, even though these words are present in the pretrained model.
I am not sure why JAMR fails when a particular word is in the model. I don't think I am making any mistake running the parser, because it works fine for a few other sentences. Could you please let me know the possible reason why this may happen, or what I can do to rectify it?
Thank you.
I downloaded the JAMR parser.
git clone https://github.com/jflanigan/jamr.git
git checkout Semeval-2016
Then I ran the command ./setup, but the following error occurred:
:::: ERRORS
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-lang/scala-library/2.10.4/scala-library-2.10.4.pom
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-lang/scala-library/2.10.4/scala-library-2.10.4.jar
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-lang/scala-compiler/2.10.4/scala-compiler-2.10.4.pom
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-lang/scala-compiler/2.10.4/scala-compiler-2.10.4.jar
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/jline/jline/2.11/jline-2.11.pom
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/jline/jline/2.11/jline-2.11.jar
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/apache/ivy/ivy/2.3.0/ivy-2.3.0.pom
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/apache/ivy/ivy/2.3.0/ivy-2.3.0.jar
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/com/jcraft/jsch/0.1.46/jsch-0.1.46.pom
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/com/jcraft/jsch/0.1.46/jsch-0.1.46.jar
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-sbt/test-interface/1.0/test-interface-1.0.pom
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar
:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
unresolved dependency: org.scala-lang#scala-library;2.10.4: not found
unresolved dependency: org.scala-lang#scala-compiler;2.10.4: not found
unresolved dependency: jline#jline;2.11: not found
unresolved dependency: org.apache.ivy#ivy;2.3.0: not found
unresolved dependency: com.jcraft#jsch;0.1.46: not found
unresolved dependency: org.scala-sbt#test-interface;1.0: not found
Error during sbt execution: Error retrieving required libraries
(see /home/teqip-ii-cse-nlp-01-01/.sbt/boot/update.log for complete log)
Error: Could not retrieve sbt 0.13.5
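A likely explanation (an assumption based on the URLs in the log, not confirmed in this thread): Maven Central stopped serving plain HTTP in January 2020, so the hard-coded http://repo1.maven.org URLs now come back with error responses and the sbt bootstrap fails. A common workaround is to override sbt's default repositories with HTTPS mirrors in ~/.sbt/repositories:

```
[repositories]
  local
  maven-central: https://repo1.maven.org/maven2/
  sbt-plugin-releases: https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
  typesafe-ivy-releases: https://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
```

Launching sbt with -Dsbt.override.build.repos=true makes this file take precedence over any resolvers hard-coded in the build.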
What can I do to resolve this issue?
Thanks in advance.
~/jamr/scripts/preprocessing ~/jamr
It seems to have terminated during the preprocessing step.
Appreciate your help.
Hello,
When running ./setup to install, I encounter the following error from the ./compile command:
scala.reflect.internal.MissingRequirementError: object scala.runtime in compiler mirror not found.
Is this a bug, something I can work around, or very strange indeed (possibly a problem on my end)?
thanks, Andrew
Hi, I followed the instruction and tried to align the following AMR:
cat amr_input
(f / flower
:mod (e / even)
:ARG0-of (h / have-03
:ARG1 (t / thorn)))
using the command "scripts/ALIGN.sh < amr_input".
I get this warning:
### Tokenizing ###
which: no uconv in (/sbin:/bin:/usr/sbin:/usr/bin)
[...]/jamr/tools/cdec/corpus/support/utf8-normalize.sh: Cannot find ICU uconv (http://site.icu-project.org/) ... falling back to iconv. Quality may suffer.
but I don't get any output. Is uconv necessary for the aligner to work, or is there something else going wrong here? The parser works properly.
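For what it's worth, that warning by itself is non-fatal: the tokenizer falls back to iconv and should still produce output, so the missing output is probably a separate failure. A quick way to check the environment (a generic sketch, not part of JAMR's scripts):

```shell
# Check whether ICU's uconv is on PATH; JAMR's cdec tokenizer prefers it
# and falls back to iconv (with the warning above) when it is missing.
if command -v uconv >/dev/null 2>&1; then
  UCONV_STATUS=found
else
  # e.g. `apt-get install icu-devtools` (Debian/Ubuntu) or `brew install icu4c` (macOS)
  UCONV_STATUS=missing
fi
echo "uconv: $UCONV_STATUS"
```

If uconv is missing, installing the ICU command-line tools and making sure they are on PATH removes the warning; if the aligner still produces nothing, the problem is elsewhere in ALIGN.sh.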
Thanks in advance,
Marco