Coreference resolution for English, French, German and Polish, optimised for limited training data and easily extensible for further languages

License: MIT License

Coreferee

Author: Richard Paul Hudson

1. Introduction

1.1 The basic idea

Coreferences are situations where two or more words within a text refer to the same entity, e.g. John went home because he was tired. Resolving coreferences is an important general task within the natural language processing field.

Coreferee is a Python 3 library (tested with versions 3.6 to 3.11) that is used together with spaCy (tested with versions 3.0.0 to 3.5.0) to resolve coreferences within English, French, German and Polish texts. It is designed so that it is easy to add support for new languages. It uses a mixture of neural networks and programmed rules.

The library was originally developed at msg systems and was also maintained for a while at Explosion AI.

1.2 Getting started

1.2.1 English

Presuming you have already installed spaCy and one of the English spaCy models, install Coreferee from the command line by typing:

python3 -m pip install coreferee
python3 -m coreferee install en

Note that:

  • the required command may be python rather than python3 on some operating systems;
  • in order to use the transformer-based spaCy model en_core_web_trf with Coreferee, you will also need to install the spaCy model en_core_web_lg (see the explanation in section 1.4.2).
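
If en_core_web_lg is not already present on your system, it can be downloaded in the usual spaCy way, for example:

python3 -m spacy download en_core_web_lg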

Then open a Python prompt (type python3 or python at the command line):

>>> import spacy
>>> nlp = spacy.load('en_core_web_trf')
>>> nlp.add_pipe('coreferee')
<coreferee.manager.CorefereeBroker object at 0x000002DE8E9256D0>
>>>
>>> doc = nlp("Although he was very busy with his work, Peter had had enough of it. He and his wife decided they needed a holiday. They travelled to Spain because they loved the country very much.")
>>>
>>> doc._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
1: work(7), it(14)
2: [He(16); wife(19)], they(21), They(26), they(31)
3: Spain(29), country(34)
>>>
>>> doc[16]._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
2: [He(16); wife(19)], they(21), They(26), they(31)
>>>
>>> doc._.coref_chains.resolve(doc[31])
[Peter, wife]
>>>

1.2.2 French

Presuming you have already installed spaCy and one of the French spaCy models, install Coreferee from the command line by typing:

python3 -m pip install coreferee
python3 -m coreferee install fr

Note that the required command may be python rather than python3 on some operating systems.

Then open a Python prompt (type python3 or python at the command line):

>>> import spacy
>>> nlp = spacy.load('fr_core_news_lg')
>>> nlp.add_pipe('coreferee')
<coreferee.manager.CorefereeBroker object at 0x000001F556B4FF10>
>>>
>>> doc = nlp("Même si elle était très occupée par son travail, Julie en avait marre. Alors, elle et son mari décidèrent qu'ils avaient besoin de vacances. Ils allèrent en Espagne car ils adoraient le pays")
>>>
>>> doc._.coref_chains.print()
0: elle(2), son(7), Julie(10), elle(17), son(19)
1: travail(8), en(11)
2: [elle(17); mari(20)], ils(23), Ils(29), ils(34)
3: Espagne(32), pays(37)
>>>
>>> doc[17]._.coref_chains.print()
0: elle(2), son(7), Julie(10), elle(17), son(19)
2: [elle(17); mari(20)], ils(23), Ils(29), ils(34)
>>>
>>> doc._.coref_chains.resolve(doc[34])
[Julie, mari]
>>>

1.2.3 German

Presuming you have already installed spaCy and one of the German spaCy models, install Coreferee from the command line by typing:

python3 -m pip install coreferee
python3 -m coreferee install de

Note that the required command may be python rather than python3 on some operating systems.

Then open a Python prompt (type python3 or python at the command line):

>>> import spacy
>>> nlp = spacy.load('de_core_news_lg')
>>> nlp.add_pipe('coreferee')
<coreferee.manager.CorefereeBroker object at 0x0000026E84C63B50>
>>>
>>> doc = nlp("Weil er mit seiner Arbeit sehr beschäftigt war, hatte Peter davon genug. Er und seine Frau haben entschieden, dass ihnen ein Urlaub gut tun würde. Sie sind nach Spanien gefahren, weil ihnen das Land sehr gefiel.")
>>>
>>> doc._.coref_chains.print()
0: er(1), seiner(3), Peter(10), Er(14), seine(16)
1: Arbeit(4), davon(11)
2: [Er(14); Frau(17)], ihnen(22), Sie(29), ihnen(36)
3: Spanien(32), Land(38)
>>>
>>> doc[14]._.coref_chains.print()
0: er(1), seiner(3), Peter(10), Er(14), seine(16)
2: [Er(14); Frau(17)], ihnen(22), Sie(29), ihnen(36)
>>>
>>> doc._.coref_chains.resolve(doc[36])
[Peter, Frau]
>>>

1.2.4 Polish

Presuming you have already installed spaCy and one of the Polish spaCy models, install Coreferee from the command line by typing:

python3 -m pip install coreferee
python3 -m coreferee install pl

Note that the required command may be python rather than python3 on some operating systems.

Then open a Python prompt (type python3 or python at the command line):

>>> import spacy
>>> nlp = spacy.load('pl_core_news_lg')
>>> nlp.add_pipe('coreferee')
<coreferee.manager.CorefereeBroker object at 0x0000027304C63B50>
>>>
>>> doc = nlp("Ponieważ bardzo zajęty był swoją pracą, Janek miał jej dość. Postanowili z jego żoną, że potrzebują wakacji. Pojechali do Hiszpanii, bo bardzo im się ten kraj podobał.")
>>>
>>> doc._.coref_chains.print()
0: był(3), swoją(4), Janek(7), Postanowili(12), jego(14)
1: pracą(5), jej(9)
2: [Postanowili(12); żoną(15)], potrzebują(18), Pojechali(21), im(27)
3: Hiszpanii(23), kraj(30)
>>>
>>> doc[12]._.coref_chains.print()
0: był(3), swoją(4), Janek(7), Postanowili(12), jego(14)
2: [Postanowili(12); żoną(15)], potrzebują(18), Pojechali(21), im(27)
>>>
>>> doc._.coref_chains.resolve(doc[27])
[Janek, żoną]
>>>

1.3 Background information

Handling coreference resolution successfully requires training corpora that have been manually annotated with coreferences. The state of the art in coreference resolution is progressing rapidly, but it is largely focussed on techniques that require training corpora larger than what is available for most languages and software developers. The most widely used training corpus, CoNLL-2012, has the following restrictions:

  • CoNLL-2012 covers English, Chinese and Arabic; there is nothing of comparable size for most other languages. For example, the corpus we used to train Coreferee for German is around a tenth of the size of CoNLL-2012;

  • CoNLL-2012 is not publicly available and has a relatively restrictive license.

Neuralcoref, an extension for earlier versions of spaCy, was excellent but was never made publicly available for any language other than English. The aim of Coreferee, on the other hand, is to get coreference resolution working for a variety of languages: our focus is less on achieving the best possible precision and recall for English than on enabling the functionality to be reproduced for new languages as easily and as quickly as possible. Because training data is in such short supply for most languages and is very effort-intensive to produce, it is important to use what is available as effectively as possible.

There are three essential strategies that human readers employ to recognise coreferences within a text:

  1. Hard grammatical rules that completely preclude entities within a text from coreferring, e.g. The house stood tall. They went on walking. Such rules play an especially important role in languages that have grammatical gender, which includes most continental European languages.

  2. Pragmatic tendencies, e.g. a pronoun is more likely to refer back to a word that began the previous sentence as its grammatical subject than to a word in the middle of that sentence within a prepositional phrase.

  3. Semantic restrictions, i.e. which entities can realistically do what to which entities in the world being described. For example, in the sentence The child saddled her up, a reader's experience of the world will make it clear that her must refer to a horse.

With unlimited training data, it would be possible to train a system to employ all three strategies effectively from first principles using word vectors. The features of Coreferee that allow effective learning with the limited training data that is available are:

  • Strategy 1) is covered by hardcoded rules for each language that the system is then not required to learn from the training data. Because detailed knowledge of the grammar of a specific natural language is a separate skill set from knowledge of machine learning, the two concerns have been fully separated in Coreferee: rules are covered in a separate module from tendencies. This means that a model for a new language can be generated by a competent Python programmer with no knowledge of machine learning or neural networks;

  • Because the pragmatic tendencies for strategy 2) are very complex and only partially understood by linguists, machine learning and neural networks represent the only realistic way of tackling them. In order to reduce the amount of training data required for neural networks to learn effectively, the syntactic and morphological information supplied by the spaCy models, which have typically been trained with considerably more training data than will be available for coreference resolution, is used as input to neural networks alongside the standard word vectors.

  • Especially with limited training data, but probably even with the largest available training datasets, it is unlikely that a system will learn more than the very simplest tendencies for strategy 3). However, making word vectors available to the neural networks ensures that Coreferee can make use of whatever tendencies are discernible.

Coreferee started life to assist the Holmes project, which is used for information extraction and intelligent search. Coreferee is in no way dependent on Holmes, but this original aim has led to several design decisions that may seem somewhat atypical. Several of them could easily be altered by someone with a requirement to do so:

  • A mention within Coreferee does not consist of a span, but rather of a single token or of a list of tokens that stand in a coordination relationship to one another.

  • Coreferee does not capture coreferences that are unambiguously evident from the structure of a sentence. For example, the identity of he and doctor in the sentence He was a doctor is not reported by Coreferee because it can easily be derived from a simple analysis of the copular structure of the phrase.

  • Repetitions of first- and second-person pronouns (I was tired. I went home) are not captured as they add no value either for information extraction or for intelligent search.

  • Coreferee focusses heavily on anaphors (for English: pronouns). There is only relatively limited capture of coreference between noun phrases, and it is entirely rule-based. (In turn, however, this serves the aim of working with limited training data: noun-phrase coreference is a more exacting task than anaphor resolution.)

  • Because search performance is much more important for Holmes than document parsing performance, Coreferee performs all analysis eagerly as each document passes through the pipe.

1.4 Facts and figures

1.4.1 Covered relevant linguistic features
| ISO 639-1 | Language | Pronominal anaphors | Verbal anaphors | Prepositional anaphors | Agreement classes | Conjunctive coordination | Comitative coordination |
|---|---|---|---|---|---|---|---|
| en | English | My friend came in. He was happy. | - | - | Three singular (natural genders) and one plural class | Peter and Mary | - |
| de | German | Mein Freund kam rein. Er war glücklich. | - | Ich benutzte das Auto und hatte damit einige Probleme. | Three singular (grammatical genders) and one plural class | Peter und Maria | - |
| fr | French | Mon ami entra. Il était heureux. | - | - | Two singular (grammatical genders) and two plural (grammatical genders) classes | Pierre et Marie | - |
| pl | Polish | Wszedł mój kolega. Widzieliście, jaki on był szczęśliwy? | Wszedł mój kolega. Szczęśliwy był. (1) | - (2) | Three singular (grammatical genders) and two plural (natural genders) classes | Piotr i Kasia | 1) Piotr z Kasią przyjechali do Warszawy; 2) Widziałem Piotra i przyszli z Kasią |
  1. Only subject zero anaphors are covered. Object zero anaphors (e.g. Wypiłeś wodę? Tak, wypiłem.) are not in scope because they are mainly used colloquially and do not normally occur in the types of text for which Coreferee is primarily designed. Handling them would require creating or locating a detailed dictionary of verb valencies.

  2. Polish has a restricted use of anaphoric prepositions in some formal registers, e.g. Skończyło się to dlań smutno. Because the Polish spaCy models were trained on news texts, they do not recognise such prepositions, meaning that Coreferee cannot capture them either.

1.4.2 Model performance
| ISO 639-1 | Language | Training corpora | Total words in training corpora | *_trf: Anaphors in 20% | *_trf: Accuracy (%) | *_lg: Anaphors in 20% | *_lg: Accuracy (%) | *_md: Anaphors in 20% | *_md: Accuracy (%) | *_sm: Anaphors in 20% | *_sm: Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| en | English | ParCor / LitBank | 393564 | 2500-2580 | 80-83 | 2480-2520 | 81-82 | 2480-2510 | 81-83 | 2510-2560 | 81-82 |
| de | German | ParCor | 164300 | - | - | 530-570 | 79-80 | 520-550 | 76-80 | 530-550 | 76-79 |
| fr | French | DEMOCRAT | 323754 | - | - | 1270-1280 | 71-72 | 1280-1300 | 68-70 | 1130-1140 | 63-64 |
| pl | Polish | PCC | 548268 | - | - | 1730-1790 | 72-76 | 1740-1800 | 70-75 | - | - |

Coreferee produces a range of neural-network models for each language corresponding to the various spaCy models for that language. The neural network inputs include word vectors. With _sm (small) models, both spaCy and Coreferee use context-sensitive tensors as an alternative to word vectors. _trf (transformer-based) models, on the other hand, do not use or offer word vectors at all. To remedy this problem, the model configuration files (config.cfg in the directory for each language) allow a vectors model to be specified for use when a main model does not have its own vectors. Coreferee then combines the linguistic information generated by the main model with vector information returned for the individual words in each document by the vectors model.
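
Schematically, an entry in such a config.cfg might look like the following. This is an illustration only: the version fields correspond to those described in section 4, but the exact section layout and key names, in particular the key naming the vectors model, are assumptions and should be checked against the actual English config.cfg in the repository.

[en_core_web_trf]
from_version = 3.0.0
to_version = 3.5.0
train_version = 3.0.0
vectors_model = en_core_web_lg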

Because the Coreferee models are rather large (20GB-30GB for the group of models for a given language) and because many users will only be interested in one language, the group of models for a given language is installed using python3 -m coreferee install as demonstrated in the introduction. All Coreferee models are more or less the same size; a larger spaCy model does not equate to a larger Coreferee model. As the figures above demonstrate, the accuracy of Coreferee corresponds closely to the size of the underlying spaCy model, and users are urged to use the larger spaCy models. It is in any case unclear whether there is a situation in which it would make sense to use Coreferee with an _sm model as the Coreferee model would then be considerably larger than the spaCy model! As this discrepancy is especially extreme for the Polish models, Coreferee no longer supports pl_core_news_sm from version 1.1.0 onwards.

The English, German and Polish models support spaCy versions from 3.0.0 to 3.5.0, while the French models support spaCy versions from 3.1.0 to 3.2.0. Because the accuracies and number of anaphors found differ slightly depending on the spaCy version used, the table above cites ranges for each model.

Assessing and comparing the precision and recall of anaphor resolution algorithms is notoriously difficult. For one thing, two human annotators of the same data will not always agree (and, indeed, there are some cases where Coreferee and a training annotator disagree and Coreferee's interpretation seems the more plausible!). For another, the same algorithm may perform with wildly different accuracies on different test documents depending on how clearly the documents are written and how often there are competing interpretations of individual anaphors.

Because Coreferee decides where there are anaphors to resolve (as opposed to what to resolve them to) in a purely rule-based fashion and because there is not necessarily a perfect correspondence between the types of anaphor these rules are aiming to capture and the types of anaphor covered by any given training corpus, a recall measure would not be meaningful. Instead, we compare the performance between spaCy models — and, during tuning, between different hyperparameter values — by counting the total number of anaphors that the rules find within the test documents as parsed by the spaCy model being used and that are also annotated with a coreference within the training data. The accuracy then expresses the percentage of these anaphors for which the coreference annotated by the corpus author is part of the chain(s) suggested by Coreferee. In situations where the training data specifies a chain C->B->A and B is a type of coreference that Coreferee is not aiming to capture, C->A is used as a valid training reference.

The corpus for each language is split up into a training corpus (around 80%) and a test corpus (around 20%) using a random procedure with a constant seed, meaning that both sets contain documents from throughout each corpus and that the same documents end up in each set on all runs. Note that the corpora were not split up in this way prior to version 1.2.0, meaning that accuracy figures obtained for earlier versions are not directly comparable with accuracy figures obtained for subsequent versions.

Since coreference between noun phrases is restricted to a small number of cases captured by simple rules, the model assessment figures presented here refer solely to anaphor resolution. When anaphor resolution accuracy is being assessed for a test document, noun pairs are detected and added to chains according to the standard rules, but they do not feature in the accuracy figures. On some rare occasions, however, they may have an indirect effect on accuracy by affecting the semantic considerations that determine which anaphors can be added to which chains.

Note that Total words in training corpora in the table above refers to 100% of the available data for each language, while the Anaphors in 20% columns specify the number of anaphors found in the roughly 20% of this data that is used for model assessment.

2. Interacting with the data model

Coreferee generates Chain objects where each chain is an ordered collection of Mention objects that have been analysed as referring to the same entity. Each mention holds references to one or more spaCy token indexes; a chain can have a maximum of one mention with more than one token (most often its leftmost mention). A given token index occurs in a maximum of two mentions; if it belongs to two mentions the mentions will belong to different chains and one of the mentions will contain multiple tokens. All chains that refer to a given Doc or Token object are managed on a ChainHolder object which is accessed via ._.coref_chains. Reproducing part of the example from the introduction:

>>> doc = nlp("Although he was very busy with his work, Peter had had enough of it. He and his wife decided they needed a holiday. They travelled to Spain because they loved the country very much.")
>>>
>>> doc._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
1: work(7), it(14)
2: [He(16); wife(19)], they(21), They(26), they(31)
3: Spain(29), country(34)
>>>
>>> doc[16]._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
2: [He(16); wife(19)], they(21), They(26), they(31)
>>>

Chains and mentions can be navigated much as if they were lists:

>>> len(doc._.coref_chains)
4
>>> doc._.coref_chains[1].pretty_representation
'1: work(7), it(14)'
>>> len(doc._.coref_chains[1])
2
>>> doc._.coref_chains[1][1]
[14]
>>> len(doc._.coref_chains[1][1])
1
>>> doc._.coref_chains[1][1][0]
14
>>>
>>> for chain in doc._.coref_chains:
...     for mention in chain:
...             print(mention)
...
[1]
[6]
[9]
[16]
[18]
[7]
[14]
[16, 19]
[21]
[26]
[31]
[29]
[34]
>>>

A document with Coreferee annotations can be saved and loaded using the normal spaCy methods: the annotations survive the serialization and deserialization. To facilitate this, Coreferee does not store references to spaCy objects, but merely to token indexes. However, each class has a pretty representation designed for human consumption that contains information from the spaCy document and that is generated eagerly when the object is first instantiated. Additionally, the ChainHolder object has a print() method that prints its chains' pretty representations with one chain on each line:

>>> doc._.coref_chains
[0: [1], [6], [9], [16], [18], 1: [7], [14], 2: [16, 19], [21], [26], [31], 3: [29], [34]]
>>> doc._.coref_chains.pretty_representation
'0: he(1), his(6), Peter(9), He(16), his(18); 1: work(7), it(14); 2: [He(16); wife(19)], they(21), They(26), they(31); 3: Spain(29), country(34)'
>>> doc._.coref_chains.print()
0: he(1), his(6), Peter(9), He(16), his(18)
1: work(7), it(14)
2: [He(16); wife(19)], they(21), They(26), they(31)
3: Spain(29), country(34)
>>>
>>> doc._.coref_chains[0]
0: [1], [6], [9], [16], [18]
>>> doc._.coref_chains[0].pretty_representation
'0: he(1), his(6), Peter(9), He(16), his(18)'
>>>
>>> doc._.coref_chains[0][0]
[1]
>>> doc._.coref_chains[0][0].pretty_representation
'he(1)'
>>>
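
Serialization itself can be checked with the standard spaCy methods; the following is a minimal sketch using the nlp object and doc from above:

from spacy.tokens import Doc

doc_bytes = doc.to_bytes()                            # serialize the annotated document
reloaded_doc = Doc(nlp.vocab).from_bytes(doc_bytes)   # load it again
reloaded_doc._.coref_chains.print()                   # the chains survive the round trip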

Each chain has an index number that is unique within the document. It is displayed in the representations of Chain and ChainHolder and can also be accessed directly:

>>> doc._.coref_chains[2].index
2

Each chain can also return the index number of the mention within it that is most specific: noun phrases are more specific than anaphors and proper names more specific than common nouns:

>>> doc = nlp("He went to Spain. He loved the country. He often told his friends about it.")
>>> doc._.coref_chains.print()
0: He(0), He(5), He(10), his(13)
1: Spain(3), country(8), it(16)
>>>
>>> doc._.coref_chains[1].most_specific_mention_index
0
>>> doc._.coref_chains[1][doc._.coref_chains[1].most_specific_mention_index].pretty_representation
'Spain(3)'

This information is used as the basis for the resolve() method shown in the initial example: the method traverses multiple chains to find the most specific mention or mentions within the text that describe a given anaphor or noun phrase head.
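
For example, resolve() can be used to produce a crude version of a text in which anaphors are replaced by their most specific mentions. The helper below is not part of Coreferee itself, merely an illustration of the API:

def replace_anaphors(doc):
    resolved_tokens = []
    for token in doc:
        mentions = doc._.coref_chains.resolve(token)
        if mentions:
            # replace the token with the most specific mention(s) from its chain(s)
            resolved_tokens.append(" and ".join(t.text for t in mentions))
        else:
            resolved_tokens.append(token.text)
    return " ".join(resolved_tokens)

Applied to the example document from the introduction, this would replace they(31) with "Peter and wife".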

Note that a mention that heads a complex proper noun phrase only refers to the head of that phrase. Some users have expressed a requirement to retrieve all the tokens in such a phrase. Although this functionality is regarded as outside the main scope of Coreferee and is hence not available via the main data model, the information can be retrieved as follows:

rules_analyzer = nlp.get_pipe('coreferee').annotator.rules_analyzer
rules_analyzer.get_propn_subtree(doc[1])

3. How it works

3.1 General operation and rules

3.1.1 Anaphor pair analysis

For each language, methods are implemented that determine:

  • for each token, its dependent siblings, e.g. Jane is a dependent sibling of Peter in the phrase Peter and Jane;
  • for each token, whether the token is an anaphor (broadly speaking for English: a third-person pronoun);
  • for each token, whether the token heads an independent noun phrase that an anaphor could refer to;
  • for any independent-noun/anaphor or anaphor/anaphor pair within a text, whether or not semantic and syntactic constraints would permit coreference between the members of the pair. For example, there are no circumstances in which they and her could ever corefer within a text. When an entity has dependent siblings, the method is called twice, once with and once without the siblings. Possible coreferents are considered up to five sentences away from each anaphor looking backwards through the text. The method returns 2 (coreference permitted), 1 (coreference unlikely but possible) or 0 (coreference impossible). Alongside the language-specific rules, there are a number of language-independent rules which can lead to a 1 rather than a 2 analysis.

Each anaphor in a document emerges from an analysis using these methods with a list of elements to which it could conceivably refer. The list for each anaphor is scored using the neural ensemble and the possible referents are ordered by decreasing likelihood. Regardless of their neural ensemble score, any pairs with the rules analysis 1 (coreference unlikely but possible) are ordered behind pairs with the rules analysis 2 (coreference permitted).

Note that anaphora is understood in a broad sense that includes cataphora, i.e. pronouns that refer forwards rather than backwards like the initial pronoun in the English example in the introduction. Language-independent rules are used to determine situations in which the syntactic relationship between two elements within the same sentence permits cataphora.

Replacing the neural ensemble scoring with a naive algorithm that always selects the closest potential referent for each anaphor with rules analysis 2 (or 1 if there is no 2) yields an accuracy of around 60% as opposed to the 84% reported above. This demonstrates the respective contribution of each processing strategy to the overall result and provides a useful benchmark for any further machine learning experiments.
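
For clarity, the naive baseline described above can be sketched as follows; the data structures are hypothetical and this is not the actual Coreferee code:

def naive_referent(candidates):
    # candidates: (referent, rules_score) pairs for a single anaphor, ordered from
    # the closest potential referent to the furthest; rules_score is 2
    # ("coreference permitted") or 1 ("unlikely but possible").
    for required_score in (2, 1):
        for referent, rules_score in candidates:
            if rules_score == required_score:
                return referent
    return None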

3.1.2 Noun pair detection

For each language the following are implemented:

  • a method that determines whether a noun phrase is indefinite, or, in languages that do not mark indefiniteness, whether it could be interpreted as being indefinite;
  • a method that determines whether a noun phrase is definite, or, in languages that do not mark definiteness, whether it could be interpreted as being definite;
  • a dictionary from named entity labels to common nouns that refer to members of each named entity class. For example, the English named entity class ORG maps to the nouns ['company', 'firm', 'organisation'].

This information is used in a purely rule-based fashion to determine probable coreference between pairs of noun phrases: broadly, definite noun phrases that do not contain additional new information refer back to indefinite or definite noun phrases with the same head word, and named entities are referred back to by the common nouns that describe their classes. Noun pairs can be a maximum of two sentences apart as opposed to the five sentences that apply to anaphoric references.
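
The named-entity dictionary can be pictured as a simple mapping. The ORG entry below is the one cited above; the second entry is an invented example for illustration only:

# Maps spaCy named entity labels to common nouns that can refer back to
# entities of that class.
entity_noun_dictionary = {
    "ORG": ["company", "firm", "organisation"],
    "GPE": ["country", "state", "city"],  # invented entry, for illustration only
}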

3.1.3 Building the chains

Coreferee goes through each document in natural reading order from left to right building up chains of anaphors and independent noun phrases. For each anaphor, the highest scoring interpretation as suggested by the neural ensemble is preferred. However, because the semantic (but not the syntactic) restrictions on anaphoric reference apply between all pairs formed by members of a chain rather than merely between adjacent members, it may turn out that the highest scoring interpretation is not permissible because it would lead to a semantically inconsistent chain. The interpretation with the next highest score is then tried, and so on until no interpretations remain.

In the unusual situation that all suggested interpretations of a given anaphor have been found to be semantically impossible, it is likely that one of the interpretations of the preceding anaphors in the text was incorrect: authors do not normally use anaphors that do not refer to anything. Reading the text:

The woman looked down and saw Lesley. She stood up and greeted him.

most readers will initially understand she as referring to Lesley. Only when one reaches the end of the sentence does it become clear that Lesley must be a man and that she actually refers to the woman. A quick test shows that Coreferee is capable of handling such ambiguity:

>>> doc = nlp('The woman looked down and saw Lesley. She stood up and greeted her.')
>>> doc._.coref_chains.print()
0: woman(1), her(13)
1: Lesley(6), She(8)
>>>
>>> doc = nlp('The woman looked down and saw Lesley. She stood up and greeted him.')
>>> doc._.coref_chains.print()
0: woman(1), She(8)
1: Lesley(6), him(13)

This is achieved using a rewind: at a point in a text where no suitable interpretation can be found for an anaphor, alternative interpretations of preceding anaphors are investigated in an attempt to find an overall interpretation that fits.

3.2 The neural ensemble

The likelihood scores for anaphoric pairs are calculated using an ensemble of five identical multilayer perceptrons using a rectified linear activation in the input and hidden layers. Each of the five networks outputs a probability between 0 and 1 for a given potential anaphoric pair. These probabilities are then fed into a softmax layer that selects the best potential referent for each anaphor.

The inputs to each of the five networks consist of:

  1. A feature map for each member of the pair. As the first step in training, Coreferee goes through the entire training corpus and notes all the relevant morphological and syntactic information that relevant tokens, their syntactic head tokens and their syntactic children can have. This information is stored with the neural ensemble for each model as a feature table. The feature map for a given token (or list of tokens) is a binary encoding with respect to the feature table, with one position per feature in the table (see the sketch after this list).

  2. A position map for each member of the pair capturing such information as its position within its sentence and its depth within the dependency tree generated for its sentence.

  3. Vector squeezers for each member of the pair and, where existent, for the syntactic head of each member of the pair. The input to a vector squeezer is the vector or context-sensitive tensor for the spaCy token in question. A vector squeezer consists of three neural layers and outputs a representation that is only three neurons wide and that is fed into the rest of the network within the same layer as the other, non-vector inputs.

  4. A compatibility map capturing the relationship between the members of the pair. Alongside the distance separating them in words and in sentences, this includes the number of common features in their feature maps and the cosine similarity between their syntactic heads.
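
As a sketch of the feature-map idea from point 1 (all feature names and values here are invented for illustration):

# Invented feature table and token features, purely for illustration
feature_table = ["Gender=Masc", "Gender=Fem", "Number=Sing", "Number=Plur", "dep=nsubj"]
token_features = {"Gender=Masc", "Number=Sing", "dep=nsubj"}
feature_map = [1 if feature in token_features else 0 for feature in feature_table]
# feature_map == [1, 0, 1, 0, 1]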

Using a vector squeezer has been consistently found to offer slightly better results either than feeding the full-width vectors into the network directly or than omitting them entirely. Possible intuitions that might explain this behaviour are: the reduced width forces the network to learn and attend to a constrained number of specific semantic features relevant to coreference resolution; and the reduced width limits the attention of the network on the raw vectors in a situation where the training data is insufficient to make effective use of them.
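
Schematically, and given that Coreferee uses Thinc as its neural network library (see the version history), a vector squeezer corresponds to something like the sketch below. The hidden widths are invented; only the final width of three neurons is taken from the description above, and this is not the actual Coreferee code:

from thinc.api import Relu, chain

# Three dense layers with rectified linear activations that compress a word
# vector (or context-sensitive tensor) down to a three-neuron representation.
vector_squeezer = chain(Relu(nO=24), Relu(nO=12), Relu(nO=3))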

Perhaps somewhat unusually, when a vector is required to represent a coordinated phrase, the mean of the vectors of the individual coordinated tokens is used rather than the mean of the vectors of all the tokens in the coordinated span.

The structure shared by each of the five networks in the ensemble is shown in the following diagram:

[Diagram: structure of an ensemble member]

Training for all relevant spaCy models for a given language takes between one and two hours on a high-end laptop.

4. Adding support for a new language

One of the main design goals of Coreferee was to make it easy to add support for further languages. The prerequisites are:

  • you will need to know the grammar of the language you are adding well enough to make detailed decisions about which coreferences are normal, which are marginally possible and which are impossible;
  • you will need to be able to program in Python.

You should not need to get involved in the details of the neural ensemble; Coreferee should do that for you.

The steps involved are:

  1. Create a directory under coreferee/lang/ with the same structure as the existing language-specific directories; it is probably easiest to copy one of them.

  2. The file config.cfg lists the spaCy models for which you wish to generate Coreferee models. You will need to specify a separate vectors model for any of the spaCy models that lack vectors or context-dependent tensors of their own — see the English config.cfg for an example. Each config entry specifies a minimum (from_version) and maximum (to_version) spaCy model version number that the generated Coreferee model will support, as well as the spaCy model version number with which the Coreferee model is trained (train_version). During development, all three numbers will normally refer to a single version number. Later, when an updated spaCy model version is brought out, testing will be required to see whether the existing Coreferee model still supports the new spaCy model version. If so, the maximum version number can be increased; if not, a new config entry will be necessary to accommodate the new Coreferee model that will then be required.

  3. The file rules.py in the main code directory contains an abstract class RulesAnalyzer that must be implemented by a class LanguageSpecificRulesAnalyzer within a file called language_specific_rules.py in each language-specific directory. The abstract class RulesAnalyzer contains docstrings that specify for each abstract property and method the contract to which implementing classes should adhere. Looking at the existing language-specific rules is also likely to be helpful. The method is_potential_anaphor() is normally the most work to create: here it is probably worth looking at the existing English method for languages with natural gender or at the existing German method for languages with grammatical gender. (Polish has an unusually complex gender system, so the Polish example is unlikely to be helpful even as a basis for working with other Slavonic languages.)

  4. There are some situations where word lists can be helpful. If a list is placed in a file <name>.dat within the data directory under a language-specific directory, the contents will be automatically made available within the LanguageSpecificRulesAnalyzer for the language in question as a variable self.<name> that contains a list where each entry corresponds to a line from the file; comments with # are supported. If you use a word list, please ensure it can be published under the MIT license and give appropriate attribution within the language-specific directory in the LICENSE file and, where appropriate, in a COPYING file.

  5. Male and female names are managed on a cross-linguistic basis because there is no reason why one would not want e.g. a German female name to be recognised within an English text. Names are automatically made available to all RulesAnalyzer implementations as properties self.male_names, self.female_names, self.exclusively_male_names and self.exclusively_female_names. If you can locate a suitable names list for the language you are working on that is available under a suitable license, add the attribution to the LICENSE file under common/ and merge your names into the two files. Please tidy up the result so that the files are free of duplicates and in alphabetical order.

  6. Create a language-specific directory under tests/ with a file test_rules_<ISO 639-1>.py to test the rules you have written in steps 3)-5). Although one of the corresponding files for one of the existing languages is likely to be the best starting point, you should also be sure to test any extra features specific to the language you are working on. The test tooling is designed to run each test against all spaCy models specified in config.cfg. At this stage in development, you will need to temporarily add a parameter add_coreferee=False to the call to get_nlps() in the setUp() method. Otherwise, all tests will fail because the test tooling will attempt to add the as yet non-existent Coreferee model to the pipe.

  7. Some tests may fail with one of the smaller spaCy models because it produces incorrect syntactic representations rather than because of any issue with your rule code. For such cases, a parameter excluded_nlps can be specified within a test method to prevent it from being executed with specific spaCy models.

  8. Locate a training corpus or corpora. Again, you should make sure that the resulting models can be published under the MIT license. Add new loader class(es) for the corpus or corpora to the existing loader classes in the train/loaders.py file. Loader classes must implement the GenericLoader abstract class that is located at the top of this file. The job of a loader is to read a specific training corpus format and to create and annotate spaCy documents with coreferences marked within corpora of that format. All the data for a single training run should be placed in a single directory; if there are multiple types of training data loaded by different loaders, each loader will need to be able to recognise the data it is required to read by examining the names of the files within the directory. It is worth spending some time checking with print() statements that the loaders annotate as expected, otherwise the training step that follows has little chance of success!

  9. You are now ready to begin training. The training command must be issued from the coreferee/ root directory. Coreferee will place a zip file into <log-dir>. Alongside the accuracy for each model, the files in the zip file show the coreference chains produced for each test document as well as a list of incorrect annotations where the Coreferee interpretation differed from the one specified by the training corpus author — information that is invaluable for debugging and rules improvement. As an example, the training command for English is:

python3 -m coreferee train --lang en --loader ParCorLoader,LitBankANNLoader --data <training-data-dir> --log <log-dir>

  10. Measure the performance of your model against older versions of spaCy and the corresponding spaCy models: create a virtual environment for each version of spaCy, and from it measure the performance against the standard test corpus using the coreferee check command, of which an example is:

python3 -m coreferee check --lang en --loader ParCorLoader,LitBankANNLoader --data <training-data-dir> --log <log-dir>

  11. Once you are happy with your models, install them. The command must be issued from the coreferee/ root directory, otherwise Coreferee will attempt to download the models from GitHub where they are not yet present:

python3 -m coreferee install <ISO 639-1>

  12. Before you attempt any regression tests that involve running Coreferee as part of the spaCy pipe, you must remove the add_coreferee=False parameter you added above. A setup where the parameter is present in one test file but absent in the other test file will not work because the spaCy models are loaded once per test run.

  13. Again using one of the existing languages as a starting point, create a test_smoke_tests_<ISO 639-1>.py file in your test directory. The smoke tests are designed to make sure that the basic features of Coreferee are working properly for the language in question and should also cover any features that have posed a particular challenge while developing the rules.

  14. Format your language_specific_rules.py using black.

  15. Go through the documentation (README.md and SHORTREADME.md) adding information about the new language wherever the supported languages are listed in some way.

  16. Issue a pull request. We ask that you supply us with the zip file placed into <log-dir> in point 9. Because this will contain a considerable amount of raw information from the training corpora, it will normally be preferable from a licensing viewpoint to send it out of band rather than attaching it to the pull request.

5. Adding support for a custom spaCy model

If you are using a custom spaCy model, you should generate a corresponding custom Coreferee model. Use points 2), 8), 9) and 10) from the preceding section as a guide. If you do not have your own training data, you can use the same training data that was used to generate the standard Coreferee models.

The language-specific rules expect specific entity tags as 'magic values'. This is unfortunate but there is no obvious alternative solution because there is no way of knowing which entities a new tag might refer to. The best advice is to use the same entity tags in your custom model as are used in the standard spaCy models when referring to similar entity classes.

For many entity tags, the impact will be minimal if you cannot adhere to this, but what is crucial is that you use the PERSON and PER tags to refer to people in English and German respectively. If this is not possible, change the language-specific-rule code and reinstall Coreferee locally (python -m pip install . from the root directory).

6. Version history

6.1 Version 1.0.0

The initial open-source version.

6.2 Version 1.0.1
  • Fixing of a bug where already installed models were reinstalled from site-packages rather than the new model being pulled from GitHub.

6.3 Version 1.1.0
  • Upgrade to Python 3.9 and spaCy 3.1
  • Fixing of minor issues in all three rule-sets
  • Regeneration of all models
  • Improvement of the Polish examples in section 1.4.1 to make them more pragmatically correct - many thanks to Małgorzata Styś for her valuable advice on this.

6.4 Version 1.1.1
  • Changed the dependencies to allow Coreferee to run on the Apple M1 chipset
  • Sorted out a problem with the supported spaCy versions
  • Improved some of the tests

6.5 Version 1.1.2
  • Added support for French, which was kindly supplied by Pantalaymon

6.6 Version 1.1.3
  • Updated French rules to new version, again supplied by Pantalaymon
  • Fixed an endless-loop problem in language_independent_is_anaphoric_pair()

6.7 Version 1.2.0
  • Removed dependencies to TensorFlow and Keras, switching to Thinc as the neural network platform. Switching to Thinc has led to serialized models that are around 30% of the size of the old models, and has also allowed the old limitation to be removed where nlp.pipe() could not be called with n_process > 1 with forked processes.
  • Implemented a softmax layer to select the best potential referent for each anaphor as opposed to calculating independent scores for each pair.
  • Added matrix tests to support a variety of Python and spaCy versions, including spaCy 3.2 and spaCy 3.3.
  • Implemented a stable-random split into train and test corpora as opposed to using the last 20% of loaded documents as the test corpus.
  • Improved the training script so that it remembers the model state at each epoch and chooses the best-performing state from the training history as the model to save.
  • Added the coreferee check command to enable performance measurement for an existing Coreferee model with a new spaCy model.

6.8 Version 1.3.0
  • Added support for spaCy v3.4 for English, German and Polish.

6.9 Version 1.3.1
  • Added support for the v3.4.1 English models.

6.10 Version 1.4.0
  • Made it possible to package spaCy pipelines containing Coreferee.
  • Added an entry point for Coreferee so it does not need to be imported explicitly alongside spaCy.
  • Added support for spaCy v3.5 for English, German and Polish.

6.11 Version 1.4.1
  • Added support for Python v3.11.

7. Open issues / requests for assistance

  1. Because optimising parsing speed was not a priority in the project within which Coreferee came into being, Coreferee is written purely in Python; it would be helpful if somebody could convert relevant parts of it to Cython.

  2. It would be useful if somebody could find a way of benchmarking Coreferee against other coreference resolution solutions, especially for English. One problem this would probably present is that using a benchmark necessitates a normative scope where a system aims to find exactly those types of coreference marked within the benchmark corpus, whereas the scope of Coreferee was determined by project requirements.

Contributors: adrianeboyd, richardpaulhudson

coreferee's Issues

How can I know how confident the model is for a specific mention?

To explain better: I want to check the certainty, as a percentage, of a specific coreference. I am sorry if this feature is already present in the code, but I dug around and could not find something that I could use myself.

Examples:

"Peter and Jane went to the park. He forgot to bring his phone."

Mension : "He", Reference: "Peter", Confidence: "92%"

"Peter went to the park. He forgot to bring his phone."

Mension : "He", Reference: "Peter", Confidence: "99%"

About model performance

Thank you for your contributions to the NLP field. I would like to know more about the model performance figures in section 1.4.2, such as the meaning of Anaphors in 20% and Accuracy (%), as well as how to align the format of Coreferee with the corpus. Since "A mention within Coreferee does not consist of a span", the output of Coreferee seems incompatible with the answer key of most corpora (I tried OntoNotes and LitBank). For example, the answer key is "Gaza strip", but the output of Coreferee is "Gaza". Thank you!

ModelNotSupportedError: spaCy model en_coreference_web_trf version 3.4.0a2 is not supported by Coreferee.

I followed the instructions, but it doesn't work. I'm getting the same error every time, and there is nothing left that I haven't tried in order to fix it.

Here is my spaCy info:

[screenshot of spaCy environment info]

And here are my environment's versions:

coreferee==1.4.1
coreferee-model-en @ https://github.com/richardpaulhudson/coreferee/raw/master/models/coreferee_model_en.zip#sha256=aec5662b4af38fbf4b8c67e4aada8b828c51d4a224b5e08f7b2b176c02d8780f

spacy==3.4.4
spacy-alignments==0.9.0
spacy-experimental==0.6.2
spacy-legacy==3.0.12
spacy-loggers==1.0.4
spacy-transformers==1.1.9

What is wrong here? It is so annoying; I really need this module.

Complete error below:

✘ spaCy model en_coreference_web_trf version 3.4.0a2 is not supported
by Coreferee. Please examine /coreferee/lang/en/config.cfg to see the supported
models/versions.
---------------------------------------------------------------------------
ModelNotSupportedError                    Traceback (most recent call last)
Cell In[5], line 2
      1 nlp_corr = spacy.load("en_coreference_web_trf")
----> 2 nlp_corr.add_pipe('coreferee')

File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/spacy/language.py:801, in Language.add_pipe(self, factory_name, name, before, after, first, last, source, config, raw_config, validate)
    793     if not self.has_factory(factory_name):
    794         err = Errors.E002.format(
    795             name=factory_name,
    796             opts=", ".join(self.factory_names),
   (...)
    799             lang_code=self.lang,
    800         )
--> 801     pipe_component = self.create_pipe(
    802         factory_name,
    803         name=name,
    804         config=config,
    805         raw_config=raw_config,
    806         validate=validate,
    807     )
    808 pipe_index = self._get_pipe_index(before, after, first, last)
    809 self._pipe_meta[name] = self.get_factory_meta(factory_name)

File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/spacy/language.py:680, in Language.create_pipe(self, factory_name, name, config, raw_config, validate)
    677 cfg = {factory_name: config}
    678 # We're calling the internal _fill here to avoid constructing the
    679 # registered functions twice
--> 680 resolved = registry.resolve(cfg, validate=validate)
    681 filled = registry.fill({"cfg": cfg[factory_name]}, validate=validate)["cfg"]
    682 filled = Config(filled)

File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/confection/__init__.py:728, in registry.resolve(cls, config, schema, overrides, validate)
    719 @classmethod
    720 def resolve(
    721     cls,
   (...)
    726     validate: bool = True,
    727 ) -> Dict[str, Any]:
--> 728     resolved, _ = cls._make(
    729         config, schema=schema, overrides=overrides, validate=validate, resolve=True
    730     )
    731     return resolved

File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/confection/__init__.py:777, in registry._make(cls, config, schema, overrides, resolve, validate)
    775 if not is_interpolated:
    776     config = Config(orig_config).interpolate()
--> 777 filled, _, resolved = cls._fill(
    778     config, schema, validate=validate, overrides=overrides, resolve=resolve
    779 )
    780 filled = Config(filled, section_order=section_order)
    781 # Check that overrides didn't include invalid properties not in config

File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/confection/__init__.py:849, in registry._fill(cls, config, schema, validate, resolve, parent, overrides)
    846     getter = cls.get(reg_name, func_name)
    847     # We don't want to try/except this and raise our own error
    848     # here, because we want the traceback if the function fails.
--> 849     getter_result = getter(*args, **kwargs)
    850 else:
    851     # We're not resolving and calling the function, so replace
    852     # the getter_result with a Promise class
    853     getter_result = Promise(
    854         registry=reg_name, name=func_name, args=args, kwargs=kwargs
    855     )

File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/coreferee/manager.py:140, in CorefereeBroker.__init__(self, nlp, name)
    138 self.nlp = nlp
    139 self.pid = os.getpid()
--> 140 self.annotator = CorefereeManager().get_annotator(nlp)

File ~/anaconda3/envs/cihat/lib/python3.10/site-packages/coreferee/manager.py:132, in CorefereeManager.get_annotator(nlp)
    118 error_msg = "".join(
    119     (
    120         "spaCy model ",
   (...)
    129     )
    130 )
    131 msg.fail(error_msg)
--> 132 raise ModelNotSupportedError(error_msg)

ModelNotSupportedError: spaCy model en_coreference_web_trf version 3.4.0a2 is not supported by Coreferee. Please examine /coreferee/lang/en/config.cfg to see the supported models/versions.

coreferee does not take into account merged tokens

When using Coreferee to replace proper nouns with their corresponding references, it returns the wrong token indexes. This issue only occurs if a merge was done beforehand.

doc = nlp("the big bad wolf is small, he is also bad")
with doc.retokenize() as retokenizer:
    retokenizer.merge(doc[1:4])

def coref(sentences):
#     nlp = spacy.load('en_core_web_trf')
#     nlp.add_pipe('coreferee')

    resolved_text = ""
    for token in doc:
        print('token:',token)
        repres = doc._.coref_chains.resolve(token)
        if repres:
            print("refer to: ",repres)
            resolved_text += " " + " and ".join([t.text for t in repres])
        else:
            resolved_text += " " + token.text
    return(resolved_text)

resolved_text = coref(doc)
print(resolved_text)

I expect "he" to refer to "big bad wolf"
I get "small" instead

Example?

Hi,
Thanks for writing this library! I'm trying to replace pronouns with proper nouns (except in quotations). Is there an example of how to do this?
Thank you!

spaCy model en_core_web_trf version 3.4.1

After following the installation instructions for Holmes Extractor, I run into the following error:

"spaCy model en_core_web_trf version 3.4.1 is not supported by Coreferee. Please examine /coreferee/lang/en/config.cfg to see the supported models/versions."

however, if I try:
python -m spacy download en_core_web_trf==3.4.0

I get the error:

✘ No compatible package found for 'en_core_web_trf==3.4.0' (spaCy v3.4.1)

Hints on how to solve this issue, i.e. how to uninstall/install a set of libraries that work together, would be highly appreciated :-)

Thanks,

coreferee.errors.ModelNotSupportedError: en_core_web_md version 3.1.0

When executing the following code -

nlp=spacy.load('en_core_web_md')
nlp.add_pipe('coreferee')

I am getting the following error -

coreferee.errors.ModelNotSupportedError: en_core_web_md version 3.1.0

Any idea why this is happening? And what can be done in order to resolve this?

Using coreferee with custom model

Hello,

I would like to use coreferee with a custom spacy model that is a slight variation of the en_core_web_lg version 3.4.1 (it's basically the same model that has been trained to recognize one additional entity type using the standard spacy training process).

Trying to add coreferee to the trained pipeline with .add_pipe fails with a model version not supported error. The readme says that I'm supposed to train a new Coreferee model for this custom model; however, I would like to essentially use the same model as for en_core_web_lg, as my custom model is very similar. Is there any way to just lift that Coreferee model for use with a custom spaCy model?

Guidelines for annotating own dataset to fine-tune Coreferee pretrained model

Hi,
I am interested in annotating my own custom dataset for fine-tuning an existing pretrained model.
I have tried reviewing some of the public datasets available like

  • ParCor
  • LitBank
  • GAP Coreference
  • OntoNotes / CoNLL-2012

I am a little confused as they are not all similar to each other. Can you suggest some basic guidelines for annotation? It would be a great help.
Thanks in advance!

Request for Updated Download Link or Installation Instructions for Chinese Coreference Resolution Model

Hello,

I encountered an issue while trying to install the Chinese coreference resolution model for coreferee. Following the documentation, I attempted to download the model file from the following link:

https://github.com/richardpaulhudson/coreferee/raw/master/models/coreferee_model_zh.zip

However, this link returns a 404 error, and I am unable to download the model file.

Could you please provide an updated download link or installation instructions? If there are any new model files or alternative solutions available, could you share the relevant information? Thank you very much!

Best regards,
rtc

Make English model downloadable through .yml file

Is it possible to host the en model on conda or PyPI so that I can download it via a .yml file, similar to the spaCy models? Basically, just trying to do this:

name: dev
channels:
  - conda-forge
  - defaults
dependencies:
  - pip:
    - spacy
    - coreferee
  - spacy-model-en_core_web_lg
  - spacy-model-en_core_web_trf
  - coreferee-model-en

I can't do the command line install in my setup. Thank you!

Resolving first person references

Hi there and thanks for sharing this incredible model!

I plugged in the following example, and was surprised to not see a chain in the first person. I would expect the instances of "my" to eventually chain with "I" later in the text. I am very new to coreferences, curious why this might be happening. Thanks for any insight you may provide.

"Thank you for your videos. The situation with my mom is now that she is older and has thinner skin she gets really cold. She doesn’t believe this is why she gets colder. She insists that we are the only people that has a cold house. Our temp is set around 72 or 73 degrees. She says everyone else keeps there house temp at 80 degrees and she insists that we kept the house temp at 80 degrees year round for our whole lives. Ex: When my parents were in their 30’s and I was a young child she claims our house temp was always set at 80 degrees. If you tell her it was not and that she gets colder now because of her age she gets really mad. I should also mention this is not a once in a while conversation she has. She talks about this multiple times every day."

0: mom(10), she(14), she(21), She(27), she(34), She(39), She(64), she(76)
1: parents(100), their(103)
2: 30(104), it(129)
3: child(111), she(112), her(128), she(134), her(140), she(142), she(161), She(165)

All the best,
David

Is all future development being done on the explosion repo and not the old msg-systems repo?

Hi @richardpaulhudson, many thanks for your great work on coreferee! I'm working with @Pantalaymon on a project that does French coreference resolution, and we are trying to get coreferee working with spaCy 3.2 for better quality results. I believe @Pantalaymon has made great progress with training a new French coreferee model with spaCy 3.2 (with new rules) and has seen improved results on the benchmarks, so we were wondering, how would the upcoming versions of coreferee handle new PRs and updates?

For now there's an open PR on the old coreferee repo; however, the commit history and refs from that repo wouldn't transfer to this new repo. Is there a reason you didn't transfer ownership of the msg-systems repo to the explosion org so that the commit histories would carry over? Please advise on next steps when you can, thanks!

Support for spaCy 3.7?

Hello,
I am unable to test coreferee with spaCy 3.7:

✘ spaCy model fr_core_news_lg version 3.7.0 is not supported by
Coreferee. Please examine /coreferee/lang/fr/config.cfg to see the supported
models/versions.
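For reference, the error message points at a config.cfg file shipped inside the installed coreferee package; a minimal sketch (assuming coreferee is importable in the active environment) to locate and print the supported model versions for French:

import os
import coreferee

# Locate the French config.cfg inside the installed coreferee package and print
# the spaCy models/versions it declares as supported.
cfg_path = os.path.join(os.path.dirname(coreferee.__file__), "lang", "fr", "config.cfg")
with open(cfg_path, encoding="utf-8") as f:
    print(f.read())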

Are there any plans to support spaCy 3.7 with the en_core_web_lg and fr_core_news_lg models?
Thanks so much,
Yann

Cannot add coreferee to spaCy pipe

I use this code:

import coreferee
import spacy

nlp = spacy.load("en_core_web_trf")
nlp.add_pipe("coreferee")  # <<< this is the line that fails

and get this error:

*** ValueError: [E002] Can't find factory for 'coreferee' for language English (en). This usually happens when spaCy calls nlp.create_pipe with a custom component name that's not registered on the current language class. If you're using a Transformer, make sure to install 'spacy-transformers'. If you're using a custom component, make sure you've added the decorator @Language.component (for function components) or @Language.factory (for class components).

I followed the instructions described here: https://github.com/explosion/coreferee#version-131
and installed 'spacy-transformers'.
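As a quick diagnostic (a sketch only; the assumption is that E002 usually means the 'coreferee' factory was never registered, for example because the installed coreferee version is incompatible with the installed spaCy), the versions actually present in the active environment can be printed and checked against the compatibility table:

import importlib.metadata as md
import spacy
import coreferee  # importing coreferee should register the 'coreferee' factory

# Report installed versions so they can be compared with the supported
# combinations listed in the coreferee documentation.
for pkg in ("spacy", "spacy-transformers", "coreferee"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")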

reuven

Minimum Python version?

Hello,

First, great project and good job!

I'm trying to use coreferee on French data. I tried the public French example from your documentation.
I got the following error when running it with Python 3.8.13:

⚠ Unexpected error in Coreferee annotating document, skipping ....
⚠ <class 'TypeError'>
⚠ unsupported operand type(s) for |: 'dict' and 'dict'

  File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/manager.py", line 144, in __call__
    self.annotator.annotate(doc)
  File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/annotation.py", line 377, in annotate
    self.rules_analyzer.initialize(doc)
  File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/rules.py", line 314, in initialize
    if self.language_independent_is_potential_anaphoric_pair(
  File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/rules.py", line 474, in language_independent_is_potential_anaphoric_pair
    if self.is_potential_coreferring_noun_pair(
  File "/home/jerome/miniconda3/envs/origami_conda/lib/python3.8/site-packages/coreferee/lang/fr/language_specific_rules.py", line 1276, in is_potential_coreferring_noun_pair
    new_reverse_entity_noun_dictionary = {

The error does not occur with Python 3.9.
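For context, the operator in the failing line is the dict union operator, which was only introduced in Python 3.9 (PEP 584); a minimal sketch of the incompatibility and the 3.8-compatible equivalent:

a = {"x": 1}
b = {"y": 2}

merged = {**a, **b}  # dict unpacking: works on Python 3.5+
merged = a | b       # dict union (PEP 584): TypeError on Python 3.8, fine on 3.9+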

Would it be possible to fix this problem? (The project I want to use it in is still on Python 3.8.)

Best,
Jérôme

Potential degradation in more recent spaCy versions

Hi Richard,

I've been doing some tests comparing the performance of neuralcoref (on an older version of Python/spaCy) with coreferee for English, and I'm noticing some rather concerning performance degradations with newer spaCy versions. I'm not ready to share the neuralcoref/coreferee comparison report yet -- the data and tests need to be cleaned up -- but in the interim I've been inspecting coreferee's coreference chains across the following versions (both using coreferee 1.2.0):

  • spaCy 3.2.4, with en_core_web_md and en_core_web_lg
  • spaCy 3.3.1, with en_core_web_md and en_core_web_lg

I tried generating chains for the sentences below:

Victoria Chen, a well-known business executive, says she is 'really honoured' to see her pay jump to $2.3 million, as she became MegaBucks Corporation's first female executive. Her colleague and long-time business partner, Peter Zhang, says he is extremely pleased with this development. The firm's CEO, Lawrence Willis will be onboarding the new CFO in a few months. He said he is looking forward to the whole experience.
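(The actual test_coref.py is not included here; the following is a minimal sketch of what such a script might look like, reconstructed from the outputs below, so the command-line handling and the text assignment are assumptions.)

import sys
import spacy
import coreferee

# Load the model named on the command line, add coreferee and print the chains.
model_name = sys.argv[1] if len(sys.argv) > 1 else "en_core_web_md"
nlp = spacy.load(model_name)
print(f"Loaded spaCy language model: {model_name}")
nlp.add_pipe("coreferee")

text = "Victoria Chen, a well-known business executive, says she is ..."  # full paragraph as quoted above
doc = nlp(text)
print(doc._.coref_chains.print())  # coref_chains.print() returns None, hence the trailing "None" in the outputs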

spaCy 3.2.4, en_core_web_md

▶ python test_coref.py
Loaded spaCy language model: en_core_web_md
0: Chen(1), she(11), her(19), she(28), Her(37)
1: Corporation(31), firm(59)
2: Zhang(47), he(50)
3: Willis(64), He(76), he(78)
None

spaCy 3.3.1, en_core_web_md

▶ python test_coref.py
Loaded spaCy language model: en_core_web_md
0: Chen(1), she(11), her(19), she(28), Her(37)
1: Corporation(31), firm(59)
2: Zhang(47), he(50), He(76), he(78)
None

spaCy 3.2.4, en_core_web_lg

▶ python test_coref.py
Loaded spaCy language model: en_core_web_lg
0: Chen(1), she(11), her(19), she(28), Her(37)
1: Corporation(31), firm(59)
2: colleague(38), he(50), He(76), he(78)
None

spaCy 3.3.1, en_core_web_lg

▶ python test_coref.py
Loaded spaCy language model: en_core_web_lg
0: Chen(1), she(11), her(19), she(28), Her(37)
1: Corporation(31), firm(59)
2: colleague(38), he(50)
3: Willis(64), He(76), he(78)
None

In both cases, the en_core_web_lg language model returns a result that is considerably worse than the en_core_web_md model, which is itself quite surprising. I would expect the dependency parse from the large model to be far superior to the medium model's, so it should not produce such a noticeably different result. As can be seen, the en_core_web_lg output is missing entire named entities, and the total number of mentions in the chains is lower than what we get from the medium model.

Observation

The best result (in which all three named entities -- "Chen", "Zhang" and "Willis" -- are captured in the coref chains) is obtained with the smaller model (en_core_web_md) on spaCy 3.2.4, and not with the newest spaCy version and the largest model, which is rather counter-intuitive.

I understand that the most general guideline you can offer is that these sorts of examples are single cases and that, statistically, the models should be more or less comparable. But that is definitely not true in my own private tests (which I will attempt to share shortly): across a dataset of ~100 news articles, on which I perform a range of tasks including parsing, named entity recognition, coreference resolution and gender identification, I am noticing a recognizable drop in coreferee performance along both of these dimensions:

  • spaCy version (3.3.1 performs worse than 3.2.4, comparing the medium and large models head to head)
  • Language model size (medium performs better than large with respect to coreference results)

Again, I fully understand that the example above might be dismissed as a one-off, but I was wondering whether you have noticed anything similar in the accuracy numbers from your own tests. My concern is that the rules in coreferee's English module are not carrying over well to the new spaCy models, particularly v3.3.x, potentially due to internal changes made to the language models in the recent release.

The issue comparing neuralcoref (whose performance also seems to be better than coreferee's) is a totally different one and is unrelated to the one I've posted here. I'll do my best to clean up my comparison tests of neuralcoref and coreferee and document them (I'm currently trying to separate the different functions I'm performing for my own project, so that I document only the coreference resolution results as clearly as possible). Looking forward to hearing your thoughts!

Finetuning on my own data

Hi @richardpaulhudson,

Earlier I tried training my custom NER spaCy model on the LitBank dataset, which was working. But when I tried training on my own data, it seems that the coref_chains attribute doesn't mark any text as true. Can you help me? How can I proceed?
I have attached the self-annotated sample dataset too; can you check whether it is alright?

Thanks in advance!
(Link to custom dataset)
https://drive.google.com/drive/folders/1WzRogtvg81TMCHmVR0Kw4iqrbVWCFgO7?usp=sharing

pip subprocess to install build dependencies did not run successfully.

I'm new to Python, so please forgive any simple errors. I'm using spaCy 3.1.7 and Python 3.12.2 on 64-bit Windows. I have successfully installed spaCy and was able to run it without a problem, but I run into this error when trying to install coreferee. I can post more error logs if needed.

When I run:

python3 -m pip install coreferee

I get this error:

Collecting coreferee
  Using cached coreferee-1.1.3-py3-none-any.whl.metadata (2.2 kB)
Collecting spacy<3.2.0,>=3.1.0 (from coreferee)
  Using cached spacy-3.1.7.tar.gz (1.0 MB)
  Installing build dependencies ... error
  error: subprocess-exited-with-error

  × pip subprocess to install build dependencies did not run successfully.
  │ exit code: 1
  ╰─> [771 lines of output]
      Collecting setuptools
        Using cached setuptools-69.2.0-py3-none-any.whl.metadata (6.3 kB)
      Collecting cython<3.0,>=0.25
        Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB)
      Collecting cymem<2.1.0,>=2.0.2
        Using cached cymem-2.0.8-cp312-cp312-win_amd64.whl.metadata (8.6 kB)
      Collecting preshed<3.1.0,>=3.0.2
        Using cached preshed-3.0.9-cp312-cp312-win_amd64.whl.metadata (2.2 kB)
      Collecting murmurhash<1.1.0,>=0.28.0
        Using cached murmurhash-1.0.10-cp312-cp312-win_amd64.whl.metadata (2.0 kB)
      Collecting thinc<8.1.0,>=8.0.12
        Using cached thinc-8.0.17.tar.gz (189 kB)
        Installing build dependencies: started
        Installing build dependencies: finished with status 'done'
        Getting requirements to build wheel: started
        Getting requirements to build wheel: finished with status 'done'
        Installing backend dependencies: started
        Installing backend dependencies: finished with status 'done'
        Preparing metadata (pyproject.toml): started
        Preparing metadata (pyproject.toml): finished with status 'done'
      Collecting blis<0.8.0,>=0.4.0
        Using cached blis-0.7.11-cp312-cp312-win_amd64.whl.metadata (7.6 kB)
      Collecting pathy
        Using cached pathy-0.11.0-py3-none-any.whl.metadata (16 kB)
      Collecting numpy>=1.15.0
        Using cached numpy-1.26.4-cp312-cp312-win_amd64.whl.metadata (61 kB)
      Collecting wasabi<1.1.0,>=0.8.1 (from thinc<8.1.0,>=8.0.12)
        Using cached wasabi-0.10.1-py3-none-any.whl.metadata (28 kB)
      Collecting srsly<3.0.0,>=2.4.0 (from thinc<8.1.0,>=8.0.12)
        Using cached srsly-2.4.8-cp312-cp312-win_amd64.whl.metadata (20 kB)
      Collecting catalogue<2.1.0,>=2.0.4 (from thinc<8.1.0,>=8.0.12)
        Using cached catalogue-2.0.10-py3-none-any.whl.metadata (14 kB)
      Collecting pydantic!=1.8,!=1.8.1,<1.9.0,>=1.7.4 (from thinc<8.1.0,>=8.0.12)
        Using cached pydantic-1.8.2-py3-none-any.whl.metadata (103 kB)
      Collecting smart-open<7.0.0,>=5.2.1 (from pathy)
        Using cached smart_open-6.4.0-py3-none-any.whl.metadata (21 kB)
      Collecting typer<1.0.0,>=0.3.0 (from pathy)
        Using cached typer-0.9.0-py3-none-any.whl.metadata (14 kB)
      Collecting pathlib-abc==0.1.1 (from pathy)
        Using cached pathlib_abc-0.1.1-py3-none-any.whl.metadata (18 kB)
      Collecting typing-extensions>=3.7.4.3 (from pydantic!=1.8,!=1.8.1,<1.9.0,>=1.7.4->thinc<8.1.0,>=8.0.12)
        Using cached typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB)
      Collecting click<9.0.0,>=7.1.1 (from typer<1.0.0,>=0.3.0->pathy)
        Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
      Collecting colorama (from click<9.0.0,>=7.1.1->typer<1.0.0,>=0.3.0->pathy)
        Using cached colorama-0.4.6-py2.py3-none-any.whl.metadata (17 kB)
      Using cached setuptools-69.2.0-py3-none-any.whl (821 kB)
      Using cached Cython-0.29.37-py2.py3-none-any.whl (989 kB)
      Using cached cymem-2.0.8-cp312-cp312-win_amd64.whl (39 kB)
      Using cached preshed-3.0.9-cp312-cp312-win_amd64.whl (122 kB)
      Using cached murmurhash-1.0.10-cp312-cp312-win_amd64.whl (25 kB)
      Using cached blis-0.7.11-cp312-cp312-win_amd64.whl (6.6 MB)
      Using cached pathy-0.11.0-py3-none-any.whl (47 kB)
      Using cached pathlib_abc-0.1.1-py3-none-any.whl (23 kB)
      Using cached numpy-1.26.4-cp312-cp312-win_amd64.whl (15.5 MB)
      Using cached catalogue-2.0.10-py3-none-any.whl (17 kB)
      Using cached pydantic-1.8.2-py3-none-any.whl (126 kB)
      Using cached smart_open-6.4.0-py3-none-any.whl (57 kB)
      Using cached srsly-2.4.8-cp312-cp312-win_amd64.whl (478 kB)
      Using cached typer-0.9.0-py3-none-any.whl (45 kB)
      Using cached wasabi-0.10.1-py3-none-any.whl (26 kB)
      Using cached click-8.1.7-py3-none-any.whl (97 kB)
      Using cached typing_extensions-4.10.0-py3-none-any.whl (33 kB)
      Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
      Building wheels for collected packages: thinc
        Building wheel for thinc (pyproject.toml): started
        Building wheel for thinc (pyproject.toml): finished with status 'error'
        error: subprocess-exited-with-error

        Building wheel for thinc (pyproject.toml) did not run successfully.
        exit code: 1

Error with dictionaries with Python 3.8

Hi,
While testing coreferee in French on this simple example:

"Robin est un garçon, il est gentils. La Reine Elisabeth II est aussi gentille"

with this code:

import spacy
import coreferee

nlp = spacy.load("fr_core_news_lg")
nlp.add_pipe('coreferee')

text = "Robin est un garçon, il est gentils. La Reine Elisabeth II est aussi gentille"
doc = nlp(text)
doc._.coref_chains.print()

I get this error message:

Unexpected error in Coreferee annotating document, skipping ....
⚠ <class 'TypeError'>
⚠ unsupported operand type(s) for |: 'dict' and 'dict'

versions:
python==3.8.1
spacy==3.2.0
fr_core_news_lg==3.2.0
coreferee==1.3.1

I think this issue is due to a dict operation that is not yet supported in Python 3.8. The syntax used to merge two dicts needs to be changed as follows:

a = {"exemple_1": 5, "exemple_2": 3}
b = {"exemple_2": 5, "exemple_3": 3}
c = {**a, **b}  # dict unpacking, works on Python 3.5+

instead of:

a = {"exemple_1": 5, "exemple_2": 3}
b = {"exemple_2": 5, "exemple_3": 3}
c = a | b  # dict union operator, only available from Python 3.9

For French, this change needs to be applied at least in the following file:

"coreferee/lang/fr/language_specific_rules.py", line 1276,

After changing this file, the bug disappears in my example, but the same pattern might be present for other languages or use cases.

Hope this helps,
Thank you for your amazing work,
