
cvangysel / pytrec_eval


pytrec_eval is an Information Retrieval evaluation tool for Python, based on the popular trec_eval.

Home Page: http://ilps.science.uva.nl/

License: MIT License

Languages: Python 49.46%, C++ 50.54%
Topics: information-retrieval, evaluation

pytrec_eval's Introduction

pytrec_eval

pytrec_eval is a Python interface to TREC's evaluation tool, trec_eval. It is an attempt to stop the cultivation of custom implementations of Information Retrieval evaluation measures for the Python programming language.

Requirements

The module was developed using Python 3.5. You need a Python distribution that comes with development headers. In addition to the default Python modules, numpy and scipy are required.

Installation

Installation is simple and should be relatively painless if your Python environment is functioning correctly (see below for FAQs).

pip install pytrec_eval

Examples

Check out the examples that simulate the standard trec_eval front-end and that compute statistical significance between two runs.
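
As a rough sketch of the latter (not the bundled example itself; it simply assumes that per-query scores plus scipy's paired t-test are sufficient for your comparison):

import pytrec_eval
from scipy import stats

def paired_ttest(qrel, run_a, run_b, measure='map'):
    # Score both runs against the same judgements and measure.
    evaluator = pytrec_eval.RelevanceEvaluator(qrel, {measure})
    results_a = evaluator.evaluate(run_a)
    results_b = evaluator.evaluate(run_b)

    # Pair per-query scores on the queries present in both result sets.
    query_ids = sorted(set(results_a) & set(results_b))
    scores_a = [results_a[qid][measure] for qid in query_ids]
    scores_b = [results_b[qid][measure] for qid in query_ids]

    return stats.ttest_rel(scores_a, scores_b)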

To get a grasp of how simple the module is to use, check this out:

import pytrec_eval
import json

qrel = {
    'q1': {
        'd1': 0,
        'd2': 1,
        'd3': 0,
    },
    'q2': {
        'd2': 1,
        'd3': 1,
    },
}

run = {
    'q1': {
        'd1': 1.0,
        'd2': 0.0,
        'd3': 1.5,
    },
    'q2': {
        'd1': 1.5,
        'd2': 0.2,
        'd3': 0.5,
    }
}

evaluator = pytrec_eval.RelevanceEvaluator(
    qrel, {'map', 'ndcg'})

print(json.dumps(evaluator.evaluate(run), indent=1))

The above snippet will return a data structure that contains the requested evaluation measures for queries q1 and q2:

{
    'q1': {
        'ndcg': 0.5,
        'map': 0.3333333333333333
    },
    'q2': {
        'ndcg': 0.6934264036172708,
        'map': 0.5833333333333333
    }
}

For more like this, see the example that uses parametrized evaluation measures.
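
A minimal sketch of what such parameterization looks like, assuming the dot-separated cutoff syntax that also appears in the issue reports further down (e.g. ndcg_cut.5,10):

import pytrec_eval

# Reusing the qrel and run dicts from the snippet above.
evaluator = pytrec_eval.RelevanceEvaluator(
    qrel, {'map', 'ndcg_cut.5,10'})

# The parameterized measure expands into one result key per cutoff,
# e.g. 'ndcg_cut_5' and 'ndcg_cut_10' for every query.
print(evaluator.evaluate(run))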

Frequently Asked Questions

Since the module's initial release, no questions have been asked so frequently that they deserve a spot in this section.

Citation

If you use pytrec_eval to produce results for your scientific publication, please refer to our SIGIR paper:

@inproceedings{VanGysel2018pytreceval,
  title={Pytrec\_eval: An Extremely Fast Python Interface to trec\_eval},
  author={Van Gysel, Christophe and de Rijke, Maarten},
  publisher={ACM},
  booktitle={SIGIR},
  year={2018},
}

License

pytrec_eval is licensed under the MIT license. Please note that trec_eval is licensed separately. If you modify pytrec_eval in any way, please link back to this repository.

pytrec_eval's People

Contributors

cmacdonald, cvangysel, nicoweidmann, ricocotam, seanmacavaney


pytrec_eval's Issues

Memory leak problem

Hi,

Thanks for sharing the tool! It is indeed a very useful one!

I noticed a memory leak problem when running evaluator.evaluate. It allocates memory slightly higher than the size of the run file, but never releases the memory. To reproduce it, I attached a simple program, with sample qrel and run files.

pytrec_eval_test.zip

Here are the results of memory profiling. As you can see, del res does not release the memory, leading to the allocation of a large amount of memory after several runs:

Line # Mem usage Increment Line Contents

79   55.574 MiB   55.574 MiB   @profile
80                             def runme():
81                                 
82   57.930 MiB    2.355 MiB       qrel = load_reference('qrels.txt')
83  106.215 MiB   48.285 MiB       run = load_candidate('run.txt')
84                                 
85                                 
86  107.570 MiB    1.355 MiB       evaluator = pytrec_eval.RelevanceEvaluator(qrel, {'map', 'ndcg'})
87                             
88  107.570 MiB    0.000 MiB       N = 100
89 1320.098 MiB    0.000 MiB       for i in range(1,N):
90 1320.098 MiB   18.855 MiB           res = evaluator.evaluate(run)
91 1320.098 MiB    0.000 MiB           del res

The problem made me go back to old-style ad-hoc running of trec_eval. It would be great to get this resolved, and I am happy to help.

Best,
Navid
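
A possible mitigation until the leak itself is fixed (a sketch, not a fix: it assumes the qrel/run dicts are picklable and that paying the cost of a worker process per evaluation is acceptable) is to run each evaluation in a short-lived subprocess, so that whatever the extension holds on to is reclaimed when the worker exits:

import multiprocessing as mp

import pytrec_eval

def _evaluate_once(qrel, run, measures):
    evaluator = pytrec_eval.RelevanceEvaluator(qrel, set(measures))
    return evaluator.evaluate(run)

def evaluate_in_subprocess(qrel, run, measures=('map', 'ndcg')):
    # Memory held by the extension is reclaimed when the worker process exits.
    with mp.Pool(processes=1) as pool:
        return pool.apply(_evaluate_once, (qrel, run, measures))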

Why do relevance scores have to be integers?

For some metrics like nDCG, it is plausible to have float relevance scores.
Is there a way to use pytrec_eval with floating-point relevance scores?

The following sample:

import pytrec_eval
import json

qrel = {
    'q1': {
        'd1': 0.2,
        'd2': 1.5,
        'd3': 0,
    },
    'q2': {
        'd2': 2.5,
        'd3': 1,
    },
}

run = {
    'q1': {
        'd1': 1.0,
        'd2': 0.0,
        'd3': 1.5,
    },
    'q2': {
        'd1': 1.5,
        'd2': 0.2,
        'd3': 0.5,
    }
}

evaluator = pytrec_eval.RelevanceEvaluator(
    qrel, {'ndcg'})

print(json.dumps(evaluator.evaluate(run), indent=1))

Raised the following exception:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-9cc469855e77> in <module>
     28 
     29 evaluator = pytrec_eval.RelevanceEvaluator(
---> 30     qrel, {'ndcg'})
     31 
     32 print(json.dumps(evaluator.evaluate(run), indent=1))

TypeError: Expected relevance to be integer.
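
The judgements currently do have to be integers. A hedged workaround is to map the float grades onto an integer scale before constructing the evaluator; a minimal sketch, where the scale factor is an arbitrary choice and the resulting integer grades still feed into the gain computation, so pick a mapping that preserves the ordering you care about:

import pytrec_eval

def discretize_qrel(qrel, scale=10):
    # Map float grades onto an integer scale, e.g. 1.5 -> 15 with scale=10.
    return {
        query_id: {doc_id: int(round(grade * scale))
                   for doc_id, grade in judgements.items()}
        for query_id, judgements in qrel.items()
    }

# Reusing the float-graded qrel and the run from the snippet above.
evaluator = pytrec_eval.RelevanceEvaluator(discretize_qrel(qrel), {'ndcg'})
print(evaluator.evaluate(run))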

Could you please explain how the evaluation works when the grades are taken from two different ranges?

Dear developers,

I'm using your tool to compute evaluation metrics for information retrieval research, and I got a bit stuck on some behaviour that doesn't look intuitive to me. Could you please help me figure out what's going on?

Question:
Suppose I have a qrel file where the grades are all integers and can take the values {0, 1, 2, 3}. It is the initial qrel file for RelevanceEvaluator.

To evaluate the results, I take another file where the grades are floats and vary from 0 to 100, i.e. [0, 100]. How does the RelevanceEvaluator behave in that case? Does it apply any normalization?

  1. I tried relabelling all {2, 3} values to {1} in the initial qrel file and initialized the RelevanceEvaluator with that binary-graded qrel file. And nothing changed... Is that OK?

  2. I used one more run file for evaluation, where the grades are floats and vary from 0 to 2, i.e. [0, 2], and the results changed significantly with binary initial grades compared to {0, 1, 2, 3} initial grades. Why?

Thanks in advance.

Metrics missing when running evaluator twice

Kudos for the great interface!

I have been running some evaluations where I obtain the NDCG@10 and NDCG@100 metrics. I noticed that if I run the evaluator twice, the second time only one of the metrics appears. Here is a simple example, modified from simple.py:

import pytrec_eval
import json

qrel = {
    'q1': {
        'd1': 0,
        'd2': 1,
        'd3': 0,
    },
    'q2': {
        'd2': 1,
        'd3': 1,
    },
}

run = {
    'q1': {
        'd1': 1.0,
        'd2': 0.0,
        'd3': 1.5,
    },
    'q2': {
        'd1': 1.5,
        'd2': 0.2,
        'd3': 0.5,
    }
}

evaluator = pytrec_eval.RelevanceEvaluator(
    qrel, {'ndcg_cut_10', 'ndcg_cut_100'})

print(json.dumps(evaluator.evaluate(run), indent=1))
# Just calling .evaluate() again, but run could be different
print(json.dumps(evaluator.evaluate(run), indent=1))

Output:

{
 "q1": {
  "ndcg_cut_10": 0.5,
  "ndcg_cut_100": 0.5
 },
 "q2": {
  "ndcg_cut_10": 0.6934264036172708,
  "ndcg_cut_100": 0.6934264036172708
 }
}
{
 "q1": {
  "ndcg_cut_10": 0.5
 },
 "q2": {
  "ndcg_cut_10": 0.6934264036172708
 }
}
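
Until the underlying bug is fixed, one workaround (a sketch, not an endorsed fix) is to construct a fresh RelevanceEvaluator for every call rather than reusing a single instance:

import json

import pytrec_eval

MEASURES = {'ndcg_cut_10', 'ndcg_cut_100'}

def evaluate_fresh(qrel, run):
    # Building a new evaluator for every call sidesteps the metrics that go
    # missing on repeated .evaluate() invocations of a single instance.
    evaluator = pytrec_eval.RelevanceEvaluator(qrel, MEASURES)
    return evaluator.evaluate(run)

# Both calls now report ndcg_cut_10 and ndcg_cut_100.
print(json.dumps(evaluate_fresh(qrel, run), indent=1))
print(json.dumps(evaluate_fresh(qrel, run), indent=1))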

Tips to speed up the process

Hi,
Do you have any tips for faster metric computation? I'm using MAP and it's really slow, and I need to compute it quite often.

new release?

Hi guys,

Any timeline for deploying a new version to PyPI?

pytrec_eval scores are not consistent in Colab

The same file and the same code give different results on my laptop than on Google Colab (the metrics are P_1, P_3, recip_rank, map and ndcg). I checked the precision (running everything as np.float64), but that is not the issue.

Any suggestions?

Pip install error on windows 10, possible hardcoded path

Hello,
I got a series of errors like this:

File "C:\Users\myusername\AppData\Local\Continuum\anaconda3\lib\distutils\util.py", line 111, in convert_path raise ValueError("path '%s' cannot be absolute" % pathname) ValueError: path '/Users/cvangysel/Projects/pytrec_eval/trec_eval/convert_zscores.c' cannot be absolute

when I ran
pip install pytrec-eval

It seems that the absolute path is hardcoded somewhere.
I modified the util.py file to ignore the prefix '/Users/cvangysel/Projects/pytrec_eval/trec_eval/' and the installation seems to be successful (the example on README.md ran correctly).

use complete set of queries from relevance judgments (-c)

The -c option in trec_eval does the following:

 --complete_rel_info_wanted:
 -c: Average over the complete set of queries in the relevance judgements  
     instead of the queries in the intersection of relevance judgements 
     and results.  Missing queries will contribute a value of 0 to all 
     evaluation measures (which may or may not be reasonable for a  
     particular evaluation measure, but is reasonable for standard TREC 
     measures.) Default is off.

Although the default in trec_eval is off, I think it would be prudent to default this value to on (and maybe give the user an option to turn it off). Without this, a user may accidentally average over an incomplete set of queries, e.g., if their engine doesn't return any results for a given query.

It doesn't look like this is as simple as setting:

self->epi_.average_complete_flag = 1;

because the setting only affects trec_eval's averages, not the individual per-query scores. A fix could be modifying the run dict before sending it to the relevance assessor, adding in any missing queries pointing to empty dicts.
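
A sketch of that suggested workaround on the Python side (a hypothetical helper, not part of pytrec_eval, assuming qrel, run and an evaluator as in the earlier examples):

def pad_run_with_missing_queries(qrel, run):
    # Judged queries that are absent from the run contribute an empty result
    # list, score 0 on the standard measures, and are still averaged over.
    padded = dict(run)
    for query_id in qrel:
        padded.setdefault(query_id, {})
    return padded

results = evaluator.evaluate(pad_run_with_missing_queries(qrel, run))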

windows 10 installation

When I try to install pytrec_eval via pip install pytrec_eval, I am getting this error.
ERROR: Could not build wheels for pytrec_eval, which is required to install pyproject.toml-based projects
Windows 10, Python 3.9.0.

Fails to install on Mac because the latest macOS supports libc++ in place of stdlibc++

When installing with pip or from source on the latest macOS, the build shows the warning:

"warning: include path for stdlibc++ headers not found; pass '-std=libc++' on the command line to use the libc++ standard library instead".

and later shows the error:

"src/pytrec_eval.cpp:19:10: fatal error: 'algorithm' file not found"

It seems that in the latest macOS, stdlibc++ is no longer supported and I am expected to use libc++?

iprec_at_recall trec_eval: duplicate cutoffs detected

I am running an experiment in PyTerrier and I am trying to measure iprec_at_recall at all recall values. However, I get the following error when I run the experiment (it works fine with other measures such as ndcg, etc.).

trec_eval: duplicate cutoffs detected
python3: src/pytrec_eval.cpp:634: PyObject* RelevanceEvaluator_evaluate(RelevanceEvaluator*, PyObject*): Assertion 'te_trec_measures[measure_idx]->eval_index >=0' failed.

Range of doc score in run file

I am wondering whether the document scores in the run file should be within [0, 1], or whether they can be any value, including negative floating-point numbers.

List of supported metrics

Where can I find the list of metrics and their descriptions that pytrec_eval supports? I tried looking at the trec_eval repo as well, and it doesn't seem to list the metric names and associated descriptions in a clear place. For example, I tried passing "precision" and got an unsupported-measure error.
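
The module does expose the set of measure names it understands (pytrec_eval.supported_measures, which is also referenced in the empty-input report below), so a quick way to inspect them is:

import pytrec_eval

# Print the measure names trec_eval (and thus pytrec_eval) understands.
# Descriptions still have to be looked up in the trec_eval sources.
for measure in sorted(pytrec_eval.supported_measures):
    print(measure)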

Does not install with pip on the Mac (c++ code does not compile with clang)

The c++ code does not compile on the Mac when installing with pip install pytrec_eval

Output:

src/pytrec_eval.cpp:637:30: error: expected expression
        RelevanceEvaluatorType = {
                                 ^
    11 warnings and 2 errors generated.
    error: command '/usr/bin/clang' failed with exit status 1

As a workaround, I made it work by installing gcc/g++ with homebrew and setting the environment so that pip uses gcc instead of clang.


error when installing using pip

Hello everyone. I'm using Windows and trying to install python-terrier, and since pytrec_eval is one of its dependencies, the following error appears when installing it and breaks the whole installation:
raise ValueError("path '%s' cannot be absolute" % pathname) ValueError: path '/tmp/tmpgd7izyps/trec_eval-9.0.8/convert_zscores.c' cannot be absolute
Any solutions?

pytrec-eval-terrier 0.5.1

When trying to install pytrec-eval-terrier=0.5.1, I get the below error:

Collecting pytrec-eval-terrier==0.5.1
  Using cached pytrec_eval-terrier-0.5.1.tar.gz (16 kB)
  WARNING: Generating metadata for package pytrec-eval-terrier produced metadata for project name pytrec-eval. Fix your #egg=pytrec-eval-terrier fragments.
WARNING: Discarding https://files.pythonhosted.org/packages/6a/38/d723c26698e517f450ea905f633474e8ba714e23fb67a8c2aa34b803efbf/pytrec_eval-terrier-0.5.1.tar.gz#sha256=55e7f5b2c83f681ac262c975f64fdcbece64fed35508d410e56d2c52f262ebfe (from https://pypi.org/simple/pytrec-eval-terrier/) (requires-python:>=3). Requested pytrec-eval from https://files.pythonhosted.org/packages/6a/38/d723c26698e517f450ea905f633474e8ba714e23fb67a8c2aa34b803efbf/pytrec_eval-terrier-0.5.1.tar.gz#sha256=55e7f5b2c83f681ac262c975f64fdcbece64fed35508d410e56d2c52f262ebfe (from ir-measures==0.1.3) has inconsistent name: filename has 'pytrec-eval-terrier', but metadata has 'pytrec-eval'
ERROR: Could not find a version that satisfies the requirement pytrec-eval-terrier==0.5.1 (from ir-measures) (from versions: 0.5.1)
ERROR: No matching distribution found for pytrec-eval-terrier==0.5.1​

It seems the names are inconsistent (perhaps in setup.py): pytrec-eval-terrier vs. pytrec-eval.

How can I resolve this?

kernel will die with empty inputs

Providing an empty qrel/run leads to a kernel crash!

qrel = {}
run = {}
evaluator = pytrec_eval.RelevanceEvaluator(qrel, pytrec_eval.supported_measures)
results = evaluator.evaluate(run)

This should be caught in some way.
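
Until the extension handles this itself, a defensive guard on the caller's side avoids the crash; a minimal sketch:

import pytrec_eval

def safe_evaluate(qrel, run, measures=pytrec_eval.supported_measures):
    # Refuse empty inputs instead of letting the extension crash the kernel.
    if not qrel:
        raise ValueError('qrel must contain at least one judged query.')
    if not run:
        raise ValueError('run must contain at least one query.')
    return pytrec_eval.RelevanceEvaluator(qrel, measures).evaluate(run)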

Error on python setup.py install

Good Morning,

When I run "python setup.py install", I get the following error:
"error: command 'gcc' failed with exit status 1."
I would be grateful for any pointers as to what the problem is.

The installation log:
install_log.txt

Thanks in advance,
Ortal

Matching scores cannot be ints

This works:
evaluator.evaluate({'151': {'clueweb09-en0027-05-20087': float(10000)}})
This does not:
evaluator.evaluate({'151': {'clueweb09-en0027-05-20087': 10000}})

There's no valid reason why a matching score cannot be an int, so I think the check at

if (!PyFloat_Check(inner_value)) {

is too strict.

The error message is not too clear:

TypeError Traceback (most recent call last)
in ()
----> 1 evaluator.evaluate({'151': {'clueweb09-en0027-05-20087': 10000}})

TypeError: Unable to extract query/object scores.

I think that the exact problem is being overwritten by the calling code that checks the return value.
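
Until the type check is relaxed, casting the run scores to float works around it; a sketch (reusing the evaluator from above):

def run_with_float_scores(run):
    # The extension currently accepts only float scores, so cast any ints.
    return {
        query_id: {doc_id: float(score) for doc_id, score in docs.items()}
        for query_id, docs in run.items()
    }

evaluator.evaluate(run_with_float_scores(
    {'151': {'clueweb09-en0027-05-20087': 10000}}))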

CI/CD friendly wheel

Hi!

This outbound call prevents consumers of this package from using it in CI/CD pipelines that block egress.

Would you consider publishing a pre-built wheel that does not require this download?

RelevanceEvaluator breaks when evaluating on multiple runs

As identified by @grodino in terrierteam/ir_measures#42

In short, when a measure with multiple cutoffs is provided to RelevanceEvaluator, only one cutoff is returned on subsequent invocations.

>>> import pytrec_eval
>>> qrel = {
>>>   '0': {'D0': 0, 'D1': 1, 'D2': 1, 'D3': 1, 'D4': 0},
>>>   '1': {'D0': 1, 'D3': 2, 'D5': 2}
>>> }
>>> run = {
>>>   '0': {'D0': 0.8, 'D2': 0.7, 'D1': 0.3, 'D3': 0.4, 'D4': 0.1},
>>>   '1': {'D1': 0.8, 'D3': 0.7, 'D4': 0.3, 'D2': 0.4, 'D10': 8.}
>>> }
>>> evaluator = pytrec_eval.RelevanceEvaluator(qrel, {'map', 'ndcg_cut.10,100,2'})
>>> print(evaluator.evaluate(run))
{'0': {'map': 0.6388888888888888, 'ndcg_cut_2': 0.38685280723454163, 'ndcg_cut_10': 0.7328286204777911, 'ndcg_cut_100': 0.7328286204777911}, '1': {'map': 0.1111111111111111, 'ndcg_cut_2': 0.0, 'ndcg_cut_10': 0.26582598262939583, 'ndcg_cut_100': 0.26582598262939583}}
>>> print(evaluator.evaluate(run))
{'0': {'map': 0.6388888888888888, 'ndcg_cut_10': 0.7328286204777911}, '1': {'map': 0.1111111111111111, 'ndcg_cut_10': 0.26582598262939583}}
# ^ second invocation is missing ndcg_cut_2, ndcg_cut_100

Issues when installing on Mac m1

Hello,

I tried installing pytrec_eval on an M1 and the installation works, but when I try to import it I get this issue.

Traceback (most recent call last):
  File "/Users/tcarvalho/Library/Caches/pypoetry/virtualenvs/evaluation-suite-iSLcvJiA-py3.10/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3508, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-b628730d1a82>", line 1, in <module>
    import pytrec_eval
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/Users/tcarvalho/Library/Caches/pypoetry/virtualenvs/evaluation-suite-iSLcvJiA-py3.10/lib/python3.10/site-packages/pytrec_eval/__init__.py", line 7, in <module>
    from pytrec_eval_ext import RelevanceEvaluator as _RelevanceEvaluator
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
ImportError: dlopen(/Users/tcarvalho/Library/Caches/pypoetry/virtualenvs/evaluation-suite-iSLcvJiA-py3.10/lib/python3.10/site-packages/pytrec_eval_ext.cpython-310-darwin.so, 0x0002): tried: '/Users/tcarvalho/Library/Caches/pypoetry/virtualenvs/evaluation-suite-iSLcvJiA-py3.10/lib/python3.10/site-packages/pytrec_eval_ext.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/tcarvalho/Library/Caches/pypoetry/virtualenvs/evaluation-suite-iSLcvJiA-py3.10/lib/python3.10/site-packages/pytrec_eval_ext.cpython-310-darwin.so' (no such file), '/Users/tcarvalho/Library/Caches/pypoetry/virtualenvs/evaluation-suite-iSLcvJiA-py3.10/lib/python3.10/site-packages/pytrec_eval_ext.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))

My understanding is that the underlying C extension isn't built for M1. I tried installing directly from the tar file, but I have the same issue.

For those with SSL issues, here is the fix:

Here's the error message:

urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:1131)>
      Fetching trec_eval from http://github.com/usnistgov/trec_eval/archive/v9.0.8.tar.gz.

On our proxy we only have HTTP, and HTTPS is passed through it; I have read that this causes a problem with newer urllib3.
The fix creates an unverified connection, without SSL certificate checks: edit the file /pytrec_eval/setup.py as follows.

import ssl
import urllib.request

# Build an SSL context that skips certificate verification.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

response = urllib.request.urlopen(REMOTE_TREC_EVAL_URI, context=ctx)

Qrel relevance scores between 1 and 5

Is it all right to set qrel relevance scores to something between 1 and 5, where 1 means less relevant and 5 means highly relevant?
I'm going to use the ndcg measure.
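
Integer grades on a 1-5 scale are accepted as-is; the grades just need to be integers, with larger values meaning more relevant. A minimal sketch, with judgements and scores made up purely for illustration:

import pytrec_eval

qrel = {'q1': {'d1': 5, 'd2': 3, 'd3': 1}}   # 5 = highly relevant
run = {'q1': {'d1': 0.9, 'd2': 0.7, 'd3': 0.4}}

evaluator = pytrec_eval.RelevanceEvaluator(qrel, {'ndcg'})
print(evaluator.evaluate(run))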

python 3.7 support

I get the following error when I attempt to install under python 3.7:

src/pytrec_eval.cpp:150:72: error: invalid conversion from ‘const char*’ to ‘char*’ [-fpermissive]
                 query_document_pairs[pair_idx].docno = PyUnicode_AsUTF8(inner_key);
                                                        ~~~~~~~~~~~~~~~~^~~~~~~~~~~
error: command 'gcc' failed with exit status 1

Steps to reproduce:

conda create -n py37 python=3.7
source activate py37
pip install pytrec_eval

Works fine under python 3.6.

Custom k for cut metrics

Hi,

Is there any way to specify custom k values for map_cut, ndcg_cut, etc., instead of the default ones?

It is definitely possible with trec_eval, but I don't see how to do it with the current interface (or am I missing something?).

Thanks

Thibault
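
One approach that appears to work is the dot-separated cutoff syntax that also shows up in the multiple-runs issue further down: append the desired cutoffs to the measure name. A sketch, reusing the qrel and run dicts from the README example:

import pytrec_eval

# Custom cutoffs for cut measures; the results then contain e.g.
# 'map_cut_7', 'map_cut_25', 'ndcg_cut_7' and 'ndcg_cut_25'.
evaluator = pytrec_eval.RelevanceEvaluator(
    qrel, {'map_cut.7,25', 'ndcg_cut.7,25'})
print(evaluator.evaluate(run))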

Faulty Scores Generated by Evaluator in Presence of Empty Relevance Sets

When evaluating multiple query relevance (qrel) sets with pytrec_eval, incorrect scores are generated when one or more qrel sets are empty. For example:

qrel = {
    'q1': {
        'd1': 0,
        'd2': 1,
        'd3': 0,
    },
    'q2': {
    },
    'q3': {
        'd2': 1,
        'd3': 1,
    },
}

evaluator = pytrec_eval.RelevanceEvaluator(qrel, {'map', 'ndcg'})
evaluator.evaluate(some_valid_b)

In this case, the evaluator produces erroneous scores for 'q2' and for the queries after it (i.e. 'q3') without raising any warning, which could mislead users.
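
Until the evaluator validates its input, dropping (or explicitly rejecting) empty per-query judgement sets before constructing it avoids the silent corruption; a sketch:

def drop_empty_qrels(qrel):
    # Queries without any judgements cannot be scored meaningfully;
    # exclude them (or raise, if failing loudly is preferable).
    return {query_id: judgements
            for query_id, judgements in qrel.items() if judgements}

evaluator = pytrec_eval.RelevanceEvaluator(drop_empty_qrels(qrel), {'map', 'ndcg'})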

issue with pip install in mac

Hi there,
I am getting the following error when I try to install pytrec_eval, either using pip install or when trying to install python-terrier:


Collecting pytrec_eval
Using cached pytrec_eval-0.4.tar.gz (11 kB)
ERROR: Command errored out with exit status 1:
command: /Users/ali.vahid/project_pyTerrier/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/pb/b4bbdkcn11x0b_2nsvl7kzjh0000gn/T/pip-install-os9v0s7h/pytrec-eval/setup.py'"'"'; file='"'"'/private/var/folders/pb/b4bbdkcn11x0b_2nsvl7kzjh0000gn/T/pip-install-os9v0s7h/pytrec-eval/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/pb/b4bbdkcn11x0b_2nsvl7kzjh0000gn/T/pip-install-os9v0s7h/pytrec-eval/pip-egg-info
cwd: /private/var/folders/pb/b4bbdkcn11x0b_2nsvl7kzjh0000gn/T/pip-install-os9v0s7h/pytrec-eval/
Complete output (43 lines):
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 1319, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1230, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1276, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1225, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1004, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 944, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1399, in connect
self.sock = self._context.wrap_socket(self.sock,
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/ssl.py", line 1040, in _create
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/private/var/folders/pb/b4bbdkcn11x0b_2nsvl7kzjh0000gn/T/pip-install-os9v0s7h/pytrec-eval/setup.py", line 28, in <module>
    response = urllib.request.urlopen(REMOTE_TREC_EVAL_ZIP)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 542, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 502, in _call_chain
    result = func(*args)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 1362, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 1322, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)>
Fetching trec_eval from https://github.com/usnistgov/trec_eval/archive/v9.0.5.zip.
----------------------------------------

ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
