I am wondering if the code for inference/evaluation on the BEIR datasets is available or planned to be shared.
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

#### Download scifact.zip dataset and unzip the dataset
dataset = "scifact"
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{}.zip".format(dataset)
dataset_dir = 'experiments/datasets/beir/'
data_path = util.download_and_unzip(url, dataset_dir)
#### Provide the data_path where scifact has been downloaded and unzipped
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
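For anyone unfamiliar with what `GenericDataLoader` returns: the three objects are plain dictionaries in the standard BEIR format. A minimal sketch with hypothetical toy values (the real dicts are loaded from the unzipped scifact files):

```python
# Shapes of the objects GenericDataLoader returns (toy values for illustration):
corpus = {"doc1": {"title": "A title", "text": "Body text of the document."}}  # doc_id -> fields
queries = {"q1": "example query text"}                                         # query_id -> text
qrels = {"q1": {"doc1": 1}}                                                    # query_id -> {doc_id: relevance}

# The retriever scores every (query, doc) pair and evaluation compares
# the ranked results against qrels.
assert set(qrels["q1"]) <= set(corpus)
```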
#### Load the SBERT model and retrieve using dot-product similarity
model = DRES(models.SentenceBERT("OpenMatch/cocodr-base-msmarco"), batch_size=16)
retriever = EvaluateRetrieval(model, score_function="dot")
results = retriever.retrieve(corpus, queries)
#### Evaluate your model with NDCG@k, MAP@k, Recall@k and Precision@k where k = [1, 3, 5, 10, 100, 1000]
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
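For reference, the NDCG@k numbers reported below can also be reproduced directly from the qrels/results dictionaries. A minimal self-contained sketch (binary relevance, hypothetical toy data; BEIR itself computes this via pytrec_eval):

```python
import math

def ndcg_at_k(qrels, results, k):
    """Mean NDCG@k over queries. qrels: query_id -> {doc_id: relevance};
    results: query_id -> {doc_id: retrieval score}."""
    per_query = []
    for qid, doc_scores in results.items():
        rels = qrels.get(qid, {})
        # rank retrieved docs by score, keep top k
        ranked = sorted(doc_scores, key=doc_scores.get, reverse=True)[:k]
        dcg = sum(rels.get(d, 0) / math.log2(i + 2) for i, d in enumerate(ranked))
        # ideal DCG: relevances sorted descending
        ideal = sorted(rels.values(), reverse=True)[:k]
        idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
        per_query.append(dcg / idcg if idcg > 0 else 0.0)
    return sum(per_query) / len(per_query) if per_query else 0.0

# toy data (hypothetical, not from scifact)
toy_qrels = {"q1": {"d1": 1, "d2": 1}}
toy_results = {"q1": {"d1": 0.9, "d3": 0.8, "d2": 0.7}}
print(ndcg_at_k(toy_qrels, toy_results, 3))
```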
2022-11-08 11:37:22 - NDCG@1: 0.1467
2022-11-08 11:37:22 - NDCG@3: 0.2189
2022-11-08 11:37:22 - NDCG@5: 0.2352
2022-11-08 11:37:22 - NDCG@10: 0.2535
2022-11-08 11:37:22 - NDCG@100: 0.3066
2022-11-08 11:37:22 - NDCG@1000: 0.3415