Topic: evaluation-metrics (Goto Github)
Something interesting about evaluation-metrics
evaluation-metrics,Python SDK for agent evals and observability
Organization: agentops-ai
Home Page: https://agentops.ai
evaluation-metrics,⚡️ A Blazing-Fast Python Library for Ranking Evaluation, Comparison, and Fusion 🐍
User: amenra
Home Page: https://amenra.github.io/ranx
evaluation-metrics,Python SDK for running evaluations on LLM generated responses
Organization: athina-ai
Home Page: https://docs.athina.ai
evaluation-metrics,A Python wrapper for the ROUGE summarization evaluation package
User: bheinzerling
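Several entries in this list wrap or reimplement ROUGE. As a rough orientation for readers unfamiliar with the metric, the following is a minimal sketch of ROUGE-1 (unigram overlap with clipped counts); it is illustrative only and is not the code of the wrapper above, which calls the original Perl ROUGE package.

```python
from collections import Counter

def rouge1(reference, hypothesis):
    """ROUGE-1 precision/recall/F1 from clipped unigram overlap."""
    ref = Counter(reference.lower().split())
    hyp = Counter(hypothesis.lower().split())
    # Each hypothesis unigram is credited at most as often as it occurs in the reference.
    overlap = sum(min(ref[w], hyp[w]) for w in hyp)
    precision = overlap / max(sum(hyp.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

p, r, f = rouge1("the cat sat on the mat", "the cat on mat")
# p = 1.0, r = 4/6, f = 0.8
```

Full ROUGE implementations add stemming, ROUGE-2/ROUGE-L variants, and bootstrap confidence intervals on top of this core idea.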
evaluation-metrics,CLEval: Character-Level Evaluation for Text Detection and Recognition Tasks
Organization: clovaai
evaluation-metrics,Code base for the precision, recall, density, and coverage metrics for generative models. ICML 2020.
Organization: clovaai
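The precision/recall family of generative-model metrics above is built on k-NN manifold estimation: a generated sample counts as "precise" if it falls inside the k-NN ball of some real sample, and vice versa for recall. A minimal NumPy sketch of that core idea (function names are illustrative; the official repository adds density and coverage and uses embedded features, not raw samples):

```python
import numpy as np

def knn_radii(x, k):
    """Distance from each point to its k-th nearest neighbor (excluding itself)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the zero self-distance

def precision_recall(real, fake, k=3):
    r_real = knn_radii(real, k)
    r_fake = knn_radii(fake, k)
    d = np.linalg.norm(fake[:, None, :] - real[None, :, :], axis=-1)
    # Precision: fraction of fake points inside at least one real k-NN ball.
    precision = (d <= r_real[None, :]).any(axis=1).mean()
    # Recall: fraction of real points inside at least one fake k-NN ball.
    recall = (d.T <= r_fake[None, :]).any(axis=1).mean()
    return float(precision), float(recall)
```

When the fake samples coincide with the real ones, both scores are 1; as the fake distribution drifts off the real manifold, precision drops first.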
evaluation-metrics,:gift:[ChatGPT4MTevaluation] ErrorAnalysis Prompt for MT Evaluation in ChatGPT
User: coldmist-lu
Home Page: https://arxiv.org/pdf/2303.13809.pdf
evaluation-metrics,The LLM Evaluation Framework
Organization: confident-ai
Home Page: https://docs.confident-ai.com/
evaluation-metrics,An implementation of a full named-entity evaluation metrics based on SemEval'13 Task 9 - not at tag/token level but considering all the tokens that are part of the named-entity
User: davidsbatista
evaluation-metrics,A fast implementation of bss_eval metrics for blind source separation
User: fakufaku
Home Page: https://fast-bss-eval.readthedocs.io/en/latest/
evaluation-metrics,Easier Automatic Sentence Simplification Evaluation
User: feralvam
evaluation-metrics,[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
User: fuxiaoliu
Home Page: https://fuxiaoliu.github.io/LRV/
evaluation-metrics,Useful python NLP tools (evaluation, GUI interface, tokenization)
User: golsun
evaluation-metrics,[NeurIPS'21 Outstanding Paper] Library for reliable evaluation on RL and ML benchmarks, even with only a handful of seeds.
Organization: google-research
Home Page: https://agarwl.github.io/rliable
evaluation-metrics,Official repository of RankEval: An Evaluation and Analysis Framework for Learning-to-Rank Solutions.
Organization: hpclab
Home Page: http://rankeval.isti.cnr.it/
evaluation-metrics,LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron.
Organization: huggingface
evaluation-metrics,A Python function package that computes the character error rate (CER) and word error rate (WER) of Korean-sentence STT (speech-to-text) recognizer output transcripts
User: hyeonsangjeon
evaluation-metrics,Evaluation script for named entity recognition (NER) systems based on entity-level F1 score.
User: jantrienes
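Entity-level evaluation, as in the entry above, scores whole entities rather than individual tags or tokens. A minimal sketch of the strict variant, where a prediction counts only on an exact span-and-type match (data layout here is an assumption; the referenced script works on annotated token sequences):

```python
def entity_f1(gold, pred):
    """Strict entity-level P/R/F1 over (start, end, type) tuples:
    a prediction is a true positive only on an exact match."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

gold = [(0, 2, "PER"), (5, 7, "ORG"), (9, 10, "LOC")]
pred = [(0, 2, "PER"), (5, 7, "LOC")]  # second span has the wrong type
p, r, f = entity_f1(gold, pred)
# p = 0.5, r = 1/3, f = 0.4
```

Lenient schemes (e.g. the SemEval'13 variants used by the davidsbatista and mantisai entries above) additionally credit partial span overlaps and type-only matches.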
evaluation-metrics,Evaluate your speech-to-text system with similarity measures such as word error rate (WER)
Organization: jitsi
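Word error rate, the measure named in the two entries above, is the word-level edit distance between hypothesis and reference, divided by the reference length. A self-contained sketch (illustrative only, not the jitsi tool's code):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic program: dp[j] = edit distance between ref[:i] and hyp[:j].
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            # prev = substitution, dp[j] = deletion, dp[j-1] = insertion
            dp[j] = prev if ref[i - 1] == hyp[j - 1] else 1 + min(prev, dp[j], dp[j - 1])
            prev = cur
    return dp[-1] / max(len(ref), 1)

print(wer("the cat sat", "the cat sat"))  # 0.0
```

CER (used by the Korean STT entry above) is the same computation on characters instead of words.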
evaluation-metrics,Python client for Kolena's machine learning testing platform
Organization: kolenaio
Home Page: https://docs.kolena.io
evaluation-metrics,NeurIPS 2023 - TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models Official Code
Organization: lait-cvlab
Home Page: https://lait-cvlab.github.io/TopPR/
evaluation-metrics,Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval'13
Organization: mantisai
evaluation-metrics,A data discovery and manipulation toolset for unstructured data
Organization: microsoft
evaluation-metrics,OCTIS: Comparing Topic Models is Simple! A python package to optimize and evaluate topic models (accepted at EACL2021 demo track)
Organization: mind-lab
evaluation-metrics,A news recommendation evaluation framework
User: mjugo
evaluation-metrics,Assessing Generative Models via Precision and Recall (official repository)
User: msmsajjadi
Home Page: http://msajjadi.com
evaluation-metrics,Reference-free automatic summarization evaluation with potential hallucination detection
User: muhtasham
evaluation-metrics,Counting-Stars
User: nick7nlp
Home Page: https://arxiv.org/pdf/2403.11802.pdf
evaluation-metrics,Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations.
Organization: om-ai-lab
evaluation-metrics,PyNLPl, pronounced as 'pineapple', is a Python library for Natural Language Processing. It contains various modules useful for common, and less common, NLP tasks. PyNLPl can be used for basic tasks such as the extraction of n-grams and frequency lists, and to build simple language models. There are also more complex data types and algorithms, as well as parsers for file formats common in NLP (e.g. FoLiA/Giza/Moses/ARPA/Timbl/CQL) and clients to interface with various NLP-specific servers. PyNLPl most notably features a very extensive library for working with FoLiA XML (Format for Linguistic Annotation).
User: proycon
Home Page: https://pypi.python.org/pypi/PyNLPl
evaluation-metrics,Open-Source Evaluation for GenAI Application Pipelines
Organization: relari-ai
Home Page: https://docs.relari.ai/
evaluation-metrics,Learning to Evaluate Image Captioning. CVPR 2018
User: richardaecn
evaluation-metrics,Resources for the "Evaluating the Factual Consistency of Abstractive Text Summarization" paper
Organization: salesforce
Home Page: https://arxiv.org/abs/1910.12840
evaluation-metrics,A natural language processing project performing sentiment analysis by classifying positive and negative tweets with machine learning models; covers classification, text mining, text analysis, data analysis, and data visualization
User: sharmaroshan
evaluation-metrics,Python wrapper for evaluating summarization quality by ROUGE package
User: tagucci
evaluation-metrics,EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation
User: tanyuqian
evaluation-metrics,Artificial intelligence (AI, ML, DL) performance metrics implemented in Python
User: thieu1995
Home Page: https://permetrics.readthedocs.io/en/latest/
evaluation-metrics,Benchmark for evaluating open-ended generation
Organization: thu-coai
evaluation-metrics,Code for "Semantic Object Accuracy for Generative Text-to-Image Synthesis" (TPAMI 2020)
User: tohinz
evaluation-metrics,Metrics to evaluate the quality of responses of your Retrieval Augmented Generation (RAG) applications.
Organization: tonicai
Home Page: https://docs.tonic.ai/validate/
evaluation-metrics,A Neural Framework for MT Evaluation
Organization: unbabel
Home Page: https://unbabel.github.io/COMET/html/index.html
evaluation-metrics,:chart_with_upwards_trend: Implementation of eight evaluation metrics to assess the similarity between two images: RMSE, PSNR, SSIM, ISSM, FSIM, SRE, SAM, and UIQ
Organization: up42
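Of the eight image-similarity metrics in the entry above, RMSE and PSNR are simple enough to sketch directly; the others (SSIM, FSIM, etc.) require windowed structural statistics. A minimal NumPy sketch, illustrative only and not the up42 implementation:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images of equal shape."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; max_val is the pixel range maximum."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(20 * np.log10(max_val) - 10 * np.log10(mse))
```

Higher PSNR means the images are closer; identical images give infinite PSNR and zero RMSE.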
evaluation-metrics,Source code for "Taming Visually Guided Sound Generation" (Oral at the BMVC 2021)
User: v-iashin
Home Page: https://v-iashin.github.io/SpecVQGAN
evaluation-metrics,(IROS 2020, ECCVW 2020) Official Python Implementation for "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics"
User: xinshuoweng
Home Page: http://www.xinshuoweng.com/
evaluation-metrics,A reference-free metric for measuring summary quality, learned from human ratings.
User: yg211
Home Page: https://arxiv.org/abs/1909.01214
evaluation-metrics,GOM: a new metric for re-identification. GOM explicitly balances the effect of performing retrieval and verification into a single unified metric
User: yuanxincherry
evaluation-metrics,A list of works on evaluation of visual generation models, including evaluation metrics, models, and systems
User: ziqihuangg
evaluation-metrics,A more complete python version (GPU) of the evaluation for salient object detection (with S-measure, Fbw measure, MAE, max/mean/adaptive F-measure, max/mean/adaptive E-measure, PRcurve and F-measure curve)
User: zyjwuyan