cleanlab / examples

Notebooks demonstrating example applications of the cleanlab library

Home Page: https://github.com/cleanlab/cleanlab

License: GNU Affero General Public License v3.0

Jupyter Notebook 99.31% Python 0.65% Shell 0.04%
Topics: cleanlab, hacktoberfest

Introduction


cleanlab helps you clean data and labels by automatically detecting issues in an ML dataset. To facilitate machine learning with messy, real-world data, this data-centric AI package uses your existing models to estimate dataset problems that can be fixed to train even better models.

Examples of various issues in a Cat/Dog dataset automatically detected by cleanlab via this code:

        import cleanlab

        lab = cleanlab.Datalab(data=dataset, label_name="column_name_for_labels")
        # Fit any ML model, get its feature_embeddings & pred_probs for your data
        lab.find_issues(features=feature_embeddings, pred_probs=pred_probs)
        lab.report()
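
Here, pred_probs are out-of-sample predicted class probabilities and feature_embeddings are numeric vector representations of your data; neither is produced by cleanlab itself. A minimal sketch of one way to obtain them with scikit-learn (assuming a feature matrix `X` and label array `y`, which are not part of the snippet above):

    # Sketch only: any classifier and any embedding method can be used instead.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    model = LogisticRegression(max_iter=1000)
    # Out-of-sample predicted probabilities for every row via 5-fold cross-validation
    pred_probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")
    # Here the raw features double as embeddings; a neural network's hidden
    # representations would typically work better.
    feature_embeddings = X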

Try easy mode with Cleanlab Studio

While this open-source package finds data issues, its utility depends on you having: a good existing ML model + an interface to efficiently fix these issues in your dataset. Providing all these pieces, Cleanlab Studio is a Data Curation platform to find and fix problems in any {image, text, tabular} dataset. Cleanlab Studio automatically runs optimized algorithms from this package on top of AutoML & Foundation models fit to your data, and presents detected issues (+ AI-suggested fixes) in an intelligent data correction interface.

Try it for free! Adopting Cleanlab Studio enables users of this package to:

  • Work 100x faster (1 min to analyze your raw data with zero code or ML work; optionally use Python API)
  • Produce better-quality data (10x more types of issues auto detected & corrected via built-in AI)
  • Accomplish more (auto-label data, deploy ML instantly, audit LLM inputs/outputs, moderate content, ...)
  • Monitor incoming data and detect issues in real-time (integrate your data pipeline on an Enterprise plan)

The modern AI pipeline automated with Cleanlab Studio

Run cleanlab open-source

This cleanlab package runs on Python 3.8+ and supports Linux, macOS, and Windows.

Practicing data-centric AI can look like this:

  1. Train initial ML model on original dataset.
  2. Utilize this model to diagnose data issues (via cleanlab methods) and improve the dataset.
  3. Train the same model on the improved dataset.
  4. Try various modeling techniques to further improve performance.

Most folks jump from Step 1 → 4, but you may achieve big gains without any change to your modeling code by using cleanlab! Continuously boost performance by iterating Steps 2 → 4 (and try to evaluate with cleaned data).
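
As an illustrative (not prescriptive) sketch of Steps 1-3, assuming out-of-sample pred_probs have already been computed and that train_data/labels are pandas objects with a default integer index:

    from cleanlab.filter import find_label_issues

    # Step 2: diagnose likely label errors using out-of-sample predicted probabilities
    issue_indices = find_label_issues(
        labels, pred_probs, return_indices_ranked_by="self_confidence"
    )

    # Improve the dataset by dropping (or manually fixing) the flagged examples
    cleaned_data = train_data.drop(index=issue_indices)
    cleaned_labels = labels.drop(index=issue_indices)

    # Step 3: retrain the same model on the improved dataset
    model.fit(cleaned_data, cleaned_labels)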

Use cleanlab with any model and in most ML tasks

All features of cleanlab work with any dataset and any model. Yes, any model: PyTorch, TensorFlow, Keras, JAX, HuggingFace, OpenAI, XGBoost, scikit-learn, etc.

cleanlab is useful across a wide variety of Machine Learning tasks. Specific tasks this data-centric AI package offers dedicated functionality for include:

  1. Binary and multi-class classification
  2. Multi-label classification (e.g. image/document tagging)
  3. Token classification (e.g. entity recognition in text)
  4. Regression (predicting numerical column in a dataset)
  5. Image segmentation (images with per-pixel annotations)
  6. Object detection (images with bounding box annotations)
  7. Classification with data labeled by multiple annotators
  8. Active learning with multiple annotators (suggest which data to label or re-label to improve model most)
  9. Outlier detection (identify atypical data that appears out of distribution)

For other ML tasks, cleanlab can still help you improve your dataset if appropriately applied. See our Example Notebooks and Blog.
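
For instance, a hedged sketch of task 9 (outlier detection) using cleanlab's OutOfDistribution class on precomputed feature embeddings (the `features` array is assumed to exist and is not defined here):

    from cleanlab.outlier import OutOfDistribution

    ood = OutOfDistribution()
    # Lower scores indicate examples that look more out-of-distribution
    ood_scores = ood.fit_score(features=features)
    # Indices of the 10 most atypical examples
    most_atypical = ood_scores.argsort()[:10]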

So fresh, so cleanlab

Beyond automatically catching all sorts of issues lurking in your data, this data-centric AI package helps you deal with noisy labels and train more robust ML models. Here's an example:

# cleanlab works with **any classifier**. Yup, you can use PyTorch/TensorFlow/OpenAI/XGBoost/etc.
cl = cleanlab.classification.CleanLearning(sklearn.YourFavoriteClassifier())

# cleanlab finds data and label issues in **any dataset**... in ONE line of code!
label_issues = cl.find_label_issues(data, labels)

# cleanlab trains a robust version of your model that works more reliably with noisy data.
cl.fit(data, labels)

# cleanlab estimates the predictions you would have gotten if you had trained with *no* label issues.
cl.predict(test_data)

# A universal data-centric AI tool, cleanlab quantifies class-level issues and overall data quality, for any dataset.
cleanlab.dataset.health_summary(labels, confident_joint=cl.confident_joint)
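
The snippet above is schematic (sklearn.YourFavoriteClassifier is a stand-in). A concrete sketch of the same workflow with a real scikit-learn classifier, assuming numeric arrays data, labels, and test_data:

    from sklearn.linear_model import LogisticRegression
    from cleanlab.classification import CleanLearning

    cl = CleanLearning(LogisticRegression(max_iter=1000))

    # Flags likely label errors (uses cross-validation internally)
    label_issues = cl.find_label_issues(data, labels)

    # Trains a noise-robust version of the classifier
    cl.fit(data, labels)
    preds = cl.predict(test_data)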

cleanlab cleans your data's labels via state-of-the-art confident learning algorithms, published in this paper and blog. See some of the datasets cleaned with cleanlab at labelerrors.com.

cleanlab is:

  1. backed by theory -- with provable guarantees of exact label noise estimation, even with imperfect models.
  2. fast -- code is parallelized and scalable.
  3. easy to use -- one line of code to find mislabeled data, bad annotators, outliers, or train noise-robust models.
  4. general -- works with any dataset (text, image, tabular, audio,...) + any model (PyTorch, OpenAI, XGBoost,...)

Examples of incorrect given labels in various image datasets found and corrected using cleanlab. While these examples are from image datasets, cleanlab also works for text, audio, and tabular data.

Citation and related publications

cleanlab is based on peer-reviewed research. Here are relevant papers to cite if you use this package:

Confident Learning (JAIR '21) (click to show bibtex)
@article{northcutt2021confidentlearning,
    title={Confident Learning: Estimating Uncertainty in Dataset Labels},
    author={Curtis G. Northcutt and Lu Jiang and Isaac L. Chuang},
    journal={Journal of Artificial Intelligence Research (JAIR)},
    volume={70},
    pages={1373--1411},
    year={2021}
}
Rank Pruning (UAI '17) (click to show bibtex)
@inproceedings{northcutt2017rankpruning,
    author={Northcutt, Curtis G. and Wu, Tailin and Chuang, Isaac L.},
    title={Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels},
    booktitle = {Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence},
    series = {UAI'17},
    year = {2017},
    location = {Sydney, Australia},
    numpages = {10},
    url = {http://auai.org/uai2017/proceedings/papers/35.pdf},
    publisher = {AUAI Press},
}
Label Quality Scoring (ICML '22) (click to show bibtex)
@inproceedings{kuan2022labelquality,
    title={Model-agnostic label quality scoring to detect real-world label errors},
    author={Kuan, Johnson and Mueller, Jonas},
    booktitle={ICML DataPerf Workshop},
    year={2022}
}
Out-of-Distribution Detection (ICML '22) (click to show bibtex)
@inproceedings{kuan2022ood,
    title={Back to the Basics: Revisiting Out-of-Distribution Detection Baselines},
    author={Kuan, Johnson and Mueller, Jonas},
    booktitle={ICML Workshop on Principles of Distribution Shift},
    year={2022}
}
Token Classification Label Errors (NeurIPS '22) (click to show bibtex)
@inproceedings{wang2022tokenerrors,
    title={Detecting label errors in token classification data},
    author={Wang, Wei-Chen and Mueller, Jonas},
    booktitle={NeurIPS Workshop on Interactive Learning for Natural Language Processing (InterNLP)},
    year={2022}
}
CROWDLAB for Data with Multiple Annotators (NeurIPS '22) (click to show bibtex)
@inproceedings{goh2022crowdlab,
    title={CROWDLAB: Supervised learning to infer consensus labels and quality scores for data with multiple annotators},
    author={Goh, Hui Wen and Tkachenko, Ulyana and Mueller, Jonas},
    booktitle={NeurIPS Human in the Loop Learning Workshop},
    year={2022}
}
ActiveLab: Active learning with data re-labeling (ICLR '23) (click to show bibtex)
@inproceedings{goh2023activelab,
    title={ActiveLab: Active Learning with Re-Labeling by Multiple Annotators},
    author={Goh, Hui Wen and Mueller, Jonas},
    booktitle={ICLR Workshop on Trustworthy ML},
    year={2023}
}
Incorrect Annotations in Multi-Label Classification (ICLR '23) (click to show bibtex)
@inproceedings{thyagarajan2023multilabel,
    title={Identifying Incorrect Annotations in Multi-Label Classification Data},
    author={Thyagarajan, Aditya and Snorrason, Elías and Northcutt, Curtis and Mueller, Jonas},
    booktitle={ICLR Workshop on Trustworthy ML},
    year={2023}
}
Detecting Dataset Drift and Non-IID Sampling (ICML '23) (click to show bibtex)
@inproceedings{cummings2023drift,
    title={Detecting Dataset Drift and Non-IID Sampling via k-Nearest Neighbors},
    author={Cummings, Jesse and Snorrason, Elías and Mueller, Jonas},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}
Detecting Errors in Numerical Data (ICML '23) (click to show bibtex)
@inproceedings{zhou2023errors,
    title={Detecting Errors in Numerical Data via any Regression Model},
    author={Zhou, Hang and Mueller, Jonas and Kumar, Mayank and Wang, Jane-Ling and Lei, Jing},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}
ObjectLab: Mislabeled Images in Object Detection Data (ICML '23) (click to show bibtex)
@inproceedings{tkachenko2023objectlab,
    title={ObjectLab: Automated Diagnosis of Mislabeled Images in Object Detection Data},
    author={Tkachenko, Ulyana and Thyagarajan, Aditya and Mueller, Jonas},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}
Label Errors in Segmentation Data (ICML '23) (click to show bibtex)
@inproceedings{lad2023segmentation,
    title={Estimating label quality and errors in semantic segmentation data via any model},
    author={Lad, Vedang and Mueller, Jonas},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}

To understand/cite other cleanlab functionality not described above, check out our additional publications.

Other resources

Join our community

License

Copyright (c) 2017 Cleanlab Inc.

cleanlab is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

cleanlab is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

See GNU Affero General Public LICENSE for details. You can email us to discuss licensing: [email protected]

Commercial licensing

Commercial licensing is available for teams and enterprises that want to use cleanlab in production workflows, but are unable to open-source their code as is required by the current license. Please email us: [email protected]

People

Contributors

aditya1503, anishathalye, cgnorthcutt, cmauck10, elisno, ericwang1997, huiwengoh, jecummin, johnsonkuan, jwmueller, nelsonauner, sanjanag, ulya-tkch, vdlad


Issues

Examples: dependencies in each notebook

  • Set up a good, standardized system for specifying dependencies (and their versions) for each notebook in Examples/.
  • For instance, each notebook could live in a separate folder with its own requirements.txt-like file listing that notebook's dependencies.
  • We still want a master requirements.txt from which users can install every notebook's dependencies.
  • The version of cleanlab used in each notebook should not be part of the requirements file, since these notebooks should work for many cleanlab versions.
  • Add a DEVELOPMENT.md in Examples that tells contributors to pip freeze to determine which package versions they used, and explains how to officially specify these dependencies.
  • More broadly, DEVELOPMENT.md should contain a checklist of things a contributor should consider when adding a new notebook, including:
  • dependency specifications in this example folder's requirements.txt
  • dependency specifications in the main examples/requirements.txt (ensure new dependencies do not conflict with existing ones)
  • ensure the Jupyter notebook contains cell outputs and they look good when viewing the notebook on GitHub (clear outputs from any cells whose output is very large)
  • ensure Jupyter notebook cells are executed in order
  • if a notebook runs very slowly or is hard to auto-execute, add it to the ignored notebooks in run_all_notebooks.py
  • add the notebook to the table in the README and ensure its folder name begins with a number

cross-validation question

When using k-fold cross-validation with cleanlab, why are the results from each fold spliced together and then analyzed as one dataset, rather than analyzing each fold separately?
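
For context on what "spliced" refers to here: each fold's model produces out-of-sample predictions only for the rows it held out, and those per-fold predictions are stacked back into a single array covering the entire dataset so cleanlab can analyze all examples at once. A rough sketch of that stacking (assuming arrays `X` and `y`, and that every class appears in each training fold):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    pred_probs = np.zeros((len(y), len(np.unique(y))))
    for train_idx, holdout_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        fold_model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        # each fold fills in only the rows it held out
        pred_probs[holdout_idx] = fold_model.predict_proba(X[holdout_idx])
    # pred_probs now contains out-of-sample probabilities for every row in the dataset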

Should I use a simpler model or the current model to predict probabilities?

Thanks for publishing such a great project for finding data issues. After reviewing some of the examples, I would like to hear your guidance for the following situation:

How can I find human annotators' label errors during active learning while fine-tuning a sentence transformer model for a text classification task?
Should I use a simpler model, e.g. a logistic regression model, to generate the probabilities for confident learning, or should I use the current fine-tuned sentence transformer to do the job? Will this make a big difference?

Examples: CI should run modified notebooks

When a commit is made that modifies an examples/ notebook, the CI should execute that notebook from scratch (Run All) to verify it works.

A further improvement would be to add a CI cron job that periodically runs all the examples/ notebooks.
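
One possible (hypothetical) shape for that check is a small Python helper which CI invokes with the paths of the modified notebooks, executing each one from scratch with nbconvert:

    # Hypothetical CI helper: execute each modified notebook and fail if any cell errors.
    # The list of changed notebook paths would be supplied by the CI job (e.g. via git diff).
    import subprocess
    import sys

    def run_notebook(path: str) -> None:
        subprocess.run(
            ["jupyter", "nbconvert", "--to", "notebook", "--execute", "--inplace", path],
            check=True,
        )

    if __name__ == "__main__":
        for notebook_path in sys.argv[1:]:
            run_notebook(notebook_path)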

Examples: can remove cross-validation from outlier detection

This example: https://github.com/cleanlab/examples/tree/master/outlier_detection_cifar10
could become much more straightforward without cross-validation now that we've seen it doesn't help too much in the train/test OOD settings.

But to truly assess the utility of cross-validation, we should first include an additional benchmark setting where we evaluate OOD detection on a training dataset with some outliers included (during training not just during testing). Cross-validation may still help in this setting.

Why can I not run it in PyCharm? SOS

cj, pred_probs = cleanlab.count.estimate_confident_joint_and_cv_pred_proba(X_train, y_train, clf=cnn)
This code goes into an infinite loop: it restarts whenever it reaches 50.

Add pkl of the prediction Model

Is it possible to add the saved trained prediction model used in the notebook? This would save the time spent retraining the model while testing the ObjectLab feature on the COCO dataset.
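
One way the notebook could support this (a sketch, not existing code in this repo) is to cache the trained model weights to disk and reload them when present; names like CHECKPOINT and train() are placeholders:

    import os
    import torch

    CHECKPOINT = "trained_model.pt"  # illustrative file name

    if os.path.exists(CHECKPOINT):
        model.load_state_dict(torch.load(CHECKPOINT, map_location="cpu"))
    else:
        train(model)  # placeholder for the notebook's training loop
        torch.save(model.state_dict(), CHECKPOINT)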

Clarifications to the clarifications in the examples/iris_simple_example.ipynb

I'm trying to understand and use the library; thanks for sharing!

I got confused by the text in examples/iris_simple_example.ipynb.

  1. The text suggests that the example will show the benefits of using cleanlab on the Iris dataset:

... cleanlab on the Iris dataset.

WITHOUT confident learning, Iris dataset test accuracy: 0.6

However, the text does not mention that random errors are actually introduced into the labels and that the example tries to learn despite those mistakes. LogisticRegression on the original Iris dataset has an accuracy of 0.97+.

Please let me know if I misunderstand something.
(Don't get me wrong, it's still a valid, good example and it's still impressive; it's just that it's no longer the original Iris dataset, which is quite important.)

Here we show the performance with LogisiticRegression classifier
versus LogisticRegression without cleanlab on the Iris dataset.

I think this statement is somewhat incomplete; how about:

Here we show the performance of LearningWithNoisyLabels using a LogisticRegression classifier
versus LogisticRegression without cleanlab on a modified Iris dataset, which includes random label errors.
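
For reference, a hedged sketch of what the modified-Iris comparison might look like with the current cleanlab API (CleanLearning is the newer name for LearningWithNoisyLabels; this is not the notebook's exact code):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from cleanlab.classification import CleanLearning

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Randomly reassign ~30% of the training labels to simulate annotation errors
    rng = np.random.default_rng(0)
    noisy = y_train.copy()
    flip = rng.random(len(noisy)) < 0.3
    noisy[flip] = rng.integers(0, 3, size=flip.sum())

    baseline = LogisticRegression(max_iter=1000).fit(X_train, noisy)
    print("Without cleanlab:", accuracy_score(y_test, baseline.predict(X_test)))

    cl = CleanLearning(LogisticRegression(max_iter=1000)).fit(X_train, noisy)
    print("With cleanlab:", accuracy_score(y_test, cl.predict(X_test)))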

Examples: tagged releases that correspond to cleanlab releases

It will eventually become hard to know which versions of cleanlab an Examples/ notebook works with. An easy way to specify this is via tagged releases of the Examples/ notebook repo, so users of a particular older cleanlab version can simply run the notebook from the corresponding tagged release of the Examples repo.

  • Tag releases for past versions of Examples by matching commit dates against tag release dates for the main cleanlab repo.
  • Edit bottom of Examples/README.md to list links to the tagged versions and clarify for users of older versions of cleanlab that they should run the notebooks from these versions instead.
  • Add a statement to the Examples/ README that the notebooks on the master branch of Examples are assumed to correspond to the master-branch version of cleanlab.
  • Clarify how to link particular Examples/ notebooks from inside tutorial notebooks (presumably just link the notebook file available on the master branch on GitHub). Try to avoid linking tutorial notebooks or cleanlab code from inside Examples notebooks (we don't want to link outdated versions of cleanlab).
  • Update links to moved examples notebooks throughout: cleanlab repo readme, docs.cleanlab.ai, cleanlab.ai/blog, and anywhere else there may be links.

add pervasive label errors tutorial as an examples notebook

Make an examples notebook out of this:
https://github.com/cleanlab/label-errors/blob/main/examples/Tutorial%20-%20How%20To%20Find%20Label%20Errors%20With%20CleanLab.ipynb

The code needs to be updated to use the latest version of cleanlab. Dataset files can stay where they are, but code to load them needs to be updated too.

Consider replacing one of the existing examples/ notebooks with this one if they are quite similar.

  • Also update the original notebook with a link to the examples/ notebook, stating:
    Here's how to find label issues in these datasets using the latest version of cleanlab:

Examples: rename files

  • rename files thematically and organize their order
  • update links pointing to the examples throughout: the examples README and cleanlab's README, docstrings, docs, tutorials, and blog posts
