We are building FS-Tox: a toxicity benchmark for small-molecule toxicology assays. Toxicity prediction differs from traditional machine learning tasks in that there are usually only a small number of training examples per toxicity assay. Here, we provide a few-shot learning dataset built from several publicly available toxicity datasets (e.g. the EPA's ToxRefDB), together with an associated benchmarking pipeline. Each task in the benchmark corresponds to a single assay and pairs the molecular representation of a small molecule with a binary label indicating whether the compound was toxic in that assay.
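To make the task format concrete, here is a minimal sketch of the per-assay layout described above; the column names and molecules are illustrative placeholders, not a fixed schema:

```python
import pandas as pd

# One few-shot task = one assay: each row pairs a molecular representation
# (SMILES) with a binary toxicity outcome. Column names are placeholders.
task = pd.DataFrame(
    {
        "smiles": ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"],
        "toxic": [0, 1, 0],  # 1 = toxic in this assay, 0 = non-toxic
    }
)

# The few-shot setting: only a handful of labelled molecules are available
# for training (the support set); the rest are held out for evaluation.
support = task.sample(n=2, random_state=0)
query = task.drop(support.index)
```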
Test the performance of the following state-of-the-art few-shot prediction methods on existing toxicity benchmarks (a baseline sketch follows this list):
- [ ] Gradient-boosted decision trees (XGBoost)
- [ ] Text-embedding-ada-002 on SMILES (OpenAI)
- [ ] Galactica 125M (Hugging Face)
- [ ] Galactica 1.3B (Hugging Face)
- [ ] ChemGPT 19M (Hugging Face)
- [ ] ChemGPT 1.2B (Hugging Face)
- [ ] Uni-Mol (Docker)
- [ ] Uni-Mol+ (Docker)
- [ ] MoLeR (Microsoft)
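As an example of how the first two baselines might be combined, the sketch below embeds SMILES strings with text-embedding-ada-002 and fits an XGBoost classifier on the frozen embeddings. The exact training loop is still open; the molecules are toy placeholders, and the code assumes `openai>=1.0` with an `OPENAI_API_KEY` in the environment.

```python
import numpy as np
import xgboost as xgb
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

def embed_smiles(smiles_list):
    """Embed raw SMILES strings with OpenAI's text-embedding-ada-002."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=smiles_list)
    return np.array([item.embedding for item in resp.data])

# Toy support/query split for a single assay (placeholders, not benchmark data).
support_smiles = ["CCO", "c1ccccc1", "CCN(CC)CC", "CC(=O)O"]
support_labels = [0, 1, 1, 0]
query_smiles = ["CCCl", "c1ccncc1"]

X_support = embed_smiles(support_smiles)
X_query = embed_smiles(query_smiles)

# Gradient-boosted trees on the frozen embeddings.
clf = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
clf.fit(X_support, support_labels)
print(clf.predict_proba(X_query)[:, 1])  # per-molecule toxicity probabilities
```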
Incorporate the following datasets containing results from in vivo toxicity assays (a loading sketch follows this list):
- [ ] ToxRefDB (subacute and chronic toxicity)
- [ ] TDCommons, Zhu 2009 (acute toxicity)
- [ ] MEIC (small, curated clinical toxicity)
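The Zhu 2009 acute toxicity set is redistributed by Therapeutics Data Commons, so it can be pulled with the `PyTDC` package; the binarisation step below is a placeholder, since FS-Tox's actual toxicity cutoffs are still to be decided:

```python
from tdc.single_pred import Tox  # pip install PyTDC

# Zhu 2009 rat acute toxicity (LD50), as packaged by Therapeutics Data Commons.
data = Tox(name="LD50_Zhu")
df = data.get_data()  # columns: Drug_ID, Drug (SMILES), Y (log-scaled LD50)

# FS-Tox tasks are binary, so the continuous endpoint must be binarised.
# The median split below is a placeholder, not a benchmark-defined threshold.
df["toxic"] = (df["Y"] > df["Y"].median()).astype(int)
print(df[["Drug", "toxic"]].head())
```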
Test the following language models on the FS-Tox benchmark (an embedding-probe sketch follows this list):
- [ ] Text-embedding-ada-002 on SMILES (OpenAI)
- [ ] Galactica 125M (Hugging Face)
- [ ] Galactica 1.3B (Hugging Face)
- [ ] ChemGPT 19M (Hugging Face)
- [ ] ChemGPT 1.2B (Hugging Face)
- [ ] Uni-Mol (Docker)
- [ ] Uni-Mol+ (Docker)
- [ ] MoLeR (Microsoft)
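For the open models on Hugging Face, one plausible few-shot recipe is to extract frozen embeddings and fit a linear probe on the support set, sketched below for Galactica 125M. Whether FS-Tox will probe embeddings or fine-tune is an open design choice, and note that ChemGPT expects SELFIES rather than SMILES input.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

# Shown for Galactica 125M; the other Hugging Face checkpoints slot in the same way.
MODEL_NAME = "facebook/galactica-125m"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def embed(molecule: str) -> np.ndarray:
    # Mean-pool the final hidden states to get one vector per molecule.
    inputs = tokenizer(molecule, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()

# Toy support set (placeholders, not benchmark data).
support_smiles = ["CCO", "c1ccccc1", "CCN(CC)CC", "CC(=O)O"]
support_labels = [0, 1, 1, 0]
X = np.stack([embed(s) for s in support_smiles])

# A linear probe keeps the comparison about the representations themselves.
probe = LogisticRegression(max_iter=1000).fit(X, support_labels)
print(probe.predict_proba(embed("CCCl").reshape(1, -1))[:, 1])
```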
Incorporate in vitro assays into the FS-Tox benchmark (a task-slicing sketch follows this list):
- [ ] ToxCast
- [ ] Extended Tox21
- [ ] NCI60 data
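Large in vitro screens such as ToxCast report many assays per compound, so they can be sliced into one few-shot task per assay. The long-format columns below are illustrative, not the actual ToxCast export schema:

```python
import pandas as pd

# Hypothetical long-format screen export: one row per (compound, assay) pair
# with a binary hit call. Real ToxCast/Tox21 exports use different columns.
screen = pd.DataFrame(
    {
        "smiles": ["CCO", "CCO", "c1ccccc1", "c1ccccc1"],
        "assay_id": ["A1", "A2", "A1", "A2"],
        "hit": [0, 1, 1, 0],
    }
)

# Each assay becomes its own few-shot task in the per-assay format above.
tasks = {
    assay: group[["smiles", "hit"]].reset_index(drop=True)
    for assay, group in screen.groupby("assay_id")
}
print(tasks["A1"])
```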
```
├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         and a short `-` delimited description, e.g.
│                         `1.0-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.readthedocs.io
```