
drugood's Introduction

🔥DrugOOD🔥: OOD Dataset Curator and Benchmark for AI Aided Drug Discovery

This is the official implementation of the DrugOOD project. The project page is at https://drugood.github.io/

Environment Installation

You can install the conda environment using the drugood.yaml file provided:

git clone https://github.com/tencent-ailab/DrugOOD.git
cd DrugOOD
conda env create --name drugood --file=drugood.yaml
conda activate drugood

Then you can open the demo at demo/demo.ipynb, which gives a quick walkthrough of how to use DrugOOD.

Demo

For a quick introduction to using DrugOOD for dataset curation and OOD benchmarking, refer to demo/demo.ipynb.

Dataset Curator

First, you need to generate the required DrugOOD dataset with our code. The dataset curator currently focuses on generating datasets from CHEMBL. It supports the following two tasks:

  • Ligand Based Affinity Prediction (LBAP).
  • Structure Based Affinity Prediction (SBAP).

For OOD domain annotations, it supports the following 5 choices.

  • Assay.
  • Scaffold.
  • Size.
  • Protein. (only for SBAP task)
  • Protein Family. (only for SBAP task)

For noise annotations, it supports the following three noise levels; datasets with different noise levels are produced by filters of different strictness:

  • Core.
  • Refined.
  • General.

In addition, because conversion between different measurement types (e.g., IC50, EC50, Ki, Potency) is not straightforward, one needs to specify the measurement type when generating a dataset.

How to Run and Reproduce the 96 Datasets?

First, specify the path of the ChEMBL database and the directory to save the data in the configuration file: configs/_base_/curators/lbap_defaults.py for the LBAP task or configs/_base_/curators/sbap_defaults.py for the SBAP task.
source_root="YOUR_PATH/chembl_29_sqlite/chembl_29.db" is the path to the ChEMBL 29 SQLite file, and target_root="data/" specifies the folder in which to save the generated data.
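For illustration, the relevant fields in the curator config might look like the following (a minimal sketch based on the cfg.path attributes used elsewhere on this page; the actual defaults file may contain additional settings):

# Illustrative snippet of configs/_base_/curators/lbap_defaults.py (sketch only):
path = dict(
    source_root="YOUR_PATH/chembl_29_sqlite/chembl_29.db",  # ChEMBL 29 SQLite database
    target_root="data/",  # directory where the curated datasets will be written
)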

Note that you can download the original ChEMBL 29 database in SQLite format from http://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/releases/chembl_29/chembl_29_sqlite.tar.gz.

The built-in configuration files are located in configs/curators/. We provide 96 config files there to reproduce the 96 datasets in our paper; you can also create custom datasets by modifying these config files.

Run tools/curate.py to generate a dataset. Here are some examples:

Generate a dataset for the LBAP task, with assay as the domain, core as the noise level, and IC50 as the measurement type:

python tools/curate.py --cfg configs/curators/lbap_core_ic50_assay.py

Generate a dataset for the SBAP task, with protein as the domain, refined as the noise level, and EC50 as the measurement type:

python tools/curate.py --cfg configs/curators/sbap_refined_ec50_protein.py
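If you want to regenerate all 96 datasets at once, one option is simply to loop over the built-in config files (a sketch, assuming the 96 configs live under configs/curators/ and the ChEMBL paths have been set as described above):

import glob
import subprocess

# Invoke the curator once per built-in config to rebuild every dataset.
for cfg_file in sorted(glob.glob("configs/curators/*.py")):
    subprocess.run(["python", "tools/curate.py", "--cfg", cfg_file], check=True)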

Benchmarking SOTA OOD Algorithms

Currently we support 6 different baseline algorithms:

  • ERM
  • IRM
  • GroupDro
  • Coral
  • MixUp
  • DANN

Meanwhile, we support various GNN backbones:

  • GIN
  • GCN
  • Weave
  • SchNet
  • GAT
  • MGCN
  • NF
  • ATi-FPGNN
  • GTransformer

We also support different backbones for protein sequence modeling:

  • Bert
  • ProteinBert

How to Run?

First, run the following command to install the package:

python setup.py develop
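A quick sanity check that the editable install worked (a minimal sketch; it only verifies that the package and its PyTorch dependency import cleanly):

# Confirm that drugood and torch are importable in the current environment.
import torch
import drugood

print("drugood installed; torch", torch.__version__)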

Run the LBAP task with the ERM algorithm:

python tools/train.py configs/algorithms/erm/lbap_core_ec50_assay_erm.py

If you would like to run ERM on other datasets, change the corresponding options inside the above config file. For example, ann_file = 'data/lbap_core_ec50_assay.json' specifies the input data.
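For instance, pointing the same ERM config at a different curated dataset could look like this (illustrative only; the alternative file name assumes that dataset has already been generated):

# Inside configs/algorithms/erm/lbap_core_ec50_assay_erm.py (illustrative):
ann_file = 'data/lbap_core_ec50_assay.json'    # default curated dataset
# To benchmark on another dataset, change it to that file, e.g.:
# ann_file = 'data/lbap_core_ic50_assay.json'  # hypothetical alternative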

Similarly, run the SBAP task with the ERM algorithm:

python tools/train.py configs/algorithms/erm/sbap_core_ec50_assay_erm.py

Reference

😄 If you find this repo useful, please consider citing our paper:

@ARTICLE{2022arXiv220109637J,
    author = {{Ji}, Yuanfeng and {Zhang}, Lu and {Wu}, Jiaxiang and {Wu}, Bingzhe and {Huang}, Long-Kai and {Xu}, Tingyang and {Rong}, Yu and {Li}, Lanqing and {Ren}, Jie and {Xue}, Ding and {Lai}, Houtim and {Xu}, Shaoyong and {Feng}, Jing and {Liu}, Wei and {Luo}, Ping and {Zhou}, Shuigeng and {Huang}, Junzhou and {Zhao}, Peilin and {Bian}, Yatao},
    title = "{DrugOOD: Out-of-Distribution (OOD) Dataset Curator and Benchmark for AI-aided Drug Discovery -- A Focus on Affinity Prediction Problems with Noise Annotations}",
    journal = {arXiv e-prints},
    keywords = {Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Quantitative Biology - Quantitative Methods},
    year = 2022,
    month = jan,
    eid = {arXiv:2201.09637},
    pages = {arXiv:2201.09637},
    archivePrefix = {arXiv},
    eprint = {2201.09637},
    primaryClass = {cs.LG}
}

Disclaimer

This is not an officially supported Tencent product.

drugood's People

Contributors

jiyuanfeng, tencent-ailab, yataobian


drugood's Issues

Question for LBAP task

Dear authors,
First, thank you for the excellent benchmark. I have one question about the protein target in the LBAP task. In your paper, on page 12, you said "In LBAP, we follow the common practice and do not involve any protein target information, which is usually used in the activity prediction for one specific protein target." To my understanding, the protein target should then be kept the same in both the training set and the testing set. Is that correct? However, I cannot find the code that ensures this requirement. If I missed it, could you point out where it is? Thanks in advance.

segmentation fault

Hi, I am very interested in your work and want to generate new datasets of my own, but a segmentation fault always occurs when I run curate.py. For example, in cell [15] of demo.ipynb, two results of running the same code are listed below:

from mmcv import Config
from drugood.apis.curate import curate_data

cfg = Config.fromfile('../configs/curators/lbap_core_ec50_assay.py')
cfg.path.source_root = '/data/ly03/DrugOOD/CHEMBL_SQLLITE/chembl_29_sqlite/chembl_29.db'
cfg.path.target_root = '/data/ly03/DrugOOD/dataset/'
cfg.noise_filter.assay.molecules_number = [50, 100]
cfg.path.task.subset = "lbap_core_ec50_assay_custom"
print(f'Built-in Config:\n{cfg.pretty_text}')
curate_data(cfg)

(the two output screenshots are not reproduced here)

conda clean -a has already been run, with no help for this problem.
I think the segmentation fault may occur when variable addresses are not defined with constants, for instance where the 'sample' dicts are not set, but I notice the code in demo.ipynb doesn't touch these variables either. So could this error be caused by CPU/GPU limitations? I am using Ubuntu 16.04 with 94 GB of memory and a 12 GB TITAN V GPU. Is this hardware suitable for generating the datasets?

Looking forward to your advice! Thank you for your gorgeous work.

P.S.

  1. with ulimit -a the info is:
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 384842
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 384842
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
  2. My env info is listed below (CUDA 10.2, cuDNN 7.6.5):
name: drugood
channels:
  - brown-data-science
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
  - bioconda
  - conda-forge
  - defaults
  - r
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=4.5=1_gnu
  - blas=1.0=mkl
  - bzip2=1.0.8=h7b6447c_0
  - ca-certificates=2021.10.8=ha878542_0
  - certifi=2021.10.8=py38h578d9bd_2
  - cudatoolkit=10.2.89=hfd86e86_1
  - ffmpeg=4.3=hf484d3e_0
  - freetype=2.10.4=h5ab3b9f_0
  - gcc=5.4.0=0
  - gmp=6.2.1=h2531618_2
  - gnutls=3.6.15=he1e5248_0
  - intel-openmp=2021.3.0=h06a4308_3350
  - jpeg=9b=h024ee3a_2
  - lame=3.100=h7b6447c_0
  - lcms2=2.12=h3be6417_0
  - ld_impl_linux-64=2.35.1=h7274673_9
  - libffi=3.3=he6710b0_2
  - libgcc-ng=9.3.0=h5101ec6_17
  - libgomp=9.3.0=h5101ec6_17
  - libiconv=1.15=h63c8f33_5
  - libidn2=2.3.1=h27cfd23_0
  - libpng=1.6.37=hbc83047_0
  - libstdcxx-ng=9.3.0=hd4cf53a_17
  - libtasn1=4.16.0=h27cfd23_0
  - libtiff=4.2.0=h85742a9_0
  - libunistring=0.9.10=h27cfd23_0
  - libuv=1.40.0=h7b6447c_0
  - libwebp-base=1.2.0=h27cfd23_0
  - lz4-c=1.9.3=h2531618_0
  - mkl=2021.3.0=h06a4308_520
  - mkl-service=2.4.0=py38h7f8727e_0
  - mkl_fft=1.3.0=py38h42c9631_2
  - mkl_random=1.2.2=py38h51133e4_0
  - ncurses=6.2=he6710b0_1
  - nettle=3.7.3=hbbd107a_1
  - ninja=1.10.2=hff7bd54_1
  - numpy=1.20.3=py38hf144106_0
  - numpy-base=1.20.3=py38h74d4b33_0
  - olefile=0.46=py_0
  - openh264=2.1.0=hd408876_0
  - openjpeg=2.3.0=h05c96fa_1
  - openssl=1.1.1k=h7f98852_0
  - pillow=8.3.1=py38h2c7a002_0
  - pip=21.1.3=py38h06a4308_0
  - python=3.8.5=h7579374_1
  - python_abi=3.8=2_cp38
  - pytorch=1.7.1=py3.8_cuda10.2.89_cudnn7.6.5_0
  - readline=8.1=h27cfd23_0
  - setuptools=52.0.0=py38h06a4308_0
  - six=1.16.0=pyhd3eb1b0_0
  - sqlite=3.36.0=hc218d9a_0
  - tk=8.6.10=hbc83047_0
  - torchaudio=0.7.2=py38
  - torchvision=0.8.2=py38_cu102
  - tree=1.8.0=h7f98852_2
  - typing_extensions=3.10.0.0=pyh06a4308_0
  - wheel=0.36.2=pyhd3eb1b0_0
  - xz=5.2.5=h7b6447c_0
  - zlib=1.2.11=h7b6447c_3
  - zstd=1.4.9=haebb681_0
  - pip:
    - absl-py==1.0.0
    - addict==2.4.0
    - attrs==21.4.0
    - cachetools==5.0.0
    - charset-normalizer==2.0.12
    - click==8.0.4
    - cloudpickle==2.0.0
    - codecov==2.1.12
    - colorama==0.4.4
    - coverage==6.3.2
    - cycler==0.11.0
    - cython==0.29.28
    - dgl-cu102==0.6.1
    - dgl-cu110==0.6.1
    - dgllife==0.2.9
    - drugood==0.0.1
    - filelock==3.6.0
    - flake8==4.0.1
    - fonttools==4.29.1
    - future==0.18.2
    - fuzzywuzzy==0.18.0
    - google-auth==2.6.0
    - google-auth-oauthlib==0.4.6
    - googledrivedownloader==0.4
    - grpcio==1.44.0
    - huggingface-hub==0.4.0
    - hyperopt==0.2.7
    - idna==3.3
    - importlib-metadata==4.11.1
    - iniconfig==1.1.1
    - interrogate==1.5.0
    - isodate==0.6.1
    - isort==4.3.21
    - jinja2==3.0.3
    - joblib==1.1.0
    - kiwisolver==1.3.2
    - littleutils==0.2.2
    - markdown==3.3.6
    - markupsafe==2.1.0
    - matplotlib==3.5.1
    - mccabe==0.6.1
    - mmcv==1.4.5
    - networkx==2.6.3
    - oauthlib==3.2.0
    - ogb==1.3.2
    - opencv-python==4.5.5.62
    - outdated==0.2.1
    - packaging==21.3
    - pandas==1.4.1
    - pluggy==1.0.0
    - prettytable==3.2.0
    - protobuf==3.19.4
    - py==1.11.0
    - py4j==0.10.9.3
    - pyasn1==0.4.8
    - pyasn1-modules==0.2.8
    - pycodestyle==2.8.0
    - pyflakes==2.4.0
    - pyparsing==3.0.7
    - pytdc==0.3.6
    - pytest==7.0.1
    - python-dateutil==2.8.2
    - pytz==2021.3
    - pyyaml==6.0
    - rdflib==6.1.1
    - rdkit-pypi==2021.9.4
    - regex==2022.1.18
    - requests==2.27.1
    - requests-oauthlib==1.3.1
    - rsa==4.8
    - sacremoses==0.0.47
    - scikit-learn==1.0.2
    - scipy==1.8.0
    - seaborn==0.11.2
    - tabulate==0.8.9
    - tensorboard==2.8.0
    - tensorboard-data-server==0.6.1
    - tensorboard-plugin-wit==1.8.1
    - threadpoolctl==3.1.0
    - tokenizers==0.11.5
    - toml==0.10.2
    - tomli==2.0.1
    - torch-cluster==1.5.9
    - torch-geometric==2.0.3
    - torch-scatter==2.0.7
    - torch-sparse==0.6.9
    - torch-spline-conv==1.2.1
    - tqdm==4.62.3
    - transformers==4.16.2
    - urllib3==1.26.8
    - wcwidth==0.2.5
    - werkzeug==2.0.3
    - wilds==2.0.0
    - xdoctest==0.15.10
    - yacs==0.1.8
    - yapf==0.32.0
    - zipp==3.7.0
prefix: /home/ly03/anaconda3/envs/drugood

Bug about IRM implementation

In the code provided with the original IRM article, the value of dummy_w (the self.scale in irm.py of your code) is never updated. In your code, however, this value is registered as a model parameter, which means it will be updated via the optimizer.
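For reference, a minimal sketch of the IRMv1 penalty as described in the original paper, where the dummy scale is a fresh tensor that the optimizer never updates (illustrative only, not the repository's irm.py):

import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # Dummy multiplier w from IRMv1: it requires grad so the penalty can be
    # computed, but it is never registered with (or stepped by) the optimizer.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()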

A bug about device inconsistency

Every provided backbone calls the function "move_to_device", which moves the data to device "cuda:0". This means the model must be defined on "cuda:0" and "cuda:0" must be included in gpu-ids, which is quite inconvenient.
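One possible fix is to infer the target device from the model's own parameters instead of hard-coding "cuda:0" (a sketch; the function signature and batch layout are assumptions, not the repository's API):

import torch

def move_to_device(batch, model):
    # Move every tensor in the batch to whichever device the model lives on,
    # so training no longer requires cuda:0 to be among the selected gpu-ids.
    device = next(model.parameters()).device
    return {k: (v.to(device) if torch.is_tensor(v) else v) for k, v in batch.items()}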

A bug on using protein family

Hi,

I'm testing the dataset curated from protein family, e.g., configs/curators/sbap_core_ki_protein_family.py. And I get the following exception:

Traceback (most recent call last):
  File "xxx/drugood/apis/curate.py", line 14, in curate_data
    data = curator.data_splitting(data)
  File "xxx/drugood/curators/curator.py", line 206, in data_splitting
    domain_value = domain_func(value_for_generating_domain)
  File "xxx/drugood/curators/get_domain_info.py", line 75, in protein_family
    class_id = self.protein_family_getter(protein_seq)
  File "xxx/drugood/curators/chembl/protein_family.py", line 48, in __call__
    target_level_class_id = self.get_target_level_class_id(class_id)
  File "xxx/drugood/curators/chembl/protein_family.py", line 37, in get_target_level_class_id
    class_id_cur_level = self.dict_id_to_parent_level[class_id_cur_level][0]
KeyError: None

It turns out that protein_family_level is None.


A quick update: this line fails to pass in the protein_family_level.

Data curation error caused by configuration files

Hello, I noticed that in some of the curator configuration Python files there exist nested definitions of assay.

For example:

(screenshot of the nested assay configuration, not reproduced here)

After removing one layer of the assay dictionary, the data curation process runs correctly.
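For illustration, the problem looks roughly like the first form below and is fixed by flattening it to the second (a sketch; the molecules_number values are taken from the demo snippet above, and the nesting itself is the point):

# Nested assay definition that breaks curation (illustrative):
noise_filter = dict(assay=dict(assay=dict(molecules_number=[50, 100])))

# Flattened definition that works:
noise_filter = dict(assay=dict(molecules_number=[50, 100]))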
