tencent-ailab / drugood
OOD Dataset Curator and Benchmark for AI-aided Drug Discovery
License: Other
In the code provided with the original IRM article, the value of dummy_w (the self.scale in irm.py of your code) is never updated. In your code, however, this value is registered as a parameter of the model, which means it will be updated by the optimizer.
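For context, in the reference implementation the dummy weight exists only so the risk can be differentiated with respect to it; a minimal sketch of the IRMv1 penalty under that reading (names here are illustrative, not the repository's own):

import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # Dummy classifier weight: requires grad (so we can differentiate the
    # risk w.r.t. it) but is NOT handed to any optimizer, so it stays 1.0.
    scale = torch.tensor(1.0, requires_grad=True, device=logits.device)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

# By contrast, registering the scale as a module parameter, e.g.
#     self.scale = nn.Parameter(torch.tensor(1.0))
# puts it in model.parameters(), so optimizer.step() moves it away from
# 1.0 -- which is the discrepancy raised in this issue.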
Dear authors,
First, thank you for the excellent benchmark. I have one question about the protein target in the LBAP task. In your paper, on page 12, you state: "In LBAP, we follow the common practice and do not involve any protein target information, which is usually used in the activity prediction for one specific protein target." To my understanding, this means the protein target should be kept the same in both the training set and the test set. Is that correct? However, I cannot find the code that enforces this requirement. If I missed it, could you point out where it is? Thanks in advance.
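If it helps to make the question concrete, the property could in principle be checked directly from the curated annotation files; a hypothetical sketch, assuming each sample carried a target identifier (the field name "protein" below is an assumption, not DrugOOD's actual schema):

import json

def target_set(path):
    # Collect the (assumed) target identifier of every sample in one split.
    with open(path) as f:
        samples = json.load(f)
    return {s.get("protein") for s in samples}

# Equal sets would mean train and test share the same protein targets:
# print(target_set("train.json") == target_set("test.json"))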
Hi, I am very interested in your work and want to generate new datasets of my own, but a segmentation fault always occurs when I run curate.py. For example, in cell [15] of demo.ipynb, running the code below twice produces the same crash:
from mmcv import Config
cfg = Config.fromfile('../configs/curators/lbap_core_ec50_assay.py')
cfg.path.source_root = '/data/ly03/DrugOOD/CHEMBL_SQLLITE/chembl_29_sqlite/chembl_29.db'
cfg.path.target_root = '/data/ly03/DrugOOD/dataset/'
cfg.noise_filter.assay.molecules_number = [50, 100]
cfg.path.task.subset = "lbap_core_ec50_assay_custom"
print(f'Built-in Config:\n{cfg.pretty_text}')
from drugood.apis.curate import curate_data
curate_data(cfg)
I have already run conda clean -a, but it did not help with this problem.
I think the segmentation fault may occur when the addresses of variables are not defined as constants; for instance, the dicts of 'sample' are not assigned here. But I notice the code in demo.ipynb does not reference these variables either. So could this error be caused by CPU/GPU limitations? I am using Ubuntu 16.04 with 94 GB of memory and a 12 GB TITAN V GPU. Is this hardware sufficient to generate the datasets?
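In the meantime, one way to localize such a crash is to enable Python's standard-library fault handler before calling curate_data; a minimal sketch, reusing the cfg built above:

import faulthandler

# Dump a Python-level traceback if the interpreter receives SIGSEGV,
# which helps tell whether the crash happens in the curator itself or
# in a native extension (e.g. RDKit or sqlite).
faulthandler.enable()

from drugood.apis.curate import curate_data
curate_data(cfg)  # same cfg as built in the snippet above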
Looking forward to your advice! Thank you for your great work.
P.S.
ulimit -a
the info is:
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 384842
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 384842
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My conda environment is:
name: drugood
channels:
- brown-data-science
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
- bioconda
- conda-forge
- defaults
- r
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=4.5=1_gnu
- blas=1.0=mkl
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2021.10.8=ha878542_0
- certifi=2021.10.8=py38h578d9bd_2
- cudatoolkit=10.2.89=hfd86e86_1
- ffmpeg=4.3=hf484d3e_0
- freetype=2.10.4=h5ab3b9f_0
- gcc=5.4.0=0
- gmp=6.2.1=h2531618_2
- gnutls=3.6.15=he1e5248_0
- intel-openmp=2021.3.0=h06a4308_3350
- jpeg=9b=h024ee3a_2
- lame=3.100=h7b6447c_0
- lcms2=2.12=h3be6417_0
- ld_impl_linux-64=2.35.1=h7274673_9
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h5101ec6_17
- libgomp=9.3.0=h5101ec6_17
- libiconv=1.15=h63c8f33_5
- libidn2=2.3.1=h27cfd23_0
- libpng=1.6.37=hbc83047_0
- libstdcxx-ng=9.3.0=hd4cf53a_17
- libtasn1=4.16.0=h27cfd23_0
- libtiff=4.2.0=h85742a9_0
- libunistring=0.9.10=h27cfd23_0
- libuv=1.40.0=h7b6447c_0
- libwebp-base=1.2.0=h27cfd23_0
- lz4-c=1.9.3=h2531618_0
- mkl=2021.3.0=h06a4308_520
- mkl-service=2.4.0=py38h7f8727e_0
- mkl_fft=1.3.0=py38h42c9631_2
- mkl_random=1.2.2=py38h51133e4_0
- ncurses=6.2=he6710b0_1
- nettle=3.7.3=hbbd107a_1
- ninja=1.10.2=hff7bd54_1
- numpy=1.20.3=py38hf144106_0
- numpy-base=1.20.3=py38h74d4b33_0
- olefile=0.46=py_0
- openh264=2.1.0=hd408876_0
- openjpeg=2.3.0=h05c96fa_1
- openssl=1.1.1k=h7f98852_0
- pillow=8.3.1=py38h2c7a002_0
- pip=21.1.3=py38h06a4308_0
- python=3.8.5=h7579374_1
- python_abi=3.8=2_cp38
- pytorch=1.7.1=py3.8_cuda10.2.89_cudnn7.6.5_0
- readline=8.1=h27cfd23_0
- setuptools=52.0.0=py38h06a4308_0
- six=1.16.0=pyhd3eb1b0_0
- sqlite=3.36.0=hc218d9a_0
- tk=8.6.10=hbc83047_0
- torchaudio=0.7.2=py38
- torchvision=0.8.2=py38_cu102
- tree=1.8.0=h7f98852_2
- typing_extensions=3.10.0.0=pyh06a4308_0
- wheel=0.36.2=pyhd3eb1b0_0
- xz=5.2.5=h7b6447c_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.9=haebb681_0
- pip:
- absl-py==1.0.0
- addict==2.4.0
- attrs==21.4.0
- cachetools==5.0.0
- charset-normalizer==2.0.12
- click==8.0.4
- cloudpickle==2.0.0
- codecov==2.1.12
- colorama==0.4.4
- coverage==6.3.2
- cycler==0.11.0
- cython==0.29.28
- dgl-cu102==0.6.1
- dgl-cu110==0.6.1
- dgllife==0.2.9
- drugood==0.0.1
- filelock==3.6.0
- flake8==4.0.1
- fonttools==4.29.1
- future==0.18.2
- fuzzywuzzy==0.18.0
- google-auth==2.6.0
- google-auth-oauthlib==0.4.6
- googledrivedownloader==0.4
- grpcio==1.44.0
- huggingface-hub==0.4.0
- hyperopt==0.2.7
- idna==3.3
- importlib-metadata==4.11.1
- iniconfig==1.1.1
- interrogate==1.5.0
- isodate==0.6.1
- isort==4.3.21
- jinja2==3.0.3
- joblib==1.1.0
- kiwisolver==1.3.2
- littleutils==0.2.2
- markdown==3.3.6
- markupsafe==2.1.0
- matplotlib==3.5.1
- mccabe==0.6.1
- mmcv==1.4.5
- networkx==2.6.3
- oauthlib==3.2.0
- ogb==1.3.2
- opencv-python==4.5.5.62
- outdated==0.2.1
- packaging==21.3
- pandas==1.4.1
- pluggy==1.0.0
- prettytable==3.2.0
- protobuf==3.19.4
- py==1.11.0
- py4j==0.10.9.3
- pyasn1==0.4.8
- pyasn1-modules==0.2.8
- pycodestyle==2.8.0
- pyflakes==2.4.0
- pyparsing==3.0.7
- pytdc==0.3.6
- pytest==7.0.1
- python-dateutil==2.8.2
- pytz==2021.3
- pyyaml==6.0
- rdflib==6.1.1
- rdkit-pypi==2021.9.4
- regex==2022.1.18
- requests==2.27.1
- requests-oauthlib==1.3.1
- rsa==4.8
- sacremoses==0.0.47
- scikit-learn==1.0.2
- scipy==1.8.0
- seaborn==0.11.2
- tabulate==0.8.9
- tensorboard==2.8.0
- tensorboard-data-server==0.6.1
- tensorboard-plugin-wit==1.8.1
- threadpoolctl==3.1.0
- tokenizers==0.11.5
- toml==0.10.2
- tomli==2.0.1
- torch-cluster==1.5.9
- torch-geometric==2.0.3
- torch-scatter==2.0.7
- torch-sparse==0.6.9
- torch-spline-conv==1.2.1
- tqdm==4.62.3
- transformers==4.16.2
- urllib3==1.26.8
- wcwidth==0.2.5
- werkzeug==2.0.3
- wilds==2.0.0
- xdoctest==0.15.10
- yacs==0.1.8
- yapf==0.32.0
- zipp==3.7.0
prefix: /home/ly03/anaconda3/envs/drugood
Every provided backbone calls the function "move_to_device", which moves the data to device "cuda:0". This means the model must be defined on "cuda:0" and "cuda:0" must be included in gpu-ids, which is quite inconvenient.
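A device-agnostic alternative would be to derive the target device from the model itself rather than hardcoding "cuda:0"; a minimal sketch under that assumption (the function and argument names are illustrative, not the repository's current API):

import torch

def move_to_model_device(data, model):
    # Infer the target device from the model's own parameters.
    device = next(model.parameters()).device
    if torch.is_tensor(data):
        return data.to(device)
    if isinstance(data, dict):
        return {k: move_to_model_device(v, model) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return type(data)(move_to_model_device(v, model) for v in data)
    return data  # leave non-tensor leaves (e.g. strings) untouched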
Hi,
I'm testing the dataset curated by protein family, e.g., configs/curators/sbap_core_ki_protein_family.py, and I get the following exception:
Traceback (most recent call last):
  File "xxx/drugood/apis/curate.py", line 14, in curate_data
    data = curator.data_splitting(data)
  File "xxx/drugood/curators/curator.py", line 206, in data_splitting
    domain_value = domain_func(value_for_generating_domain)
  File "xxx/drugood/curators/get_domain_info.py", line 75, in protein_family
    class_id = self.protein_family_getter(protein_seq)
  File "xxx/drugood/curators/chembl/protein_family.py", line 48, in __call__
    target_level_class_id = self.get_target_level_class_id(class_id)
  File "xxx/drugood/curators/chembl/protein_family.py", line 37, in get_target_level_class_id
    class_id_cur_level = self.dict_id_to_parent_level[class_id_cur_level][0]
KeyError: None
It turns out that protein_family_level is None.
A quick update: this line fails to pass in the protein_family_level.
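An early validation of the level argument would turn the opaque "KeyError: None" into an explicit error; a minimal sketch (the names mirror the traceback, but the walk logic is illustrative, not the repository's actual implementation):

def get_target_level_class_id(class_id, dict_id_to_parent_level, protein_family_level):
    # Guard against the suspected cause: a level that was never passed in.
    if protein_family_level is None:
        raise ValueError("protein_family_level must be provided, got None")
    class_id_cur_level = class_id
    # Walk up the parent map until the requested level is reached.
    for _ in range(protein_family_level):
        class_id_cur_level = dict_id_to_parent_level[class_id_cur_level][0]
    return class_id_cur_level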