

Home Page: https://huggingface.co/spaces/huhlim/cg2all

License: Apache License 2.0


cg2all's Introduction

cg2all

Convert coarse-grained protein structure to all-atom model

Web server / Google Colab notebook

Hugging Face Spaces
A demo web page is available for converting a CG model to an all-atom structure via Hugging Face Spaces.

Google Colab
A Google Colab notebook is available for the following tasks:

  • Task 1: Conversion of an all-atom structure to a CG model using convert_all2cg
  • Task 2: Conversion of a CG model to an all-atom structure using convert_cg2all
  • Task 3: Conversion of a CG simulation trajectory to an atomistic simulation trajectory using convert_cg2all

Google Colab
Another Google Colab notebook is available for local optimization of a protein model structure against a cryo-EM density map using cryo_em_minimizer.py.

Installation

These steps install the Python libraries cg2all (this repository), a modified MDTraj, a modified SE3Transformer, and other dependencies. The installation also places the executables convert_cg2all and convert_all2cg in your Python binary directory.

This package has been tested on Linux (CentOS) and macOS (Apple Silicon, M1).

for CPU only

pip install git+http://github.com/huhlim/cg2all
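
After installation, the convert_cg2all and convert_all2cg executables should be available on your PATH. As a quick, optional check (not part of the original instructions), printing the help text confirms the install:

convert_cg2all --help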

for CUDA (GPU) usage

  1. Install Miniconda
  2. Create an environment with the DGL library with CUDA support
# This is an example with cudatoolkit=11.3.
# Set a cudatoolkit version that is compatible with your CUDA driver and the DGL library.
# dgl>=1.1 occasionally raises errors, so please use dgl<=1.0.
conda create --name cg2all pip cudatoolkit=11.3 dgl=1.0 -c dglteam/label/cu113
  3. Activate the environment
conda activate cg2all
  4. Install this package
pip install git+http://github.com/huhlim/cg2all
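
Before running conversions on the GPU, it can be worth confirming that PyTorch and DGL were installed with CUDA support. This is an optional sanity check, not part of the original instructions:

python -c "import torch, dgl; print('CUDA available:', torch.cuda.is_available(), '| DGL', dgl.__version__)"

If this prints False, revisit the cudatoolkit and DGL versions chosen above.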

for cryo_em_minimizer usage

You need an additional Python package, mrcfile, to handle cryo-EM density maps.

pip install mrcfile
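
As an optional check (not part of the original instructions), you can verify that a density map is readable before starting a minimization; here map.mrc is a placeholder for your own MRC/CCP4 file:

python -c "import mrcfile; m = mrcfile.open('map.mrc'); print(m.data.shape, m.voxel_size)"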

Usage

convert_cg2all

Convert a coarse-grained protein structure to an all-atom model.

usage: convert_cg2all [-h] -p IN_PDB_FN [-d IN_DCD_FN] -o OUT_FN [-opdb OUTPDB_FN]
                      [--cg {supported_cg_models}] [--chain-break-cutoff CHAIN_BREAK_CUTOFF] [-a]
                      [--fix] [--ckpt CKPT_FN] [--time TIME_JSON] [--device DEVICE] [--batch BATCH_SIZE] [--proc N_PROC]

options:
  -h, --help            show this help message and exit
  -p IN_PDB_FN, --pdb IN_PDB_FN
  -d IN_DCD_FN, --dcd IN_DCD_FN
  -o OUT_FN, --out OUT_FN, --output OUT_FN
  -opdb OUTPDB_FN
  --cg {supported_cg_models}
  --chain-break-cutoff CHAIN_BREAK_CUTOFF
  -a, --all, --is_all
  --fix, --fix_atom
  --standard-name
  --ckpt CKPT_FN
  --time TIME_JSON
  --device DEVICE
  --batch BATCH_SIZE
  --proc N_PROC

arguments

  • -p/--pdb: Input PDB file (mandatory).
  • -d/--dcd: Input DCD file (optional). If a DCD file is given, the input PDB file will be used to define its topology.
  • -o/--out/--output: Output PDB or DCD file (mandatory). If a DCD file is given as input, the output will be a DCD file; otherwise, a PDB file will be created.
  • -opdb: If a DCD file is given, the last snapshot will also be written to this PDB file. (optional)
  • --cg: Coarse-grained representation to use (optional, default=CalphaBasedModel).
    • CalphaBasedModel: CA-trace (atom names should be "CA")
    • ResidueBasedModel: Residue center-of-mass (atom names should be "CA")
    • SidechainModel: Sidechain center-of-mass (atom names should be "SC")
    • CalphaCMModel: CA-trace + Residue center-of-mass (atom names should be "CA" and "CM")
    • CalphaSCModel: CA-trace + Sidechain center-of-mass (atom names should be "CA" and "SC")
    • BackboneModel: Model only with backbone atoms (N, CA, C)
    • MainchainModel: Model only with mainchain atoms (N, CA, C, O)
    • Martini: Martini model
    • Martini3: Martini3 model
    • PRIMO: PRIMO model
  • --chain-break-cutoff: The CA-CA distance cutoff that determines chain breaks. (default=10 Angstroms)
  • --fix/--fix_atom: Preserve the coordinates given in the input CG model. For example, CA coordinates in a CA-trace model will be kept in the cg2all output model.
  • --standard-name: Output atom names follow the IUPAC nomenclature. (default=False; by default, output atom names use the CHARMM convention)
  • --ckpt: Input PyTorch ckpt file (optional). If a ckpt file is given, it overrides the "--cg" option.
  • --time: Output JSON file for recording timing. (optional)
  • --device: Specify the device to run the model on. (optional) You can choose "cpu" or "cuda", or the script will detect one automatically.
    "cpu" is usually faster than "cuda" unless the system is very large or the input is a DCD file with many frames, because loading the model ckpt file onto a GPU takes a long time.
  • --batch: The number of frames processed at a time. (optional, default=1)
  • --proc: The number of threads for loading input data. It is only used when processing a DCD file. (optional, default=OMP_NUM_THREADS or 1)

examples

Conversion of a PDB file

convert_cg2all -p tests/1ab1_A.calpha.pdb -o tests/1ab1_A.calpha.all.pdb --cg CalphaBasedModel

Conversion of a DCD trajectory file

convert_cg2all -p tests/1jni.calpha.pdb -d tests/1jni.calpha.dcd -o tests/1jni.calpha.all.dcd --cg CalphaBasedModel

Conversion of a PDB file using a ckpt file

convert_cg2all -p tests/1ab1_A.calpha.pdb -o tests/1ab1_A.calpha.all.pdb --ckpt CalphaBasedModel-104.ckpt
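
Conversion of a DCD trajectory on a GPU, combining options described above (an illustrative command, not from the original documentation; the batch size, thread count, and timing-file name are arbitrary, and the batch size should evenly divide the number of frames, as noted in the Issues section below)

convert_cg2all -p tests/1jni.calpha.pdb -d tests/1jni.calpha.dcd -o tests/1jni.calpha.all.dcd --cg CalphaBasedModel --device cuda --batch 4 --proc 4 --time timing.json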

convert_all2cg

Convert an all-atom protein structure to a coarse-grained model.

usage: convert_all2cg [-h] -p IN_PDB_FN [-d IN_DCD_FN] -o OUT_FN [--cg {supported_cg_models}]

options:
  -h, --help            show this help message and exit
  -p IN_PDB_FN, --pdb IN_PDB_FN
  -d IN_DCD_FN, --dcd IN_DCD_FN
  -o OUT_FN, --out OUT_FN, --output OUT_FN
  --cg

arguments

  • -p/--pdb: Input PDB file (mandatory).
  • -d/--dcd: Input DCD file (optional). If a DCD file is given, the input PDB file will be used to define its topology.
  • -o/--out/--output: Output PDB or DCD file (mandatory). If a DCD file is given as input, the output will be a DCD file; otherwise, a PDB file will be created.
  • --cg: Coarse-grained representation to use (optional, default=CalphaBasedModel).
    • CalphaBasedModel: CA-trace (atom names should be "CA")
    • ResidueBasedModel: Residue center-of-mass (atom names should be "CA")
    • SidechainModel: Sidechain center-of-mass (atom names should be "SC")
    • CalphaCMModel: CA-trace + Residue center-of-mass (atom names should be "CA" and "CM")
    • CalphaSCModel: CA-trace + Sidechain center-of-mass (atom names should be "CA" and "SC")
    • BackboneModel: Model only with backbone atoms (N, CA, C)
    • MainchainModel: Model only with mainchain atoms (N, CA, C, O)
    • Martini: Martini model
    • Martini3: Martini3 model
    • PRIMO: PRIMO model

an example

convert_all2cg -p tests/1ab1_A.pdb -o tests/1ab1_A.calpha.pdb --cg CalphaBasedModel
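
The two tools compose naturally: a CG model produced by convert_all2cg can be fed back to convert_cg2all to rebuild an all-atom structure, which serves as a quick round-trip check (this simply chains the two documented examples above):

convert_all2cg -p tests/1ab1_A.pdb -o tests/1ab1_A.calpha.pdb --cg CalphaBasedModel
convert_cg2all -p tests/1ab1_A.calpha.pdb -o tests/1ab1_A.calpha.all.pdb --cg CalphaBasedModel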

script/cryo_em_minimizer.py

Local optimization of a protein model structure against a given electron density map. This script is a proof of concept that uses the cg2all network to optimize a structure at CA-level resolution with objective functions at both the atomistic and CA-level resolutions. It is highly recommended to run it in a CUDA environment.

usage: cryo_em_minimizer [-h] -p IN_PDB_FN -m IN_MAP_FN -o OUT_DIR [-a]
                         [-n N_STEP] [--freq OUTPUT_FREQ]
                         [--chain-break-cutoff CHAIN_BREAK_CUTOFF]
                         [--restraint RESTRAINT]
                         [--cg {CalphaBasedModel,CA,ca,ResidueBasedModel,RES,res}]
                         [--standard-name] [--uniform_restraint]
                         [--nonuniform_restraint] [--segment SEGMENT_S]

options:
  -h, --help            show this help message and exit
  -p IN_PDB_FN, --pdb IN_PDB_FN
  -m IN_MAP_FN, --map IN_MAP_FN
  -o OUT_DIR, --out OUT_DIR, --output OUT_DIR
  -a, --all, --is_all
  -n N_STEP, --step N_STEP
  --freq OUTPUT_FREQ, --output_freq OUTPUT_FREQ
  --chain-break-cutoff CHAIN_BREAK_CUTOFF
  --restraint RESTRAINT
  --cg {CalphaBasedModel,CA,ca,ResidueBasedModel,RES,res}
  --standard-name
  --uniform_restraint
  --nonuniform_restraint
  --segment SEGMENT_S

arguments

  • -p/--pdb: Input PDB file (mandatory).
  • -m/--map: Input electron density map file in the MRC or CCP4 format (mandatory).
  • -o/--out/--output: Output directory to save optimized structures (mandatory).
  • -a/--all/--is_all: Whether the input PDB file is an atomistic structure. (optional, default=False)
  • -n/--step: The number of minimization steps. (optional, default=1000)
  • --freq/--output_freq: The interval between saving intermediate outputs. (optional, default=100)
  • --chain-break-cutoff: The CA-CA distance cutoff that determines chain breaks. (default=10 Angstroms)
  • --restraint: The weight of distance restraints. (optional, default=100.0)
  • --cg: Coarse-grained representation to use (default=ResidueBasedModel)
  • --standard-name: Output atom names follow the IUPAC nomenclature. (default=False; by default, output atom names use the CHARMM convention)
  • --uniform_restraint/--nonuniform_restraint: Whether to use uniform restraints. (default=True) If set to False, the restraint weights depend on the pLDDT values recorded in the PDB file's B-factor columns.
  • --segment: The segmentation method for applying rigid-body operations. (default=None)
    • None: The input structure is not segmented, so the same rigid-body operation is applied to the whole structure.
    • chain: The input structure is segmented based on chain IDs. Rigid-body operations are applied to each chain independently.
    • segment: Similar to the "chain" option, but the structure is segmented based on peptide-bond connectivity.
    • 0-99,100-199: Explicit segmentation based on 0-indexed residue numbers.

an example

./cg2all/script/cryo_em_minimizer.py -p tests/3isr.af2.pdb -m tests/3isr_5.mrc -o 3isr_5+3isr.af2 --all
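
A longer run with less frequent intermediate output and per-chain rigid-body segmentation might look like the following; this is an illustrative combination of the options described above, with arbitrary values for -n and --freq:

./cg2all/script/cryo_em_minimizer.py -p tests/3isr.af2.pdb -m tests/3isr_5.mrc -o 3isr_5+3isr.af2 --all -n 2000 --freq 200 --segment chain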

Datasets

The training, validation, and test sets are available on Zenodo.

Reference

Lim Heo & Michael Feig, "One particle per residue is sufficient to describe all-atom protein structures", bioRxiv (2023). Link



cg2all's Issues

NOTICE: please expect slower responses

I developed the method during my postdoctoral research and have since transitioned to industry. I am trying to maintain it as quickly as I can during my free time; however, please expect slower responses for bug fixes.

Error in running cg2all

I encountered this error during the conversion.

Traceback (most recent call last):
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/bin/convert_cg2all", line 8, in <module>
    sys.exit(main())
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/cg2all/script/convert_cg2all.py", line 190, in main
    batch = next(iter(input_s)).to(device)
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/torch/_utils.py", line 543, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/cg2all/lib/libdata.py", line 347, in __getitem__
    node_feat = cg.geom_to_feature(geom_s, cg.continuous, dtype=self.dtype)
  File "/home/nmrbox/0035/xiaxu/anaconda3/envs/cg2all/lib/python3.11/site-packages/cg2all/lib/libcg.py", line 213, in geom_to_feature
    f_in["0"] = torch.as_tensor(torch.cat(f_in["0"], axis=1), dtype=dtype)  # 17
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 111 but got size 113 for tensor number 1 in the list.

What could be the reason for this?

Error installing cg2all

Description

When installing the cg2all package, there is an issue with the mdtraj dependency.

Steps to reproduce

Simply install the CPU version of the package:

pip install git+http://github.com/huhlim/cg2all

Expected Behaviour

Cg2all installs.

Actual Behaviour

Here is part of the traceback of the error message:

   Compiling mdtraj/geometry/neighborlist.pyx because it changed.
  [ 1/12] Cythonizing mdtraj/formats/binpos/binpos.pyx
  [ 2/12] Cythonizing mdtraj/formats/dcd/dcd.pyx
  [ 3/12] Cythonizing mdtraj/formats/dtr/dtr.pyx
  [ 4/12] Cythonizing mdtraj/formats/tng/tng.pyx
  [ 5/12] Cythonizing mdtraj/formats/xtc/trr.pyx
  [ 6/12] Cythonizing mdtraj/formats/xtc/xtc.pyx
  [ 7/12] Cythonizing mdtraj/geometry/drid.pyx
  [ 8/12] Cythonizing mdtraj/geometry/neighborlist.pyx
  [ 9/12] Cythonizing mdtraj/geometry/neighbors.pyx
  [10/12] Cythonizing mdtraj/geometry/src/_geometry.pyx
  Traceback (most recent call last):
    File "/home/marbesu/opt/mambaforge/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
      main()
    File "/home/marbesu/opt/mambaforge/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/home/marbesu/opt/mambaforge/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 149, in prepare_metadata_for_build_wheel
      return hook(metadata_directory, config_settings)
    File "/tmp/pip-build-env-_u4n_672/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 380, in prepare_metadata_for_build_wheel
      self.run_setup()
    File "/tmp/pip-build-env-_u4n_672/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 487, in run_setup
      super(_BuildMetaLegacyBackend,
    File "/tmp/pip-build-env-_u4n_672/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 338, in run_setup
      exec(code, locals())
    File "<string>", line 304, in <module>
    File "/tmp/pip-build-env-_u4n_672/overlay/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1134, in cythonize
      cythonize_one(*args)
    File "/tmp/pip-build-env-_u4n_672/overlay/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1301, in cythonize_one
      raise CompileError(None, pyx_file)
  Cython.Compiler.Errors.CompileError: mdtraj/geometry/src/_geometry.pyx
  [end of output]

Error while running tests

Hello.
Thanks for this awesome work.
I tried to install following the instructions.
The installation goes well, but when I try to convert the test systems, I get the following error.
I have tried different versions of cudatoolkit: 11.3, 11.6, 11.7, and 11.8.
I have also tried installing dgl from source but I get the same error.
Have you seen this before?

(cg2all) tambones@PC-EQA2-03:~/softwares/cg2all/tests$ convert_cg2all -p 1ab1_A.martini.pdb -o test_martini.pdb --cg Martini
Downloading from Zenodo ... /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/cg2all/model/Martini.ckpt
Traceback (most recent call last):
  File "/home/tambones/anaconda3/envs/cg2all/bin/convert_cg2all", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/cg2all/script/convert_cg2all.py", line 183, in main
    batch = next(iter(input_s)).to(device)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/heterograph.py", line 5709, in to
    ret._graph = self._graph.copy_to(utils.to_dgl_context(device))
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/heterograph_index.py", line 255, in copy_to
    return _CAPI_DGLHeteroCopyTo(self, ctx.device_type, ctx.device_id)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "dgl/_ffi/_cython/./function.pxi", line 295, in dgl._ffi._cy3.core.FunctionBase.__call__
  File "dgl/_ffi/_cython/./function.pxi", line 227, in dgl._ffi._cy3.core.FuncCall
  File "dgl/_ffi/_cython/./function.pxi", line 217, in dgl._ffi._cy3.core.FuncCall3
dgl._ffi.base.DGLError: [20:54:36] /opt/dgl/src/runtime/cuda/cuda_device_api.cc:343: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading: CUDA: unspecified launch failure
Stack trace:
  [bt] (0) /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/libdgl.so(+0x8b08b5) [0x7fbe978b08b5]
  [bt] (1) /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/libdgl.so(dgl::runtime::CUDADeviceAPI::CopyDataFromTo(void const*, unsigned long, void*, unsigned long, unsigned long, DGLContext, DGLContext, DGLDataType)+0x82) [0x7fbe978b2d12]
  [bt] (2) /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/libdgl.so(dgl::runtime::NDArray::CopyFromTo(DGLArray*, DGLArray*)+0x10d) [0x7fbe977291ed]
  [bt] (3) /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/libdgl.so(dgl::runtime::NDArray::CopyTo(DGLContext const&) const+0x103) [0x7fbe97764d53]
  [bt] (4) /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/libdgl.so(dgl::UnitGraph::CopyTo(std::shared_ptr<dgl::BaseHeteroGraph>, DGLContext const&)+0x3ff) [0x7fbe97872d4f]
  [bt] (5) /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/libdgl.so(dgl::HeteroGraph::CopyTo(std::shared_ptr<dgl::BaseHeteroGraph>, DGLContext const&)+0xf6) [0x7fbe97771596]
  [bt] (6) /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/libdgl.so(+0x77ffd6) [0x7fbe9777ffd6]
  [bt] (7) /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/libdgl.so(DGLFuncCall+0x48) [0x7fbe9770e278]
  [bt] (8) /home/tambones/anaconda3/envs/cg2all/lib/python3.11/site-packages/dgl/_ffi/_cy3/core.cpython-311-x86_64-linux-gnu.so(+0x1a79f) [0x7fbf292e479f]

I would be really grateful for any suggestions.
Best,
Amin

atom renaming not consistent with IUPAC nomenclature

Hello cg2all team,
first of all, thanks for your fantastic tool.

I have been running cg2all and realized (if I followed the logic correctly) that the data_dict["ATOM_NAME_ALT_s"] loaded from cg2all/data/residue_constants.pkl in cg2all/lib/residue_constants.py is used to name the newly added side-chain atoms. Since, in particular, CD is used instead of CD1 for Ile, which does not follow the IUPAC atom nomenclature, I would like to ask whether there is any specific reason for that particular naming scheme.

Thanks in advance!

Error when converting a DCD trajectory file

I successfully installed the package (CPU version) and tried converting pdb and dcd files from CG to All-atom structures. It worked pretty well for a single pdb file. But I kept getting the following errors when converting a dcd file using the example command line:

Command line:
convert_cg2all -p tests/1jni.calpha.pdb -d tests/1jni.calpha.dcd -o tests/1jni.calpha.all.dcd --cg CalphaBasedModel

Error:
File "/storage1/fancao/miniconda/envs/cg2all_cpu/lib/python3.8/site-packages/cg2all/lib/libcg.py", line 91, in read_cg
self.bfactors_cg[:, i_res, i_atm] = self.traj.bfactors[:, atom.index]
AttributeError: 'Trajectory' object has no attribute 'bfactors'

It seems that mdtraj.Trajectory does not have a 'bfactors' attribute, but the code uses it as an attribute anyway.

Am I missing something?

Google Colab issue

Hi,

I've been using cg2all on Google Colab and it worked well a few months ago. Today I was trying to use and I keep receiving the same error message:


DGL backend not selected or invalid.  Assuming PyTorch for now.
Setting the default backend to "pytorch". You can change it in the ~/.dgl/config.json file or export the DGLBACKEND environment variable.  Valid options are: pytorch, mxnet, tensorflow (all lowercase)
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
/usr/local/lib/python3.10/site-packages/torch/__init__.py in _load_global_deps()
    171     try:
--> 172         ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
    173     except OSError as err:

10 frames
OSError: /usr/local/lib/python3.10/site-packages/torch/lib/../../nvidia/cublas/lib/libcublas.so.11: undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
/usr/lib/python3.10/ctypes/__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error, winmode)
    372 
    373         if handle is None:
--> 374             self._handle = _dlopen(self._name, mode)
    375         else:
    376             self._handle = handle

OSError: /usr/local/lib/python3.10/site-packages/nvidia/cublas/lib/libcublas.so.11: undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11


Can you please help me to address this issue?

Thank you.

Possible error in PDBset in libdata.py

The __getitem__ method of PDBset calls cg.get_geometry, whose input variables are pos (containing the coordinates of the CG model), cg.atom_mask_cg, and cg.continuous[0].

However, if the CG model has invalid residues (i.e., if cg.atom_mask_cg[:, 0] has some zeros), the shapes of pos and cg.atom_mask_cg could differ, because pos = r_cg[valid_residue, :] while cg.atom_mask_cg does not go through an equivalent filtering step.

Having invalid residues in the CG model would be inappropriate for running the algorithm, but it would be good to prevent this possible error.

Batch size needs to be a divisor of the number of frames

Batch size needs to be a divisor of the number of frames. For example, if there are 100 frames in the input DCD file, batch_size should be a divisor (e.g., 2, 4, 5, 10, ...). Otherwise, some of the last frames are not processed.

Martini3 model

Martini3 model has a different mapping rule. Train a model for Martini3.
