
RiboNucleic Acid (RNA) Language Model

Home Page: https://sikic-lab.github.io/

License: Apache License 2.0

Topics: bioinformatics, foundation-model, language-model, rna, rna-secondary-structure, rna-seq, rna-structure-prediction, secondary-structure, splice-site-prediction, structural-biology

rinalmo's Introduction

RiboNucleic Acid Language Model - RiNALMo

Rafael Josip Penić¹, Tin Vlašić², Roland G. Huber³, Yue Wan², Mile Šikić²
¹Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
²Genome Institute of Singapore (GIS), Agency for Science, Technology and Research (A*STAR), Singapore
³Bioinformatics Institute (BII), Agency for Science, Technology and Research (A*STAR), Singapore

This is the official implementation of the paper "RiNALMo: General-Purpose RNA Language Models Can Generalize Well on Structure Prediction Tasks".

About

Ribonucleic acid (RNA) plays a variety of crucial roles in fundamental biological processes. Recently, RNA has become an interesting drug target, emphasizing the need to improve our understanding of its structures and functions. Over the years, sequencing technologies have produced an enormous amount of unlabeled RNA data, which hides important knowledge and potential. Motivated by the successes of protein language models, we introduce RiboNucleic Acid Language Model (RiNALMo) to help unveil the hidden code of RNA. RiNALMo is the largest RNA language model to date with 650 million parameters pre-trained on 36 million non-coding RNA sequences from several available databases. RiNALMo is able to extract hidden knowledge and capture the underlying structure information implicitly embedded within the RNA sequences. RiNALMo achieves state-of-the-art results on several downstream tasks. Notably, we show that its generalization capabilities can overcome the inability of other deep learning methods for secondary structure prediction to generalize on unseen RNA families.

Quick Start - Inference

Use the following commands for installation (prerequisites: Python>=3.8 and CUDA>=11.8):

git clone https://github.com/lbcb-sci/RiNALMo
cd RiNALMo
pip install .
pip install flash-attn==2.3.2

After installation, you can use RiNALMo to obtain nucleotide representations:

import torch
from rinalmo.pretrained import get_pretrained_model

DEVICE = "cuda:0"

# Load the pre-trained model and its tokenization alphabet
model, alphabet = get_pretrained_model(model_name="giga-v1")
model = model.to(device=DEVICE)
model.eval()  # disable dropout so repeated runs give identical outputs
seqs = ["ACUUUGGCCA", "CCCGGU"]

# Tokenize the batch; shorter sequences are padded to the longest one
tokens = torch.tensor(alphabet.batch_tokenize(seqs), dtype=torch.int64, device=DEVICE)
with torch.no_grad(), torch.cuda.amp.autocast():
    outputs = model(tokens)

print(outputs["representation"])
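
The "representation" tensor has one 1280-dimensional vector per token, where each row holds [CLS], the nucleotide tokens, [EOS], and then padding up to the longest sequence in the batch (see the issues below). If you need a single fixed-size embedding per sequence, a minimal sketch, assuming this token layout, is to mean-pool over the nucleotide positions only; this pooling is an illustration, not an official API:

# Hedged sketch: one embedding per sequence via mean pooling over real bases
reps = outputs["representation"]
embeddings = torch.stack([
    reps[i, 1 : 1 + len(seq)].mean(dim=0)  # skip [CLS] at 0, stop before [EOS]/padding
    for i, seq in enumerate(seqs)
])
print(embeddings.shape)  # torch.Size([2, 1280])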

Installation

  1. Clone the repo.
git clone https://github.com/lbcb-sci/RiNALMo
cd RiNALMo
  2. Create the conda environment. All external dependencies are contained in environment.yml.
# create conda environment for RiNALMo
conda env create -f environment.yml

# activate RiNALMo environment
conda activate rinalmo
  3. Download the pre-trained weights.
mkdir weights
cd weights
wget https://zenodo.org/records/10725749/files/rinalmo_giga_pretrained.pt
  4. Download the fine-tuned weights.
# Download fine-tuned weights for secondary structure prediction.
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_archiveII-16s_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_archiveII-23s_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_archiveII-5s_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_archiveII-srp_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_archiveII-grp1_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_archiveII-telomerase_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_archiveII-tmRNA_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_archiveII-tRNA_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_archiveII-RNaseP_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_ss_bprna_ft.pt

# Download fine-tuned weights for splice-site prediction.
wget https://zenodo.org/records/10725749/files/rinalmo_giga_splice_acceptor_ft.pt
wget https://zenodo.org/records/10725749/files/rinalmo_giga_splice_donor_ft.pt

# Download fine-tuned weights for mean ribosome loading prediction.
wget https://zenodo.org/records/10725749/files/rinalmo_giga_mrl_ft.pt

cd ..

Usage

We provide pre-trained RiNALMo weights and fine-tuned weights for three downstream tasks: mean ribosome loading prediction, secondary structure prediction and splice-site prediction. For both evaluation and fine-tuning, use the train_<downstream_task>.py scripts.

Evaluation

To evaluate the provided fine-tuned RiNALMo models and prediction heads, run the scripts with the following input arguments:

# skip fine-tuning and run the evaluation on the test set
--test_only
# path to the '.pt' file containing fine-tuned model weights
--init_params
# dataset on which you would like to evaluate the fine-tuned model
--dataset
# download and prepare data (if needed)
--prepare_data
# directory that will contain (or already contains) the training, validation and test data (positional argument)
data_dir
# directory for all the output files
--output_dir

Example

To evaluate the fine-tuned RiNALMo model and prediction head on the archiveII 5S rRNA test set for secondary structure prediction, use the rinalmo_giga_ss_archiveII-5s_ft.pt weights. An example run command:

python train_sec_struct_prediction.py ./ss_data --test_only --init_params ./weights/rinalmo_giga_ss_archiveII-5s_ft.pt --dataset archiveII_5s --prepare_data --output_dir ./outputs/archiveII/5s/ --accelerator gpu --devices 1

Fine-tuning

To fine-tune RiNALMo, use the --pretrained_rinalmo_weights ./weights/rinalmo_giga_pretrained.pt input argument. Use --help to learn about the other available arguments. For the splice-site prediction task, the dataset and data preprocessing code are available at https://git.unistra.fr/nscalzitti/spliceator.git.
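
For instance, a fine-tuning run for secondary structure prediction might look like the command below; all flags appear elsewhere in this README, but the --dataset value bpRNA is an assumption here (check the script's --help for the accepted names):

python train_sec_struct_prediction.py ./ss_data --pretrained_rinalmo_weights ./weights/rinalmo_giga_pretrained.pt --dataset bpRNA --prepare_data --output_dir ./outputs/bprna_ft/ --accelerator gpu --devices 1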

License

Copyright 2024 Šikić Lab - AI in Genomics

RiNALMo Code License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Model Parameters License

The RiNALMo parameters are made available under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license. You can find details at: https://creativecommons.org/licenses/by/4.0/legalcode.

Citation

If you find our work useful in your research, please cite:

@article{penic2024_rinalmo,
  title={RiNALMo: General-Purpose RNA Language Models Can Generalize Well on Structure Prediction Tasks},
  author={Penić, Rafael Josip and Vlašić, Tin and Huber, Roland G. and Wan, Yue and Šikić, Mile},
  journal={arXiv preprint arXiv:2403.00043},
  year={2024}
}

Contact

If you have any questions, please feel free to email the authors or open an issue.

Acknowledgment

This work was supported in part by the National Research Foundation (NRF) Competitive Research Programme (CRP) under the project Identifying Functional RNA Tertiary Structures in Dengue Virus (NRF-CRP27-2021RS-0001), and in part by A*STAR under the grant GAP2: A*STAR RNA-Foundation Model (A*STAR RNA-FM) (I23D1AG079).

The computational work for the paper was partially performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg).


rinalmo's Issues

Cannot access pretrained weights

Thank you for open-sourcing the project. When I try to access the weights with wget https://zenodo.org/records/10725749/files/rinalmo_giga_pretrained.pt, it returns an error, and I cannot open the link either. Could you please check the files?

Same RNA, different representation

Hello, is this normal?
My test.py:

import torch
from rinalmo.pretrained import get_pretrained_model

DEVICE = "cuda:0"

model, alphabet = get_pretrained_model(model_name="rinalmo_giga_pretrained")
model = model.to(device=DEVICE)
seqs = ["CCCGGU", "CCCGGU"]

tokens = torch.tensor(alphabet.batch_tokenize(seqs), dtype=torch.int64, device=DEVICE)
with torch.no_grad(), torch.cuda.amp.autocast():
    outputs = model(tokens)

print(outputs["representation"])

But the output for the two identical sequences differs:
python test.py
tensor([[[ 0.0209, -0.3792, -0.9592,  ..., -0.3661, -0.4986, -1.0630],
         [-0.1543, -0.8713, -0.6534,  ..., -0.7442, -0.4688,  0.0491],
         [-0.1923, -1.0140, -1.3560,  ..., -2.0971, -1.1946, -0.7145],
         ...,
         [ 1.5532, -1.9415, -1.3395,  ..., -1.3404, -1.1100,  0.9047],
         [-0.1968, -0.5992,  0.3608,  ..., -1.4525, -0.8330,  0.4122],
         [-1.1677,  0.0836, -0.1704,  ..., -0.8856, -0.8993, -0.1143]],

        [[ 0.1576, -0.0849, -1.1658,  ..., -0.1120, -0.8494, -0.4571],
         [ 0.2583, -0.0431, -0.1226,  ..., -1.9443, -0.7913,  0.4501],
         [ 0.0782, -0.8882, -0.7555,  ..., -0.7302, -1.6658,  0.0445],
         ...,
         [ 1.3045, -1.9552, -2.3737,  ..., -0.5877, -1.6685,  0.6632],
         [ 0.5900, -0.9660, -0.0392,  ..., -1.1003, -2.0937,  1.4232],
         [-0.7117, -0.8371, -0.3525,  ..., -1.1058, -1.0734, -0.6338]]],
       device='cuda:0')

The pre-trained weights were downloaded from https://zenodo.org/records/10725749/files/rinalmo_giga_pretrained.pt.
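
A likely cause: unlike the Quick Start example above, this script never calls model.eval(), so dropout stays active and each forward pass samples different dropout masks. A minimal sketch of the fix:

model.eval()  # switch out of training mode; dropout becomes a no-op
with torch.no_grad(), torch.cuda.amp.autocast():
    outputs = model(tokens)  # identical sequences now yield identical representations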

No support for flash-attn transformer model

Hi there,
Have you ever pre-trained a model built from scratch without flash-attn, as provided for in your code? The issue is that my CUDA version is 11.4 and does not support flash-attn, so the model weights in "rinalmo_giga_pretrained.pt", which were trained in flash-attn mode, are not compatible with the model built from scratch.
I am trying to modify the plain model structure to be consistent with the flash-attn model so that I can load the flash-attn weights onto my plain model, but this is nontrivial for me and I cannot be sure it will work in the end. Any suggestions?

How to understand the meaning of RNA representation

Hello, thank you for your reply, I have solved it, but now I have another question.
I modified my test.py file as follows:

import torch
from rinalmo.pretrained import get_pretrained_model

DEVICE = "cuda:0"

model, alphabet = get_pretrained_model(model_name="rinalmo_giga_pretrained")
model.eval()
model = model.to(device=DEVICE)
seqs = ["CCCGGU"]

tokens = torch.tensor(alphabet.batch_tokenize(seqs), dtype=torch.int64, device=DEVICE)
with torch.no_grad(), torch.cuda.amp.autocast():
    outputs = model(tokens)

for rep in outputs["representation"]:
    print(rep.shape)

output:
torch.Size([8, 1280])

if seqs = ["ACUUUGGCCA"]
output:
torch.Size([12, 1280])

if seqs = ["ACUUUGGCCA","CCCGGU"]
output:
torch.Size([12, 1280])
torch.Size([12, 1280])

It seems that the output's length dimension is determined by the longest sequence in the input batch (every sequence begins with a [CLS] token and ends with an [EOS] token), and the excess positions are filled with padding according to your rules.
Can I understand each 1280-dimensional tensor as representing one base?
But according to your paper:
"an RNA sequence is tokenized and turned into a 1280-dimension vector using a learned input embedding model."
How should I understand the meaning of this output, and how do I fix the sequence dimensions to facilitate downstream tasks, such as predicting interactions between RNAs?
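
One way to read this output, assuming the [CLS]/[EOS]/padding layout described above: each 1280-dimensional row is one token, and the vectors for the actual bases can be sliced out per sequence. A hedged sketch, continuing from the script above:

for rep, seq in zip(outputs["representation"], seqs):
    per_base = rep[1 : 1 + len(seq)]  # drop the [CLS], [EOS] and padding rows
    print(per_base.shape)             # (len(seq), 1280): one vector per base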

Pre-trained weights for RiNALMo-135M?

Hello! Are there any plans to release the pre-trained weights for the smaller RiNALMo-135M model and/or any of the other configurations available (nano, micro)?

Train database usage

Hi,

I noticed you are using a combination of databases, including RNAcentral, Rfam, Ensembl and nt.

May I ask why you chose these databases?

Specifically, RNAcentral should be a superset of Rfam and Ensembl. And while nt is not part of RNAcentral, it should be very similar to the ENA database, which is also a subset of RNAcentral.

Also, what data deduplication pipeline was applied to remove redundancy?

Inference

Hi,
I've had no issues installing and running your code. I am interested in using your model for inference, for example predicting the MRL of various RNA sequences. I've tried modifying your code to do so, so far without much success.

Any help you can provide with such a task would be greatly appreciated. The idea is to pass in a new CSV dataset of RNA sequences and output the predictions of the fine-tuned model (MRL, for example).

Thanks!

cluster

Hello,
I would like to ask how you used MMseqs2 to cluster the RNA sequences, e.g. what values were set for parameters such as identity and coverage.
Thanks!
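
For reference, in MMseqs2 the sequence identity and coverage thresholds are set with --min-seq-id and -c; the values below are illustrative placeholders, not the settings used for RiNALMo:

# cluster sequences at 80% identity and 80% coverage (placeholder values)
mmseqs easy-cluster sequences.fasta cluster_res tmp --min-seq-id 0.8 -c 0.8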

A small issue in layer_norm

Hey,

Thank you for your great work, it looks awesome.

When I tried to reproduce your work, I noticed a tiny issue:

In lines 122-128 of modules.py, the layer-norm output of the hidden states is used as the residual path of the attention block, whereas in the standard transformer implementation the hidden states before layer norm are used as the residual path.

It shouldn't be a big problem, but if anyone is unable to reproduce the claimed results, this may be the cause.
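
Schematically, the difference described here looks as follows; this is a sketch, not the repository's actual code:

# standard pre-LN transformer sublayer: the un-normalized input is the residual
def standard_block(x, attn, ln):
    return x + attn(ln(x))  # residual path carries x itself

# the variant described above: the LayerNorm output is reused as the residual
def variant_block(x, attn, ln):
    h = ln(x)
    return h + attn(h)      # residual path carries ln(x)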

error when running small inference code: "list_to_cuuint64_array"

/tmp/tmp5oe3edsd/main.c: In function 'list_to_cuuint64_array':
/tmp/tmp5oe3edsd/main.c:354:3: error: 'for' loop initial declarations are only allowed in C99 mode
  for (Py_ssize_t i = 0; i < len; i++) {
/tmp/tmp5oe3edsd/main.c:354:3: note: use option -std=c99 or -std=gnu99 to compile your code
/tmp/tmp5oe3edsd/main.c: In function 'list_to_cuuint32_array':
/tmp/tmp5oe3edsd/main.c:365:3: error: 'for' loop initial declarations are only allowed in C99 mode
  for (Py_ssize_t i = 0; i < len; i++) {

Traceback (most recent call last):
(repeated torch/nn/modules/module.py _wrapped_call_impl/_call_impl dispatch frames omitted)
  File "/projects/p32327/RNAFOLD/RiNALMo-main/try.py", line 13, in <module>
    outputs = model(tokens)
  File "/projects/p32327/RNAFOLD/RiNALMo-main/rinalmo/model/model.py", line 26, in forward
    representation, attn_weights = self.transformer(
  File "/projects/p32327/RNAFOLD/RiNALMo-main/rinalmo/model/modules.py", line 58, in forward
    x, attn = checkpoint.checkpoint(
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 489, in checkpoint
    ret = function(*args, **kwargs)
  File "/projects/p32327/RNAFOLD/RiNALMo-main/rinalmo/model/modules.py", line 125, in forward
    mh_out, attn = self.mh_attn(x, key_padding_mask=key_padding_mask, return_attn_probs=need_attn_weights)
  File "/projects/p32327/RNAFOLD/RiNALMo-main/rinalmo/model/attention.py", line 193, in forward
    qkv = self.rotary_emb(qkv, seqlen_offset=0)
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/flash_attn/layers/rotary.py", line 438, in forward
    return apply_rotary_emb_qkv_(
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/flash_attn/layers/rotary.py", line 233, in apply_rotary_emb_qkv_
    return ApplyRotaryEmbQKV_.apply(qkv, cos, sin, cos_k, sin_k, interleaved, seqlen_offsets)
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/torch/autograd/function.py", line 553, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/flash_attn/layers/rotary.py", line 151, in forward
    apply_rotary(
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/flash_attn/ops/triton/rotary.py", line 213, in apply_rotary
    rotary_kernel[grid](
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/runtime/jit.py", line 550, in run
    bin.c_wrapper(
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/compiler/compiler.py", line 692, in __getattribute__
    self._init_handles()
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/compiler/compiler.py", line 670, in _init_handles
    bin_path = {driver.HIP: "hsaco_path", driver.CUDA: "cubin"}[driver.backend]
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/runtime/driver.py", line 157, in __getattr__
    self._initialize_obj()
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/runtime/driver.py", line 154, in _initialize_obj
    self._obj = self._init_fn()
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/runtime/driver.py", line 187, in initialize_driver
    return CudaDriver()
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/runtime/driver.py", line 77, in __init__
    self.utils = CudaUtils()
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/runtime/driver.py", line 47, in __init__
    so = _build("cuda_utils", src_path, tmpdir)
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/common/build.py", line 106, in _build
    ret = subprocess.check_call(cc_cmd)
  File "/home/vqc8153/miniconda3/envs/rna/lib/python3.11/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp5oe3edsd/main.c', '-O3', '-I/home/vqc8153/miniconda3/envs/rna/lib/python3.11/site-packages/triton/common/../third_party/cuda/include', '-I/home/vqc8153/miniconda3/envs/rna/include/python3.11', '-I/tmp/tmp5oe3edsd', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmp5oe3edsd/cuda_utils.cpython-311-x86_64-linux-gnu.so', '-L/.singularity.d/libs']' returned non-zero exit status 1.

mRNA representation

After reading the paper, I am unsure whether this model can be used directly for mRNA representation without fine-tuning.
Namely, the pre-training data contains no mRNAs (if I understand correctly); however, the model was then fine-tuned on mRNA tasks.
1. May I ask why mRNAs were not used in pre-training - what is the reason for that?
2. Did you check masking performance on different sequence types, including mRNAs? Similarly to structure prediction, although mRNAs were missing there as well.
