
DepthContrast's Introduction

Self-Supervised Pretraining of 3D Features on any Point-Cloud

This code provides a PyTorch implementation and pretrained models for DepthContrast, as described in the paper Self-Supervised Pretraining of 3D Features on any Point-Cloud.

DepthContrast Pipeline

DepthContrast is an easy-to-implement self-supervised method that works across model architectures, input data formats, indoor/outdoor 3D, and single/multi-view 3D data. Similar to 2D contrastive approaches, DepthContrast learns representations by comparing transformations of a 3D point cloud/voxel. It does not require any multi-view information between frames, such as point-to-point correspondences, which makes the framework generalize to any 3D point cloud or voxel input. DepthContrast pretrains high-capacity models for 3D recognition tasks and leverages large-scale 3D data. It achieves state-of-the-art performance on detection and segmentation benchmarks, outperforming all prior work on detection.
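To make the idea concrete, here is a minimal sketch of an InfoNCE-style contrastive loss between embeddings of two augmented views of the same point clouds. It only illustrates the general principle; it is not the repo's NCELossMoco implementation.

import torch
import torch.nn.functional as F

def info_nce(q, k, temperature=0.07):
    # q, k: (B, D) embeddings of two augmented views of the same point clouds
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / temperature                       # (B, B) similarity matrix
    labels = torch.arange(q.shape[0], device=q.device)     # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))  # toy usage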

Model Zoo

We release our PointNet++ and MinkowskiEngine UNet models pretrained with DepthContrast in the hope that other researchers might also benefit from these pretrained backbones. Due to licensing restrictions, models pretrained on Waymo cannot be released. For the PointnetMSG and Spconv-UNet models, we encourage researchers to train them themselves using the provided scripts.

We first provide PointNet++ models with different sizes.

network        epochs  batch-size  ScanNet Det with VoteNet  url    args
PointNet++-1x  150     1024        61.9                      model  config
PointNet++-2x  200     1024        63.3                      model  config
PointNet++-3x  150     1024        64.1                      model  config
PointNet++-4x  100     1024        63.8                      model  config

The ScanNet detection evaluation metric is mAP at IoU=0.25. You need to change the scale parameter in the config files accordingly.

We provide the joint training results here, with checkpoints from different epochs. We use the epoch-400 checkpoint to generate the results reported in the paper.

Backbone                           epochs  batch-size  url    args
PointNet++ & MinkowskiEngine UNet  300     1024        model  config
PointNet++ & MinkowskiEngine UNet  400     1024        model  config
PointNet++ & MinkowskiEngine UNet  500     1024        model  config
PointNet++ & MinkowskiEngine UNet  600     1024        model  config
PointNet++ & MinkowskiEngine UNet  700     1024        model  config

Running DepthContrast unsupervised training

Requirements

You can use requirements.txt to set up the environment. First clone the repository and install the PointNet++ modules:

git clone --recursive https://github.com/facebookresearch/DepthContrast.git
cd DepthContrast/third_party/pointnet2
python setup.py install

Then install all other packages:

pip install -r requirements.txt

or

conda install --file requirements.txt

For the voxel representation, you have to install MinkowskiEngine. Please see here for how to install it.

For lidar point cloud pretraining, we use models from OpenPCDet; it should be in the third_party folder. To install OpenPCDet, you need to install spconv, which can be difficult to build and may not be compatible with MinkowskiEngine. We therefore suggest using a separate conda environment for lidar point cloud pretraining.

Singlenode training

DepthContrast is very simple to implement and experiment with.

To experiment with it on a single GPU for debugging, you can run:

python main.py /path/to/cfg/file

For the actual training, please use the distributed trainer. For multi-gpu training in one node, you can run:

python main.py /path/to/cfg_file --multiprocessing-distributed --world-size 1 --rank 0 --ngpus number_of_gpus

To run it with just one GPU, set --ngpus to 1. To submit it to a Slurm node, you can use ./scripts/pretrain_node1.sh. For hyper-parameter tuning, please change the config files.

Multinode training

Distributed training is available via Slurm. We provide several SBATCH scripts to reproduce our results. For example, to train DepthContrast on 4 nodes and 32 GPUs with a batch size of 1024 run:

sbatch ./scripts/pretrain_node4.sh /path/to/cfg_file

Note that you might need to remove the copyright header from the sbatch file to launch it.

Evaluating models

For VoteNet finetuning, please check out this repo for more details.

For H3DNet finetuning, please check out this repo for more details.

For voxel scene segmentation finetuning, please check out this repo for more details.

For lidar point cloud object detection finetuning, please check out this repo for more details.

Common Issues

For help or issues using DepthContrast, please submit a GitHub issue.

License

See the LICENSE file for more details.

Citation

If you find this repository useful in your research, please cite:

@article{zhang_depth_contrast,
  title={Self-Supervised Pretraining of 3D Features on any Point-Cloud},
  author={Zhang, Zaiwei and Girdhar, Rohit and Joulin, Armand and Misra, Ishan},
  journal={arXiv preprint arXiv:2101.02691},
  year={2021}
}

DepthContrast's People

Contributors

imisra, zaiweizhang


DepthContrast's Issues

shuffle bn for voxel input

Hi Zaiwei, I see that in the code you don't use shuffle BN when the input is voxels. Is that just based on experimental results, or is there some idea behind the choice?
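(For context, "shuffle BN" from MoCo permutes the batch across GPUs before the key-encoder forward so that batch-norm statistics cannot leak which samples are positives, then restores the order afterwards. Below is a single-process sketch of just the permute/restore bookkeeping, with a hypothetical key_encoder; the actual benefit only appears with multi-GPU batch norm, and this is not the repo's code.)

import torch

def batch_shuffle(x):
    # randomly permute the batch dimension and keep the inverse index to undo it
    idx = torch.randperm(x.shape[0], device=x.device)
    return x[idx], torch.argsort(idx)

def batch_unshuffle(x, inverse):
    # restore the original batch order
    return x[inverse]

points = torch.randn(16, 2048, 3)
shuffled, inv = batch_shuffle(points)
# keys = key_encoder(shuffled)         # hypothetical key-encoder forward pass
# keys = batch_unshuffle(keys, inv)    # re-align keys with their queries
restored = batch_unshuffle(shuffled, inv)
assert torch.equal(restored, points)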

Questions about KITTI fine-tuning experiments

Hi! First I would like to thank you for this great work! It is very interesting and I like it a lot!
In the meantime, I have two questions about the KITTI fine-tuning experiments:

  1. I found there are multiple random splits for the 5-50% KITTI data (https://github.com/zaiweizhang/OpenPCDet/tree/master/data/kitti/split_infos) (this issue). Are the numbers reported in the paper the mean AP across these random splits?
  2. From the fine-tuning instructions in https://github.com/zaiweizhang/OpenPCDet, if I understand correctly, it seems that across the 5-50% KITTI fine-tuning experiments we use the same kitti_dbinfos_train.pkl, generated from the 100% KITTI train set, for augmentation. Should different splits instead use a kitti_dbinfos_train.pkl generated from the corresponding split?

Thanks in advance!

Only one scene loaded by reader.py for preparing Scannet dataset

Hi,
I have downloaded the Scannet dataset.
Now I am following the instructions in the readme to run reader.py, which extracts the depth images and other related info from the .sens files.
However, I noticed that reader.py only processes one scene from the ScanNet dataset:

def main():
  scans = glob(opt.scans_path+"/*")
  for scan in scans:
    scenename = scan.split("/")[-1]
    if scenename != "scene0000_00":
      continue
    filename = os.path.join(scan, scenename+".sens")

Here the if condition makes sure only one .sens file is processed.
Is this intentional, or is it a debug statement that should be commented out to process the whole dataset?
If it is intentional, could you take some time to explain the reasoning behind it?
I am not very familiar with the ScanNet dataset.

Install requirements for pip

First, thank you for providing this well curated code base.

I was just wondering if the requirements.txt that you provide in DepthContrast/ is usable for an installation with pip?

I think the requirements.txt is meant for a conda installation, and as far as I know it can't be used with pip (correct me if I am wrong). It would probably save me, and other pip users, a lot of work if you could also provide a requirements file for pip :).

Pretraining on KITTI

Hi,

I'm thinking of performing a pretraining with KITTI, as I have limited computing resources, and using Waymo is out of my reach.

Do you have any direction you can point me at, as to where in the code I should be looking to do those changes?

I've noticed you suggest using RalphMao's repo for batch-downloading Waymo, which "converts the data into KITTI-like format". Do you think one feasible way would be to adapt your extract_pointcloud.py to extract from KITTI, for example?
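(If it helps: KITTI's velodyne scans can be read directly with numpy. A minimal sketch below; the file path is only an example.)

import numpy as np

def load_kitti_bin(path):
    # KITTI velodyne scans are flat float32 files of (x, y, z, reflectance)
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

points = load_kitti_bin("data/kitti/training/velodyne/000000.bin")  # example path
print(points.shape)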

Thanks in advance!

Downloading ScanNet dataset

Hi,

I highly appreciate you for sharing the code.
It seems that the entire ScanNet dataset takes up about 1.2 TB to fully download, and I was wondering if you could share the specific data you used in your experiments?

Please let me know if you have any questions or concerns; I look forward to hearing back from you.
Thank you,

Memory leak at the beginning of pretraining

Hi, I keep running into a memory explosion that interrupts pretraining.

The interruption happens at different iterations each time, e.g., during epoch 1 or epoch 5.

Traceback (most recent call last):
  File "main.py", line 197, in <module>
    main()
  File "main.py", line 74, in main
    mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args, cfg))
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
    while not context.join():
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 108, in join
    (error_index, name)
Exception: process 0 terminated with signal SIGFPE

OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.

As instructed in the README.md, I tried the steps below:

  1. git clone --recursive https://github.com/facebookresearch/DepthContrast.git
  2. cd (third_party/) pointnet2
  3. python setup.py install

But I'm facing the error below:

Traceback (most recent call last):
  File "D:\repos\DepthContrast\third_party\pointnet2\setup.py", line 22, in <module>
    CUDAExtension(
  File "C:\Users\swarup\Anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1047, in CUDAExtension
    library_dirs += library_paths(cuda=True)
  File "C:\Users\swarup\Anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1186, in library_paths
    paths.append(_join_cuda_home(lib_dir))
  File "C:\Users\swarup\Anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 2230, in _join_cuda_home
    raise EnvironmentError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.

I'm using

  1. Windows 11
  2. pip 21.2.4 (python 3.9)
  3. torch 1.13.0+cu117
  4. torchvision 0.14.0+cu117

I hope this is not related to the CUDA version!

Training with smaller GPU memory?

Hi
When I trained the model for the first time (using the Waymo dataset) with the default settings, it worked fine, but that was on a GPU with 24 GB of VRAM.

I would like to ask: if I train on a smaller GPU, e.g. with 8 GB of VRAM, how should I change the config so that it won't run out of VRAM?

Thanks

can't understand table 4?

Hi, thanks for sharing this great work. After reading the paper, I'm confused by Table 4.
Why didn't you compare your two-representation input (point and voxel) against a single-representation input (point or voxel only)?
Furthermore, why do you compare your method separately against the point model and the voxel model, and why are the point and voxel models trained on different datasets?

Pre-train with Adam optimizer

Hello, thank you for this great work.

I see that you apply the SGD+momentum optimizer for pre-training, while other optimizers are used for fine-tuning. Have you tried other optimizers such as Adam, AdamW, or LARS for pre-training? Do other choices lead to worse pre-training performance?

Thank you very much.

Running VoteNet on Matterport3D

Hi,

Thanks for your great work.

I'm currently working on reproducing the results on Matterport3D, and I am following the README in this fork of VoteNet: https://github.com/zaiweizhang/votenet/tree/master/mp3d. But when I try to download the mp3d dataset and process it for the experiments, I find that mp3d is quite different from ScanNet, which means I cannot find the *_vh_clean_2.ply, *aggregation.json, and *_vh_clean_2.0.010000.segs.json files or the metadata used in batch_load_scannet_data.py.

So could you please clarify how to prepare the mp3d dataset for the VoteNet experiments?

Thanks!

Voxelization in transformer

Hi, thank you for providing the code for this amazing work!

I am curious about the voxel-level input. During voxelization, the voxel feature is taken from the point with the minimum index instead of averaging all the points inside the voxel grid. Is this related to MinkowskiEngine?

Thank you!
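(For comparison, a minimal sketch of mean-pooling point features into voxels; this only illustrates the alternative mentioned above and is not the repo's implementation.)

import torch

def voxel_mean_pool(coords, feats):
    # coords: (N, 3) integer voxel coordinates; feats: (N, C) point features
    uniq, inverse = torch.unique(coords, dim=0, return_inverse=True)
    pooled = torch.zeros(uniq.shape[0], feats.shape[1], dtype=feats.dtype)
    counts = torch.zeros(uniq.shape[0], 1, dtype=feats.dtype)
    pooled.index_add_(0, inverse, feats)
    counts.index_add_(0, inverse, torch.ones(feats.shape[0], 1, dtype=feats.dtype))
    return uniq, pooled / counts     # mean feature per occupied voxel

coords = torch.randint(0, 4, (100, 3))
feats = torch.randn(100, 16)
voxels, voxel_feats = voxel_mean_pool(coords, feats)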

Training curve for ScanNet

@zaiweizhang Is it possible that you could share the training curve for ScanNet, or point me to what indicates a good convergence point (e.g., the final converged loss values)? I'm just curious what it looks like, as I am training it as well. Thank you very much.

environment install

Hi, I use:
conda create --name --file requirements.txt -c intel
to install the environment, but there are a lot of conflicts. Do you know how to resolve them?
[screenshot of package conflicts attached]

training crash

Hello,

I tried to use the Waymo dataset to pretrain the model; however, I got the following error.
Could you please check how to fix it?
Thank you very much.

============================== Args ==============================
cfg configs/point_within_lidar_template.yaml
quiet False
world_size 1
rank 0
dist_url tcp://localhost:15475
dist_backend nccl
seed None
gpu 0
ngpus 1
multiprocessing_distributed False
Traceback (most recent call last):
  File "main.py", line 190, in <module>
    main()
  File "main.py", line 70, in main
    main_worker(args.gpu, ngpus_per_node, args, cfg)
  File "main.py", line 81, in main_worker
    model = main_utils.build_model(cfg['model'], logger)
  File "/mnt/Titan/git_repos/open_repos/DepthContrast/utils/main_utils.py", line 142, in build_model
    return models.build_model(cfg, logger)
  File "/mnt/Titan/git_repos/open_repos/DepthContrast/models/__init__.py", line 11, in build_model
    return BaseSSLMultiInputOutputModel(model_config, logger)
  File "/mnt/Titan/git_repos/open_repos/DepthContrast/models/base_ssl3d_model.py", line 58, in __init__
    self.trunk = self._get_trunk()
  File "/mnt/Titan/git_repos/open_repos/DepthContrast/models/base_ssl3d_model.py", line 275, in _get_trunk
    trunks.append(models.TRUNKS[self.config['arch_point']](...))
TypeError: 'NoneType' object is not callable

Training time

Thanks for sharing the awesome repo! I would like to ask how long the models should be trained for.

Why apply milder data augmentation to LiDAR data?

Hello, thank you for this excellent work.

I notice that you apply milder data augmentation to LiDAR data than to other point clouds, including:

  1. yz-flip instead of xz-flip and yz-flip
  2. [0.95, 1.05] random scale instead of [0.8, 1.2]
  3. ±pi/4 rotation instead of ±pi

Can you tell me the reason for each of these milder augmentations? Thank you very much!
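(For reference, a minimal numpy sketch of the milder LiDAR-style augmentations listed above, using the parameter values from the list; "yz-flip" is interpreted here as negating x, and this is not the repo's exact implementation.)

import numpy as np

def augment_lidar(points):
    # points: (N, 3+) array with x, y, z in the first three columns
    pts = points.copy()
    if np.random.rand() < 0.5:
        pts[:, 0] = -pts[:, 0]                       # yz-plane flip (negate x)
    pts[:, :3] *= np.random.uniform(0.95, 1.05)      # random global scale in [0.95, 1.05]
    theta = np.random.uniform(-np.pi / 4, np.pi / 4) # rotation about z in [-pi/4, pi/4]
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts[:, :3] = pts[:, :3] @ rot.T
    return pts

augmented = augment_lidar(np.random.randn(1000, 4))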

Loading backbone to OpenPCDet

Hi! I tried to load a checkpoint from the pretrained backbones to PointRCNN. In this case, I tried to pick up from epoch 50 of the pretraining, just to test.

Here is the command:

python -m torch.distributed.launch --nproc_per_node=1 train.py --launcher pytorch --cfg_file cfgs/kitti_models/pointrcnn_iou_finetune.yaml --pretrained_model /home/baraujo/DepthContrast/third_party/OpenPCDet/checkpoints/checkpoint-ep50.pth.tar

Here is the error:

Traceback (most recent call last):
  File "train.py", line 205, in <module>
    main()
  File "train.py", line 132, in main
    init_model_from_weights(model, state, freeze_bb=False)
  File "/home/baraujo/DepthContrast/third_party/OpenPCDet/tools/checkpoint.py", line 59, in init_model_from_weights
    assert (
AssertionError: Unknown state dict key: classy_state_dict

I don't totally understand some of the operations at the beginning of init_model_from_weights(). The checkpoints I got from pretraining only have these keys: dict_keys(['epoch', 'model', 'optimizer', 'train_criterion']); they don't have a "classy_state_dict" or a "base_model" key, like I think you expect.
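(A sketch of what I tried: inspecting the checkpoint and re-nesting the weights under the key names the loader seems to look for. The exact structure init_model_from_weights() expects is a guess on my part, so check checkpoint.py before relying on this.)

import torch

ckpt = torch.load("checkpoint-ep50.pth.tar", map_location="cpu")
print(ckpt.keys())                         # dict_keys(['epoch', 'model', 'optimizer', 'train_criterion'])
print(list(ckpt["model"].keys())[:10])     # first few backbone parameter names

# hypothetical re-nesting under the expected key names (structure is an assumption)
remapped = {"classy_state_dict": {"base_model": {"model": ckpt["model"]}}}
torch.save(remapped, "checkpoint-ep50-remapped.pth.tar")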

Thanks!

environment install error

When I install the environment, a lot of the packages conflict with each other.
Could you please release a Docker image for training?

Recently, I read your new work "Self-Supervised Pretraining for Large-Scale Point Clouds"; it is great work! Will you release the code for it?

Thank you!

CUDA error when run pretrain

Hi, I want to pretrain on my own dataset,
but I hit a CUDA error (from spconv):

RuntimeError: CUDA error: an illegal memory access was encountered.

Traceback (most recent call last):
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/home/xuhang/jch/works/DepthContrast/main.py", line 129, in main_worker
    run_phase('train', train_loader, model, optimizer, train_criterion, epoch, args, cfg, logger, tb_writter)
  File "/home/xuhang/jch/works/DepthContrast/main.py", line 158, in run_phase
    embedding = model(sample)
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 447, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuhang/jch/works/DepthContrast/models/base_ssl3d_model.py", line 268, in forward
    return self.multi_input_with_head_mapping_forward(batch)
  File "/home/xuhang/jch/works/DepthContrast/models/base_ssl3d_model.py", line 71, in multi_input_with_head_mapping_forward
    outputs = self._single_input_forward(batch[input_key], feature_names, input_key, input_idx)
  File "/home/xuhang/jch/works/DepthContrast/models/base_ssl3d_model.py", line 108, in _single_input_forward
    feats = self.trunk[target](batch, feature_names)
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuhang/jch/works/DepthContrast/models/trunks/spconv_unet.py", line 225, in forward
    x = self.conv_input(input_sp_tensor)
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/spconv/modules.py", line 134, in forward
    input = module(input)
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/spconv/conv.py", line 181, in forward
    use_hash=self.use_hash)
  File "/home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/spconv/ops.py", line 95, in get_indice_pairs
    int(use_hash))
RuntimeError: CUDA error: an illegal memory access was encountered (copy_kernel_cuda at /pytorch/aten/src/ATen/native/cuda/Copy.cu:180)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fe17840a193 in /home/xuhang/anaconda3/envs/depthcontrast/lib/python3.6/site-packages/torch/lib/libc10.so)

I cannot find the reason; the only difference from this project is the input (the point clouds are loaded from .bin files, not .npy).

Has anyone encountered a similar problem?

My run command:

CUDA_VISIBLE_DEVICES=1,2,3,4 python main.py configs/point_vox_lidar.yaml --multiprocessing-distributed --world-size 1 --rank 0 --ngpus 4

My environment is:
PyTorch 1.4 + CUDA 10.1 + spconv 1.2

Per point feature vector?

Is there a way to extract per-point (vertex) features during a forward pass, as a descriptor for each point, given that this is a U-Net architecture?
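(One generic approach is a forward hook on the layer whose output you want as per-point descriptors. A self-contained toy sketch below; ToyBackbone is only a stand-in, not the repo's model.)

import torch
import torch.nn as nn

class ToyBackbone(nn.Module):
    # a tiny stand-in for a U-Net style point backbone (the real model differs)
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(3, 64)
        self.decoder = nn.Linear(64, 32)   # per-point feature head

    def forward(self, pts):
        return self.decoder(torch.relu(self.encoder(pts)))

features = {}
def save_output(module, inputs, output):
    features["per_point"] = output.detach()

model = ToyBackbone()
handle = model.decoder.register_forward_hook(save_output)
_ = model(torch.randn(1024, 3))            # one forward pass over 1024 points
print(features["per_point"].shape)         # (1024, 32) per-point descriptors
handle.remove()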

The current implementation contains a bug in the dataloader.

Dear authors,

I think this repo is an excellent implementation of your excellent paper. However, it shares a common bug in how the seed is set for data augmentation, as pointed out here and here.

I have tested this repo and found that each sample only ever gets two fixed augmented versions, reused across all training epochs, which is problematic for the contrastive learning setting.

You may want to revisit the experiments in the paper.

Best,
Pan He
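(For reference, the usual fix for this class of bug is to give every DataLoader worker a distinct, per-epoch seed so numpy-based augmentations are re-randomized; a minimal sketch, not the repo's code.)

import numpy as np
import torch

def worker_init_fn(worker_id):
    # torch.initial_seed() differs per worker and per epoch, so numpy-based
    # augmentations stop repeating the same fixed versions
    np.random.seed(torch.initial_seed() % 2**32)

# loader = torch.utils.data.DataLoader(dataset, batch_size=32, num_workers=4,
#                                      worker_init_fn=worker_init_fn)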

Prepare Dataset (scannet)

Hi, thank you for providing this awesome work!

While preparing ScanNet, I am curious about this command:

python extract_pointcloud.py path/to/extracted_data/depth/ path/to/extracted_pointcloud_visualization/ path/to/extracted_pointclouds/ scannet_datalist.npy

argv[1] seems to be the scans/depth directory generated in step 1, and argv[3] is the path where the extracted point clouds are written.
What path should I put for argv[2], or how can I skip argv[2]?

Thank you!

environmental install

Hi, I am trying to install the requirements.txt and I run into the following:

PackagesNotFoundError: The following packages are not available from current channels:

  • mkldnn==0.14.0=0
  • future==0.16.0=py36_1
  • icc_rt==2018.0.3=intel_0
  • bz2file==0.98=py36_0
  • libiconv==1.15=0
  • openmp==2018.0.3=intel_0
  • notebook==6.1.5=py36_0
  • intelpython==2018.0.3=0

Current channels:

I have tried both the new and the old requirements.txt. Could you please recommend a solution?
Thanks!

Conda environment file

Hi,

I found your work interesting and would like to run your code. However, I'm having some trouble installing the dependencies stated in requirements.txt. Some packages appear to be missing from the conda and conda-forge channels, but I think it might have to do with the way you exported the dependencies.

Would you be able to export the working environment using:
conda env export --no-build > requirements.yaml

Thanks!

Evaluating contrastive learning

This is more of a theory question: how could I evaluate the success of this contrastive representation learning in an unsupervised manner, using validation data?

What I'm thinking of for now:

  1. Loading a checkpoint, doing a forward pass over a validation dataset, calculating the NCELossMoco loss, and analysing its evolution.
  2. Using t-SNE to plot the embeddings of different point clouds after multiple random augmentations, for a qualitative analysis.
  3. Formulating a new expression similar to the NCE loss (but simpler, not necessarily differentiable), doing forward passes over the validation data and computing it. I found no expressions of this kind, but I'm new to contrastive learning.

PS:
I've actually done the t-SNE, if you are interested. I used the embeddings generated by a model pretrained on the KITTI training set. Each color corresponds to a particular KITTI sample (identified by its number). Hexagons are embeddings from PointNet++ and X's are from spconv-UNet. The augmentations applied are particularly strong, with multiple large patches removed, a tighter cuboid crop, random noise, and X and Y flips.

[t-SNE plot]
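(For anyone reproducing the qualitative check: a minimal t-SNE sketch over embeddings of augmented views. The random arrays are stand-ins for real features collected from forward passes.)

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

embeddings = np.random.randn(200, 128)     # stand-in for (N, D) embeddings from the model
sample_ids = np.repeat(np.arange(20), 10)  # 20 samples x 10 augmentations each

coords = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(embeddings)
plt.scatter(coords[:, 0], coords[:, 1], c=sample_ids, cmap="tab20", s=10)
plt.title("t-SNE of augmented-view embeddings")
plt.show()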

Preparation of ScanNet

Hi, could you provide more detailed instructions on how to prepare the ScanNet dataset for model training? I have downloaded ScanNet but found it hard to follow the steps you provide, e.g. how to specify the arguments in the command below:
python extract_pointcloud.py /path/to/extracted_data /path/to/extracted_pointcloud_visualization /path/to/extracted_pointclouds scannet_datalist.npy

KITTI data splits

Did you use any special method for generating the training splits for KITTI, or are they just randomly sampled from the dataset?

For example: are kitti_infos_train_5_0.pkl, kitti_infos_train_5_1.pkl and kitti_infos_train_5_2.pkl just three different random 5% subsets of the point clouds with their labels?
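(To illustrate what I mean by random subsets, a hypothetical sketch of drawing three 5% splits; this is not necessarily how the kitti_infos_train_5_*.pkl files were actually generated.)

import numpy as np

all_ids = np.arange(3712)                # size of the standard KITTI train split
for seed in range(3):
    rng = np.random.default_rng(seed)
    subset = rng.choice(all_ids, size=int(0.05 * len(all_ids)), replace=False)
    print(seed, np.sort(subset)[:5])     # three different random 5% subsets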

Thanks

pre-trained Waymo model

Dear authors,

Thank you very much for your work. Would it be possible to provide the weights of your model trained on the Waymo dataset if I show you that I am also registered for and allowed to use the Waymo Open Dataset? I know that other projects allow this.

Thank you very much in advance.

Comparing PointRCNN performance to baseline

Hi @zaiweizhang! This is a minor question about the paper. I did my own baseline training of PointRCNN (with no pretraining) and got better results than those presented in the paper, namely for the car and cyclist classes. The only things I changed from the original config were the batch size per GPU (to 10), using 1 GPU, and tuning the lr accordingly (to original_lr * 10 / 24 = 0.0084).

Taking the median of the moderate mAP over the last 11 epochs for my baseline, and reading the results off the paper's graphs, I get:

         My baseline  Paper's baseline  Paper's finetuning results
Car      80.5         ~80               ~80
Cyclist  72.6         ~70.3             ~72.5
Ped      53.1         ~57.3             ~57.7
So, do you think the fact that my baseline is better than what you reported could be due to the batch-size difference? Interestingly, pedestrian got a worse result in my baseline than in yours. Is your baseline the trained model supplied with OpenPCDet, or did you train it yourself?
