megvii-research / occdepth

Possibly the first academic open-source work on a stereo 3D SSC (semantic scene completion) method with vision-only input.

License: Apache License 2.0

camera-based occupancy semantic-scene-completion stereo-camera

occdepth's Introduction

OccDepth: A Depth-aware Method for 3D Semantic Occupancy Network


News

  • 2023/03/30 Released models trained on GeForce RTX 2080 Ti.
  • 2023/02/28 Initial code release. Both stereo-image and RGB-D inputs are supported.
  • 2023/02/28 Paper released on arXiv.
  • 2023/02/17 Demo release.

Abstract

In this paper, we propose the first stereo SSC method, named OccDepth, which fully exploits implicit depth information from stereo images (or RGB-D images) to help recover 3D geometric structures. The Stereo Soft Feature Assignment (Stereo-SFA) module is proposed to better fuse 3D depth-aware features by implicitly learning the correlation between stereo images. In particular, when the input is an RGB-D image, virtual stereo images can be generated from the original RGB image and the depth map. In addition, the Occupancy Aware Depth (OAD) module is used to obtain geometry-aware 3D features by knowledge distillation from pre-trained depth models.
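
For intuition only, here is a generic sketch of fusing left/right image features via a learned softmax correlation. This illustrates the general idea of correlation-based stereo feature fusion; it is NOT the paper's actual Stereo-SFA implementation, and all names below are hypothetical.

    # Illustrative sketch (not the paper's Stereo-SFA module): fuse stereo
    # features via a soft correlation between left and right feature maps.
    import torch

    def soft_stereo_fusion(feat_l: torch.Tensor, feat_r: torch.Tensor) -> torch.Tensor:
        # feat_l, feat_r: (B, C, H, W) features from a shared 2D backbone
        B, C, H, W = feat_l.shape
        q = feat_l.flatten(2).transpose(1, 2)            # (B, HW, C)
        k = feat_r.flatten(2)                            # (B, C, HW)
        attn = torch.softmax(q @ k / C ** 0.5, dim=-1)   # (B, HW, HW) soft correlation
        v = feat_r.flatten(2).transpose(1, 2)            # (B, HW, C)
        fused = (attn @ v).transpose(1, 2).reshape(B, C, H, W)
        return feat_l + fused                            # residual fusion into the left view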

Video Demo

Mesh results compared with ground truth on KITTI-08:

(video: assets/demo.mp4)

Voxel results compared with ground truth on KITTI-08:

(video: assets/demo_voxel.mp4)

Full demo videos can be downloaded via `git lfs pull`; they are saved as "assets/demo.mp4" and "assets/demo_voxel.mp4".
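
A minimal sketch of fetching just those two videos (assuming `git lfs` is installed locally; the `--include` filter is standard Git LFS, not repo-specific):

    ## Fetch only the LFS-tracked demo videos
    git lfs install
    git lfs pull --include="assets/demo.mp4,assets/demo_voxel.mp4"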

Results

Trained models

The following models, trained on a GeForce RTX 2080 Ti, are provided:

Config    Dataset         IoU     mIoU    Download
config    SemanticKITTI   41.60   12.84   model
config    NYUv2           49.23   29.34   model

Note: For better results, set share_2d_backbone_gradient=false, backbone_2d_name=tf_efficientnet_b7_ns, and feature=64 together with feature_2d_oc=64 (for SemanticKITTI); these settings need more GPU memory.
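
A sketch of what those higher-quality overrides might look like in the dataset config YAML (only the key names come from the note above; the flat layout is an assumption):

    # Sketch only: higher-quality settings; exact placement in the YAML is assumed.
    share_2d_backbone_gradient: false
    backbone_2d_name: tf_efficientnet_b7_ns
    feature: 64
    feature_2d_oc: 64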

Qualitative Results

Fig. 1: RGB-based semantic scene completion with/without depth awareness. (a) Our proposed OccDepth method can detect smaller and farther objects. (b) Our proposed OccDepth method completes roads better.

Quantitative results on SemanticKITTI

Table 1. Performance on SemanticKITTI (hidden test set).

Method            Input        SC IoU   SSC mIoU
2.5D/3D
LMSCNet(st)       OCC          33.00     5.80
AICNet(st)        RGB, DEPTH   32.80     6.80
JS3CNet(st)       PTS          39.30     9.10
2D
MonoScene         RGB          34.16    11.08
MonoScene(st)     Stereo RGB   40.84    13.57
OccDepth (ours)   Stereo RGB   45.10    15.90

Scene completion (SC IoU) and semantic scene completion (SSC mIoU) are reported for the modified baselines (marked "st") and our OccDepth.

Detailed results on SemanticKITTI.

Compared with baselines.

Baselines of 2.5D/3D-input methods. "*" means results are cited from MonoScene; "/" means missing results.

Usage

Environment

  1. Create conda environment:
conda create -y -n occdepth python=3.7
conda activate occdepth
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
  2. Install dependencies:
pip install -r requirements.txt
conda install -c bioconda tbb=2020.2

Preparing

SemanticKITTI

NYUv2

  • Download NYUv2 dataset

  • Preprocessed NYUv2 data

    cd OccDepth/
    python occdepth/data/NYU/preprocess.py data_root="/path/to/NYU/depthbin" data_preprocess_root="/path/to/NYU/preprocess/folder"

Settings

  1. Set DATA_LOG and DATA_CONFIG in env_{dataset}.sh, for example:
    ##examples
    export DATA_LOG=$workdir/logdir/semanticKITTI
    export DATA_CONFIG=$workdir/occdepth/config/semantic_kitti/multicam_flospdepth_crp_stereodepth_cascadecls_2080ti.yaml
  2. Set data_root, data_preprocess_root, and data_stereo_depth_root in the config file (occdepth/config/xxxx.yaml), for example:
    ##examples
    data_root: '/data/dataset/KITTI_Odometry_Semantic'
    data_preprocess_root: '/data/dataset/kitti_semantic_preprocess'
    data_stereo_depth_root: '/data/dataset/KITTI_Odometry_Stereo_Depth'

Inference

cd OccDepth/
source env_{dataset}.sh
## move the trained model to OccDepth/trained_models/occdepth.ckpt
## 4 gpus and batch size on each gpu is 1
python occdepth/scripts/generate_output.py n_gpus=4 batch_size_per_gpu=1

Evaluation

cd OccDepth/
source env_{dataset}.sh
## move the trained model to OccDepth/trained_models/occdepth.ckpt
## 1 gpu and batch size on each gpu is 1
python occdepth/scripts/eval.py n_gpus=1 batch_size_per_gpu=1

Training

cd OccDepth/
source env_{dataset}.sh
## 4 gpus and batch size on each gpu is 1
python occdepth/scripts/train.py logdir=${DATA_LOG} n_gpus=4 batch_size_per_gpu=1

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.

Acknowledgements

Our code is based on these excellent open source projects:

Many thanks to them!

Related Repos

Citation

If you find this project useful in your research, please consider citing:

@article{miao2023occdepth,
  author  = {Ruihang Miao and Weizhou Liu and Mingrui Chen and Zheng Gong and Weixin Xu and Chen Hu and Shuchang Zhou},
  title   = {OccDepth: A Depth-Aware Method for 3D Semantic Scene Completion},
  journal = {arXiv preprint arXiv:2302.13540},
  year    = {2023},
}

Contact

If you have any questions, feel free to open an issue or contact us at [email protected], [email protected].

occdepth's People

Contributors

ccchen6 · rhmiao · zsc


occdepth's Issues

GPU memory requirements

Dear authors, hello:
May I ask how much GPU memory is required to train on the KITTI dataset with batch_size=1?
Thanks

The inference results of PyTorch and onnxruntime do not match

While checking consistency, I found that the PyTorch inference output does not match the onnxruntime inference output. (I also noticed that running the same exported ONNX model multiple times gives consistent results, but models exported at different times give inconsistent results with each other, as if a random-number generator were involved during export.)

I exported the ONNX model and compared the PyTorch and onnxruntime inference results with the following steps:

generate data.pkl

Generate data.pkl via nyu_dm.py as the fake_data used when exporting the ONNX model:

python occdepth/data/NYU/nyu_dm.py data_root=/home/***/data/dataset/NYUv2/NYU_dataset/depthbin data_preprocess_root=/home/***/data/preprocess/NYU

export onnx

python occdepth/models/OccDepth.py +export_onnx=True

Compare PyTorch and onnxruntime inference results

python occdepth/scripts/generate_output.py data_root=/home/***/data/dataset/NYUv2/NYU_dataset/depthbin data_preprocess_root=/home/***/data/preprocess/NYU

The inference script generate_output.py was modified to add an ONNX inference step (see patch.zip/generate_output.py), and the model outputs were compared:
PyTorch: pred = model(batch), taking pred['ssc_logit']
ONNX: pred_onnx = session.run(None, ort_inputs), taking pred_onnx
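
A minimal sketch of the comparison described above (assumes `model`, `batch`, `session`, and `ort_inputs` are set up as in the patch; the exact batch-to-ONNX-input mapping is what this issue asks about):

    # Sketch: compare PyTorch and onnxruntime outputs element-wise.
    import numpy as np

    pred_torch = model(batch)["ssc_logit"]        # torch.Tensor, shape (1, 12, 60, 36, 60)
    pred_onnx = session.run(None, ort_inputs)[0]  # numpy.ndarray, same shape
    print(np.allclose(pred_torch.detach().cpu().numpy(), pred_onnx, atol=1e-3))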

The outputs are as follows:

onnx

onnxruntime inference output output.txt:L548

onnxruntime running
pred_onnx: <class 'numpy.ndarray'> (1, 12, 60, 36, 60) [[[[[ 7.46585475e-03  1.09575838e-02  9.67762806e-03 ...
     -9.53939334e-02 -7.29353726e-02  3.04440595e-02]
    [ 6.71601854e-04  2.62035616e-03  2.52045644e-03 ...
      1.94277853e-01  9.20108706e-02  9.13816765e-02]
    [-2.33476050e-04  3.41782440e-03  2.33434513e-03 ...
      9.23375785e-02 -8.50949809e-03 -3.90102877e-03]
    ...

pytorch

pytorch inference output output.txt:L1072

pytorch running
pred_torch: <class 'torch.Tensor'> torch.Size([1, 12, 60, 36, 60]) tensor([[[[[  5.5362,   7.9261,   8.1379,  ...,  -0.0504,   0.2590,
              0.3334],
           [  6.8535,   9.6622,   9.5232,  ...,   4.0312,   3.9445,
              2.8770],
           [  6.5795,   9.1700,   8.9740,  ...,   4.5188,   4.6023,
              3.4743],
           ...,

Could this be caused by a parameter-configuration problem during ONNX export, or by mismatched onnxruntime inference inputs?
Could you describe the correspondence between the ONNX inputs and the data_loader batch fields?
Could you also provide directly runnable KITTI & NYU ONNX models?
thanks a lot ~_~


git diff for test-code


patch file

patch.zip

output log

output.txt

Can't install mmcv properly.

Environment

ubuntu 20.04
cuda 10.2.89
pytorch 1.13.1

Problem

According to requirements.txt, the pinned mmcv version is 1.4.0, which is a very old version. I've read the official MMCV documentation, but I still find it hard to install an mmcv 1.4.0 build that matches PyTorch 1.13.1 and CUDA 10.2.
Could anybody please post their environment and their approach to installing mmcv? Thanks a lot :)
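
For reference, a sketch of one approach that is often tried in this situation (an assumption, not an authors' recommendation): when no prebuilt mmcv-full wheel matches your exact PyTorch/CUDA pair, pip falls back to compiling from source against the local CUDA toolkit.

    ## Sketch only: build mmcv-full 1.4.0 from source. Requires a local CUDA toolkit
    ## matching the version PyTorch was built with, plus a C++ compiler.
    pip install mmcv-full==1.4.0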

Question about datasets

Great work! Have you processed the Cityscapes dataset? If I wanted to process Cityscapes the way you processed NYU, how should I do it?

Visualization problem

python occdepth/scripts/visualization/kitti_vis_pred.py +file=/home/robotlab/reserch/Kittirelated/shared_dir/OccDepth-main/output/kitti/08/000000.pkl +dataset=kitti
Traceback (most recent call last):
  File "/home/robotlab/anaconda3/envs/occdepth/lib/python3.7/site-packages/importlib_metadata/__init__.py", line 292, in __getitem__
    return next(iter(self.select(name=name)))
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "occdepth/scripts/visualization/kitti_vis_pred.py", line 6, in <module>
    from mayavi import mlab
  File "/home/robotlab/anaconda3/envs/occdepth/lib/python3.7/site-packages/mayavi/mlab.py", line 15, in <module>
    from mayavi.core.common import process_ui_events
  File "/home/robotlab/anaconda3/envs/occdepth/lib/python3.7/site-packages/mayavi/core/common.py", line 21, in <module>
    from pyface import api as pyface
  File "/home/robotlab/anaconda3/envs/occdepth/lib/python3.7/site-packages/pyface/api.py", line 13, in <module>
    from .about_dialog import AboutDialog
  File "/home/robotlab/anaconda3/envs/occdepth/lib/python3.7/site-packages/pyface/about_dialog.py", line 15, in <module>
    from .toolkit import toolkit_object
  File "/home/robotlab/anaconda3/envs/occdepth/lib/python3.7/site-packages/pyface/toolkit.py", line 23, in <module>
    toolkit = toolkit_object = find_toolkit("pyface.toolkits")
  File "/home/robotlab/anaconda3/envs/occdepth/lib/python3.7/site-packages/pyface/base_toolkit.py", line 282, in find_toolkit
    return import_toolkit(ETSConfig.toolkit, entry_point)
  File "/home/robotlab/anaconda3/envs/occdepth/lib/python3.7/site-packages/pyface/base_toolkit.py", line 216, in import_toolkit
    entry_point_group = importlib_metadata.entry_points()[entry_point]
  File "/home/robotlab/anaconda3/envs/occdepth/lib/python3.7/site-packages/importlib_metadata/__init__.py", line 294, in __getitem__
    raise KeyError(name)
KeyError: 'pyface.toolkits'

Is it a Qt problem?
How can I solve it?
Thanks a lot
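
One common cause (an assumption, not verified against this environment): pyface raises KeyError: 'pyface.toolkits' when it cannot discover any GUI toolkit, typically because no Qt backend is installed in the conda environment. A minimal sketch of a possible fix:

    ## Sketch only: install a Qt backend so pyface can discover a toolkit.
    pip install pyqt5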

Cannot import MultiScaleDeformableAttention

When running generate_output.py, an error is raised saying MultiScaleDeformableAttention cannot be imported, even though all the relevant code is there when I check it. Does anyone know how to solve this?

python occdepth/scripts/generate_output.py n_gpus=4 batch_size_per_gpu=1
/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmcv/cnn/bricks/transformer.py:28: UserWarning: Fail to import ``MultiScaleDeformableAttention`` from ``mmcv.ops.multi_scale_deform_attn``, You should install ``mmcv-full`` if you need this module. 
  warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from '
/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmdet/models/utils/transformer.py:27: UserWarning: `MultiScaleDeformableAttention` in MMCV has been moved to `mmcv.ops.multi_scale_deform_attn`, please update your MMCV
  '`MultiScaleDeformableAttention` in MMCV has been moved to '
Traceback (most recent call last):
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmdet/models/utils/transformer.py", line 23, in <module>
    from mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmcv/ops/__init__.py", line 2, in <module>
    from .assign_score_withk import assign_score_withk
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmcv/ops/assign_score_withk.py", line 6, in <module>
    '_ext', ['assign_score_withk_forward', 'assign_score_withk_backward'])
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmcv/utils/ext_loader.py", line 13, in load_ext
    ext = importlib.import_module('mmcv.' + name)
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'mmcv._ext'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "occdepth/scripts/generate_output.py", line 2, in <module>
    from occdepth.models.OccDepth import OccDepth
  File "/media/hjc/DeepLearning1/DLProjects/Robot-binocular-vision/OccDepth-main/occdepth/models/OccDepth.py", line 18, in <module>
    from occdepth.models.flosp_depth.flosp_depth import FlospDepth
  File "/media/hjc/DeepLearning1/DLProjects/Robot-binocular-vision/OccDepth-main/occdepth/models/flosp_depth/flosp_depth.py", line 4, in <module>
    from mmdet.models.backbones.resnet import BasicBlock
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmdet/models/__init__.py", line 2, in <module>
    from .backbones import *  # noqa: F401,F403
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmdet/models/backbones/__init__.py", line 2, in <module>
    from .csp_darknet import CSPDarknet
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmdet/models/backbones/csp_darknet.py", line 11, in <module>
    from ..utils import CSPLayer
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmdet/models/utils/__init__.py", line 16, in <module>
    from .transformer import (DetrTransformerDecoder, DetrTransformerDecoderLayer,
  File "/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmdet/models/utils/transformer.py", line 29, in <module>
    from mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention
ImportError: cannot import name 'MultiScaleDeformableAttention' from 'mmcv.cnn.bricks.transformer' (/media/hjc/DeepLearning1/Anaconda3/envs/occdepth1/lib/python3.7/site-packages/mmcv/cnn/bricks/transformer.py)
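
The UserWarning at the top of the log already points at the likely cause: the lite `mmcv` package is installed instead of `mmcv-full`, so the compiled extension `mmcv._ext` is missing. A minimal sketch of the usual remedy (based on that warning, not verified here):

    ## Sketch only: swap the lite mmcv package for mmcv-full, which ships mmcv._ext.
    pip uninstall -y mmcv
    pip install mmcv-full==1.4.0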

Problem loading the pre-trained model

Hello authors,
When loading the 2080 Ti pre-trained model you provided, the following warning appears: C:\Users\77113\.conda\envs\occdepth\lib\site-packages\torch\hub.py:268: UserWarning: You are about to download and run code from an untrusted repository. In a future release, this won't be allowed. To add the repository to your trusted list, change the command to {calling_fn}(..., trust_repo=False) and a command prompt will appear asking for an explicit confirmation of trust, or load(..., trust_repo=True), which will assume that the prompt is to be answered with 'yes'. You can also use load(..., trust_repo='check') which will only prompt for confirmation if the repo is not already trusted. This will eventually be the default behaviour
"You are about to download and run code from an untrusted repository. In a future release, this won't "
Could you tell me what causes this?

About the training process

Dear Authors:

I am writing this issue to ask for help with the OccDepth training process:

  1. If it is convenient, could you release your model file based on multicam_flospdepth_crp_stereodepth_cascadecls_2080ti.yaml?

  2. When I tried to train the model, I found the process really slow on my workstation:
/home/duan/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py:2387: UserWarning: torch.distributed._all_gather_base is a private function and will be deprecated. Please use torch.distributed.all_gather_into_tensor instead.
warnings.warn(
Epoch 0: 3%|███▍ | 38/1163 [06:42<3:13:32, 10.32s/it, loss=17.4, v_num=]

It seems to cost about 3.5 h per epoch on 4 A6000 48 GB GPUs. How long did it take you to train a model for the 30 epochs mentioned in the .yaml? Are there any wrong settings, or anything else I need to do?

Dataset training question

The model's input is a stereo camera pair, but the SemanticKITTI and NYU datasets used in the paper do not come with stereo images. How were these two datasets used for training?

Training is very slow.

I noticed that the GPU utilization is very low and the dataloader has become the bottleneck of training.
However, when I tried to modify num_workers, I encountered errors during training. Is this caused by numba?
How do you accelerate training?
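
For context, a generic sketch of the standard PyTorch DataLoader knobs for a dataloader-bound run (these are plain torch.utils.data.DataLoader arguments, not this repo's config keys, and the dataset below is a stand-in):

    # Sketch only: standard DataLoader tuning; TensorDataset is a stand-in dataset.
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset = TensorDataset(torch.zeros(8, 3, 64, 64))
    loader = DataLoader(dataset, batch_size=1, num_workers=4,
                        pin_memory=True, persistent_workers=True, prefetch_factor=4)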

How to export an ONNX file from the model (OccDepth.py)?

commit a8456da4e70a5fde48cec4d9ca44625c827bac79

purpose:

  • to run the inference process in onnxruntime directly (without a GPU dependency)
  • it would be convenient to port this model to run on [x]pu in the future

  1. Run the ONNX-export-enabled command:
    python occdepth/models/OccDepth.py +export_onnx=True
    I found that it depends on a data.pkl file as input, so I tried step 2.
  2. Export data.pkl via generate_output.py (with some changes), then copy it to the project dir:
    python occdepth/scripts/generate_output.py
  3. Run the export command again; however, I get a TypeError as shown below:
    TypeError: forward() missing 1 required positional argument:
    Which step is wrong? Please correct me.

Thanks in advance for your help.
It would be great if you could provide a usable ONNX file. ^_^

Some problems with ONNX

I met this problem when I ran "python occdepth/models/OccDepth.py +export_onnx=True".
It seems that a precision difference between PyTorch and ONNX is causing the issue.

What should I do to solve this problem?

Still an ONNX file export issue

I want a working ONNX file so that I can measure its MACs and params, and probably run it on some NPUs.
I have struggled really hard, but there are still bugs here.

Key 'data_root' is not in struct

When running preprocess.py I hit the following error. I made the corresponding changes in the config file, but it still fails:
Traceback (most recent call last):
File "/media/hjc/DeepLearning1/DLProjects/Robot-binocular-vision/OccDepth/occdepth/data/NYU/preprocess.py", line 150, in main
root = os.path.join(config.data_root, "NYU" + split)
omegaconf.errors.ConfigAttributeError: Key 'data_root' is not in struct
full_key: data_root
reference_type=Optional[Dict[Union[str, Enum], Any]]
object_type=dict

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
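
For reference, the Preparing section of this README passes data_root and data_preprocess_root as Hydra command-line overrides, which avoids the missing-key error (paths are placeholders):

    python occdepth/data/NYU/preprocess.py data_root="/path/to/NYU/depthbin" data_preprocess_root="/path/to/NYU/preprocess/folder"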

Pre-trained model weights

Hi, firstly, thank you for your nice work, which is very helpful and insightful for me.

Can I get the weights of a pre-trained model (SemanticKITTI mIoU: 15.90) reported in your paper?

Thanks

Training result is terrible

Hi,
My training ran on a single 3090 (24 GB); the CPU is an i9-12900K with 64 GB of RAM. Training speed is roughly 3 h per epoch.
The final training result is an mIoU of only about 1.7.
The training dataset is KITTI.

test======
Precision=56.3334, Recall=5.6320, IoU=5.3964
class IoU: ['empty', 'car', 'bicycle', 'motorcycle', 'truck', 'other-vehicle', 'person', 'bicyclist', 'motorcyclist', 'road', 'parking', 'sidewalk', 'other-ground', 'building', 'fence', 'vegetation', 'trunk', 'terrain', 'pole', 'traffic-sign'], 
92.1851,  2.7558,  0.0054,  0.0000,  0.0276,  0.8427,  0.0407,  0.0281,  0.0000,  10.8321,  3.6353,  5.1183,  0.0064,  3.4550,  0.5815,  1.9391,  0.3322,  2.3483,  0.8775,  0.9599, 

mIoU=1.7782

DATALOADER:0 TEST RESULTS
{'test/loss': 31.02813148498535,
 'test/loss_geo_scal': 7.334031581878662,
 'test/loss_occ': 0.5500158071517944,
 'test/loss_relation_ce_super': 0.6765334010124207,
 'test/loss_sem_scal': 21.17815589904785,

 'test/loss_ssc': 1.2893925905227661}

What could be the reason for the poor training results?

Dimension Mismatch Error in models/unet2d.py

Hello OccDepth developers,

I am experiencing a dimension mismatch issue while working with models/unet2d.py.

Here is a brief description of the issue: a dimension mismatch arises when I set the parameter backbone_2d_name="tf_efficientnet_b5_ns". Whether or not the decoder is used, a dimension mismatch error occurs.

I found that the error originated from these few lines of code:
"tf_efficientnet_b5_ns": [3, 32, 40, 64, 176],
self.resize_output_1_1 = nn.Conv2d(3, out_feature, kernel_size=1)
self.resize_output_1_2 = nn.Conv2d(32, out_feature * 2, kernel_size=1)
self.resize_output_1_4 = nn.Conv2d(48, out_feature * 4, kernel_size=1)

After examining the network structure of efficientnet_b5, I found some dimension definition errors in unet2d.py. I have corrected these errors as follows:

"tf_efficientnet_b5_ns": [3, 24, 40, 64, 176]
self.resize_output_1_1 = nn.Conv2d(MODEL_CHANNELS[self.backbone_2d_name][0], out_feature, kernel_size=1)
self.resize_output_1_2 = nn.Conv2d(MODEL_CHANNELS[self.backbone_2d_name][1], out_feature * 2, kernel_size=1)
self.resize_output_1_4 = nn.Conv2d(MODEL_CHANNELS[self.backbone_2d_name][2], out_feature * 4, kernel_size=1)

If you encounter these problems, I hope this helps.

Validation problem

Hi,
I noticed that when training reaches about 80-90 percent, a validation pass runs. What does this process validate?
Does the validation result decide the epoch result?
Thanks, looking forward to your reply
