zhengqili / megadepth

Code for the single-view depth prediction algorithm on Internet photos described in "MegaDepth: Learning Single-View Depth Prediction from Internet Photos, Z. Li and N. Snavely, CVPR 2018".

License: MIT License

Python 100.00%

megadepth's Introduction

MegaDepth: Learning Single-View Depth Prediction from Internet Photos

This is the code for the algorithm described in "MegaDepth: Learning Single-View Depth Prediction from Internet Photos, Z. Li and N. Snavely, CVPR 2018". The code skeleton is based on https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix. If you use our code or models for academic purposes, please consider citing:

@inproceedings{MDLi18,
  title={MegaDepth: Learning Single-View Depth Prediction from Internet Photos},
  author={Zhengqi Li and Noah Snavely},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

Examples of single-view depth predictions on photos randomly downloaded from the Internet:

Dependencies:

  • The code was written for PyTorch 0.2 and Python 2.7, but it should be straightforward to adapt to Python 3 and recent PyTorch versions if needed.
  • You will also need the skimage and h5py libraries installed before running the code, e.g. via the command below.
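
For a typical setup, both can be installed from PyPI (skimage is published as scikit-image):

    pip install scikit-image h5py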

Single-view depth prediction on any Internet photo:

    python demo.py

You should see an inverse depth prediction saved as demo.png, computed from the original photo demo.jpg. If you want RGB visualizations like the figures in our paper, you will have to install and run the semantic segmentation model from https://github.com/kazuto1011/pspnet-pytorch (trained on ADE20K) to mask out the sky, because inconsistent depth predictions in unmasked sky regions make the RGB visualization unreasonable.
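
The saved demo.png is inverse depth, so larger values mean closer. If you want a plain depth visualization instead, here is a minimal post-processing sketch (it assumes the PNG stores the prediction as a single-channel image, which depends on how your demo run saved it):

    import numpy as np
    from skimage import io

    inv_depth = np.float32(io.imread('demo.png'))   # inverse depth from demo.py
    inv_depth = np.clip(inv_depth, 1e-3, None)      # guard against division by zero
    depth = 1.0 / inv_depth                         # larger value = farther away
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    io.imsave('demo_depth.png', (depth * 255).astype(np.uint8))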

Evaluation on the MegaDepth test splits:

  • To evaluate depth predictions using RMSE on the MegaDepth test splits, run:

    python rmse_error_main.py

  • To compute the Structure-from-Motion Disagreement Rate (SDR), change the variable "dataset_root" in "rmse_error_main.py" to the root directory of the MegaDepth_v1 folder, change the variables "test_list_dir_l" and "test_list_dir_p" to the corresponding test-list folder paths, and run:

    python SDR_compute.py
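
The variables mentioned above would look roughly like this once edited (all paths are placeholders, and the landscape/portrait reading of the _l/_p suffixes is a guess):

    # rmse_error_main.py / SDR_compute.py -- example values only
    dataset_root    = '/data/MegaDepth_v1/'           # root of the MegaDepth_v1 folder
    test_list_dir_l = '/data/test_lists/landscape/'   # test lists ("_l", assumed landscape)
    test_list_dir_p = '/data/test_lists/portrait/'    # test lists ("_p", assumed portrait)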


megadepth's Issues

Dataset download failure

Hi
I tried to download the dataset (the 199 GB one). It failed in two attempts on different devices and operating systems - is it possible that something's wrong on your end?
Thanks!

GPU Memory Requirement

How much GPU memory should ideally be required to run the forward pass? I was not able to run demo.py, but adding 'with torch.no_grad():' at line 33 made it work. Even then, I believe about 6 GB was used. Is that normal?
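
For reference, the workaround described above disables gradient tracking for the forward pass, so activations are freed immediately instead of being kept for backprop; a minimal sketch (the model.netG.forward call is an assumption based on demo.py):

    import torch

    def predict(model, input_img):
        # inference only: no autograd bookkeeping, so peak GPU memory drops a lot
        with torch.no_grad():
            return model.netG.forward(input_img)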

the unknown depth map

Hi,
I have tried to write out all the depth images.
At first, I thought there were two types of depth maps, as described in the readme file: one from COLMAP and another from automatic ordinal depth labeling.
The maps from ordinal depth labeling should contain only two segments (foreground and background), as depicted in the paper, but I found that some of them contain more than two segments. Could you explain why, and how we can use them?
Thank you.
Original image:
3382618227_2324fb96c2_o
Depth image:
3382618227_2324fb96c2_o_depth
Original image:
590172793_e10d268a31_o
Depth image:
590172793_e10d268a31_o_depth

MegaDepth demo code for post-processing MVS depths

Hi,
Now I've read the demo, but the program can't find some of the files. For example, I cannot find the h5 file used when calculating CRF_label. Where can these files be downloaded?

    CRF_label = hdf5read(segmentation_filename, subgroup_name1)

h5 file: phoenix/S6/zl548/semantic_map/0000/images/9616344_27950454e0_o.jpg.h5

Can you post the training code?

Hi, your work is impressive. I trained the model on our indoor dataset following your paper but got bad predictions, so I would like to know more training details.

Smaller dataset

Hi, thank you for your great work!
Is there a smaller split of the dataset available? The one on the project's webpage is too big, and my downloads have failed many times.

Change the size of the input image

Hello, your work is very helpful to me. I want the input RGB image and the output depth map to be 1000 x 1000 pixels. How can I change the code?
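
One way to get 1000 x 1000 output without modifying the network is to resize around the fixed-resolution forward pass; a minimal sketch (the 384 x 512 network resolution is an assumption based on the demo's defaults, and run_net stands in for the demo's forward pass):

    import numpy as np
    from skimage import io, transform

    def predict_at_size(img_path, run_net, out_hw=(1000, 1000), net_hw=(384, 512)):
        """Resize the photo to the network resolution, predict, resize back."""
        img = np.float32(io.imread(img_path)) / 255.0
        net_in = transform.resize(img, net_hw, order=1, anti_aliasing=True)
        pred = run_net(net_in)                        # H x W inverse-depth map
        return transform.resize(pred, out_hw, order=1)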

Missing data in dataset?

Hi,

I've downloaded your dataset from the website, but I found that there is no data for the ordinal depth mentioned in section 3.3 of the paper. Where can I get that data? Thank you.

Unable to download the data

Hi,
Are there any other links for downloading the 199 GB version of the dataset? The download always fails with the current link.

How to test on high-resolution pictures?

Hi,
This work is great. Can you tell me how to run the test on high-resolution pictures (such as 3840x2160)?
I used a Titan Xp (12 GB) to run demo.py, but it ran out of memory. How can I make it work?
Thanks a lot!

Data cannot download

Hi, thank you for your dataset.

However, I have a problem downloading it. It seems that it is too big. Could you please split the dataset into several parts, upload it to another platform like Google Drive, or provide some smaller sample cases?

Many thanks for your help!

Unable to download the dataset

The original link is broken (page not found), so the dataset cannot be downloaded. Maybe you can update the original link. Or, if you don't mind, could you please share the dataset via Google Drive or some other way?

depth to real distance

Hi, suppose that D denotes the depth map. I just wonder whether the values of D equal real distances (in mm)?

Questions about megadepth training code

I've been using the simple demo training code for the last few days and have faced several problems. There's no other place to ask, so I'll write them here.

The first question is about the training data. The dataset consists of image/h5 pairs; some h5 files contain depth values, and some contain integer mask values such as 0, 1, 2, 3. I would like to know how these data are used in training.

This is a question about the training code. When reading the h5 file and computing the loss for it:

    d_gt_0 = torch.log(Variable(targets['gt_0'].cuda(), requires_grad=False))
    d_gt_1 = torch.log(Variable(targets['gt_1'].cuda(), requires_grad=False))

the log of the ground-truth depth values is taken, so any value of 0 becomes -infinity. If this is fed into the loss function, the loss cannot be computed and nan is returned. I want to know whether this is correct.
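
For context, a common way to avoid the -infinity values described above is to compute the loss only over pixels with valid ground truth; a minimal sketch (the masking is an assumption about what the training code intends, since zero depth marks pixels without SfM reconstruction):

    import torch

    gt = torch.tensor([[0.0, 2.0],
                       [4.0, 0.0]])            # zeros = no ground-truth depth
    pred_log = torch.zeros_like(gt)            # stand-in log-depth prediction

    mask = (gt > 0).float()                    # 1 only where depth is valid
    log_gt = torch.log(gt.clamp(min=1e-8))     # clamp keeps log() finite
    residual = (pred_log - log_gt) * mask      # invalid pixels contribute nothing
    loss = residual.pow(2).sum() / mask.sum().clamp(min=1)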

The last question relates to both the dataset and the training code. In the dataset, each image is paired with either an h5 file containing depth values or an h5 file containing mask values, while the loss function takes prediction_d, ground truth, and mask as inputs. Since I never have both a ground-truth h5 file and a mask h5 file for the same image, I'm curious how training is supposed to work.

regarding ordinal and gradient loss

Hi there, Thank you for sharing this awesome work!

I had some question specific to model training and losses:

  1. How do you compute the gradient of the ground-truth image for the scale-invariant gradient matching term? Since the ground truth is sparse, wouldn't it show false or unwanted gradient edges at the transitions between ground-truth and no-ground-truth regions? (See the sketch at the end of this issue.)

  2. How do you sample your mini-batches? Does a mini-batch include images from both the metric-depth and the ordinal-depth sets? If so, that makes random pixel selection (for the ordinal loss) a bit complicated, or at least not straightforward. Do you do some kind of pre-processing, such as storing each image's sampled points in a file? Basically, randomly selecting points on the fly during training, from a mini-batch that mixes metric and ordinal depth with sparse ground truth, seems like a lot of processing to me.

Hope to hear from you, thank you.
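
For reference on point 1, implementations of this kind of loss typically mask each finite difference so it only counts when both neighboring pixels carry ground truth, which suppresses false edges at sparse/dense transitions. A single-scale sketch (the paper's multi-scale pyramid and exact weighting are not reproduced here):

    import torch

    def masked_grad_loss(pred_log, gt_log, valid):
        """valid: 1 where SfM depth exists; a finite difference is used
        only if both pixels entering it are valid."""
        r = pred_log - gt_log
        dx, dy = r[:, 1:] - r[:, :-1], r[1:, :] - r[:-1, :]
        mx = valid[:, 1:] * valid[:, :-1]          # both horizontal neighbors valid
        my = valid[1:, :] * valid[:-1, :]          # both vertical neighbors valid
        n = (mx.sum() + my.sum()).clamp(min=1)
        return ((dx.abs() * mx).sum() + (dy.abs() * my).sum()) / n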

Dataset links are dead

I cannot download the 199GB dataset from the website. All download links are dead. Does anyone have an alternative link for the dataset?

Depth files display zeros only

This might be a problem on my side. When I read the .h5 files, they have the correct dimensions, but they contain only zeros. I followed the instructions for reading the files in the ReadMe and also tried alternative approaches, but no luck. Might there be a problem with the files?
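
For anyone debugging the same thing, here is a minimal way to inspect a file's keys and value range with h5py (the '/depth' key follows what other issues on this page report; treat it as an assumption):

    import h5py
    import numpy as np

    with h5py.File('some_scene_depth.h5', 'r') as f:
        print(list(f.keys()))                  # check the actual dataset names first
        depth = np.array(f.get('/depth'))
    print(depth.shape, depth.min(), depth.max(), (depth > 0).mean())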

What is the input to the network? Could you explain the flow?

Hi,
I know the main contribution of this paper is the new dataset, MegaDepth, but I am confused about the input format you pass through the network at training time. Is it only the reconstructed depth from SfM & MVS (the ground-truth depth)?
Could you explain the flow? There is no detail about it in the paper, or if I missed it, could you tell me which section it is in?
Thank you

Regards,
Keisha

Run demo -- CUDA: out of memory

Hi, when I run the demo on two different computers (one with a single GPU, the other with more than two GPUs), I get the same error on both. I checked that the GPUs have enough memory.

    RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1511304568725/work/torch/lib/THC/generic/THCStorage.cu:66

Does anyone have the same problem?

About the training code from the project page

Thanks for your excellent work. Recently, I've been working with your training code from the project page.
I downloaded the full MegaDepth dataset and created the training and validation lists. But when the training code reads the data, I find that the h5 files don't contain the expected data.

From your training code (image_folder.py), the h5 files should contain keys like '/targets/gt_depth', '/targets/mask' and '/targets/sky_map'. However, I only find the key 'depth'. Is there anything I did wrong? Hoping for your reply!
The related code is pasted below:

    # read targets
    hdf5_file_read = h5py.File(targets_path, 'r')

    # targets are transposed to (H, W) after loading
    gt = hdf5_file_read.get('/targets/gt_depth')
    gt = np.transpose(gt, (1, 0))

    mask = hdf5_file_read.get('/targets/mask')
    mask = np.array(mask)
    mask = np.transpose(mask, (1, 0))

    sky_map = hdf5_file_read.get('/targets/sky_map')
    sky_map = np.array(sky_map)
    sky_map = np.transpose(sky_map, (1, 0))

    # horizontal-flip augmentation is applied to all targets together
    if prob > 0.5 and is_fliped:
        gt = np.fliplr(gt)
        mask = np.fliplr(mask)
        sky_map = np.fliplr(sky_map)

    color_rgb = np.ascontiguousarray(color_rgb)
    gt = np.ascontiguousarray(gt)
    mask = np.ascontiguousarray(mask)
    sky_map = np.ascontiguousarray(sky_map)

    hdf5_file_read.close()
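
Since the mismatch above is between the '/targets/...' layout expected by image_folder.py and files that only contain 'depth', a quick way to list what a given h5 file actually holds before wiring it into the loader:

    import h5py

    # print every group/dataset path in the file, e.g. 'depth' vs 'targets/gt_depth'
    with h5py.File('example.h5', 'r') as f:
        f.visit(print)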

About post processing code

Hi,

I am wondering how to use the post-processing code, because there are no instructions for it.
I saw the data folder; in main.m, it reads from "sparse/manhattan/". It looks like I have to download the 667 GB dataset before I can do the post-processing. Can I use the 199 GB dataset for the same processing instead?
Thank you

AssertionError: Invalid device id

    Traceback (most recent call last):
      File "/home/adesoji/MLOPS/MegaDepth/demo.py", line 18, in <module>
        model = create_model(opt)
      File "/home/adesoji/MLOPS/MegaDepth/models/models.py", line 5, in create_model
        model = HGModel(opt)
      File "/home/adesoji/MLOPS/MegaDepth/models/HG_model.py", line 18, in __init__
        model = torch.nn.parallel.DataParallel(model, device_ids = [0,1])
      File "/home/adesoji/Videos/ENTER/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 142, in __init__
        _check_balance(self.device_ids)
      File "/home/adesoji/Videos/ENTER/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 23, in _check_balance
        dev_props = _get_devices_properties(device_ids)
      File "/home/adesoji/Videos/ENTER/lib/python3.8/site-packages/torch/_utils.py", line 455, in _get_devices_properties
        return [_get_device_attr(lambda m: m.get_device_properties(i)) for i in device_ids]
      File "/home/adesoji/Videos/ENTER/lib/python3.8/site-packages/torch/_utils.py", line 455, in <listcomp>
        return [_get_device_attr(lambda m: m.get_device_properties(i)) for i in device_ids]
      File "/home/adesoji/Videos/ENTER/lib/python3.8/site-packages/torch/_utils.py", line 438, in _get_device_attr
        return get_member(torch.cuda)
      File "/home/adesoji/Videos/ENTER/lib/python3.8/site-packages/torch/_utils.py", line 455, in <lambda>
        return [_get_device_attr(lambda m: m.get_device_properties(i)) for i in device_ids]
      File "/home/adesoji/Videos/ENTER/lib/python3.8/site-packages/torch/cuda/__init__.py", line 312, in get_device_properties
        raise AssertionError("Invalid device id")
    AssertionError: Invalid device id

How to speed up run-time performance?

I tried the model on my own dataset, and the results are like magic.
But on my NVIDIA GTX 1070, the Python code takes 170 ms. Is there any way to speed up performance?

Invalid Device ID at demo.py

I have:
    ubuntu@ubuntu:~/MegaDepth$ nvcc --version
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2017 NVIDIA Corporation
    Built on Fri_Sep__1_21:08:03_CDT_2017
    Cuda compilation tools, release 9.0, V9.0.176

CUDA works well with TensorFlow and other sample cases. I have a dual NVIDIA and Intel chip setup:

    ubuntu@ubuntu:~/MegaDepth$ nvidia-smi
    Fri Jun 29 17:06:21 2018
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 390.48                 Driver Version: 390.48                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 860M    Off  | 00000000:01:00.0 Off |                  N/A |
    | N/A   47C    P8    N/A /  N/A |   1347MiB /  4046MiB |      1%      Default |
    +-------------------------------+----------------------+----------------------+

with Ubuntu 18.04. Torch and torchvision are fully installed for Python 2.7 and work in sample projects.

However,
    ubuntu@ubuntu:~/MegaDepth$ sudo python2 demo.py
    [sudo] password for ubuntu:
    ------------ Options -------------
    batchSize: 1    beta1: 0.5    checkpoints_dir: ./checkpoints/    continue_train: False
    display_freq: 100    display_id: 1    display_winsize: 256    fineSize: 256
    gpu_ids: [0, 1]    identity: 0.0    input_nc: 3    isTrain: True
    lambda_A: 10.0    lambda_B: 10.0    loadSize: 286    lr: 0.0002    max_dataset_size: inf
    model: pix2pix    nThreads: 2    name: test_local    ndf: 64    ngf: 64
    niter: 100    niter_decay: 100    no_flip: False    no_html: False    no_lsgan: False
    norm: instance    output_nc: 3    phase: train    pool_size: 50    print_freq: 100
    save_epoch_freq: 5    save_latest_freq: 5000    serial_batches: False
    use_dropout: False    which_epoch: latest    which_model_netG: unet_256
    -------------- End ----------------
    =========================== LOADING Hourglass NETWORK ===========================
    Traceback (most recent call last):
      File "demo.py", line 15, in <module>
        model = create_model(opt)
      File "/home/ubuntu/MegaDepth/models/models.py", line 5, in create_model
        model = HGModel(opt)
      File "/home/ubuntu/MegaDepth/models/HG_model.py", line 18, in __init__
        model = torch.nn.parallel.DataParallel(model, device_ids = [0,1])
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 102, in __init__
        _check_balance(self.device_ids)
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 17, in _check_balance
        dev_props = [torch.cuda.get_device_properties(i) for i in device_ids]
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/cuda/__init__.py", line 292, in get_device_properties
        raise AssertionError("Invalid device id")
    AssertionError: Invalid device id

I get the same error if I explicitly set --gpu_ids and/or the CUDA_VISIBLE_DEVICES=0 environment variable:

    ubuntu@ubuntu-Lenovo-Y50-70:~/MegaDepth$ sudo CUDA_VISIBLE_DEVICES=0 python demo.py --gpu_ids=0

This prints the same options echo (now with gpu_ids: [0]) and ends in the identical traceback through DataParallel(model, device_ids = [0,1]) and AssertionError: Invalid device id.
If I set --gpu_ids or CUDA_VISIBLE_DEVICES to anything other than 0 through the arguments, I get:

    ubuntu@ubuntu-Lenovo-Y50-70:~/MegaDepth$ sudo CUDA_VISIBLE_DEVICES=1,2,3 python2 demo.py --gpu_ids=1
    (same options echo, now with gpu_ids: [1])
    =========================== LOADING Hourglass NETWORK ===========================
    ./checkpoints/test_local/best_vanila_net_G.pth
    THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=70 error=38 : no CUDA-capable device is detected
    Traceback (most recent call last):
      File "demo.py", line 15, in <module>
        model = create_model(opt)
      File "/home/ubuntu/MegaDepth/models/models.py", line 5, in create_model
        model = HGModel(opt)
      File "/home/ubuntu/MegaDepth/models/HG_model.py", line 21, in __init__
        self.netG = model.cuda()
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 249, in cuda
        return self._apply(lambda t: t.cuda(device))
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 176, in _apply
        module._apply(fn)
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 176, in _apply
        module._apply(fn)
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 182, in _apply
        param.data = fn(param.data)
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 249, in <lambda>
        return self._apply(lambda t: t.cuda(device))
    RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:70

If I try to set the GPU id to anything other than 0 from Python, I get:

    >>> torch.cuda.set_device(1)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/ubuntu/.local/lib/python2.7/site-packages/torch/cuda/__init__.py", line 262, in set_device
        torch._C._cuda_setDevice(device)
    RuntimeError: cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:32

I also tried running all these combinations with Python 3.6, and it just won't work...
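
For what it's worth, every traceback above passes through the hard-coded device_ids = [0,1] in models/HG_model.py (line 18 per the tracebacks), which demands two visible GPUs before any computation happens. A minimal local patch for single-GPU machines, assuming nothing else in the checkpoint loading depends on the second device:

    # models/HG_model.py -- request only the device that actually exists
    model = torch.nn.parallel.DataParallel(model, device_ids=[0])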

Can you offer a single-GPU pre-trained model?

Hi, thanks for your great work. But when I tried to use the pre-trained model, it said the model only supports dual GPUs. Could you kindly provide a single-GPU pre-trained model?

Thanks.
changlin

use as library

Hey, I was able to convert demo.py into a library with a few changes; it did not take long.

I would love to see this repo available as a library on pip/conda. I could even send a PR if you are interested.

Camera

Hello, I would like to ask whether the camera pose for each image used in MegaDepth is obtained directly from COLMAP's sparse reconstruction.

Data Download problem

Hi,

Good paper on monocular depth estimation. Do you know why the dataset refuses to unzip, giving a corrupt-zip or EOF error?

How to generate the depths in "Megadepth v1 dataset" with our own dataset?

Hi, thanks for sharing.
I notice that the format of the depth files in the "MegaDepth v1 dataset" is different from the depth files generated by COLMAP (I ran COLMAP's reconstruction pipeline, but its depth files are binary files, not HDF5).
I was confused by these files.
Were all the depths in the "MegaDepth v1 dataset" generated by your MegaDepth pipeline, or by COLMAP (I noticed you modified COLMAP in some places)?
I'm wondering how to generate depth files in the same format from our own dataset.
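
For the conversion step, COLMAP's binary depth maps can be read with the reader from COLMAP's own Python scripts and then written to HDF5; a sketch (the reader mirrors COLMAP's scripts/python/read_dense.py, and the 'depth' key follows what other issues on this page report, so verify both against your versions):

    import numpy as np
    import h5py

    def read_colmap_array(path):
        # binary layout: "width&height&channels&" header, then float32 data
        with open(path, 'rb') as f:
            width, height, channels = np.genfromtxt(
                f, delimiter='&', max_rows=1, usecols=(0, 1, 2), dtype=int)
            f.seek(0)
            num_delim = 0
            while num_delim < 3:            # skip past the three '&' delimiters
                if f.read(1) == b'&':
                    num_delim += 1
            data = np.fromfile(f, np.float32)
        data = data.reshape((width, height, channels), order='F')
        return np.transpose(data, (1, 0, 2)).squeeze()

    depth = read_colmap_array('image.jpg.geometric.bin')
    with h5py.File('image.jpg.h5', 'w') as f:
        f.create_dataset('depth', data=depth)   # key name assumed from other issues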

Do you consider posting the post-processing code in python?

When I use the Python scripts that COLMAP provides, I cannot get the exact depth maps that I get when I run your post-processing code. I'm not able to work with MATLAB, so I'm wondering whether you will ever write the post-processing code in Python and publish it.

Thank you.

Download failed.

The MegaDepth dataset v1 download link is unstable, and downloads often fail. Could you upload the data to another platform?

Unable to download the dataset from https://www.cs.cornell.edu/projects/megadepth/

Hi,
I have tried to download the dataset on different computers and operating systems, but it always fails for both the 199 GB and 667 GB versions. Is there a more stable way to download the dataset? Could you please check this? Thank you very much!
I checked previous issues, and it seems other people also cannot download the dataset.

Why can't a high-resolution image produce a depth map?

The following problem occurs when inputting a high-resolution image:

    Traceback (most recent call last):
      File "demo.py", line 50, in <module>
        test_simple(model)
      File "demo.py", line 26, in test_simple
        img = np.float32(io.imread(img_path))/255.0
    TypeError: float() argument must be a string or a number

How did you extract Make3D depthmaps from the dataset?

Hi, first of all, thanks for the paper and the code.

The Make3D dataset has two parts: images + depths.

The depth part of the dataset is described by its readme as:

Laser range data with ray position. Data format: Position3DGrid (55x305x4)

Position3DGrid(:,:,1) is Vertical axis in meters (Y)

Position3DGrid(:,:,2) is Horizontal axis in meters (X)

Position3DGrid(:,:,3) is Projective Depths in meters (Z)

Position3DGrid(:,:,4) is Depths in meters (d)

The images have shape (2272, 1704, 3), BUT the depths have shape (55, 305, 4). From its structure and ratios, one can tell that the depth data is not an image (55 x 305 != 2272 x 1704).

So how were you able to extract the depth maps? Is there code or a procedure that I've missed?

Thanks
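
For context, the laser data ships as MATLAB .mat files, so one plausible route (not necessarily what the authors did) is to load Position3DGrid, take its depth channel, and upsample it to the image resolution; a sketch assuming scipy and skimage are available, with a placeholder file name:

    import numpy as np
    from scipy.io import loadmat
    from skimage import transform

    # channel 4 of Position3DGrid holds depths in meters, per the readme above
    grid = loadmat('depth_sph_corr-example.mat')['Position3DGrid']  # (55, 305, 4)
    depth = grid[:, :, 3]
    # the laser grid is far coarser than the 2272x1704 images, so upsample;
    # the grid axes may need transposing to match the image orientation
    depth_full = transform.resize(depth, (2272, 1704), order=1)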

sfm model bin files without db

Hi, thanks for your great work!
Is there any chance you could provide the COLMAP SfM models' .bin files (cameras, images, points3D) without the .db files? The whole set of models is too large, which makes the download fail many times.
