
sbmc's Introduction

Sample-based Monte Carlo Denoising using a Kernel-Splatting Network

teaser_image

Michaël Gharbi ([email protected]), Tzu-Mao Li, Miika Aittala, Jaakko Lehtinen, Frédo Durand

Check out our project page.

Quick start

The quickest way to get started is to run the code from a Docker image. Proceed as follows:

  1. Download and install Docker on your machine.

  2. Allow Docker to be executed without sudo:

    1. Add your username to the docker group:
    sudo usermod -aG docker ${USER}
    2. To apply the new group membership, log out and log back in, or type the following:
    su - ${USER}
    3. Confirm that your user is now in the docker group by typing:
    id -nG
  3. To enable GPU acceleration in your Docker instance, install the NVIDIA Container Toolkit: https://github.com/NVIDIA/nvidia-docker. We provide a shortcut to install the latter:

    make nvidia_docker 
  4. Once these prerequisites are installed, you can build a pre-configured Docker image and run it:

    make docker_build
    make docker_run

    If all goes well, this will launch a shell inside the Docker instance, and you should not have to worry about configuring the Linux or Python environment.

    Alternatively, you can build a CPU-only version of the Docker image:

    make docker_build_cpu
    make docker_run_cpu
  5. (optional) From within the running Docker instance, run the package's tests:

    make test
  6. Still within the Docker instance, try a few demo commands, e.g. run a pretrained denoiser on a test input:

    make demo/denoise

    This should download the pretrained models to $(DATA)/pretrained_models, some demo scenes to $(DATA)/demo/scenes, and render some noisy sample data to $(OUTPUT)/demo/test_samples. After that, our model is run to produce a denoised output: $(OUTPUT)/demo/ours_4spp.exr (linear radiance) and $(OUTPUT)/demo/ours_4spp.png (a clamped 8-bit rendering; see the preview sketch after this list).

    Inside the Docker instance, $(OUTPUT) maps to /sbmc_app/output by default. Outside Docker, this maps to the output subfolder of this repository, so that both data and outputs persist across runs.

    See below, or have a look at the Makefile for more demo/* commands you can try.
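
    For reference, turning the linear radiance in the .exr into an 8-bit preview like the .png is a simple clamp-and-gamma operation. Below is a minimal Python sketch, assuming an EXR-capable imageio backend (e.g. the freeimage plugin); the exact tonemapping used by our demo may differ:

    import numpy as np
    import imageio

    # Read the linear-radiance EXR produced by the demo (HxWx3 float).
    linear = imageio.imread("output/demo/ours_4spp.exr")

    # Clamp to [0, 1], then gamma-encode for display (approximate sRGB).
    preview = np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)
    imageio.imwrite("ours_4spp_preview.png", (preview * 255).astype(np.uint8))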

Docker-less installation and dependencies

If you just intend to install our library, you can run:

HALIDE_DISTRIB_DIR=<path/to/Halide> python setup.py install

from the root of this repo. In any case, the Dockerfiles in the dockerfiles folder should help you configure your runtime environment.

We build on the following dependencies:

  • Halide: our splatting kernel operator is implemented in Halide https://halide-lang.org/. The setup.py script looks for the path to the Halide distribution root in the environment variable HALIDE_DISTRIB_DIR. If this variable is not defined, the script will ask whether it should download Halide locally. (For intuition about what this operator computes, see the sketch after this list.)
  • Torch-Tools: we use the ttools library for PyTorch helpers and our training and evaluation scripts https://github.com/mgharbi/ttools. This should get installed automatically when running python setup.py install.
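
For intuition, here is a minimal pure-PyTorch sketch of the gather-style kernel weighting that the Halide operator accelerates. This is an illustrative reimplementation under assumed tensor layouts, not the library's API; the actual operator also implements the splatting variant and its custom gradients:

import torch
import torch.nn.functional as F

def kernel_weighting_gather(data, weights):
    """Hypothetical gather variant: data is (B, C, H, W); weights is
    (B, K*K, H, W), one K x K kernel per pixel (K odd)."""
    b, c, h, w = data.shape
    k2 = weights.shape[1]
    k = int(k2 ** 0.5)
    # Extract the K x K neighborhood of every pixel: (B, C, K*K, H, W).
    patches = F.unfold(data, k, padding=k // 2).view(b, c, k2, h, w)
    # Weighted average of each neighborhood, normalized by the kernel sum.
    out = (patches * weights.unsqueeze(1)).sum(dim=2)
    sum_w = weights.sum(dim=1, keepdim=True)
    return out / sum_w.clamp(min=1e-8)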

Demo

We provide a patch to PBRTv2 (https://github.com/mmp/pbrt-v2, commit e6f6334f3c26ca29eba2b27af4e60fec9fdc7a8d) in pbrt_patches/sbmc_pbrt.diff. This patch contains our modifications to the renderer so that it saves individual samples to disk.

Render samples from a PBRTv2 test scene

To render samples as .bin files from a .pbrt scene description, use the scripts/render_samples.py script. This script assumes the PBRT scene file contains only the scene description; it will create the appropriate header description for the camera, sampler, path tracer, etc. For an example, try:

make demo/render_samples
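
For illustration, a valid input is a file containing only the world block; the header (camera, sampler, integrator) is added by the script. A hypothetical minimal scene could look like:

# Scene description only -- no Camera/Sampler/Integrator header.
WorldBegin
LightSource "infinite" "color L" [1 1 1]
Material "matte" "color Kd" [0.5 0.5 0.5]
Shape "sphere" "float radius" [1]
WorldEnd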

Generating new random scenes for training

In the manuscript we described a scene generation procedure that used the SunCG dataset. Because of the legal issues that were later discovered with this dataset, we decided to no longer support this source of training scenes.

You can still use our custom, outdoor random scenes generator to generate training data, scripts/generate_training_data.py. For an example, run:

make demo/generate_scenes

Visualizing the image content of .bin sample files

We provide a helper script to inspect the content of .bin sample files, scripts/visualize_dataset.py. For instance, to visualize the training data generated in the previous section, run:

make demo/visualize

Run pretrained models

To run a pre-trained model, use scripts/denoise.py. The command below runs our model and that of [Bako2017] on a test image:

make demo/denoise

Comparisons to previous work

In the Dockerfile, we set up the code from several previous works to facilitate comparisons. We provide our modifications to the original codebases as patch files in pbrt_patches/. The changes are mostly simple modifications to the C++ code so that it compiles with gcc.

The comparisons include:

  • [Sen2011] "On Filtering the Noise from the Random Parameters in Monte Carlo Rendering"
  • [Rousselle2012] "Adaptive Rendering with Non-Local Means Filtering"
  • [Kalantari2015] "A Machine Learning Approach for Filtering Monte Carlo Noise"
  • [Bitterli2016] "Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings"
  • [Bako2017] "Kernel-Predicting Convolutional Networks for Denoising Monte Carlo Renderings"

To run the comparisons:

make demo/render_reference
make demo/comparisons

Training a new model

To train your own model, you can use the script scripts/train.py. For instance, to train our model:

make demo/train

Or to train that of Bako et al.:

make demo/train_kpcn

These scripts will also launch a Visdom server so you can monitor the training. To view the plots, navigate to http://localhost:2001 in your web browser.

Numerical evaluation

The script scripts/compute_metrics.py can be used to numerically evaluate a set of .exr renderings. It prints the averages and saves the results to .csv files.
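
As an illustration of the kind of metric involved, a common choice in the Monte Carlo denoising literature is the relative MSE sketched below; the exact set of metrics computed by scripts/compute_metrics.py may differ:

import numpy as np

def relative_mse(img, ref, eps=1e-2):
    # Squared error normalized by reference brightness; eps avoids
    # division by zero in dark regions.
    return float(np.mean((img - ref) ** 2 / (ref ** 2 + eps)))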

For example, you can download the renderings we produced for our paper evaluation and compute the metrics by running:

make demo/eval

Precomputed .exr results from our submission

We provide the pre-rendered .exr results used in our SIGGRAPH submission on demand. To download them, run the command below. Please note this data is rather large (54 GB).

make precomputed_renderings

Test scene for evaluation

You can download the .pbrt scenes we used for evaluation by running:

make test_scenes

This will only download the scene descriptions and assets. The images (or samples) themselves still need to be rendered from this data, using the scripts/render_exr.py and scripts/render_samples.py scripts, respectively.

Sample data: our .bin file format

Some sample data used throughout the demo commands can be downloaded using:

make demo_data

Pretrained models

Download our pretrained models with the following command:

make pretrained_models

sbmc's People

Contributors

florianvazelle, mgharbi, parikshit-hooda, pqrth


sbmc's Issues

typo in readme.md

"appropriate" is written as "apropriate". A PR to fix it will reference this issue.

Error: CUDA: cuModuleLoadData failed: CUDA_ERROR_INVALID_PTX

I installed everything locally on my machine by executing all the commands normally run in the Dockerfile.
Everything works when using only the CPU, but when I try to use the GPU I get the following error:

2021-01-08 11:03:52 Thesis-Emil __main__[54541] INFO Loading latest checkpoint success
2021-01-08 11:03:52 Thesis-Emil __main__[54541] INFO setup time 1551.5 ms
2021-01-08 11:03:52 Thesis-Emil __main__[54541] INFO starting the denoiser
2021-01-08 11:03:52 Thesis-Emil __main__[54541] INFO Denoising scene: test_samples
S2G START
Error: CUDA: cuModuleLoadData failed: CUDA_ERROR_INVALID_PTX
make: *** [Makefile:262: demo/denoise] Aborted (core dumped)

I narrowed it down: it happens when the Scatter2Gather and KernelWeighting Halide functions are called.
Even when running in a Docker instance I get the same error.

There is one thing I changed: I used the Halide link provided in the PR, as the one in the GitHub repo isn't working anymore.

Any idea what this error is and how it could be fixed? That would help me progress in my thesis.

System specs:

Ubuntu 20.04
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

Thanks in advance
Emil

make test error. "NameError: name 'ops' is not defined"

Hello, I successfully created the image following your installation process, but an error is reported when I run make test. The same error is reported when I use python scripts/denoise.py, and Halide is already installed successfully. I have no idea what causes this problem and need your help.

@staticmethod
def forward(ctx, data, weights):
    bs, c, h, w = data.shape
    output = data.new()
    sum_w = data.new()
    output.resize_as_(data)
    sum_w.resize_(bs, h, w)
    if _is_cuda(data, weights):
        ops.kernel_weighting_cuda_float32(data, weights, output, sum_w)
        NameError: name 'ops' is not defined

sbmc/functions.py:96: NameError


Error while running docker_build

Hi, I followed all the instructions to run this repo, but I'm getting the following error when trying to run docker_build. Please help.

 AT_FORALL_SCALAR_TYPES_WITH_COMPLEX(HL_PT_DEFINE_TYPECHECK);
                                    ^
/sbmc_app/halide/include/HalidePyTorchHelpers.h: In instantiation of ‘Halide::Runtime::Buffer<scalar_t> Halide::PyTorch::wrap(at::Tensor&) [with scalar_t = float]’:
build/temp.linux-x86_64-3.7/scatter2gather_cpu_float32.pytorch.h:15:74:   required from here
/sbmc_app/halide/include/HalidePyTorchHelpers.h:67:46: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated [-Wdeprecated-declarations]
     scalar_t* pData  = tensor.data<scalar_t>();
                                              ^
In file included from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:11:0,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                 from /sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                 from build/temp.linux-x86_64-3.7/wrapper.cpp:1:
/sbmc_app/anaconda/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:303:7: note: declared here
   T * data() const {
       ^
error: command 'gcc' failed with exit status 1

many "recipe for target 'objs/XXX.o' failed"

My system is Windows 10, and I downloaded Docker Desktop from the official Docker website. I think everything is set up correctly, but issues like "recipe for target 'objs/XXX.o' failed" appear when I run your Dockerfile. Details:
First I downloaded the sbmc code and put it in E:\pythonProject\sbmc-master\sbmc-master.
In the ./dockerfiles directory, I removed the cuda-sbmc.dockerfile file and renamed cpu-sbmc.dockerfile to Dockerfile, ensuring there is only one Dockerfile in the ./dockerfiles directory. Then, in CMD,
I used the command "docker build -t ubuntu:16.04 -f ./dockerfiles/Dockerfile ." (note the . at the end).
Results:


E:\pythonProject\sbmc-master\sbmc-master> docker build -t ubuntu:16.04 -f ./dockerfiles/Dockerfile .
Sending build context to Docker daemon 2.517MB
Step 1/42 : FROM ubuntu:16.04
---> 5f2bf26e3524
Step 2/42 : MAINTAINER Michael Gharbi [email protected]
---> Using cache
---> 8af92dd3ac0e
Step 3/42 : RUN apt-get upgrade
---> Using cache
---> 237df2f19a56
Step 4/42 : RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
---> Using cache
---> 0cfb1c3f67a0
Step 5/42 : RUN apt-get install -y curl
---> Using cache
---> 29745fa35500
Step 6/42 : RUN apt-get install -y build-essential vim git bash liblz4-dev libopenexr-dev bison libomp-dev cmake flex qt5-default libeigen3-dev wget unzip libncurses5-dev
---> Using cache
---> 20a56ef1270f
Step 7/42 : SHELL ["/bin/bash", "-c"]
---> Using cache
---> 7e34d52c71c2
Step 8/42 : RUN mkdir -p /sbmc_app /sbmc_app/output /sbmc_app/data
---> Using cache
---> 9f189b99f218
Step 9/42 : COPY pbrt_patches /sbmc_app/patches
---> Using cache
---> ff7485b92471
Step 10/42 : WORKDIR /sbmc_app
---> Using cache
---> 758bcd09eaaf
Step 11/42 : RUN git clone https://github.com/mmp/pbrt-v2.git pbrt_tmp
---> Using cache
---> 56050147aa8a
Step 12/42 : RUN cd pbrt_tmp && git checkout e6f6334f3c26ca29eba2b27af4e60fec9fdc7a8d
---> Using cache
---> acbe48063275
Step 13/42 : RUN mv pbrt_tmp/src pbrt
---> Using cache
---> 79c6e5f47092
Step 14/42 : RUN rm -rf pbrt_tmp
---> Using cache
---> 1c4f9e42ce3a
Step 15/42 : RUN patch -d pbrt -p1 -i /sbmc_app/patches/sbmc_pbrt.diff
---> Using cache
---> d2b754aec000
Step 16/42 : RUN cd pbrt && make -j
---> Running in 977fcbb4626c
/bin/mkdir -p bin objs
Building object objs/main_pbrt.o
Building object objs/core_parallel.o
Building object objs/core_intersection.o
Building object objs/core_volume.o
Building object objs/core_targa.o
Building object objs/core_transform.o
Building object objs/core_spectrum.o
Building object objs/core_imageio.o
Building object objs/core_scene.o
Building object objs/core_timer.o
Building object objs/core_film.o
Building object objs/core_memory.o
Building object objs/core_shape.o
Building object objs/core_quaternion.o
Building object objs/core_integrator.o
Building object objs/core_samplerecord.o
Building object objs/core_sh.o
......
......
......
Makefile:117: recipe for target 'objs/core_spectrum.o' failed
Makefile:117: recipe for target 'objs/core_volume.o' failed
Makefile:117: recipe for target 'objs/core_filter.o' failed
Makefile:109: recipe for target 'objs/accelerators_bvh.o' failed
Makefile:125: recipe for target 'objs/film_image.o' failed
Makefile:133: recipe for target 'objs/integrators_pathkpcn.o' failed
Makefile:137: recipe for target 'objs/lights_infinite.o' failed
Makefile:137: recipe for target 'objs/lights_projection.o' failed
Makefile:145: recipe for target 'objs/materials_plastic.o' failed
Makefile:129: recipe for target 'objs/filters_mitchell.o' failed
Makefile:129: recipe for target 'objs/filters_triangle.o' failed
Makefile:129: recipe for target 'objs/filters_gaussian.o' failed
Makefile:149: recipe for target 'objs/renderers_metropolis.o' failed
Makefile:153: recipe for target 'objs/samplers_stratified.o' failed
Makefile:157: recipe for target 'objs/shapes_loopsubdiv.o' failed
Makefile:145: recipe for target 'objs/materials_matte.o' failed
Makefile:157: recipe for target 'objs/shapes_heightfield.o' failed
Makefile:113: recipe for target 'objs/cameras_orthographic.o' failed
Makefile:113: recipe for target 'objs/cameras_environment.o' failed
Makefile:161: recipe for target 'objs/textures_checkerboard.o' failed
Makefile:133: recipe for target 'objs/integrators_path.o' failed
Makefile:117: recipe for target 'objs/core_film.o' failed
Makefile:133: recipe for target 'objs/integrators_glossyprt.o' failed
Makefile:129: recipe for target 'objs/filters_sinc.o' failed
Makefile:145: recipe for target 'objs/materials_substrate.o' failed
Makefile:165: recipe for target 'objs/volumes_volumegrid.o' failed


I don't know what happened. The ./dockerfiles/Dockerfile is:
"
FROM ubuntu:16.04
MAINTAINER Michael Gharbi [email protected]

RUN apt-get upgrade
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y curl
RUN apt-get install -y \
    build-essential \
    vim \
    git \
    bash \
    liblz4-dev \
    libopenexr-dev \
    bison \
    libomp-dev \
    cmake \
    flex \
    qt5-default \
    libeigen3-dev \
    wget \
    unzip \
    libncurses5-dev

SHELL ["/bin/bash", "-c"]

RUN mkdir -p /sbmc_app /sbmc_app/output /sbmc_app/data
COPY pbrt_patches /sbmc_app/patches

WORKDIR /sbmc_app

RUN git clone https://github.com/mmp/pbrt-v2.git pbrt_tmp
RUN cd pbrt_tmp && git checkout e6f6334f3c26ca29eba2b27af4e60fec9fdc7a8d
RUN mv pbrt_tmp/src pbrt
RUN rm -rf pbrt_tmp

RUN patch -d pbrt -p1 -i /sbmc_app/patches/sbmc_pbrt.diff
RUN cd pbrt && make -j
...
...
...
"
The problem occurs at the RUN cd pbrt && make -j step.

Broken links to previous methods

cvc.ucsb.edu, which hosts two previous methods (Sen2011 and Kalantari2015), seems to be unreachable to me. This appears to be a server issue, but I would still like to file an issue here because it also breaks the make docker_build command, which is the first step in setting up the Docker environment.

Some different errors when "RUN cd sbmc && python setup.py develop" in cuda-sbmc.dockerfile

error 1:
Installed /sbmc_app/sbmc
Processing dependencies for sbmc==0.0.1
Searching for wget
Reading https://pypi.org/simple/wget/
Traceback (most recent call last):
File "setup.py", line 109, in
main()
File "setup.py", line 105, in main
cmdclass=dict(build_ext=hlpt.HalideBuildExtension),
File "/sbmc_app/anaconda/lib/python3.7/site-packages/setuptools/init.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/sbmc_app/anaconda/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/sbmc_app/anaconda/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/sbmc_app/anaconda/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/sbmc_app/anaconda/lib/python3.7/site-packages/setuptools/command/develop.py", line 38, in run
self.install_for_development()
File "/sbmc_app/anaconda/lib/python3.7/site-packages/setuptools/command/develop.py", line 156, in install_for_development
self.process_distribution(None, self.dist, not self.no_deps)
File "/sbmc_app/anaconda/lib/python3.7/site-packages/setuptools/command/easy_install.py", line 752, in process_distribution
[requirement], self.local_index, self.easy_install
File "/sbmc_app/anaconda/lib/python3.7/site-packages/pkg_resources/init.py", line 782, in resolve
replace_conflicting=replace_conflicting
File "/sbmc_app/anaconda/lib/python3.7/site-packages/pkg_resources/init.py", line 1065, in best_match
return self.obtain(req, installer)
File "/sbmc_app/anaconda/lib/python3.7/site-packages/pkg_resources/init.py", line 1077, in obtain
return installer(requirement)
File "/sbmc_app/anaconda/lib/python3.7/site-packages/setuptools/command/easy_install.py", line 667, in easy_install
not self.always_copy, self.local_index
File "/sbmc_app/anaconda/lib/python3.7/site-packages/setuptools/package_index.py", line 658, in fetch_distribution
self.find_packages(requirement)
File "/sbmc_app/anaconda/lib/python3.7/site-packages/setuptools/package_index.py", line 491, in find_packages
self.scan_url(self.index_url + requirement.unsafe_name + '/')
File "/sbmc_app/anaconda/lib/python3.7/site-packages/setuptools/package_index.py", line 831, in scan_url
self.process_url(url, True)
File "/sbmc_app/anaconda/lib/python3.7/site-packages/setuptools/package_index.py", line 361, in process_url
page = f.read()
File "/sbmc_app/anaconda/lib/python3.7/http/client.py", line 470, in read
s = self._safe_read(self.length)
File "/sbmc_app/anaconda/lib/python3.7/http/client.py", line 622, in _safe_read
raise IncompleteRead(b''.join(s), amt)
http.client.IncompleteRead: IncompleteRead(1992 bytes read, 1840 more expected)
The command '/bin/bash -c cd sbmc && python setup.py develop' returned a non-zero code: 1

error 2:
Installed /sbmc_app/sbmc
Processing dependencies for sbmc==0.0.1
Searching for wget
Reading https://pypi.org/simple/wget/
Downloading https://files.pythonhosted.org/packages/47/6a/62e288da7bcda82b935ff0c6cfe542970f04e29c756b0e147251b2fb251f/wget-3.2.zip#sha256=35e630eca2aa50ce998b9b1a127bb26b30dfee573702782aa982f875e3f16061
Best match: wget 3.2
Processing wget-3.2.zip
Writing /tmp/easy_install-la3hsewr/wget-3.2/setup.cfg
Running wget-3.2/setup.py -q bdist_egg --dist-dir /tmp/easy_install-la3hsewr/wget-3.2/egg-dist-tmp-dmrfue2p
zip_safe flag not set; analyzing archive contents...
Moving wget-3.2-py3.7.egg to /sbmc_app/anaconda/lib/python3.7/site-packages
Adding wget 3.2 to easy-install.pth file

Installed /sbmc_app/anaconda/lib/python3.7/site-packages/wget-3.2-py3.7.egg
Searching for lz4
Reading https://pypi.org/simple/lz4/
Downloading https://files.pythonhosted.org/packages/59/46/3ac0bb528c8e67ae2dad5692712be5e6ca5ccca8dda94a10801bcb741986/lz4-2.2.1-cp37-cp37m-manylinux1_x86_64.whl#sha256=95b8db1be938f5c17043bb9143bc120d270f7cebcb439686ec43d8c0b36b6377
Best match: lz4 2.2.1
Processing lz4-2.2.1-cp37-cp37m-manylinux1_x86_64.whl
Installing lz4-2.2.1-cp37-cp37m-manylinux1_x86_64.whl to /sbmc_app/anaconda/lib/python3.7/site-packages
writing requirements to /sbmc_app/anaconda/lib/python3.7/site-packages/lz4-2.2.1-py3.7-linux-x86_64.egg/EGG-INFO/requires.txt
Adding lz4 2.2.1 to easy-install.pth file

Installed /sbmc_app/anaconda/lib/python3.7/site-packages/lz4-2.2.1-py3.7-linux-x86_64.egg
Searching for scikit-image
Reading https://pypi.org/simple/scikit-image/
Downloading https://files.pythonhosted.org/packages/dc/48/454bf836d302465475e02bc0468b879302145b07a005174c409a5b5869c7/scikit_image-0.16.2-cp37-cp37m-manylinux1_x86_64.whl#sha256=2aa962aa82d815606d7dad7f045f5d7ca55c65b4320d47e15a98fc92612c2d6c
error: sha256 validation failed for scikit_image-0.16.2-cp37-cp37m-manylinux1_x86_64.whl; possible download problem?
The command '/bin/bash -c cd sbmc && python setup.py develop' returned a non-zero code: 1
