
astra-vision / monoscene


[CVPR 2022] "MonoScene: Monocular 3D Semantic Scene Completion": 3D Semantic Occupancy Prediction from a single image

Home Page: https://astra-vision.github.io/MonoScene/

License: Apache License 2.0

Python 100.00%
nyu-depth-v2 semantic-scene-completion semantic-scene-understanding single-image-reconstruction monocular 2d-to-3d semantic-kitti kitti-360 mayavi pytorch

monoscene's People

Contributors

anhquancao


monoscene's Issues

Fail to reproduce NYU dataset accuracy

Thanks for this great work. I found the NYU accuracy hard to reproduce; here is the accuracy with my self-trained model:

Precision=56.4726, Recall=61.7487, IoU=41.8369
class IoU: empty 88.7854, ceiling 7.9803, floor 93.3229, wall 11.3533, window 9.9692, chair 14.0669, bed 45.8456, sofa 35.6397, table 14.4716, tvs 9.2469, furn 27.8702, objs 11.9180
mIoU=25.6077
--------------------------------------------------------------------------------                                                                                                    
DATALOADER:0 TEST RESULTS                                                                                                                                                           
{'test/loss': 6.906957626342773,                                                                                                                                                    
 'test/loss_geo_scal': 1.1675410270690918,                                                                                                                                          
 'test/loss_relation_ce_super': 0.6730754375457764,                                                                                                                                 
 'test/loss_sem_scal': 2.944209098815918,                                                                                                                                           
 'test/loss_ssc': 2.1221349239349365} 

the results in the paper:

[image: NYU results table from the paper]

  • my reproduction process follows the README
  • reproduction machine: 2× V100

By the way, the KITTI reproduction accuracy is also lower than both the paper and the released model:

experiments                         SC IoU    mIoU      Precision  Recall
results from paper                  37.12     11.50     \          \
results from github release model   36.7950   11.2993   52.1908    55.5026
reproduce-2xV100                    36.1542   10.7542   50.7314    55.7177
reproduce-4xA100                    36.4819   10.8438   50.1127    57.2874

Is the difference caused by evaluating on the validation set versus the test set, or is it normal accuracy jitter?
Do you have any comments or suggestions about this?

Thanks.

About the results of val set

Hi, I tried to reproduce the SemanticKITTI performance in Table 3, but I only get an mIoU of 10.7-11.0. What else should I do to reach the 11.5 mIoU reported in Table 3?

about calib.txt file

I looked through a lot of blogs and found very little description of these parameters, and the code in pykitti does not specify what the data is either; I only know it is related to the camera intrinsics. Could you please give a general explanation of these parameters?
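For readers hitting the same question, a minimal parsing sketch (assuming the SemanticKITTI/KITTI-odometry calib.txt layout, where every line is a key followed by the 12 row-major entries of a 3x4 matrix; the helper name read_calib mirrors the repo's function but this is not its code):

import numpy as np

def read_calib(calib_path):
    # Each line is "<key>: v1 v2 ... v12", the row-major entries of a 3x4 matrix.
    calib = {}
    with open(calib_path, "r") as f:
        for line in f:
            if ":" not in line:
                continue
            key, values = line.split(":", 1)
            calib[key.strip()] = np.array([float(v) for v in values.split()]).reshape(3, 4)
    return calib

calib = read_calib("calib.txt")
P2 = calib["P2"]  # 3x4 projection matrix of the left color camera (intrinsics fx, fy, cx, cy plus the stereo baseline offset)
Tr = np.vstack([calib["Tr"], [0, 0, 0, 1]])  # 4x4 rigid transform from the LiDAR frame to the camera frame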

Question about Table 2.

Hi, thank you first for your great work MonoScene!

I have a question about Table 2 in the paper. I am confused about the performance of the 2.5D/3D methods.
[image: screenshot of Table 2]
I checked the papers of AICNet and 3DSketch, which report 59.2/33.3 and 71.3/41.1 (IoU/mIoU) on NYUv2.
Could you please elaborate on how you obtained the AICNet and 3DSketch numbers in Table 2?

Thank you so much

Training Time

Hello, I re-implemented your work with mmdet3d, but the training takes 8 days on 8 V100s. Could you provide the training log, or tell me the training time of your code?

about testing with a model file trained by myself

This is the output with the pre-trained model:
[image]
This is the output with the model I trained myself:
[image]

Remember that in another issue my GPU memory was not enough and you told me to change the feature from 64 to 32 for training:
#1 (comment)

This feature is different from the batch size, right?

failed to run test

When I try to run this script, it crashes without giving any information:
python monoscene/scripts/generate_output.py +output_path=$MONOSCENE_OUTPUT dataset=kitti_360 +kitti_360_root=$KITTI_360_ROOT +kitti_360_sequence=2013_05_28_drive_0028_sync n_gpus=1 batch_size=1

[image: screenshot of the crash]

Any suggestions would be much appreciated.

CUDA out of memory

Hi, very impressive work. When I try to train on the NYU dataset using the following command:

python monoscene/scripts/train_monoscene.py dataset=NYU NYU_root=$NYU_ROOT NYU_preprocess_root=$NYU_PREPROCESS logdir=$NYU_LOG n_gpus=1 batch_size=2

I get this:

CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 15.73 GiB total capacity; 12.49 GiB already allocated; 93.81 MiB free; 12.96 GiB reserved in total by PyTorch)

My GPU is an RTX 5000, which has 16 GB of memory. I tried batch_size=1, but I get the same error.

Do you have any suggestions? Thanks.

Segmentation Fault

Hi, I tried running the model but I am getting this error:

    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 4226) exited unexpectedly

Regenerate the code

  1. I am using Docker, so you can see all of the installed versions and packages in the Dockerfile:
docker pull sohaibanwaar/monoscene
# You need to pull the git and attach with docker volume using the command below
# repo path = Git clone path
docker run -it --gpus all -v <repo path>:/monoscene sohaibanwaar/nvidia-10.2-ubuntu-conda-python bash

Got an error during inference

python monoscene/scripts/generate_output.py \
    +output_path=$MONOSCENE_OUTPUT \
    dataset=NYU \
    NYU_root=$NYU_ROOT \
    NYU_preprocess_root=$NYU_PREPROCESS \
    n_gpus=1 batch_size=1

About the training data for test submission

Hi, I have a question about the test submission for SemanticKITTI. For the test submission, should we retrain the model on the train+val sequences, or directly use the model trained for validation?

Thanks!

Extend to video

Hello Cao,

Any plan to extend this awesome work to video streams? Something like what monoRecon did.

Pretrained models on other dataset: NuScenes

Hi @anhquancao,

Thanks so much for your paper and your implementation. Do you have a pretrained model on NuScenes? If yes, could you share it? I want to build upon your work on the NuScenes dataset, but there is a large domain gap between SemanticKITTI and NuScenes, so the model pretrained on SemanticKITTI does not work well on NuScenes.

Thanks!

baselines reproduce

Thanks for this great work; I am terribly sorry to bother you with this issue. I found it hard to use these baselines. If possible, could you share your configs and scripts for these baselines?

[image]

cannot find calib

PS F:\Studying\CY-Workspace\MonoScene-master> python monoscene/scripts/eval_monoscene.py dataset=kitti kitti_root=$KITTI_ROOT kitti_preprocess_root=$KITTI_PREPROCESS n_gpus=1 batch_size=1
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
n_relations 4
Using cache found in C:\Users\DELL/.cache\torch\hub\rwightman_gen-efficientnet-pytorch_master
Loading base model ()...Done.
Removing last two layers (global_pool & classifier).
Building Encoder-Decoder model..Done.
Traceback (most recent call last):
File "monoscene/scripts/eval_monoscene.py", line 71, in main
data_module.setup()
File "F:\anaconda\envs\monoscene\lib\site-packages\pytorch_lightning\core\datamodule.py", line 440, in wrapped_fn
fn(*args, **kwargs)
File "F:\Studying\CY-Workspace\MonoScene-master\monoscene\scripts/../..\monoscene\data\semantic_kitti\kitti_dm.py", line 34, in setup
color_jitter=(0.4, 0.4, 0.4),
File "F:\Studying\CY-Workspace\MonoScene-master\monoscene\scripts/../..\monoscene\data\semantic_kitti\kitti_dataset.py", line 60, in init
os.path.join(self.root, "dataset", "sequences", sequence, "calib.txt")
File "F:\Studying\CY-Workspace\MonoScene-master\monoscene\scripts/../..\monoscene\data\semantic_kitti\kitti_dataset.py", line 193, in read_calib
with open(calib_path, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'dataset\sequences\00\calib.txt'

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Dear author, I get the error below when I run the visualization code on a Linux server.

python monoscene/scripts/visualization/NYU_vis_pred.py +file=/path/to/output/file.pkl

The error is:

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.

Aborted (core dumped)
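A possible workaround, as a sketch only (not the repo's code): on a headless server the xcb plugin needs an X display, so one common approach is to render offscreen; whether the Qt build in this environment supports the offscreen platform is an assumption.

import os
os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")  # avoid the xcb plugin, which needs an X server

from mayavi import mlab
mlab.options.offscreen = True  # render to an offscreen buffer instead of opening a window

mlab.points3d([0, 1], [0, 1], [0, 1], mode="cube")
mlab.savefig("offscreen_test.png")  # saving still works without a display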

capture photos with the same viewport as the input RGB image

Excuse me, I use NYU_vis_pred.py to visualize the pkl file and capture the 3D reconstructed scene:
[image]
It is great that the camera is visualized too, but it seems to be a mesh and I cannot take a photo from it.
[image]

I want to take a photo with the same viewpoint as the input RGB image; could you please point me to reference code to achieve this?
(I would like to use the camera parameters to render an image with the same viewpoint as the input image; is there any reference code for this?)
Best regards,
Yihan Wen

SemanticKITTI test submission

Hi,
Thanks for the wonderful work.
Would you please provide the code for generating the test-set submissions on SemanticKITTI? I wrote code to generate the results, but found that the details are not as expected, because the car mIoU is 0. This may be a class-mapping problem.

[image]

Looking forward to your reply.
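For what it's worth, a zero car IoU on the server often points at submitting the 0-19 training ids instead of the original SemanticKITTI label ids. A minimal remapping sketch, assuming the learning_map_inv table from semantic-kitti.yaml and a voxel grid of predicted train ids (the file names and the uint16 output dtype are assumptions, not the benchmark's documented format):

import numpy as np
import yaml

# Assumed paths and shapes, for illustration only.
with open("semantic-kitti.yaml", "r") as f:
    learning_map_inv = yaml.safe_load(f)["learning_map_inv"]  # train id -> original label id

pred = np.load("pred_train_ids.npy")  # e.g. a (256, 256, 32) grid of ids in 0..19

# Build a lookup table and remap before writing the .label file.
lut = np.zeros(max(learning_map_inv) + 1, dtype=np.uint16)
for train_id, label_id in learning_map_inv.items():
    lut[train_id] = label_id
pred_labels = lut[pred.astype(np.int64)]
pred_labels.astype(np.uint16).tofile("000000.label")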

Hidden Test set - Semantic Kitti dataset

Hi, thanks for your amazing work. For the SemanticKITTI data preparation, your code only covers sequences 00-10, which contain the training and val splits. In your paper you mention that the test split is a "hidden split" evaluated on an online server. Could you please tell me how to prepare the test split (sequences 11-21)? It looks like there are no .label and .invalid files for these sequences.

Thanks!

about visualization

(monoscene) ruidong@ubuntu-X299-UD4-Pro:~/workplace/MonoScene$ python monoscene/scripts/visualization/kitti_vis_pred.py +file=/home/ruidong/workplace/MonoScene/outputs/kitti/08/000000.pkl +dataset=kitt
monoscene/scripts/visualization/kitti_vis_pred.py:23: DeprecationWarning: np.float is a deprecated alias for the builtin float. To silence this warning, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
coords_grid = coords_grid.astype(np.float)
Traceback (most recent call last):
File "monoscene/scripts/visualization/kitti_vis_pred.py", line 196, in main
d=7,
File "monoscene/scripts/visualization/kitti_vis_pred.py", line 75, in draw
grid_coords = np.vstack([grid_coords.T, voxels.reshape(-1)]).T
AttributeError: 'tuple' object has no attribute 'T'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Pretrained model on KITTI-360

Hi @anhquancao ,

Thanks so much, and congratulations on the acceptance of your paper at CVPR 2022. Did you try to train your model on the KITTI-360 dataset? I want to evaluate results on KITTI-360; have you ever tried to generate the SSC ground truth for KITTI-360?

Thanks.

Dataloader not loading data

hi @anhquancao
Can you please help me figure out why the DataLoader is not picking up the data?

Command

python monoscene/scripts/generate_output.py     +output_path=$MONOSCENE_OUTPUT     dataset=NYU     NYU_root=$NYU_ROOT     NYU_preprocess_root=$NYU_PREPROCESS     n_gpus=1 batch_size=1

Result

NYU ROOT /monoscene/nyu/NYU_dataset/depthbin/
NYU PREPROCESS ROOT /monoscene/nyu/NYU_dataset/depthbin/preprocess
NYU FRUSTUM SIZE 8
NYU BATCH SIZE 1
NYU NUM WORKERS 1
n_relations 4
Using cache found in /root/.cache/torch/hub/rwightman_gen-efficientnet-pytorch_master
Loading base model ()...Done.
Removing last two layers (global_pool & classifier).
Building Encoder-Decoder model..Done.
0it [00:00, ?it/s]
(monoscene) root@d0a

Input Image

Hi guys,
I want to use a JPG image as input. Can you open-source the code that converts an image into a PKL file?
Thank you.

NYUv2 test set can't be used

After running

python monoscene/scripts/eval_monoscene.py \
    dataset=NYU \
    NYU_root=$NYU_ROOT \
    NYU_preprocess_root=$NYU_PREPROCESS \
    n_gpus=1 batch_size=1

I get:

FileNotFoundError: [Errno 2] No such file or directory: '/home/azuryl/project/MonoScene/NYU/depthbin/NYUtest/NYU0015_0000_color.jpg'

About visualization

Hello, I followed your visualization code, but the visualization results seem different from what you showed. What should I do to get the results you display on your home page?
[image]

TypeError: 'int' object is not subscriptable

(monoscene) ruidong@ubuntu-X299-UD4-Pro:~/workplace/MonoScene$ python monoscene/scripts/train_monoscene.py dataset=kitti enable_log=true kitti_root=$KITTI_ROOT kitti_preprocess_root=$KITTI_PREPROCESS kitti_logdir=$KITTI_LOG n_gpus=2 batch_size=2
exp_kitti_1_FrusSize_8_nRelations4_WD0.0001_lr0.0001_CEssc_geoScalLoss_semScalLoss_fpLoss_CERel_3DCRP_Proj_2_4_8
n_relations (32, 32, 4)
Traceback (most recent call last):
File "monoscene/scripts/train_monoscene.py", line 118, in main
class_weights=class_weights,
File "/home/ruidong/workplace/MonoScene/monoscene/models/monoscene.py", line 80, in init
context_prior=context_prior,
File "/home/ruidong/workplace/MonoScene/monoscene/models/unet3d_kitti.py", line 62, in init
self.feature * 4, self.feature * 4, size_l3, bn_momentum=bn_momentum
File "/home/ruidong/workplace/MonoScene/monoscene/models/CRP3D.py", line 15, in init
self.flatten_size = size[0] * size[1] * size[2]
TypeError: 'int' object is not subscriptable

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

about test

FileNotFoundError: [Errno 2] No such file or directory: '/home/ruidong/workplace/MonoScene/trained_models/monoscene_kitti.ckpt'

The last line printed during training is:
Epoch 29: 100%|████████| 2325/2325 [1:06:52<00:00, 1.73s/it, loss=3.89, v_num=]

Dynamic object in SSC

Thank you for your impressive work.
I'm a little confused about dynamic objects in SSC. When using the SemanticKITTI dataset, I found that dynamic objects (such as vehicles) leave long traces in the GT voxels.
After I trained the model on SemanticKITTI, I observed the same thing.
I wonder how you deal with moving objects? In the demo you show, the moving objects don't leave long traces.

I am looking forward to your reply.
Best,

about train

PS F:\Studying\CY-Workspace\MonoScene-master> python monoscene/scripts/train_monoscene.py dataset=kitti enable_log=true kitti_root=$KITTI_ROOT kitti_preprocess_root=$KITTI_PREPROCESS kitti_logdir=$KITTI_LOG n_gpus=1 batch_size= 1
LexerNoViableAltException: 1
^
See https://hydra.cc/docs/next/advanced/override_grammar/basic for details

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Porting the work of this paper to a new dataset

Hello author, first of all thank you for your great work. I want to apply your work directly to the nuScenes dataset; is that possible? Does the nuScenes dataset need point cloud data to assist in generating the voxel data?

source of the NYUv2 dataset

Thanks for the wonderful work.

I wonder if you can provide the original source of the NYUv2 dataset you use, because I am confused about reading the depth data. The depth is stored as uint16 PNGs, and I don't know how to convert it to absolute metric depth (in meters).

By the way, I found that the depth transformation function in this toolbox does not match the data in this repo:
http://cs.nyu.edu/~silberman/code/toolbox_nyu_depth_v2.zip

% Parameters for making depth absolute.
depthParam1 = 351.3;
depthParam2 = 1092.5;
function imgDepthAbs = depth_rel2depth_abs(imgDepthOrig)
  assert(isa(imgDepthOrig, 'double'));

  [H, W] = size(imgDepthOrig);
  assert(H == 480);
  assert(W == 640);

  camera_params;

  imgDepthAbs = depthParam1 ./ (depthParam2 - imgDepthOrig);
  
  imgDepthAbs(imgDepthAbs > maxDepth) = maxDepth;
  imgDepthAbs(imgDepthAbs < 0) = 0;
end

Look forward to your reply. Thanks.
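For reference, a direct Python translation of the toolbox function quoted above (a sketch only: depthParam1/2 are copied from the snippet, maxDepth = 10 m is an assumed value from the toolbox's camera_params, and whether this repo's uint16 PNGs actually store this relative encoding is exactly the open question here):

import numpy as np

DEPTH_PARAM1 = 351.3
DEPTH_PARAM2 = 1092.5
MAX_DEPTH = 10.0  # assumption: value of maxDepth from camera_params.m

def depth_rel2depth_abs(img_depth_orig):
    # Convert the toolbox's relative Kinect depth to absolute depth in meters.
    assert img_depth_orig.shape == (480, 640)
    img_depth_abs = DEPTH_PARAM1 / (DEPTH_PARAM2 - img_depth_orig.astype(np.float64))
    return np.clip(img_depth_abs, 0.0, MAX_DEPTH)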

How to fix the grid size in mayavi?

Dear author,
I found that if only one object is in the scene, the grid size is larger than when two objects are in the scene.
My code is below.

    # Compute the voxels coordinates
    grid_coords = get_grid_coords(
        [voxels.shape[0], voxels.shape[1], voxels.shape[2]], voxel_size
    )

    # Attach the predicted class to every voxel

    grid_coords = np.vstack([grid_coords.T, voxels.reshape(-1)]).T


    # Remove empty and unknown voxels
    occupied_voxels = grid_coords[(grid_coords[:, 3] > 0) & (grid_coords[:, 3] < 255)]

    # Draw occupied voxels
    plt_plot = mlab.points3d(
        occupied_voxels[:, 0],
        occupied_voxels[:, 1],
        occupied_voxels[:, 2],
        occupied_voxels[:, 3],
        colormap="viridis",
        scale_factor=voxel_size - 0.5 * voxel_size,
        mode="cube",
        opacity=1.0,
        vmin=0,
        vmax=12,
    )

    colors = np.array(
        [
            [100, 150, 245, 255],
            [100, 230, 245, 255],
            [30, 60, 150, 255],
            [80, 30, 180, 255],
            [100, 80, 250, 255],
            [255, 30, 30, 255],
            [255, 40, 200, 255],
            [150, 30, 90, 255],
            [255, 0, 255, 255],
            [255, 150, 255, 255],
            [75, 0, 75, 255],
            [175, 0, 75, 255],
            [255, 200, 0, 255],
            [255, 120, 50, 255],
            [0, 175, 0, 255],
            [135, 60, 0, 255],
            [150, 240, 80, 255],
            [255, 240, 150, 255],
            [255, 0, 0, 255],
        ]
    ).astype(np.uint8)

    plt_plot.glyph.scale_mode = "scale_by_vector"

    plt_plot.module_manager.scalar_lut_manager.lut.table = colors

    mlab.show()
    mlab.savefig(filename=save_path)

[images: renderings showing the different grid sizes]

ImportError: cannot import name 'get_num_classes' from 'torchmetrics.utilities.data'

Something went wrong with my machine and I reinstalled Ubuntu. I re-cloned the code and only kept the data, but when I follow the README to do the installation, it prints:

(monoscene) potato@ubuntu-X299-UD4-Pro:/workplace/MonoScene$ pip install -e ./
Obtaining file:///home/potato/workplace/MonoScene
Installing collected packages: monoscene
Running setup.py develop for monoscene
Successfully installed monoscene-0.0.0
(monoscene) potato@ubuntu-X299-UD4-Pro:/workplace/MonoScene$ python monoscene/scripts/train_monoscene.py dataset=kitti enable_log=true kitti_root=$KITTI_ROOT kitti_preprocess_root=$KITTI_PREPROCESS kitti_logdir=$KITTI_LOG n_gpus=1 batch_size=1 sem_scal_loss=False
Traceback (most recent call last):
File "monoscene/scripts/train_monoscene.py", line 1, in
from monoscene.data.semantic_kitti.kitti_dm import KittiDataModule
File "/home/potato/workplace/MonoScene/monoscene/data/semantic_kitti/kitti_dm.py", line 3, in
import pytorch_lightning as pl
File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/init.py", line 20, in
from pytorch_lightning import metrics # noqa: E402
File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/metrics/init.py", line 15, in
from pytorch_lightning.metrics.classification import ( # noqa: F401
File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/metrics/classification/init.py", line 14, in
from pytorch_lightning.metrics.classification.accuracy import Accuracy # noqa: F401
File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/metrics/classification/accuracy.py", line 18, in
from pytorch_lightning.metrics.utils import deprecated_metrics, void
File "/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/pytorch_lightning/metrics/utils.py", line 22, in
from torchmetrics.utilities.data import get_num_classes as _get_num_classes
ImportError: cannot import name 'get_num_classes' from 'torchmetrics.utilities.data' (/home/potato/anaconda3/envs/monoscene/lib/python3.7/site-packages/torchmetrics/utilities/data.py)

Questions about cross-entropy loss

Dear authors, thanks for your great work! In your paper, you say that "the losses are computed only where y is defined". Does this mean that you do not add supervision on non-occupied voxels and only use the multi-class classification loss on occupied voxels? If that is the case, how can the model identify which voxels are occupied?

Training Time

I like this work. How long did it take you to train the model ?

Weird results on Kitti360

Hi @anhquancao, thanks for your great work!

I've tested the inference pipeline on KITTI-360 (the same sequence as in your README guideline) using the monoscene_kitti.ckpt (SemanticKITTI) pretrained model, but the visualization results are not as expected. Do you have any idea about this?
[image]

BRs,
TuanHo

The model does not converge on my own dataset

Dear author,
I am trying to train the model on another dataset. However, the loss stays around 22 and decreases very slowly. Does the model converge quickly on the KITTI dataset? If possible, could you show a loss curve?

train log file

Dear authors, thanks for your great work! I'd like to know whether it would be convenient for you to provide the log files from training; I cannot reproduce the same results at present and would like to compare against your logs.
Thank you.

latex figure

Hello, how did you add the colored squares to the segmentation figures in the paper? Could you give me the LaTeX code? Thanks.

how to infer on a single RGB image

[image]

If I understand correctly, the Features Line of Sight Projection needs to know the 3D voxel positions and then project them onto the 2D image. But when running inference on an RGB image that is not from NYU, we don't know the 3D voxel positions, so how can I do the projection?
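For intuition, a sketch of the projection step being discussed (not the repo's FLoSP implementation): the voxel grid is a fixed volume placed relative to the camera, so its 3D centers exist by construction, and only the camera intrinsics (plus the assumed volume placement) are needed to project them onto an arbitrary image.

import numpy as np

def project_voxels(voxel_centers_cam, K, img_h, img_w):
    # voxel_centers_cam: (N, 3) voxel centers expressed in the camera frame.
    # Returns (N, 2) pixel coordinates and a mask of voxels that land inside the image.
    x, y, z = voxel_centers_cam.T
    z_safe = np.where(np.abs(z) < 1e-6, 1e-6, z)  # avoid division by zero
    u = K[0, 0] * x / z_safe + K[0, 2]
    v = K[1, 1] * y / z_safe + K[1, 2]
    valid = (z > 0) & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return np.stack([u, v], axis=1), valid

2D features are then sampled at (u, v) for the valid voxels and zeroed elsewhere; for an image outside NYU/KITTI you would have to assume an intrinsics matrix K and a scene extent.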

Cuda out of memory

Dear author, you said to use a smaller 2D backbone by changing the basemodel_name and num_features, and that the pretrained model names are listed here; you suggested EfficientNet-B5 to reduce memory. I want to know the B5 weight name and the value of num_features.
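One way to look up that value yourself, as a sketch (assuming the same torch.hub repo that appears in the training logs exposes a B5 variant; the exact model name below is an assumption, not confirmed by the repo):

import torch

# "tf_efficientnet_b5" is an assumed hub entry point; substitute the variant you actually use.
model = torch.hub.load("rwightman/gen-efficientnet-pytorch", "tf_efficientnet_b5", pretrained=True)
print(model.classifier.in_features)  # the backbone's final channel count, i.e. the candidate num_features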
