
z-x-yang / cfbi


The official implementation of CFBI(+): Collaborative Video Object Segmentation by (Multi-scale) Foreground-Background Integration.

License: BSD 3-Clause "New" or "Revised" License

Python 99.29% Shell 0.71%
computer-vision pytorch-implementation video-object-segmentation

cfbi's Introduction

A Github Pages template for academic websites. This was forked (then detached) by Stuart Geiger from the Minimal Mistakes Jekyll Theme, which is © 2016 Michael Rose and released under the MIT License. See LICENSE.md.

I think I've got things running smoothly and fixed some major bugs, but feel free to file issues or make pull requests if you want to improve the generic template / theme.

Note: if you are using this repo and now get a notification about a security vulnerability, delete the Gemfile.lock file.

Instructions

  1. Register a GitHub account if you don't have one and confirm your e-mail (required!)
  2. Fork this repository by clicking the "fork" button in the top right.
  3. Go to the repository's settings (rightmost item in the tabs that start with "Code", should be below "Unwatch"). Rename the repository "[your GitHub username].github.io", which will also be your website's URL.
  4. Set site-wide configuration and create content & metadata (see below -- also see this set of diffs showing what files were changed to set up an example site for a user with the username "getorg-testacct")
  5. Upload any files (like PDFs, .zip files, etc.) to the files/ directory. They will appear at https://[your GitHub username].github.io/files/example.pdf.
  6. Check status by going to the repository settings, in the "GitHub pages" section
  7. (Optional) Use the Jupyter notebooks or python scripts in the markdown_generator folder to generate markdown files for publications and talks from a TSV file.

See more info at https://academicpages.github.io/

To run locally (not on GitHub Pages, to serve on your own computer)

  1. Clone the repository and make updates as detailed above
  2. Make sure you have ruby-dev, bundler, and nodejs installed: sudo apt install ruby-dev ruby-bundler nodejs
  3. Run bundle clean to clean up the directory (no need to run --force)
  4. Run bundle install to install ruby dependencies. If you get errors, delete Gemfile.lock and try again.
  5. Run bundle exec jekyll liveserve to generate the HTML and serve it from localhost:4000; the local server will automatically rebuild and refresh the pages on change.

Changelog -- bugfixes and enhancements

There is one logistical issue with a ready-to-fork template theme like academic pages that makes it a little tricky to get bug fixes and updates to the core theme. If you fork this repository, customize it, then pull again, you'll probably get merge conflicts. If you want to save your various .yml configuration files and markdown files, you can delete the repository and fork it again. Or you can manually patch.

To support this, all changes to the underlying code appear as a closed issue with the tag 'code change' -- get the list here. Each issue thread includes a comment linking to the single commit or a diff across multiple commits, so those with forked repositories can easily identify what they need to patch.

cfbi's People

Contributors

z-x-yang


cfbi's Issues

Training details

Hi, this is good work!
I have some questions: how many machines did you use when training your CFBI net, and how many GPUs per machine? I saw this link, but I cannot tell whether your training mode is single-machine multi-GPU or multi-machine multi-GPU.

Some question

I'm implementing CFBI but I have some questions (see the sketch after this list).
1. How do you apply the collaborative instance-level attention vector to the CE (Collaborative Ensembler)?
2. How do you fuse the low-level backbone features and the decoded features? Just concat?
Thank you.
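
A minimal sketch of my current reading of the paper (not the repo's exact code; the module and parameter names here are my own): the instance-level vector gates the ensembler's conv features channel-wise, and low-level/decoded features are fused by upsampling plus concatenation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class IAGate(nn.Module):
    # Scale conv features channel-wise with an instance-level vector.
    def __init__(self, attention_dim, channels):
        super().__init__()
        self.fc = nn.Linear(attention_dim, channels)

    def forward(self, x, ia_vector):
        # ia_vector: (B, attention_dim) collaborative instance-level vector.
        gate = 1. + torch.tanh(self.fc(ia_vector))    # (B, C)
        return x * gate.unsqueeze(-1).unsqueeze(-1)   # broadcast over H, W

def fuse(low_level, decoded):
    # Upsample decoded features to the low-level resolution, then concat.
    decoded = F.interpolate(decoded, size=low_level.shape[-2:],
                            mode='bilinear', align_corners=False)
    return torch.cat([low_level, decoded], dim=1)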

inference on 4k images.

How can I run inference on 4K (3840 x 2160) images? Presently DAVIS and YouTube-VOS use 480p resolution. (A downscale/upscale sketch follows.)
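
One hedged workaround (my own sketch, not something the repo provides): run the model at its usual 480p-ish resolution and resample, since global matching memory grows with the pixel count.

import torch
import torch.nn.functional as F

def segment_4k(frame_4k, segment_480p):
    # frame_4k: (1, 3, 2160, 3840) float tensor; segment_480p: any callable
    # returning (1, K, h, w) logits at the model's working resolution.
    small = F.interpolate(frame_4k, size=(480, 854), mode='bilinear',
                          align_corners=False)
    logits = segment_480p(small)
    logits_4k = F.interpolate(logits, size=(2160, 3840), mode='bilinear',
                              align_corners=False)
    return logits_4k.argmax(dim=1)   # (1, 2160, 3840) label map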

Clarification on usage

How do I input a generic video / mp4?

For example, this repo provides some pointers:
https://github.com/senguptaumd/Background-Matting

Spit out individual PNGs:

cd Background-Matting/sample_video
mkdir input background
ffmpeg -i teaser.mov input/%04d_img.png -hide_banner

Apply segmentation:

cd Background-Matting/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd ../..
python test_segmentation_deeplab.py -i sample_video/input

I can see the script ./ytb_eval_fast.sh evaluating, but where do the files go / where are they created? I couldn't find them. (I'm on SSH in the cloud.)
locate -b '00005.jpg' didn't turn up any results.

Processing Seq 0557542e8d [11/474]:
Ref Frame: 00000.jpg, Time: 0.09423136711120605
Frame: 00005.jpg, Obj Num: 1, Time: 1.1294879913330078
Frame: 00010.jpg, Obj Num: 1, Time: 0.855078935623169
Frame: 00015.jpg, Obj Num: 1, Time: 0.8564548492431641
Frame: 00020.jpg, Obj Num: 1, Time: 0.84938645362854
Frame: 00025.jpg, Obj Num: 1, Time: 0.8649313449859619
Frame: 00030.jpg, Obj Num: 1, Time: 0.8523914813995361
Frame: 00035.jpg, Obj Num: 1, Time: 0.8554131984710693
Frame: 00040.jpg, Obj Num: 1, Time: 0.8590476512908936
Frame: 00045.jpg, Obj Num: 1, Time: 0.8608062267303467
Frame: 00050.jpg, Obj Num: 1, Time: 0.845700740814209
Frame: 00055.jpg, Obj Num: 1, Time: 0.8705801963806152
Frame: 00060.jpg, Obj Num: 1, Time: 0.8516411781311035
Frame: 00065.jpg, Obj Num: 1, Time: 0.8602666854858398
Frame: 00070.jpg, Obj Num: 1, Time: 0.8541202545166016
Frame: 00075.jpg, Obj Num: 1, Time: 0.8581304550170898
Frame: 00080.jpg, Obj Num: 1, Time: 0.8512144088745117
Frame: 00085.jpg, Obj Num: 1, Time: 0.8659780025482178
Frame: 00090.jpg, Obj Num: 1, Time: 0.8513703346252441
Frame: 00095.jpg, Obj Num: 1, Time: 0.8629040718078613
Frame: 00100.jpg, Obj Num: 1, Time: 0.8581538200378418
Frame: 00105.jpg, Obj Num: 1, Time: 0.8561139106750488
Frame: 00110.jpg, Obj Num: 1, Time: 0.8489248752593994
Frame: 00115.jpg, Obj Num: 1, Time: 0.8641960620880127
Frame: 00120.jpg, Obj Num: 1, Time: 0.8525502681732178
Frame: 00125.jpg, Obj Num: 1, Time: 0.8621423244476318
Frame: 00130.jpg, Obj Num: 1, Time: 0.8552994728088379
Frame: 00135.jpg, Obj Num: 1, Time: 0.8562443256378174
Frame: 00140.jpg, Obj Num: 1, Time: 0.8510112762451172
Frame: 00145.jpg, Obj Num: 1, Time: 0.8548166751861572
Frame: 00150.jpg, Obj Num: 1, Time: 0.8519599437713623
Frame: 00155.jpg, Obj Num: 1, Time: 0.8683714866638184
Frame: 00160.jpg, Obj Num: 1, Time: 0.8495943546295166
Frame: 00165.jpg, Obj Num: 1, Time: 0.863872766494751
Frame: 00170.jpg, Obj Num: 1, Time: 0.8625590801239014
Frame: 00175.jpg, Obj Num: 1, Time: 0.8523445129394531

UPDATE
I rsynced content back to my desktop and can see DAVIS/annotations/480p/bear and a lot of the images, e.g. bmx etc.
I'm not sure how to input a new video, or where the output is supposed to end up.

UPDATE 2
Running this, I can see files show up in the checkpoint folder:
rsync --exclude '.git' --progress -Pav -e "ssh -i p2-machinelearning.pem" [email protected]://home/ubuntu/gitWorkspace/CFBI /home/jp/Documents/gitWorkspace/

00% 10.36kB/s 0:00:00 (xfr#327, to-chk=33/27176)
CFBI/result/resnet101_cfbi/eval/youtubevos/youtubevos_resnet101_cfbi_ckpt_unknown/Annotations/056a2ff390/
CFBI/result/resnet101_cfbi/eval/youtubevos/youtubevos_resnet101_cfbi_ckpt_unknown/Annotations/056a2ff390/00000.png
2,777 100% 6.80kB/s 0:00:00 (xfr#328, to-chk=32/27176)
CFBI/result/resnet101_cfbi/eval/youtubevos/youtubevos_resnet101_cfbi_ckpt_unknown/Annotations/056a2ff390/00005.png
3,703 100% 9.06kB/s 0:00:00 (xfr#329, to-chk=31/27176)
CFBI/result/resnet101_cfbi/eval/youtubevos/youtubevos_resnet101_cfbi_ckpt_unknown/Annotations/056a2ff390/00010.png
3,724 100% 9.11kB/s 0:00:00 (xfr#330, to-chk=30/27176)
CFBI/result/resnet101_cfbi/eval/youtubevos/youtubevos_resnet101_cfbi_ckpt_unknown/Annotations/056a2ff390/00015.png
3,710 100% 9.08kB/s 0:00:00 (xfr#331, to-chk=29/27176)
CFBI/result/resnet101_cfbi/eval/youtubevos/youtubevos_resnet101_cfbi_ckpt_unknown/Annotations/056a2ff390/00020.png
3,907 100% 9.54kB/s 0:00:00 (xfr#332, to-chk=28/27176)
CFBI/result/resnet101_cfbi/eval/youtubevos/youtubevos_resnet101_cfbi_ckpt_unknown/Annotations/056a2ff390/00025.png
3,899 100% 9.52kB/s 0:00:00 (xfr#333, to-chk=27/27176)
CFBI/result/resnet101_cfbi/eval/youtubevos/youtubevos_resnet101_cfbi_ckpt_unknown/Annotations/056a2ff390/00030.png
3,740 100% 9.13kB/s 0:00:00 (xfr#334, to-chk=26/27176)
CFBI/result/resnet101_cfbi/eval/youtubevos/youtubevos_resnet101_cfbi_ckpt_unknown/Annotations/056a2ff390/00035.png

Digging deeper, I can see the YouTube input.
Screenshot from 2020-08-24 15-52-57

I should be able to craft a script to input a file easily enough; I'll post once I have it ready. (It should be just as above with ffmpeg; I just need to tweak the inputs and how the eval script picks things up.)

Screenshot from 2020-08-24 16-00-58


UPDATE 3
It seems the meta.json is tangled up with the eval script; it would be nice to be able to just throw an mp4/mov file at it and process the video frames/PNGs. A sketch of generating the expected layout follows.
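
Here is a rough sketch of what I have in mind (my assumptions, not verified against the repo's dataset code: a YouTube-VOS-style valid/JPEGImages/<seq>/ frame folder plus a meta.json listing each object's frames):

import glob
import json
import os

seq = 'my_video'   # hypothetical sequence name
frames = sorted(glob.glob(f'valid/JPEGImages/{seq}/*.jpg'))
frame_ids = [os.path.splitext(os.path.basename(f))[0] for f in frames]

# Single object with id '1', present in every frame (adjust as needed).
meta = {'videos': {seq: {'objects': {'1': {'frames': frame_ids}}}}}
with open('valid/meta.json', 'w') as f:
    json.dump(meta, f, indent=2)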

The influence of batch size and learning rate on reproducing the DAVIS2017-training-only result

Hi:
Thanks for your great work!
I have some questions about reproducing the DAVIS2017 result.
(1) I trained CFBI using only DAVIS2017 and evaluated it on the DAVIS2017 validation set. Due to the limitation of GPU memory, I first used 2 RTX 2080 Ti GPUs with 1 sample per GPU (a total batch size of 2) to reproduce the result; it is about 4 points lower than the paper's, which still seems normal. However, when I use 4 RTX 2080 Ti GPUs (a total batch size of 4), the result is even lower. As training continues, the loss and the performance on the validation set do not improve, but degrade after 10K iterations.
So I would like to ask how much CFBI's performance is affected by batch size. If training only on the DAVIS2017 dataset, how should the learning rate and batch size be set to reproduce the paper's results? LR = 0.06 and BS = 3 per Tesla V100, as the paper mentions? Note that the backbone parameters are frozen during training, and GN is used in place of the model's other BN layers.
(2) How much effect does the current sequence length have on the experimental results? For DAVIS2017-only training, should it be 3 or 4?
Look forward to your reply!
Thanks.

Fatal mistake in Sequential Training.

Some people cloned the training code before I withdrew it. However, we found a fatal mistake in Sequential Training, which decreases performance by more than 1 point.

If you want a fixed version, please contact us by email.

How FPS is calculated

Hi,

May I know how to calculate FPS?

In datasets such as DAVIS2017, each frame might contain several objects to track, but the proposed method needs to track each object separately in each frame. In that case, how is the FPS calculated? If the object number is N and each object's segmentation cost is T seconds per frame, does it mean T x N seconds are needed for each frame?

Thanks!
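
To make the question concrete, here is the arithmetic being asked about (illustrative numbers only, not the authors' accounting):

# Per-frame times like those in the logs above (~0.86 s, 1 object each).
frame_times = [0.86] * 35
n_objects = 3  # hypothetical frame with 3 objects

# If each object is matched independently, per-frame cost becomes T * N.
total_seconds = sum(t * n_objects for t in frame_times)
fps = len(frame_times) / total_seconds
print(f'{fps:.2f} FPS with {n_objects} objects per frame')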


Error occurred when I want to see the TensorBoard

I want to see the TensorBoard when fine-tuning CFBI on DAVIS 2017, so I changed self.TRAIN_TBLOG in the file resnet101_cfbi_davis_finetune.py to True. But after the change, this error occurred:

Traceback (most recent call last):
File "tools/train_net.py", line 73, in
main()
File "tools/train_net.py", line 70, in main
mp.spawn(main_worker, nprocs=cfg.TRAIN_GPUS, args=(cfg,))
File "/root/miniconda3/envs/pytracking/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "/root/miniconda3/envs/pytracking/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/root/miniconda3/envs/pytracking/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/root/projects/code/full_CFBI/full_CFBI/tools/train_net.py", line 13, in main_worker
trainer.sequential_training()
File "./networks/engine/train_manager.py", line 275, in sequential_training
self.process_log(
File "./networks/engine/train_manager.py", line 343, in process_log
for seq_step, running_loss, running_iou in zip(range(running_losses), running_losses, running_ious):
TypeError: 'list' object cannot be interpreted as an integer

Do you know how I can solve this error?
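
My own reading of the traceback (a sketch of a possible fix, not the author's patch): zip(range(running_losses), ...) passes the list itself to range(), which expects an integer; range(len(running_losses)) would match the apparent intent.

# Hypothetical stand-ins for the two lists from train_manager.py:
running_losses = [0.52, 0.47]
running_ious = [0.71, 0.76]

for seq_step, running_loss, running_iou in zip(
        range(len(running_losses)), running_losses, running_ious):
    print(seq_step, running_loss, running_iou)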

CUDA out of memory error

Thanks for your nice code.
I tested on YouTube-VOS, but it seems that a 12 GB GPU is not enough for it.
Can you tell me how much memory is needed for testing? And for training?

An error occurred during evaluation on the Davis dataset

Hi all,

This is the command I ran, and I got an unexpected error. I had already changed the path to DAVIS correctly.

python evaluation_method.py --task semi-supervised --results_path my_results/semi-supervised
Evaluating sequences for the semi-supervised task...
0%| | 0/30 [00:00<?, ?it/s]bike-packing frame 00001 not found!
The frames have to be indexed PNG files placed inside the corespondent sequence folder.
The indexes have to match with the initial frame.
IOError: No such file or directory
0%| | 0/30 [00:00<?, ?it/s]
Could anyone help me to figure out the problem?

About reproducing the results on the DAVIS2017 dataset

Hi, I want to reproduce the result on the DAVIS2017 dataset, but the provided config is only for training on YouTube-VOS and fine-tuning on DAVIS2017. If I only train on DAVIS2017 and test on DAVIS2017, is there an additional config file, or do parameters need to be adjusted?

Thanks!

Training code

Hi author:
Thanks for your great work!
I am very interested in your work and am looking forward to your training code. I have sent two emails asking for the training code, but I haven't received any reply. My email addresses are [email protected] and [email protected].

evaluation on DAVIS2016

Hello, I am using the pre-trained weights (trained on YT+DAVIS) named ResNet101-CFBI-DAVIS, J&F = 81.9. The evaluation result on DAVIS17 is correct, but the score on DAVIS16 is lower than the paper's result (using the same weights). Which weights should I use, or how should I run the evaluation on DAVIS16?
Thanks!!

How to make the key mask frame?

How do I make the mask for the first key frame?
It is different from other 24-bit color images.
I tried to make one, but the result is bad.

About find the best checkpoint

Hi!
When I want to train the model on the DAVIS and YouTube-VOS datasets, I find the code only records the loss and IoU curves on the 'train' split, without the curves on the 'val' split. How can I find the best model on the 'val' split? Is the only way to test checkpoints on the 'val' split one by one?
Could you tell me how you find the best checkpoint?
Thanks very much!

Why doesn't CFBI need Pretraining?

Space-Time Memory Networks (Oh et al., ICCV 2019) relies on pretraining on a synthetic dataset. I wonder why CFBI doesn't need pretraining?

Multi-GPU

How can I use a multi-GPU setup with your code? (Really great work, btw.)
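
For reference, here is a minimal single-machine multi-GPU sketch mirroring the mp.spawn pattern visible in the tracebacks elsewhere on this page (cfg.TRAIN_GPUS is the repo's config field; the rest is generic PyTorch DDP boilerplate, not the repo's actual code):

import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def main_worker(gpu, world_size):
    # One process per GPU; rank equals the local GPU index on one machine.
    dist.init_process_group(backend='nccl',
                            init_method='tcp://127.0.0.1:23456',
                            world_size=world_size, rank=gpu)
    torch.cuda.set_device(gpu)
    # ... build the model, wrap it in DistributedDataParallel, and train.

if __name__ == '__main__':
    world_size = torch.cuda.device_count()  # e.g. cfg.TRAIN_GPUS
    mp.spawn(main_worker, nprocs=world_size, args=(world_size,))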

Training

I'm trying to add more cues to the instance-level attention.
I need to know when sequential training switches to another video during an epoch.
I've studied your train_manager.py, cfbi.py, and datasets.py, but I don't know if you feed the whole batch to cfbi.forward() or just one frame.
(Sorry, I'm kind of new to PyTorch.)

Is the whole batch fed into train_manager.py, line 254, from train_manager.py, lines 217-223, or is there some separation?

About YouTube-VOS testing

Hi, thanks for your great work!

By the way, I have a question about testing CFBI on the YouTube-VOS dataset.
How is the input stride set: 1 (all frames) or 5 (the default)?
And how is the inference speed calculated?

About the backbone file.

I notice that the backbone you chose is ResNet-101 + DeepLabv3+. Could you please provide a link to download the original file of this backbone (resnet101-deeplabv3p.pth.tar)? Thank you.

ensembler modification

I've made a modification to the CollaborativeEnsembler (in ensembler.py) params.
In cfbi.py I've changed the size of the attention_dim param for self.dynamic_seghead.
I'm trying to drop the weights of the various IA_gate modules in the CollaborativeEnsembler so I can retrain them with the new attention_dim size.
The problem is that I can't figure out where those are in the state_dict.
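
One way to locate and drop them (a hedged sketch; it assumes the parameter names contain the substring "IA_gate" — print the keys first to confirm the naming in your checkpoint):

import torch

ckpt = torch.load('checkpoint.pth', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)  # handle either checkpoint layout

# Inspect candidate keys to find where the IA_gate parameters live.
for k in state_dict:
    if 'IA_gate' in k:
        print(k)

# Drop them and load non-strictly, so the remaining weights still apply
# and the IA_gate layers keep their fresh random initialization.
# `model` is your modified CFBI network instance (defined elsewhere).
filtered = {k: v for k, v in state_dict.items() if 'IA_gate' not in k}
missing, unexpected = model.load_state_dict(filtered, strict=False)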

foreground and background predictions confusions.

I tried the DAVIS-2017 test-dev sequence named horsejump-stick, and there the algorithm's predictions confuse the foreground and background. A few example frames are attached (00025, 00030). Can you please let me know what may have caused this?

How to create mask for 1st frame?

I am trying to create my own dataset for inference. I created a mask for the first frame and then supplied it with my own dataset, but I am getting a CUDA out-of-memory error.

Traceback (most recent call last):
File "tools/eval_net.py", line 72, in
main()
File "tools/eval_net.py", line 69, in main
evaluator.evaluating()
File ".\networks\engine\eval_manager.py", line 181, in evaluating
all_pred, current_embedding = self.model.forward_for_eval(ref_emb, ref_m, prev_emb, prev_m, current_img, gt_ids=obj_num, pred_size=[ori_height,ori_width])
File ".\networks\cfbi\cfbi.py", line 93, in forward_for_eval
tf_board=False)
File ".\networks\cfbi\cfbi.py", line 186, in before_seghead_process
use_float16=cfg.MODEL_FLOAT16_MATCHING)
File ".\networks\layers\matching.py", line 325, in global_matching_for_eval
reference_embeddings_flat, query_embeddings_flat, reference_labels_flat, n_chunks)
File ".\networks\layers\matching.py", line 125, in _nearest_neighbor_features_per_object_in_chunks
wrong_label_mask)
File ".\networks\layers\matching.py", line 81, in _nn_features_per_object_for_chunk
WRONG_LABEL_PADDING_DISTANCE)
RuntimeError: CUDA out of memory. Tried to allocate 26.97 GiB (GPU 0; 11.00 GiB total capacity; 558.30 MiB already allocated; 8.43 GiB free; 594.00 MiB reserved in total by PyTorch)

Can you please let me know how I can create the mask for the first frame? I have also attached the mask file (00000).

How to do quantitative testing?

How do you perform quantitative tests to generate the specific metrics shown in the attached screenshot? You don't seem to publish the code for quantitative testing.

lower results of evaluation on youtube-vos and davis2017

I trained CFBI and evaluated it on YouTube-VOS and DAVIS2017, but the results are lower than yours. The best result on YouTube-VOS is score 0.796, J_seen 0.789, J_unseen 0.740, F_seen 0.833, F_unseen 0.81; these results are about 0.03 lower than those on GitHub. The best result on DAVIS2017 is J&F-Mean 0.774, J-Mean 0.751, F-Mean 0.796; these results are about 0.04 lower than those on GitHub.

I set batch_size to 4 when training on YouTube-VOS and global_chunks to 20 when evaluating it; I set batch_size to 2 when training on DAVIS2017. I didn't download the datasets from the links in the README.md, but from the official website. I set self.TRAIN_TBLOG to True in resnet101_cfbi_davis_finetune.py when training on DAVIS2017 and made the code changes according to your answer in #44 (comment).

Do you know why the results are not ideal? Did I do something wrong?

How to make the 4-bit annotation image?

I want to make my own dataset, but I do not know how to get the 4-bit-depth annotation. Photoshop can only make 8-bit ones, and when I use those, my GPU memory is not enough.
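
A hedged sketch of one way to write such masks (my own code, not from the repo): store object IDs in a palette ('P') mode PNG with the standard Pascal VOC / DAVIS palette; Pillow's bits=4 option then saves it as a 4-bit indexed PNG (fine for up to 16 IDs).

import numpy as np
from PIL import Image

def voc_palette(n=256):
    # Standard Pascal VOC / DAVIS color palette.
    palette = np.zeros((n, 3), dtype=np.uint8)
    for i in range(n):
        c = i
        for j in range(8):
            palette[i, 0] |= ((c >> 0) & 1) << (7 - j)
            palette[i, 1] |= ((c >> 1) & 1) << (7 - j)
            palette[i, 2] |= ((c >> 2) & 1) << (7 - j)
            c >>= 3
    return palette.flatten().tolist()

# `mask` holds object IDs (0 = background, 1 = first object, ...).
mask = np.zeros((480, 854), dtype=np.uint8)
mask[100:200, 100:300] = 1  # toy rectangle standing in for a real annotation

img = Image.fromarray(mask, mode='P')
img.putpalette(voc_palette())
img.save('00000.png', bits=4)  # 4-bit indexed PNG, DAVIS/YouTube-VOS style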

different and weird result

I run the code with the bigs folder in the dataset.
When I test it with the default dataset, the result is good (first screenshot).
But when I change the mask in the first mask image, it gives a weird result (second screenshot).
The left side of each photo is the first key frame.

float16 flag error

Hi! Thank you for sharing this great repo! I was using it to infer on my own data, and when turning on float16 (float16=True), I got the following error:

Ref Frame: 00001.jpg, Time: 4.778830528259277
Traceback (most recent call last):
  File "tools/eval_net.py", line 65, in <module>
    main()
  File "tools/eval_net.py", line 62, in main
    evaluator.evaluating()
  File "./networks/engine/eval_manager.py", line 189, in evaluating
    all_pred, current_embedding = self.model.forward_for_eval(ref_emb, ref_m, prev_emb, prev_m, current_img, gt_ids=obj_num, pred_size=[ori_height,ori_width])
  File "./networks/cfbi/cfbi.py", line 93, in forward_for_eval
    tf_board=False)
  File "./networks/cfbi/cfbi.py", line 184, in before_seghead_process
    use_float16=cfg.MODEL_FLOAT16_MATCHING)
  File "./networks/layers/matching.py", line 305, in global_matching_for_eval
    reference_embeddings_flat, query_embeddings_flat, reference_labels_flat, n_chunks)
  File "./networks/layers/matching.py", line 107, in _nearest_neighbor_features_per_object_in_chunks
    ref_square = reference_embeddings_flat.pow(2).sum(1)
RuntimeError: "pow" not implemented for 'Half'

I wonder if there's a solution to this, or should I use some mixed-precision package? Thank you very much for your help!
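
One hedged workaround (my guess, not the author's fix): do the reduction in float32 and cast back, since pow() is not implemented for half-precision tensors in this code path.

import torch

# Stand-in for reference_embeddings_flat (a half-precision embedding matrix).
reference_embeddings_flat = torch.randn(1000, 100).half()

# Compute the squared norm in float32, then cast back to half.
ref_square = reference_embeddings_flat.float().pow(2).sum(1).half()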

RuntimeError: open(/sharefile): Permission denied

Hi, thanks for your great work! When I try to run python tools/train_net.py --config configs.resnet101_cfbi --dataset youtubevos, I meet this problem. Can you help me?
File "/home/longma/anaconda2/envs/p3torchstm/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 484, in _new_process_group_helper
timeout=timeout)
RuntimeError: open(/sharefile): Permission denied
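
The traceback points at the torch.distributed rendezvous. A hedged workaround (assuming the config uses a file:// init method at /sharefile, which the process cannot open for writing): point the rendezvous at a writable path, or switch to TCP, e.g.:

import torch.distributed as dist

# Single-process illustration; in the repo, world_size and rank come from
# the launcher. A TCP rendezvous avoids the shared file entirely.
world_size, rank = 1, 0
dist.init_process_group(backend='nccl',
                        init_method='tcp://127.0.0.1:23456',
                        world_size=world_size, rank=rank)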

Join-Label

In YouTube-VOS, not all the objects appear for the first time in the first frame. Thus, we have to introduce new labels for new objects when necessary.
if not sample['meta']['flip'] and not(current_label is None) and join_label is None:
join_label = current_label

In evaluation, I cannot understand the function of join_label; can you explain it?

ann_f is used before any definition is found


Hi Xin, in the class YOUTUBE_VOS_Train, the attribute ann_f seems to be used before any definition is found. How should it be defined? Also, I only have a single machine with four 2080 Ti cards; can it run your program?

Implementation detail

I have some questions about implementation details.

1. The feature strides of the ResNet-101-based DeepLabv3+ backbone are [4, 8, 8, 8], so I guess the low-level feature before embedding is the output of 'layer1' with stride 4. Is that right?

2. Limiting the number of objects per frame to fewer than 2, with crop size (384, 384), batch size 1, and 5 frames, costs 10 GB of memory in my experiment. Is this because of my poor implementation?

Looking forward to your reply.
