
Feel free to visit my homepage and my awesome person re-ID GitHub page


Meta Batch-Instance Normalization for Generalizable Person Re-Identification (MetaBIN), [CVPR 2021]


<Illustration of unsuccessful generalization scenarios and our framework>

  • (a) Under-style-normalization occurs when a trained BN model fails to distinguish identities on unseen domains.
  • (b) Over-style-normalization occurs when a trained IN model removes even ID-discriminative information.
  • (c) Our key idea is to generalize BIN layers by simulating the preceding failure cases in a meta-learning pipeline. By overcoming these harsh situations, our model learns to avoid overfitting to source styles (see the BIN sketch below).
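
For reference, a BIN layer combines BN and IN with learnable, per-channel balancing parameters. Below is a minimal sketch following the standard batch-instance normalization formulation; it is illustrative only and not necessarily the exact implementation in this repository:

import torch
import torch.nn as nn

class BatchInstanceNorm2d(nn.Module):
    # A learnable per-channel gate rho blends BN and IN outputs:
    # y = affine( rho * BN(x) + (1 - rho) * IN(x) )
    def __init__(self, num_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        self.inorm = nn.InstanceNorm2d(num_channels, affine=False)
        self.rho = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.5))
        self.weight = nn.Parameter(torch.ones(1, num_channels, 1, 1))  # affine scale
        self.bias = nn.Parameter(torch.zeros(1, num_channels, 1, 1))   # affine shift

    def forward(self, x):
        rho = self.rho.clamp(0, 1)  # keep the balance inside [0, 1]
        out = rho * self.bn(x) + (1 - rho) * self.inorm(x)
        return out * self.weight + self.bias

In MetaBIN, it is these balancing parameters (rho) that the meta-learning episodes update, steering them away from the degenerate pure-BN and pure-IN behaviors illustrated above.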

MetaBIN

git clone our_repository

1) Prerequisites

  • Ubuntu 18.04
  • Python 3.6
  • PyTorch 1.7+
  • NVIDIA GPU (>=8,000MiB)
  • Anaconda 4.8.3
  • CUDA 10.1 (optional)
  • Recent GPU driver (must support AMP [link])

2) Preparation

conda create -n MetaBIN python=3.6
conda activate MetaBIN
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.1 -c pytorch
pip install tensorboard
pip install Cython
pip install yacs
pip install termcolor
pip install tabulate
pip install scikit-learn

pip install h5py
pip install imageio
pip install openpyxl 
pip install matplotlib 
pip install pandas 
pip install seaborn
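
After installation, you can optionally sanity-check the environment; the AMP import below is what the "recent GPU driver" requirement in section 1 is about:

import torch
from torch.cuda.amp import autocast, GradScaler  # available since PyTorch 1.6

print(torch.__version__)               # expect 1.7.x
print(torch.cuda.is_available())       # True if the driver and CUDA setup are correct
print(torch.cuda.get_device_name(0))   # your GPU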

3) Test only

  • Download our model [link] to MetaBIN/logs/Sample/DG-mobilenet
├── MetaBIN/logs/Sample/DG-mobilenet
│   ├── last_checkpoint
│   ├── model_0099999.pth
│   ├── result.png
  • Download test datasets [link] to MetaBIN/datasets/
├── MetaBIN/datasets
│   ├── GRID
│   ├── prid_2011
│   ├── QMUL-iLIDS
│   ├── viper
  • Execute the run file:

cd MetaBIN/
sh run_evaluate.sh

  • You should get the following results:

Datasets Rank-1 Rank-5 Rank-10 mAP mINP TPR@FPR=0.0001 TPR@FPR=0.001 TPR@FPR=0.01
ALL_GRID_average 49.68% 67.52% 76.80% 58.10% 58.10% 0.00% 0.00% 46.35%
ALL_GRID_std 2.30% 3.56% 3.14% 2.58% 2.58% 0.00% 0.00% 26.49%
ALL_VIPER_only_10_average 56.90% 76.71% 82.03% 65.98% 65.98% 0.00% 0.00% 50.97%
ALL_VIPER_only_10_std 2.97% 2.11% 2.06% 2.35% 2.35% 0.00% 0.00% 8.45%
ALL_PRID_average 72.50% 88.20% 91.30% 79.78% 79.78% 0.00% 0.00% 91.00%
ALL_PRID_std 2.20% 2.60% 2.00% 1.88% 1.88% 0.00% 0.00% 1.47%
ALL_iLIDS_average 79.67% 93.33% 97.33% 85.51% 85.51% 0.00% 0.00% 56.13%
ALL_iLIDS_std 4.40% 2.47% 2.26% 2.80% 2.80% 0.00% 0.00% 15.77%
** all_average ** 64.69% 81.44% 86.86% 72.34% 72.34% 0.00% 0.00% 61.11%
  • Other models [link]

Advanced (train new models)

4) Check the repository structure below

MetaBIN/
├── configs/
├── datasets/ (*download each dataset and connect it by symbolic link [see section 5]; please check the folder names*)
│   ├── *cuhk02
│   ├── *cuhk03
│   ├── *CUHK-SYSU
│   ├── *DukeMTMC-reID
│   ├── *GRID
│   ├── *Market-1501-v15.09.15
│   ├── *prid_2011
│   ├── *QMUL-iLIDS
│   ├── *viper
├── demo/
├── fastreid/
├── logs/ 
├── pretrained/ 
├── tests/
├── tools/
'*' marks the symbolic links that you create (see the sections below)

5) Download the datasets and connect them

  • Download dataset

    • For single-source DG
      • Download Market1501 and DukeMTMC-reID [see sections 8-1 and 8-2]
    • For multi-source DG
      • Training: Market1501, DukeMTMC-reID, CUHK02, CUHK03, CUHK-SYSU [see sections 8-1 to 8-5]
      • Testing: GRID, PRID, QMUL i-LIDS, VIPeR [see sections 8-6 to 8-9]
  • Symbolic link (recommended)

    • Check symbolic_link_dataset.sh and modify each source directory to match your machine
    • cd MetaBIN
    • bash symbolic_link_dataset.sh
    • A Python equivalent is sketched after this list
  • Direct connection (not recommended)

    • If you don't want to create symbolic links, move each dataset folder into ./datasets/
    • Check the folder name of each dataset
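
For reference, here is a minimal Python equivalent of what symbolic_link_dataset.sh does. DATASET_ROOT is a placeholder; the real script's source paths may differ on your machine:

import os

DATASET_ROOT = "/path/to/your/datasets"  # placeholder: wherever your downloads live
NAMES = ["Market-1501-v15.09.15", "DukeMTMC-reID", "cuhk02", "cuhk03",
         "CUHK-SYSU", "GRID", "prid_2011", "QMUL-iLIDS", "viper"]

os.makedirs("datasets", exist_ok=True)
for name in NAMES:
    src = os.path.join(DATASET_ROOT, name)
    dst = os.path.join("datasets", name)
    if not os.path.exists(dst):
        os.symlink(src, dst)  # same effect as `ln -s src dst`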

6) Create the pretrained and logs folders

  • Symbolic link (recommended)
    • Make 'MetaBIN(logs)' and 'MetaBIN(pretrained)' folders outside MetaBIN
├── MetaBIN
│   ├── configs/
│   ├── ....
│   ├── tools/
├── MetaBIN(logs)
├── MetaBIN(pretrained)
  • cd MetaBIN

  • bash symbolic_link_others.sh

  • Download the pretrained models and rename them

    • mobilenetv2_x1_0: [link]
    • mobilenetv2_x1_4: [link]
    • Rename them to mobilenetv2_1.0.pth and mobilenetv2_1.4.pth (a rename sketch follows this list)
  • Or download the pretrained models [link]

  • Direct connection (not recommended)

    • Make 'pretrained' and 'logs' folders in MetaBIN
    • Move the pretrained models into pretrained
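
If you prefer to script the renaming, here is a small sketch. The source file names on the left are assumptions about what the download links produce; adjust them to the actual file names you get:

import os

renames = {
    "mobilenetv2_x1_0.pth": "mobilenetv2_1.0.pth",  # assumed download name
    "mobilenetv2_x1_4.pth": "mobilenetv2_1.4.pth",  # assumed download name
}
for old, new in renames.items():
    old_path = os.path.join("pretrained", old)
    if os.path.exists(old_path):
        os.rename(old_path, os.path.join("pretrained", new))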

7) Train

  • If you run the code in PyCharm

    • tools/train_net.py -> Edit Configuration
    • Working directory: your folders/MetaBIN/
    • Parameters: --config-file ./configs/Sample/DG-mobilenet.yml
  • Single GPU

python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml

  • Single GPU (specific GPU)

python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml MODEL.DEVICE "cuda:0"

  • Resume (model weights are automatically loaded based on the last_checkpoint file in logs)

python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml --resume

  • Evaluation only

python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml --eval-only

8) Datasets

  • (1) Market1501

    • Create a directory named Market-1501-v15.09.15
    • Download the dataset to Market-1501-v15.09.15 from link and extract the files.
    • The data structure should look like
    Market-1501-v15.09.15/
    ├── bounding_box_test/
    ├── bounding_box_train/
    ├── gt_bbox/
    ├── gt_query/
    ├── query/
    
  • (2) DukeMTMC-reID

    • Create a directory called DukeMTMC-reID
    • Download DukeMTMC-reID from link and extract the files.
    • The data structure should look like
    DukeMTMC-reID/
    ├── bounding_box_test/
    ├── bounding_box_train/
    ├── query/
    
  • (3) CUHK02

    • Create cuhk02 folder
    • Download the data from link and put it under cuhk02.
      • The data structure should look like
    cuhk02/
    ├── P1/
    ├── P2/
    ├── P3/
    ├── P4/
    ├── P5/
    
  • (4) CUHK03

    • Create cuhk03 folder
    • Download dataset to cuhk03 from link and extract “cuhk03_release.zip”, resulting in “cuhk03/cuhk03_release/”.
    • Download the new split (767/700) from person-re-ranking. What you need are “cuhk03_new_protocol_config_detected.mat” and “cuhk03_new_protocol_config_labeled.mat”. Put these two mat files under cuhk03.
    • The data structure should look like
    cuhk03/
    ├── cuhk03_release/
    ├── cuhk03_new_protocol_config_detected.mat
    ├── cuhk03_new_protocol_config_labeled.mat
    
  • (5) Person Search (CUHK-SYSU)

    • Create a directory called CUHK-SYSU
    • Download CUHK-SYSU from link and extract the files.
    • Cropped images can be created with my MATLAB code make_cropped_image.m (included in the datasets folder)
    • The data structure should look like
    CUHK-SYSU/
    ├── annotation/
    ├── Image/
    ├── cropped_image/
    ├── make_cropped_image.m (my matlab code)
    
  • (6) GRID

    • Create a directory called GRID
    • Download GRID from link and extract the files.
    • Split files (splits_single_shot.json) can be created by the Python code grid.py
    • The data structure should look like
    GRID/
    ├── gallery/
    ├── probe/
    ├── splits_single_shot.json (This will be created by `grid.py` in `fastreid/data/datasets/` folder)
    
  • (7) PRID

    • Create a directory called prid_2011
    • Download prid_2011 from link and extract the files.
    • Split files (splits_single_shot.json) can be created by the Python code prid.py
    • The data structure should look like
    prid_2011/
    ├── single_shot/
    ├── multi_shot/
    ├── splits_single_shot.json (This will be created by `prid.py` in `fastreid/data/datasets/` folder)
    
  • (8) QMUL i-LIDS

    • Create a directory called QMUL-iLIDS
    • Download the dataset and extract the files.
    • The data structure should look like
    QMUL-iLIDS/
    ├── images/
    ├── splits.json (This will be created by `iLIDS.py` in `fastreid/data/datasets/` folder)
    
  • (9) VIPeR

    • Create a directory called viper
    • Download VIPeR from link and extract the files.
    • Split files can be created with my MATLAB code make_split.m (included in the datasets folder)
    • The data structure should look like
    • The data structure should look like
    viper/
    ├── cam_a/
    ├── cam_b/
    ├── make_split.m (my matlab code)
    ├── split_1a # Train: split1, Test: split2 ([query]cam1->[gallery]cam2)
    ├── split_1b # Train: split2, Test: split1 (cam1->cam2)
    ├── split_1c # Train: split1, Test: split2 (cam2->cam1)
    ├── split_1d # Train: split2, Test: split1 (cam2->cam1)
    ...
    ...
    ├── split_10a
    ├── split_10b
    ├── split_10c
    ├── split_10d
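
Once all datasets are in place, a quick structural check can save a failed run. This small sketch only verifies that the expected top-level folders from section 4 exist:

import os

expected = ["Market-1501-v15.09.15", "DukeMTMC-reID", "cuhk02", "cuhk03",
            "CUHK-SYSU", "GRID", "prid_2011", "QMUL-iLIDS", "viper"]
for name in expected:
    path = os.path.join("datasets", name)
    print(("OK       " if os.path.isdir(path) else "MISSING  ") + path)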
    

9) Code structure

  • Our code is based on fastreid link

  • fastreid/config/defaults.py: default settings (parameters)

  • fastreid/data/datasets/: about datasets

  • tools/train_net.py: Main code (train/test/tsne/visualize)

  • fastreid/engine/defaults.py: build dataset, build model

    • fastreid/data/build.py: build datasets (base model/meta-train/meta-test)
    • fastreid/data/samplers/triplet_sampler.py: data sampler
    • fastreid/modeling/meta_arch/metalearning.py: build model
      • fastreid/modeling/backbones/mobilenet_v2.py or resnet.py: backbone network
      • fastreid/heads/metalearning_head.py: head network (bnneck)
    • fastreid/solver/build.py: build optimizer and scheduler
  • fastreid/engine/train_loop.py: main train code

    • run_step_meta_learning1(): update base model
    • run_step_meta_learning2(): update balancing parameters (meta-learning)
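
Conceptually, one training iteration alternates the two steps above. The sketch below is illustrative pseudocode, not the actual fastreid code; compute_loss, base_params, and apply_virtual_update are hypothetical helpers standing in for the real logic in train_loop.py:

import torch

def train_iteration(model, base_optimizer, balance_optimizer,
                    data_base, data_mtrain, data_mtest):
    # Step 1 -- run_step_meta_learning1(): update the ordinary network
    # weights; the balancing parameters (rho) stay frozen.
    loss = model.compute_loss(data_base)           # hypothetical helper
    base_optimizer.zero_grad()
    loss.backward()
    base_optimizer.step()

    # Step 2 -- run_step_meta_learning2(): split the source domains into
    # meta-train and meta-test, take a virtual inner step on meta-train,
    # and update only the balancing parameters from the meta-test loss.
    mtrain_loss = model.compute_loss(data_mtrain)
    grads = torch.autograd.grad(mtrain_loss, model.base_params(),
                                create_graph=True)  # virtual inner step
    model.apply_virtual_update(grads)               # hypothetical helper
    mtest_loss = model.compute_loss(data_mtest)
    balance_optimizer.zero_grad()
    mtest_loss.backward()   # gradient flows back through the virtual step
    balance_optimizer.step()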

10) Handling errors

  • AMP
    • If your GPU driver is too old, you cannot use AMP (automatic mixed precision).
    • If so, set the AMP option to False in /MetaBIN/configs/Sample/DG-mobilenet.yml
    • Note that memory usage will then increase.
  • Fastreid evaluation
    • If a compile error occurs in fastreid, run the following command.
    • cd fastreid/evaluation/rank_cylib; make all
  • No such file or directory 'logs/Sample'
    • Please check logs (section 3)
  • No such file or directory 'pretrained'
    • Please check pretrained (section 6)
  • No such file or directory 'datasets'
    • Please check datasets (section 8)
  • RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
    • Please check that your PyTorch build matches the CUDA version supported by your graphics card (see the check below).
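
A quick way to diagnose CUDA/cuDNN mismatches is to compare the toolkit your PyTorch build ships with against the card's compute capability (the RTX 3060 report in the issues below is exactly this kind of mismatch):

import torch

print(torch.version.cuda)                    # CUDA version PyTorch was built against
print(torch.backends.cudnn.version())        # bundled cuDNN version
print(torch.cuda.get_device_capability(0))   # e.g. (8, 6) for an RTX 3060 requires a CUDA 11.x build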

Citation

@InProceedings{choi2021metabin,
title = {Meta Batch-Instance Normalization for Generalizable Person Re-Identification},
author = {Choi, Seokeon and Kim, Taekyung and Jeong, Minki and Park, Hyoungseob and Kim, Changick},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021}
}


metabin's Issues

Missing License

Under which License is this code usable?
Could you add a License file?

Some questions about the pretrained models

Hello!

I found that there are two pretrained models in Section 6. Which one should I load to achieve the best results?

Looking forward to your answer, thanks! :)

How to run MSMT17 in the single-source setting

When I replace the dataset name in D-resnet.yml with 'msmt17', an error occurs:

    loss_dict['loss_triplet_add'] = triplet_loss(
  File "/home/share/chengzhi/PROGRAM/ACM/MetaBIN-master/./fastreid/modeling/losses/triplet_loss.py", line 162, in triplet_loss
    dist_ap, dist_an = hard_example_mining(dist_mat, is_pos, is_neg)
  File "/home/share/chengzhi/PROGRAM/ACM/MetaBIN-master/./fastreid/modeling/losses/triplet_loss.py", line 64, in hard_example_mining
    dist_an.append(torch.min(dist_mat[i][is_neg[i]]))
RuntimeError: operation does not have an identity.

This happens because is_neg[i] is all False. Should I change something?

TypeError: new(): invalid data type 'str'

import torch
from torch._six import container_abcs, int_classes, string_classes

def fast_batch_collator(batched_inputs):
    """
    A simple batch collator for most common reid tasks
    """
    elem = batched_inputs[0]
    if isinstance(elem, torch.Tensor):
        out = torch.zeros((len(batched_inputs), *elem.size()), dtype=elem.dtype)
        for i, tensor in enumerate(batched_inputs):
            out[i] += tensor
        return out

    elif isinstance(elem, container_abcs.Mapping):
        return {key: fast_batch_collator([d[key] for d in batched_inputs]) for key in elem}

    elif isinstance(elem, float):
        return torch.tensor(batched_inputs, dtype=torch.float64)
    elif isinstance(elem, int_classes):
        return torch.tensor(batched_inputs)  # bug!!!
    elif isinstance(elem, string_classes):
        return batched_inputs

CUHK-SYSU dataset structure

Your CUHK-SYSU dataset structure is shown like below:
CUHK-SYSU/
├── annotation/
├── Image/
├── cropped_image/
├── make_cropped_image.m (my matlab code)

but the structure of the dataset I downloaded looks like this:
cuhksysu/
├── cropped_images/

or like this:
cuhksysu/
├──images/
├──labels_with_ids/

Results of Market->Duke are inferior to those reported in the paper

Thanks for the nice work, but when we train for Market1501->DukeMTMC, the best result is 53.64% (recall@1), which is inferior to the 55.16% (recall@1) reported in the paper.

Our training log:
[image]

Your screenshot:
[image]

and the training environment is:

sys.platform            linux
Python                  3.8.10 (default, Jun  4 2021, 15:09:15) [GCC 7.5.0]
numpy                   1.21.2
fastreid                0.1.0 @/root/metabin/MetaBIN/./fastreid
FASTREID_ENV_MODULE     <not set>
PyTorch                 1.7.0+cu110 @/root/miniconda3/lib/python3.8/site-packages/torch
PyTorch debug build     True
GPU available           True
GPU 0                   Tesla V100-SXM2-32GB
CUDA_HOME               /usr/local/cuda
Pillow                  8.3.2
torchvision             0.8.1+cu110 @/root/miniconda3/lib/python3.8/site-packages/torchvision
torchvision arch flags  sm_35, sm_50, sm_60, sm_70, sm_75, sm_80

We trained with the script below without modifying any configs:

python3 ./tools/train_net.py --config-file ./configs/Sample/M-resnet.yml

So what might be causing the performance drop?

multi-gpus

Thank you for making your work public. Can this project support multiple GPUs? Thanks for your reply.

Performance difference between trained model and reported result

Hi,
Thanks for sharing the awesome code. I followed the training instructions in README.md and ran the following command:
python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml MODEL.DEVICE "cuda:0"

The program runs well, and after training, I got the following results:
Evaluation results in csv format:

Datasets Rank-1 Rank-5 Rank-10 mAP mINP TPR@FPR=0.0001 TPR@FPR=0.001 TPR@FPR=0.01
ALL_GRID_average 40.80% 63.20% 74.40% 51.29% 51.29% 0.00% 0.00% 11.34%
ALL_GRID_std 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
ALL_VIPER_only_10_average 61.08% 78.80% 86.08% 69.36% 69.36% 0.00% 0.00% 38.14%
ALL_VIPER_only_10_std 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
ALL_PRID_average 77.00% 90.00% 94.00% 83.01% 83.01% 0.00% 0.00% 81.95%
ALL_PRID_std 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
ALL_iLIDS_average 81.67% 96.67% 98.33% 88.32% 88.32% 0.00% 0.00% 61.50%
ALL_iLIDS_std 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
** all_average ** 65.14% 82.17% 88.20% 72.99% 72.99% 0.00% 0.00% 48.23%
[08/16 17:30:24 fastreid.utils.events]: eta: 0:00:01 iter: 184899 total_loss: 3.406 1)loss_cls: 1.441 1)loss_triplet: 0.001 2)loss_stc: 0.920 2)loss_triplet_add: 1.049 2)loss_triplet_mtrain: 0.000 3)loss_triplet_mtest: 0.000 time: 1.6764 data_time: 0.0224 lr: N/A max_mem: 6326M

The result seems quite different from the paper: e.g., Rank-1 on GRID is 40.8 while the paper reports 48.4, and for all other datasets the results are 2~3% better. Can you share your config for training "model_0099999.pth", so we can reproduce results similar to the paper's? Thanks!

CUDA capability (RTX 3060)

NVIDIA GeForce RTX 3060 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the NVIDIA GeForce RTX 3060 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

Results of training without meta learning

Hello. When I was reading your paper, I had a question: what would the result be if meta-learning were not used during training? I went to study your code but didn't understand it. Could you please tell me the result without meta-learning, or how to modify the code?

Results are different from the paper when using a single domain for training

Thank you for making your work public, but when I use Market or Duke as the training set, the experimental results are always lower than those in the paper. I ran multiple experiments and each time got exactly the same results, even including the loss during training. I want to know what causes this.
Specifically, when I use Duke as the training set and Market as the test set, Rank-1 and mAP are 66.48% and 33.82%, respectively, while in the paper they are 69.2% and 35.9%.
Thank you again for your contribution, and I look forward to your answer!

About the setting of single-source DG training

Hello, I am trying to reproduce the work of this paper, but I want to test with a single source domain. I have downloaded the Market1501 and DukeMTMC-reID datasets and set them up according to your documentation.
However, the documentation does not mention how to set up and train on a single source domain. I made some attempts myself, but an error occurred when the program ran.

After setting DATASETS in ./configs/Sample/DG-mobilenet.yml, I tried to train with:

python3 ./tools/train_net.py --config-file ./configs/Sample/DG-mobilenet.yml MODEL.DEVICE "cuda:4"
[image]

But I got an error as below:

Traceback (most recent call last):
  File "./tools/train_net.py", line 142, in <module>
    args=(args,),
  File "./fastreid/engine/launch.py", line 71, in launch
    main_func(*args)
  File "./fastreid/engine/defaults.py", line 539, in train
    super().train(self.start_iter, self.max_iter)
    self.run_step_meta_learning2() # update balancing parameters (meta-learning)
  File "./fastreid/engine/train_loop.py", line 638, in run_step_meta_learning2
    opt['domains'] = data_mtest['others']['domains']
TypeError: 'NoneType' object is not subscriptable

I am quite stuck; can you tell me the settings for single-source domain training? Many thanks!

cropped cuhk-sysu

I found the CUHK-SYSU person-search dataset but do not know how to make a cropped version of it. Can you provide the method for cropping the images?

The version I got looks like this:
CUHK-SYSU/
├── annotation/
├── Image/

datasets request

Hello, I am a student from USTC. Our group recently needs to train a model with CUHK02 and CUHK-SYSU. Could you please send these datasets to our email ([email protected])? I promise these datasets will only be used for research. Thank you very much!

The CUHK03 dataset structure I downloaded

The CUHK03 dataset structure I downloaded looks like this:
├── cuhk_dataset-master/
│   ├── CUHK01/
│   │   ├── campus/xxxx.png ...
│   ├── detected/
│   │   ├── train/xxx.png ...
│   │   ├── train_resized/xxx.png ...
│   │   ├── val/xxx.png ...
│   │   ├── val_resized/xxx.png ...
│   │   ├── resize.py
│   ├── labeled/
│   │   ├── train/xxx.png ...
│   │   ├── train_resized/xxx.png ...
│   │   ├── val/xxx.png ...
│   │   ├── resize.py
│   ├── model.h5
│   ├── pairing.py

or like this:
├── cuhk03_release/
│   ├── cuhk-03.mat
│   ├── readme.txt

Your dataset structure looks like this:
cuhk03/
├── cuhk03_release/
├── cuhk03_new_protocol_config_detected.mat
├── cuhk03_new_protocol_config_labeled.mat

So, how can I use the datasets I downloaded?

How to select NUM_DOMAIN?

Hi, how should I select 'MTRAIN: NUM_DOMAIN' if I have 2, 3, 4, 5, 6, or 7 source domains? Thanks!

The parameter update

As described in the paper, you separate an episode that updates the balancing parameters from another episode that updates the rest of the parameters, and then perform both episodes alternately at each training iteration.
However, I did not find the corresponding code for this part; it looks to me as if all the parameters are updated at the same time.
Please help clear up my confusion. Thank you.

Training on the vehicle dataset VeRi

Hello, I modified the config for single-domain training based on D-resnet.yml, but I encountered the following error. How should I solve it?

Traceback (most recent call last):
  File "H:/re-id代码/MetaBIN-master/train_net.py", line 136, in <module>
    launch(
  File "H:\re-id代码\MetaBIN-master\fastreid\engine\launch.py", line 71, in launch
    main_func(*args)
  File "H:/re-id代码/MetaBIN-master/train_net.py", line 131, in main
    return trainer.train() # train_loop.py -> train
  File "H:\re-id代码\MetaBIN-master\fastreid\engine\defaults.py", line 539, in train
    super().train(self.start_iter, self.max_iter)
  File "H:\re-id代码\MetaBIN-master\fastreid\engine\train_loop.py", line 144, in train
    self.run_step_meta_learning2() # update balancing parameters (meta-learning)
  File "H:\re-id代码\MetaBIN-master\fastreid\engine\train_loop.py", line 606, in run_step_meta_learning2
    losses, loss_dict = self.basic_forward(data_mtrain, self.model, opt) # forward
  File "H:\re-id代码\MetaBIN-master\fastreid\engine\train_loop.py", line 856, in basic_forward
    loss_dict = model.losses(outs, opt)
  File "H:\re-id代码\MetaBIN-master\fastreid\modeling\meta_arch\metalearning.py", line 163, in losses
    loss_dict['loss_triplet_add'] = triplet_loss(
  File "H:\re-id代码\MetaBIN-master\fastreid\modeling\losses\triplet_loss.py", line 174, in triplet_loss
    dist_ap, dist_an = hard_example_mining(dist_mat, is_pos, is_neg)
  File "H:\re-id代码\MetaBIN-master\fastreid\modeling\losses\triplet_loss.py", line 76, in hard_example_mining
    dist_an.append(torch.min(dist_mat[i][is_neg[i]]))
RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument.

Process finished with exit code 1

Missing file

Could you please give me the following file?
・CUHK-SYSU/annotation/Person.mat
