
x-vlm's Introduction

X-VLM: learning multi-grained vision language alignments

Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts. Yan Zeng, Xinsong Zhang, Hang Li. arXiv 2021.

  • Nov 2022: Release X2-VLM: All-In-One for Vision Language Tasks; All-In-One == Image + Video + Transfer to Other Languages / Domains
  • May 2022: The paper has been accepted by ICML 2022
  • Jan 2022: Release official PyTorch implementation and X-VLM checkpoints
  • Nov 2021: Release preprint in arXiv

X-VLM (216M parameters: swin-base + 6L text + 6L cross)

Hiring

We are looking for interns / FTEs at ByteDance AI-LAB (in Beijing / Shanghai)! If you are interested in working with us on vision language models, please send your resume to [email protected].

Features

  • Support several backbones
    • vision encoder: deit / clip-vit / swin-transformer
    • text encoder: bert / roberta
  • Support apex O1 / O2 for pre-training
  • Read from and write to HDFS
  • Distributed training across nodes for both pre-training and fine-tuning

Please read the code for more details.

Requirements

  • Install python3 environment
pip3 install -r requirements.txt
  • Download raw images from corresponding websites
  • Download the json files we provide, which contain image read paths, captions, and/or bbox annotations
  • If running pre-training scripts, also download the pre-trained vision encoder (swin_base_patch4_window7_224_22k.pth) and text encoder (bert-base-uncased/) marked with % in the tree below for parameter initialization
  • Organize these files as follows (% marks pre-training-only files); a small layout-check script is sketched after the tree:
X-VLM/
    data/
        finetune/
            refcoco+/*.json
            *.json
        
        %pretrain_4m/*.json
        %swin_base_patch4_window7_224_22k.pth
        %bert-base-uncased/
            config.json
            pytorch_model.bin
            tokenizer_config.json
            tokenizer.json
            vocab.txt

    images/
        coco/
            train2014/*.jpg
            val2014/*.jpg
            test2015/*.jpg
        
        visualgenome/
            image/*.jpg
        
        nlvr2/
            images/
                train/0-99/*.png
            dev/*.png
            test1/*.png
        
        %sbu/*.jpg
        %cc-3m/*.jpg
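
Before launching any script, a quick sanity check of the layout can save a failed run. The sketch below only assumes the directory tree shown above (the path names are taken directly from it); it is not part of the repository and makes no other assumptions about the codebase.

from pathlib import Path

# Paths expected for fine-tuning, taken from the tree above.
FINETUNE_PATHS = [
    "data/finetune",
    "images/coco/train2014",
    "images/coco/val2014",
    "images/coco/test2015",
    "images/visualgenome/image",
    "images/nlvr2/images/train",
    "images/nlvr2/dev",
    "images/nlvr2/test1",
]

# Additional paths marked with % above (pre-training only).
PRETRAIN_PATHS = [
    "data/pretrain_4m",
    "data/swin_base_patch4_window7_224_22k.pth",
    "data/bert-base-uncased",
    "images/sbu",
    "images/cc-3m",
]

def check_layout(root=".", pretrain=False):
    """Print every expected path that is missing under the X-VLM/ root."""
    root = Path(root)
    expected = FINETUNE_PATHS + (PRETRAIN_PATHS if pretrain else [])
    missing = [p for p in expected if not (root / p).exists()]
    for p in missing:
        print(f"missing: {p}")
    return not missing

if __name__ == "__main__":
    check_layout(pretrain=True)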

Pretrain

python3 run.py --task "pretrain_4m_base" --dist "1" --output_dir "output/pretrain_4m_base"

For distributed training across nodes, see run.py for more details. For a fair comparison with some recent works, we pre-trained X-VLM (4M/16M) for 200K steps.

Data


🌟UPDATE: our multi-lingual multi-modal project Cross-View Language Modeling has released the text of COCO+VG+SBU+CC3M and the object and region annotations in six languages. You can use the English text for X-VLM pre-training.


All datasets we utilize are publicly available, but we cannot re-distribute the data, so please prepare the pre-training data yourself. We provide some data examples below; see ImageTextJsonDataset and RegionTextJsonDataset in dataset/pretrain_dataset.py for details. A small helper for writing records in this format is sketched after the examples.

# image-captions pairs, providing 'binary' or 'image_rpath' 
{'caption': 'dog on bike in harajuku', 
 'binary': binary_encoding_of_the_image, 
 'image_rpath': local_rpath_of_the_image
}


# object/region annotations, providing 'binary' or 'image_rpath' 
{'elems': [{'caption': 'lady sitting at table that has pizza on it',  # str or list of str  
            'bb': [155, 0, 205, 131]   # (x, y, w, h)
            }, 
           {'caption': 'window',  
            'attributes': 'closed',  # str or list of str 
            'bb': [20, 130, 335, 185]
            },
          ],
 'caption': if_exist,  # str or list of str 
 'binary': binary_encoding_of_the_image, 
 'image_rpath': local_rpath_of_the_image
}
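
To produce files in this format, something like the following minimal sketch can be used. It only assumes the field names shown in the examples above; whether 'binary' should hold raw bytes or a base64 string, and whether each file stores a json list or one record per line, is not specified here, so check dataset/pretrain_dataset.py before committing to one encoding. The paths and file names in the usage section are hypothetical.

import base64
import json

def make_caption_record(image_path, caption, inline_binary=False):
    """Build one image-caption record with the fields shown above.
    base64 for 'binary' is an assumption; verify against pretrain_dataset.py."""
    record = {"caption": caption}
    if inline_binary:
        with open(image_path, "rb") as f:
            record["binary"] = base64.b64encode(f.read()).decode("utf-8")
    else:
        record["image_rpath"] = image_path
    return record

def make_region_record(image_path, elems, caption=None):
    """Build one object/region record; each elem is a dict like
    {'caption': ..., 'attributes': ..., 'bb': [x, y, w, h]}."""
    record = {"elems": elems, "image_rpath": image_path}
    if caption is not None:
        record["caption"] = caption
    return record

# Hypothetical usage: write a list of records to a pre-training json file.
if __name__ == "__main__":
    rec = make_caption_record("images/cc-3m/000000.jpg", "dog on bike in harajuku")
    with open("cc3m_example.json", "w") as f:
        json.dump([rec], f)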

Checkpoints

X-VLM (4M, 200K steps)
X-VLM (16M, 200K steps)

Finetune

Datasets for fine-tuning and checkpoints of X-VLM (4M/16M) can be downloaded via the following links.

Data

download json files

Checkpoints and Logs (16M)

retrieval-mscoco
retrieval-flickr
vqa
nlvr2
refcoco
refcoco-weak
captioning-coco

Checkpoints and Logs (4M)

4m-all-ft-ckpts.tar

Examples

# train
python3 run.py --task "vqa" --dist "1" --output_dir "output/vqa" --checkpoint "4m_base_model_state_step_199999.th"

# train: if using >2 nodes for fine-tuning, specify --output_hdfs to save some tmp results; it is only required by vqa & refcoco 
python3 run.py --task "vqa" --dist "all" --output_dir "output/vqa" --output_hdfs "hdfs://xxx/vqa_tmp" --checkpoint "4m_base_model_state_step_199999.th"  

# evaluate
python3 run.py --task "vqa" --dist "1" --evaluate --output_dir "output/vqa_eval" --checkpoint "4m_base_finetune/vqa/model_state_epoch_9.th"

Specify "--task" to finetune on image-text retrieval, nlvr2, visual grounding, or image captioning. See run.py for details.

More Examples of Captioning:

# adapt cross-modal encoder + MLM head -> lm decoder; subsequent fine-tuning is included   
python3 run.py --task "coco_capt_domain" --dist "1" --output_dir "output/coco_capt_domain" --checkpoint "4m_base_model_state_step_199999.th"

# fine-tune only; evaluate is included 
python3 run.py --task "coco_captioning" --dist "1" --output_dir "output/coco_captioning" --checkpoint "4m_base_finetune/coco_caption/lm_domain_pretrain.th"
# evaluate only
python3 run.py --task "coco_captioning" --dist "1" --output_dir "output/coco_captioning" --evaluate --checkpoint "4m_base_finetune/coco_caption/coco_capt_ft_epoch_4.th"

# further CIDEr optimization; evaluate is included 
python3 run.py --task "coco_captioning_scst" --dist "1" --output_dir "output/coco_captioning_scst" --checkpoint "4m_base_finetune/coco_caption/coco_capt_ft_epoch_4.th"
# evaluate only
python3 run.py --task "coco_captioning" --dist "1" --output_dir "output/coco_captioning_scst" --evaluate --checkpoint "4m_base_finetune/coco_caption/coco_capt_cider_step_41000.th"

To make a fair comparison, we follow previous works for fine-tuning, so some scripts are based on ALBEF, OSCAR, and BLIP. We thank the authors for open-sourcing their code.

Evaluation on VLUE

VLUE is a new out-of-distribution (OOD) benchmark for evaluating vision-language models; it has been accepted by ICML 2022.

python3 run.py --task "eval_vlue_itr" --dist "1" --evaluate  --output_dir "output/" --checkpoint "itr_coco/checkpoint_9.pth"

python3 run.py --task "eval_vlue_vqa" --dist "1" --evaluate  --output_dir "output/" --checkpoint "vqa/model_state_epoch_9.th"

python3 run.py --task "eval_vlue_nlvr" --dist "1" --evaluate  --output_dir "output/" --checkpoint "nlvr/nlvr_ft/checkpoint_best.pth"

python3 run.py --task "eval_vlue_refcoco" --dist "1" --evaluate  --output_dir "output/" --checkpoint "refcoco_bbox/checkpoint_best.pth"

python3 run.py --task "eval_vlue_refcoco_weakly" --dist "1" --evaluate  --output_dir "output/" --checkpoint "refcoco/checkpoint_best.pth"

Citation

If you find this repository useful, please consider giving a ⭐ or citing:

@article{xvlm,
  title={Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts},
  author={Zeng, Yan and Zhang, Xinsong and Li, Hang},
  journal={arXiv preprint arXiv:2111.08276},
  year={2021}
}

Contact

For issues using this code, please submit a GitHub issue.

x-vlm's People

Contributors

zengyan-97

x-vlm's Issues

Hi, could you provide the specific commands for fine-tuning on coco captioning? Thanks!

I am confused about the "lm_domain_pretrain.th" file in "4m_base_finetune/coco_caption/". If I want to reproduce the fine-tuning results on coco captioning, which pre-trained model should I load: "lm_domain_pretrain.th" or "4m_base_model_state_step_199999.th"? Maybe you could provide the specific commands for the two-stage fine-tuning on coco captioning? Thanks!

About swin_B_480

I currently want to train a model at 480 resolution, but I cannot find the link to download the swin-480 image encoder. Could you share the download link? Thank you very much.

pretrain-base-4m for X-VLM

Dear authors:
Thanks for your great work. While pre-training the model, I don't know how to organize the open datasets. I have already tried the following yaml.

[images of the attempted yaml configs omitted]

Fine-tuning

Hello,

I wonder how you fine-tuned your model on the Flickr30K dataset.
Did you freeze the text and vision encoders and only fine-tune the itm_head, or did you fine-tune the whole model?

Code for Grad-CAM visualization

Hi,

Thanks for the great work.

In Figure 3 of your paper, you showed the Grad-CAM visualizations of your model on RefCOCO+ from text descriptions. Could you share the code for using Grad-CAM on your model?

Thanks!

Performance of different vision encoders

Thanks for your great sharing.

else: # deit, worse than clip-vit/swin...

As shown above, you mention in the code that initializing the vision encoder with deit works worse than clip-vit and swin.

Do you have any supporting results, for example the performance on image-text retrieval with deit vs. swin?

Finetuning On NLVR2

Hi! @zengyan-97

I cannot find the checkpoint that can be fine-tuned.

Thank you for providing these checkpoints:

  • nlvr_domain_pretrain.th
  • 16m_base_model_state_step_199999.th
  • 4m_base_model_state_step_199999.th

There is also a checkpoint that you have already fine-tuned:

  • nlvr_ft/checkpoint_best.pth

However, can you guide me on how to fine-tune on NLVR2?

Loading from_pretrained warning

Hi, when I run the code, I get a lot of warnings:

Position interpolate vision_encoder.layers.0.blocks.0.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.0.blocks.1.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.1.blocks.0.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.1.blocks.1.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.0.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.1.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.2.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.3.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.4.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.5.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.6.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.7.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.8.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.9.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.10.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.11.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.12.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.13.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.14.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.15.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.16.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.2.blocks.17.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.3.blocks.0.attn.relative_position_bias_table from 13x13 to 23x23
Position interpolate vision_encoder.layers.3.blocks.1.attn.relative_position_bias_table from 13x13 to 23x23
### Loading pretrained text encoder
load checkpoint from 16m_base_model_state_step_199999.th
missing_keys:  []
unexpected_keys:  ['bbox_head.0.weight', 'bbox_head.0.bias', 'bbox_head.1.weight', 'bbox_head.1.bias', 'bbox_head.3.weight', 'bbox_head.3.bias', 'text_encoder.cls.predictions.bias', 'text_encoder.cls.predictions.transform.dense.weight', 'text_encoder.cls.predictions.transform.dense.bias', 'text_encoder.cls.predictions.transform.LayerNorm.weight', 'text_encoder.cls.predictions.transform.LayerNorm.bias', 'text_encoder.cls.predictions.decoder.weight', 'text_encoder.cls.predictions.decoder.bias']

Will this be a problem for fine-tuning?

add web demo/models/datasets to ICML organization on Hugging Face

Hi, congrats on the acceptance at ICML 2022. We are having an event on Hugging Face for ICML 2022, where you can submit spaces (web demos), models, and datasets for papers for a chance to win prizes. The Hugging Face Hub works similarly to GitHub: you can push to user profiles or organization accounts, so you can add your models, datasets, and spaces to this organization: https://huggingface.co/ICML2022 after joining it via this link: https://huggingface.co/organizations/ICML2022/share/BpynfJtfsOTktlmXYoKNqqCnyufKLFXuay. Let me know if you need any help with the above steps, thanks!

Distributed mode for single GPU

Is it possible to run itr_flickr non-distributed, on a single GPU?

When running:
python run.py --task "itr_flickr" --dist "gpu0" --output_dir "output/itr_flickr" --checkpoint "4m_base_finetune/itr_flickr/checkpoint_best.pth"

I get:

Training Retrieval Flickr

| distributed init (rank 0): env://
Traceback (most recent call last):
File "Retrieval.py", line 381, in
main(args, config)
File "Retrieval.py", line 215, in main
utils.init_distributed_mode(args)
File "C:\Users..\X-VLM-master\utils_init_.py", line 357, in init_distributed_mode
world_size=args.world_size, rank=args.rank)
File "C:\Users..\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\distributed\distributed_c10d.py", line 434, in init_process_group
init_method, rank, world_size, timeout=timeout
File "C:\Users..\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\distributed\rendezvous.py", line 82, in rendezvous
raise RuntimeError("No rendezvous handler for {}://".format(result.scheme))
RuntimeError: No rendezvous handler for env://

VQA: Limitations in questions and answers

I want my fine-tuned VQA model to be able to answer questions it was not trained on and, similarly, to provide answers that do not exist in the original answer list (the answers in the test json file).

Is there a limitation on the kind of questions I can ask the model? If yes, how can I tweak the code to meet my needs?

Will data leakage happen for bounding box prediction?

Since the output of the ViT also contains position information, if we directly feed the embeddings of a visual-concept region into the MLP to predict its bounding box, will the model just learn a trivial position transformation?

The code saves the best testing results on Image-Text Retrieval

Hi, the following code is inappropriate:

X-VLM/Retrieval.py

Lines 310 to 333 in e7b9602

val_result = itm_eval(score_val_i2t, score_val_t2i, val_loader.dataset.txt2img, val_loader.dataset.img2txt)
print(val_result)
test_result = itm_eval(score_test_i2t, score_test_t2i, test_loader.dataset.txt2img, test_loader.dataset.img2txt)
print(test_result)

log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},
             **{f'val_{k}': v for k, v in val_result.items()},
             **{f'test_{k}': v for k, v in test_result.items()},
             'epoch': epoch}

with open(os.path.join(args.output_dir, "log.txt"), "a") as f:
    f.write(json.dumps(log_stats) + "\n")

if test_result['r_mean'] > best:
    save_obj = {
        'model': model_without_ddp.state_dict(),
        # 'optimizer': optimizer.state_dict(),
        # 'lr_scheduler': lr_scheduler.state_dict(),
        'config': config,
        # 'epoch': epoch,
    }
    torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth'))
    best = test_result['r_mean']
    best_epoch = epoch
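
For reference, a small self-contained sketch of the selection rule the issue is asking for is given below: choose the checkpoint by the validation r_mean and report only the test metrics of that epoch, instead of selecting directly on the test split as the quoted code does. This is a sketch of the suggested fix, not code from the repository, and the dictionary keys are hypothetical.

def select_best_epoch(epoch_results):
    """epoch_results: list of dicts like
    {'epoch': i, 'val_r_mean': ..., 'test_r_mean': ...}.
    Select by the validation metric; report the test metric of that epoch."""
    best = max(epoch_results, key=lambda r: r['val_r_mean'])
    return best['epoch'], best['test_r_mean']

# Example: epoch 1 wins on val, so its test score (not the best test score) is reported.
# select_best_epoch([{'epoch': 0, 'val_r_mean': 80.1, 'test_r_mean': 81.5},
#                    {'epoch': 1, 'val_r_mean': 80.6, 'test_r_mean': 81.2}])
# -> (1, 81.2)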

Training log for the pretrain stage

Hi,

Thank you for releasing the code! It's an interesting project!

I'd like to know whether it would be possible to also release the pre-training logs, or some milestones to verify the training process.

Thanks!
Blakey

About training data

Hi, thank you for your great work! I have a question about the training data of your model. As stated in your paper,

we have the following data for training: 1) the image caption describing the whole image; 2) region annotations such as “man wearing backpack” each of which has been related to a region in the image, while previous approaches roughly align the region descriptions with the whole image; 3) object labels such as “backpack” which are utilized by previous methods to train object detectors.

I know the first is normal image captioning data and the third is object detection data. However, where do you get the second part? Where do the region annotations come from?

About batch sampling `iter_perc`

Thanks for your code.

I note that in your paper, you said "We sample the data by making half of the images in a batch containing bounding box annotations".

But the code is:

X-VLM/Pretrain.py

Lines 82 to 121 in e7b9602

if random.random() < config['regions']['iter_perc']:
    try:
        region_batch = next(subarea_iter)
    except StopIteration:
        subarea_iter = iter(region_loader)
        region_batch = next(subarea_iter)

    image, region_batch = region_batch[0].to(device, non_blocking=True), [
        t.to(device) if t is not None else None for t in region_batch[1:]]

    idx_to_group_img, text_ids, text_atts, text_ids_masked, masked_pos, masked_ids, \
        image_atts, target_bbox, is_image = region_batch

    if config['calc_image_bbox_loss']:
        is_image = None

    optimizer.zero_grad()

    loss_itc, loss_itm, loss_mlm, loss_bbox, loss_giou = \
        model(image, text_ids, text_atts, text_ids_masked=text_ids_masked, masked_pos=masked_pos, masked_ids=masked_ids,
              image_atts=image_atts, idx_to_group_img=idx_to_group_img, target_bbox=target_bbox, is_image=is_image, ret_bbox_loss=True)

    loss = loss_itc + loss_itm + loss_mlm + loss_bbox + loss_giou
    accelerator.backward_step(loss, optimizer)

    accelerator_clip_grad_norm = float(config['accelerator']['CLIP_GRAD_NORM'])
    if accelerator_clip_grad_norm > 0:
        accelerator.optimizer_step(optimizer, model, accelerator_clip_grad_norm)
    optimizer.step()

    metric_logger.update(loss_bbox=loss_bbox.item())
    metric_logger.update(loss_giou=loss_giou.item())

else:
    # fix it
    metric_logger.update(loss_bbox=0.5)
    metric_logger.update(loss_giou=0.5)

image, batch = batch[0].to(device, non_blocking=True), [t.to(device) if t is not None else None for t in batch[1:]]
text_ids, text_atts, text_ids_masked, masked_pos, masked_ids = batch

The iter_perc you use is 0.5, which means that only 50% of the time the model takes both a batch of image-text-box data and a batch of image-text data as input; otherwise, the model only takes a batch of image-text data as input.

Therefore, it seems that iter_perc = 1.0 fits your statement in the paper.

According to the ablation study results in Table 4, you have certainly tested the impact of iter_perc = 0.0 (corresponding to the model X-VLM w/o all).

So, have you tested more values of iter_perc (e.g., 1.0)?
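
For what it's worth, the effect of iter_perc on the data mix follows directly from the sampling logic quoted above: a step consumes a region (bbox) batch in addition to the image-text batch whenever random.random() < iter_perc. The short sketch below, which assumes only that logic, estimates the fraction of steps that see region supervision.

import random

def region_step_fraction(iter_perc, n_steps=100_000, seed=0):
    """Estimate the fraction of training steps that also consume a region batch."""
    rng = random.Random(seed)
    return sum(rng.random() < iter_perc for _ in range(n_steps)) / n_steps

# region_step_fraction(0.5) -> ~0.5: half of the steps include bbox supervision.
# region_step_fraction(1.0) -> 1.0: every step includes it, matching the paper's wording.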

The torch version is out of date

I can't set up this repo. When I try to run !pip3 install -r requirements.txt, it returns ERROR: No matching distribution found for torch==1.7.1.
Please help me to resolve it!

Fine-tune on VQA

Traceback (most recent call last):
File "VQA.py", line 283, in
main(args, config)
File "VQA.py", line 134, in main
train_dataset, vqa_test_dataset = create_dataset('vqa', config, args.evaluate)
File "/home/deer/X-VLM/dataset/init.py", line 84, in create_dataset
vqa_test_dataset = vqa_dataset(config['test_file'], test_transform, config['vqa_root'], config['vg_root'],
File "/home/deer/X-VLM/dataset/vqa_dataset.py", line 31, in init
tokenizer = RobertaTokenizer.from_pretrained(text_encoder) if use_roberta else BertTokenizer.from_pretrained(text_encoder)
File "/home/deer/anaconda3/envs/xvlm/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1651, in from_pretrained
fast_tokenizer_file = get_fast_tokenizer_file(
File "/home/deer/anaconda3/envs/xvlm/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3467, in get_fast_tokenizer_file
all_files = get_list_of_files(
File "/home/deer/anaconda3/envs/xvlm/lib/python3.8/site-packages/transformers/file_utils.py", line 1818, in get_list_of_files
model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(
File "/home/deer/anaconda3/envs/xvlm/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 94, in _inner_fn
return fn(*args, **kwargs)
File "/home/deer/anaconda3/envs/xvlm/lib/python3.8/site-packages/huggingface_hub/utils/_deprecation.py", line 98, in inner_f
return f(*args, **kwargs)
File "/home/deer/anaconda3/envs/xvlm/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1289, in model_info
hf_raise_for_status(r)
File "/home/deer/anaconda3/envs/xvlm/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 242, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Tpjmanp6E_vsjeLwXkyXg)

Repository Not Found for url: https://huggingface.co/api/models/data/bert-base-uncased.
Please make sure you specified the correct repo_id and repo_type.
If the repo is private, make sure you are authenticated.

Hi, have you ever had this problem? How should I deal with it?

Script to generate RegionTextJsonDataset?

Hi, I understand that the data cannot be redistributed, but could you share the code to generate the RegionTextJsonDataset json files from the official COCO and VG datasets so we can follow the pre-training method?

apply an entire BERT as text encoder

Can I use an entire BERT as the text encoder, not just 6 layers?
What should I do to use an entire BERT as the text encoder, and another BERT as the cross-modal encoder?

An error in `Retrieval.py`

Dear authors,
Thanks for sharing the code.
I just noticed that in the Retrieval.py, in the evaluation process, the model_without_ddp is not updated with the new model parameters as in the other finetuning tasks. Please let me know if I'm wrong. Thank you!

VQA: Understanding how the model provides an answer? Why is the answer list needed?

This is taken from the VQA.py file, where the evaluation of the test data takes place:

answer_list = [answer for answer in data_loader.dataset.answer_list]
    
for n, (image, question, question_id,answer_list) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):     
    answer_input = tokenizer(answer_list, padding='longest', return_tensors='pt').to(device)    
    image = image.to(device, non_blocking=True)
    question_input = tokenizer(question, padding='longest', return_tensors="pt").to(device)        
    topk_ids, topk_probs = model(image, question_input, answer_input, train=False, k=config['k_test'])      
    
    for ques_id, topk_id, topk_prob in zip(question_id, topk_ids, topk_probs):
        ques_id = int(ques_id.item())          
        _, pred = topk_prob.max(dim=0)
        result.append({"question_id":ques_id, "answer":answer_list[topk_id[pred]]})

I would like to understand why the "answer list" is needed and which answers these are exactly: are they the config['test_file'] answers in order, or something else?
