
graphvqa's Introduction

GraphVQA: Language-Guided Graph Neural Networks for Scene Graph Question Answering


This repo provides the source code of our paper: GraphVQA: Language-Guided Graph Neural Networks for Scene Graph Question Answering (NAACL 2021 MAI Workshop) [PDF].

@inproceedings{2021graphvqa,
    author    = "Weixin Liang and Yanhao Jiang and Zixuan Liu",
    title     = "{GraphVQA}: Language-Guided Graph Neural Networks for Graph-based Visual Question Answering",
    booktitle = "Proceedings of the Third Workshop on Multimodal Artificial Intelligence",
    month     = jun,
    year      = "2021",
    address   = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url       = "https://www.aclweb.org/anthology/2021.maiworkshop-1.12",
    doi       = "10.18653/v1/2021.maiworkshop-1.12",
    pages     = "79--86"
}

Related Paper

LRTA: A Transparent Neural-Symbolic Reasoning Framework with Modular Supervision for Visual Question Answering (NeurIPS KR2ML 2020). Weixin Liang, Feiyang Niu, Aishwarya Reganti, Govind Thattai and Gokhan Tur. [PDF] [Lightning Talk] [Blog] [Github] [Poster] [NeurIPS KR2ML 2020]

Abstract

Images are more than a collection of objects or attributes --- they represent a web of relationships among interconnected objects. Scene Graph has emerged as a new modality for a structured graphical representation of images. Scene Graph encodes objects as nodes connected via pairwise relations as edges. To support question answering on scene graphs, we propose GraphVQA, a language-guided graph neural network framework that translates and executes a natural language question as multiple iterations of message passing among graph nodes. We explore the design space of the GraphVQA framework, and discuss the trade-offs of different design choices. Our experiments on the GQA dataset show that GraphVQA outperforms the state-of-the-art model by a large margin (94.78% vs. 88.43%). Our code is available at https://github.com/codexxxl/GraphVQA
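To make the message-passing idea concrete, here is a minimal conceptual sketch (for illustration only, not the repo's actual architecture; the layer sizes and fusion scheme are assumptions): each reasoning step fuses an instruction vector derived from the question into the node features, then runs one round of graph attention over the scene-graph edges.

import torch
from torch_geometric.nn import GATConv

class LanguageGuidedGATSketch(torch.nn.Module):
    # Conceptual sketch only; the real GraphVQA-GAT model lives in gat_skip.py / pipeline_model_gat.py.
    def __init__(self, dim=300, num_steps=5):
        super().__init__()
        self.num_steps = num_steps
        self.fuse = torch.nn.Linear(2 * dim, dim)   # mix node features with an instruction vector
        self.gat = GATConv(dim, dim)                # one round of attention-based message passing

    def forward(self, x, edge_index, instr_vectors):
        # x: [num_nodes, dim] scene-graph node features
        # instr_vectors: [num_steps, dim] instruction vectors decoded from the question
        for t in range(self.num_steps):
            instr = instr_vectors[t].expand(x.size(0), -1)
            x = torch.relu(self.fuse(torch.cat([x, instr], dim=-1)))
            x = self.gat(x, edge_index)
        return x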

Usage

0. Dependencies

Create a conda environment with Python 3.6.

0.1. Install torchtext, spacy

Run the following commands in the created conda environment (note: torchtext must be a version earlier than 0.9.0):

conda install -c pytorch torchtext
conda install -c conda-forge spacy
conda install -c conda-forge cupy
python -m spacy download en_core_web_sm
conda install -c anaconda nltk

Then start a Python interpreter and run the following:

import nltk
nltk.download('wordnet')
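As a quick sanity check (a snippet added for convenience, not part of the repo), the following should run without errors and report a torchtext version below 0.9.0:

import torchtext, spacy
from nltk.corpus import wordnet

print("torchtext version:", torchtext.__version__)   # expected to be below 0.9.0
nlp = spacy.load("en_core_web_sm")                    # fails if the model was not downloaded
print("spaCy model loaded:", nlp.meta["name"])
print("WordNet synsets for 'dog':", len(wordnet.synsets("dog")))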

0.2. Install PyTorch Geometric

Follow the link below to install PyTorch Geometric via binaries: https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html#installation-via-binaries

Example installation commands for PyTorch 1.4.0 and CUDA 10.0 (note: replace the torch-1.4.0+cu100 field with your own installed PyTorch and CUDA versions):

pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/torch-1.4.0+cu100.html
pip install --no-index torch-sparse -f https://pytorch-geometric.com/whl/torch-1.4.0+cu100.html
pip install --no-index torch-cluster -f https://pytorch-geometric.com/whl/torch-1.4.0+cu100.html
pip install --no-index torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.4.0+cu100.html
pip install torch-geometric
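If you are unsure which wheel URL applies to your setup, this small helper (not part of the repo) prints the installed PyTorch and CUDA versions so you can fill in the torch-x.y.z+cuXYZ field above:

import torch

print("PyTorch version:", torch.__version__)   # e.g. 1.4.0
print("CUDA version:", torch.version.cuda)     # e.g. 10.0 -> use torch-1.4.0+cu100
print("CUDA available:", torch.cuda.is_available())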

1. Download Data

Download scene graphs raw data from: https://nlp.stanford.edu/data/gqa/sceneGraphs.zip
Download questions raw data from: https://nlp.stanford.edu/data/gqa/questions1.2.zip

Put the scene graph JSON files train_sceneGraphs.json and val_sceneGraphs.json into sceneGraphs/.

Put the question JSON files train_balanced_questions.json, val_balanced_questions.json, test_balanced_questions.json, and testdev_balanced_questions.json into questions/original/.

After this step, the data file structure should look like this:

GraphVQA
    questions/
        original/
            train_balanced_questions.json
            val_balanced_questions.json
            test_balanced_questions.json
            testdev_balanced_questions.json
    sceneGraphs/
        train_sceneGraphs.json
        val_sceneGraphs.json

2. Modify Root Directory

Replace line 13 in Constants.py with your own root directory that contains this source code folder:
ROOT_DIR = pathlib.Path('/Users/yanhaojiang/Desktop/cs224w_final/')

Note that ROOT_DIR does not include the repo name GraphVQA. E.g., for the ROOT_DIR above, the source code folder would be /Users/yanhaojiang/Desktop/cs224w_final/GraphVQA.
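A quick check (a snippet added here, not part of the repo) that the layout matches what Constants.py expects:

import pathlib

ROOT_DIR = pathlib.Path('/Users/yanhaojiang/Desktop/cs224w_final/')   # replace with your own path
assert (ROOT_DIR / 'GraphVQA').is_dir(), "ROOT_DIR must be the parent folder of GraphVQA"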

3. Preprocess Question Files (only needs to be run once)

Run command:

python preprocess.py

4. Test Installations and Data Preparations

The following commands should run without error:

python pipeline_model_gat.py 
python gqa_dataset_entry.py 

5. Training

5.1. Main Model: GraphVQA-GAT

Single GPU training:
CUDA_VISIBLE_DEVICES=0 python mainExplain_gat.py --log-name debug.log --batch-size=200 --lr_drop=90

Distributed training:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --use_env mainExplain_gat.py --workers=4 --batch-size=200 --lr_drop=90

To kill a distributed training:
kill $(ps aux | grep mainExplain_gat.py | grep -v grep | awk '{print $2}')

5.2. Baseline and Test Models

Baseline and other test models are trained in the same way by executing the corresponding mainExplain_{lcgn, gcn, gine}.py file. Their related files are kept in baseline_and_test_models/ (note: move them out of this folder before training).
Corresponding to GraphVQA-GAT's model and training files (gat_skip.py, pipeline_model_gat.py, and mainExplain_gat.py), those model files are:

  1. Baseline LCGN: lcgn.py, pipeline_model_lcgn.py, mainExplain_lcgn.py
  2. GraphVQA-GCN: pipeline_model_gcn.py, mainExplain_gcn.py
  3. GraphVQA-GINE: pipeline_model_gine.py, mainExplain_gine.py

6. Evaluation

We re-organized the evaluation script provided by the GQA team; the original script and evaluation data can be found at https://cs.stanford.edu/people/dorarad/gqa/evaluate.html.

Step 1: Generate the evaluation dataset. To evaluate your model, there are two options:

  1. Use the val_balanced set of programs.
  2. Use the val_all set provided by the GQA team.

6.1 Data Preparation

First, download the evaluation data from https://nlp.stanford.edu/data/gqa/eval.zip, then unzip the file and move val_all_questions.json to GraphVQA/questions/original/. Now we will have:

GraphVQA
    questions/
        original/
            val_all_questions.json

6.2 Evaluation

Option 1: After running Step 3 (preprocess.py), we already have:

GraphVQA
    questions/
        val_balanced_programs.json

Then run:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --use_env mainExplain_gat.py --workers=4 --batch-size=4000 --evaluate --resume=outputdir/your_checkpoint.pth --output_dir='./your_outputdir/' --evaluate_sets='val_unbiased'

You should get a results JSON file at './your_outputdir/dump_results.json'.

Then run:

python eval.py --predictions=./your_outputdir/dump_results.json --consistency

Option 2: If you want to use the val_all set, run:

python preprocess.py --val-all=True

You should get:

GraphVQA
    questions/
        val_all_programs.json

Then run:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --use_env mainExplain_gat.py --workers=4 --batch-size=4000 --evaluate --resume=outputdir/your_checkpoint.pth --output_dir='./your_outputdir/' --evaluate_sets='val_all'

You should get a results JSON file at './your_outputdir/dump_results.json'.

Then run:

python eval.py --predictions=./your_outputdir/dump_results.json --consistency
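Before running eval.py, you can spot-check the dump file with a snippet like the following (a sketch added here; it assumes dump_results.json is a dict keyed by question id with a 'prediction' field per entry, which is how the repo's eval.py appears to read it):

import json

with open('./your_outputdir/dump_results.json') as f:
    results = json.load(f)
print("number of predictions:", len(results))
qid, entry = next(iter(results.items()))
print("example entry:", qid, entry)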

graphvqa's People

Contributors

codexxxl, coldmanck, weixin-liang, zucksliu


graphvqa's Issues

Class/attribute/relation hierarchy

Hi @codexxxl @Weixin-Liang, I note that class_hierarchy.json and attribute_hierarchy.json are included in the meta_info folder. I am wondering if they correspond to the original GQA dataset's ontology. If so, I can't find some of the categories, for example part (body part / vehicle part), as shown in Figure 8 of the GQA paper.

The original GQA dataset paper (appendix) claims that they constructed a huge hierarchy comprising the objects, attributes, and relations, quoted as follows: "Our final ontology contains 1740 objects, 620 attributes and 330 relations, grouped into a hierarchy that consists of 60 different categories and subcategories." However, I couldn't find the hierarchy either on the official website or anywhere else. Your repository has class_hierarchy.json and attribute_hierarchy.json, but the hierarchy for relationships seems to be missing. Could you please let me know where you obtained the ontologies, or how you generated them if you built them yourselves?

Thank you very much for your help and I look forward to your reply :)

The result is 0.0?

Dear authors, when I ran the 70th epoch, I found that every "Test:" log line always reports 0.0:

Generated Program (637): computer monitor sitting on sitting on jumping sitting on sitting on sitting on sitting on sitting on sitting on sitting on sitting on sitting on sitting on younger Ground Truth Program (637): select ( ball )
Generated Program (638): computer monitor sitting on sitting on jumping sitting on sitting on sitting on sitting on sitting on sitting on sitting on sitting on sitting on sitting on younger Ground Truth Program (638): exist ( [2] )
Generated Program (639): computer monitor sitting on sitting on jumping sitting on sitting on sitting on sitting on sitting on sitting on sitting on sitting on sitting on sitting on younger Ground Truth Program (639): or ( [1], [3] )
Test: [ 0/661] Time 4.763 ( 4.763) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 94.50 ( 94.50)
Test: [ 50/661] Time 0.293 ( 0.383) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 90.50 ( 93.85)
Test: [100/661] Time 0.294 ( 0.340) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 95.50 ( 94.14)
Test: [150/661] Time 0.308 ( 0.325) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 95.00 ( 93.98)
Test: [200/661] Time 0.300 ( 0.318) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 92.50 ( 94.00)
Test: [250/661] Time 0.294 ( 0.313) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 96.00 ( 94.00)
Test: [300/661] Time 0.298 ( 0.310) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 95.50 ( 94.01)
Test: [350/661] Time 0.298 ( 0.308) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 94.00 ( 93.99)
Test: [400/661] Time 0.299 ( 0.307) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 94.00 ( 93.98)
Test: [450/661] Time 0.286 ( 0.305) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 95.00 ( 93.96)
Test: [500/661] Time 0.299 ( 0.304) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 95.00 ( 93.96)
Test: [550/661] Time 0.293 ( 0.304) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 94.00 ( 93.92)
Test: [600/661] Time 0.293 ( 0.303) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 94.50 ( 93.92)
Test: [650/661] Time 0.296 ( 0.302) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 97.50 ( 93.96)
Test: [660/661] Time 0.098 ( 0.302) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 96.77 ( 93.95)
Test: [661/661] Time 0.098 ( 0.302) Acc@Program 0.00 ( 0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty 0.00 (0.00) Acc@Short 96.77 ( 93.95)
Epoch: [70][ 0/4715] Time 4.50 (4.50) Data 4.12 (4.12) Loss 1.02e-01 (1.02e-01) Acc@Program 0.00 (0.00) Acc@ProgramGroup 0.00 (0.00) Acc@ProgramNonEmpty

Evaluation Issues

Hi @codexxxl @Weixin-Liang I appreciate your help on my previously opened issues. I have other questions regarding how the evaluation is done:

  • Why would you need this check & prediction padding mechanism?

    GraphVQA/eval.py

    Lines 152 to 157 in 8f1c749

    for qid in questions:
        if (qid not in predictions) and (args.consistency or questions[qid]["isBalanced"]):
            predictions[qid] = 'yes'
            # print("no prediction for question {}. Please add prediction for all questions.".format(qid))
            num += 1
            # raise Exception("missing predictions")

    According to the official eval.py, there is no such check. If any predictions are really missing, I don't think it makes sense to fill them in with a default answer, and the reported numbers would then also be invalid.
  • According to the following two code snippets:

    GraphVQA/eval.py

    Lines 139 to 146 in 8f1c749

    pred = {}
    qq = {}
    for p, data in predictions.items():
        pred[p] = data['prediction']
        if p in questions:
            qq[p] = questions[p]
    predictions = pred
    questions = qq

    GraphVQA/eval.py

    Lines 250 to 266 in 8f1c749

    def updateConsistency(questionId, question, questions):
        inferredQuestions = [eid for eid in question["entailed"] if eid != questionId]
        t_FLAG = False
        if correct and len(inferredQuestions) > 0:
            cosnsitencyScores = []
            for eid in inferredQuestions:
                if eid not in questions:
                    t_FLAG = True
                    continue
                gold = questions[eid]["answer"]
                predicted = predictions[eid]
                score = toScore(predicted == gold)
                cosnsitencyScores.append(score)
            if not (t_FLAG == True and len(cosnsitencyScores) == 0):
                scores["consistency"].append(avg(cosnsitencyScores))

    It seems your final output predictions are missing some answers for the original validation "inferred" questions (val_all_questions.json). Am I right? Why does this happen, and why did you handle the missing predictions this way?

Many thanks, and I look forward to your reply!

preprocess.py output?

Dear authors, I want to know whether the following output is correct when I run python preprocess.py:

(pytorch) byd@sbz:~/GraphVQA$ python preprocess.py
False
total 12578 programs
Neural Execution Engine Annotations: empty_buffer_counter 0 multi_buffer_counter 0 total_buffer_counter 0 multi_2_buffer_count 0 max_full_anwer_len 0 max_instr_len 0 max_new_programs_decoder_len 58
finished 12578/0
total 132062 programs
EXE Buffer Referring Empty Object! sg_obj_id_list [522188] imageId 2385163 question Who is wearing jeans?
Ptr Annotations Referring Empty Object! annotations {'answer': {'0': '522167'}, 'question': {'3': '522188'}, 'fullAnswer': {'1': '522167', '4': '522188'}} imageId 2385163 question Who is wearing jeans?
Ptr Annotations Referring Empty Object! annotations {'answer': {'0': '522167'}, 'question': {'3': '522188'}, 'fullAnswer': {'1': '522167', '4': '522188'}} imageId 2385163 question Who is wearing jeans?
EXE Buffer Referring Empty Object! sg_obj_id_list [598411] imageId 2371329 question What is on the brick building?
EXE Buffer Referring Empty Object! sg_obj_id_list [598411] imageId 2371329 question What is on the brick building?
Ptr Annotations Referring Empty Object! annotations {'answer': {'0': '598407'}, 'question': {'4:6': '598411'}, 'fullAnswer': {'1': '598407', '5': '598411'}} imageId 2371329 question What is on the brick building?
Ptr Annotations Referring Empty Object! annotations {'answer': {'0': '598407'}, 'question': {'4:6': '598411'}, 'fullAnswer': {'1': '598407', '5': '598411'}} imageId 2371329 question What is on the brick building?
Neural Execution Engine Annotations: empty_buffer_counter 51407 multi_buffer_counter 10325 total_buffer_counter 413705 multi_2_buffer_count 10177 max_full_anwer_len 15 max_instr_len 10 max_new_programs_decoder_len 58
finished 132062/0
total 943000 programs
EXE Buffer Referring Empty Object! sg_obj_id_list [1293422] imageId 2385742 question Is the box to the left or to the right of the standing person on the sidewalk?
Ptr Annotations Referring Empty Object! annotations {'answer': {}, 'question': {'2': '1293429', '12:14': '1293421', '16': '1293422'}, 'fullAnswer': {'1': '1293429', '8': '1293421'}} imageId 2385742 question Is the box to the left or to the right of the standing person on the sidewalk?
EXE Buffer Referring Empty Object! sg_obj_id_list [267528] imageId 2408131 question Is the green tree behind or in front of the building the fruits are in front of?
Ptr Annotations Referring Empty Object! annotations {'answer': {}, 'question': {'10': '267514', '12': '267528', '2:4': '267508'}, 'fullAnswer': {'1': '267508', '5': '267514'}} imageId 2408131 question Is the green tree behind or in front of the building the fruits are in front of?
EXE Buffer Referring Empty Object! sg_obj_id_list [152851] imageId 2414741 question The man that is to the left of the train is wearing what?
EXE Buffer Referring Empty Object! sg_obj_id_list [152851] imageId 2414741 question The man that is to the left of the train is wearing what?
Ptr Annotations Referring Empty Object! annotations {'answer': {'0': '152851'}, 'question': {'1': '152844', '9': '152849'}, 'fullAnswer': {'1': '152844', '4': '152851'}} imageId 2414741 question The man that is to the left of the train is wearing what?
Ptr Annotations Referring Empty Object! annotations {'answer': {'0': '152851'}, 'question': {'1': '152844', '9': '152849'}, 'fullAnswer': {'1': '152844', '4': '152851'}} imageId 2414741 question The man that is to the left of the train is wearing what?
EXE Buffer Referring Empty Object! sg_obj_id_list [1749668] imageId 2355048 question The backpack is on what?
EXE Buffer Referring Empty Object! sg_obj_id_list [1749668] imageId 2355048 question The backpack is on what?
Ptr Annotations Referring Empty Object! annotations {'answer': {'0': '1749668'}, 'question': {'1': '2061958'}, 'fullAnswer': {'1': '2061958', '5': '1749668'}} imageId 2355048 question The backpack is on what?
Ptr Annotations Referring Empty Object! annotations {'answer': {'0': '1749668'}, 'question': {'1': '2061958'}, 'fullAnswer': {'1': '2061958', '5': '1749668'}} imageId 2355048 question The backpack is on what?
EXE Buffer Referring Empty Object! sg_obj_id_list [1749668] imageId 2355048 question What's on the table?
Ptr Annotations Referring Empty Object! annotations {'answer': {'0': '2061958'}, 'question': {}, 'fullAnswer': {'1': '2061958', '5': '1749668'}} imageId 2355048 question What's on the table?
EXE Buffer Referring Empty Object! sg_obj_id_list [267528] imageId 2408131 question Is the green tree in front of the building that is behind the fruits?
Ptr Annotations Referring Empty Object! annotations {'answer': {}, 'question': {'8': '267514', '13': '267528', '2:4': '267508'}, 'fullAnswer': {'2': '267508', '6': '267514'}} imageId 2408131 question Is the green tree in front of the building that is behind the fruits?
Neural Execution Engine Annotations: empty_buffer_counter 368088 multi_buffer_counter 72233 total_buffer_counter 2952194 multi_2_buffer_count 71053 max_full_anwer_len 16 max_instr_len 10 max_new_programs_decoder_len 58
finished 943000/0

Are the weights of GraphVQA released?

I could not find trained weights in this repository. I am writing to ask whether the weights of GraphVQA are released, or whether we have to train the model and obtain the weights ourselves.

python pipeline_model_gat.py: the relation between pickle and torchtext

When I run

python pipeline_model_gat.py

I get an error:

Traceback (most recent call last):
  File "pipeline_model_gat.py", line 850, in <module>
    load_vocab_flag=True
  File "/home/tangting/repo/GraphVQA/gqa_dataset_entry.py", line 468, in __init__
    self.load_qa_vocab()
  File "/home/tangting/repo/GraphVQA/gqa_dataset_entry.py", line 578, in load_qa_vocab
    text_reloaded = pickle.load(f)
ModuleNotFoundError: No module named 'torchtext.data.field'

But I could not find a 'torchtext.data.field' module in my installation. Why is that?
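For context: this error usually means the cached vocab pickle was written with the legacy torchtext.data.Field API, whose module path newer torchtext releases moved or removed, so unpickling fails. That is consistent with the torchtext<0.9.0 note in section 0.1. A quick version check (a snippet added here, not from the repo):

import torchtext
from pkg_resources import parse_version

print(torchtext.__version__)
assert parse_version(torchtext.__version__) < parse_version("0.9.0"), \
    "Install torchtext<0.9.0 before loading the cached vocab pickle"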

What is the point assuming ground truth scene graphs are given?

Hi @codexxxl, thank you for your work and the code! I'd like to know: most models for GQA assume that no information other than raw images is available; however, you seem to have used ground truth scene graphs in the experiments. Shouldn't we try to generate scene graphs with, say, an existing scene graph generation method as part of the pipeline?

Answer & Program Supervision

Hi @codexxxl thank you for your great work! Could you please help to clarify the following questions:

  1. In addition to question-answer supervision, you seem to also supervise program generation during training, as shown in the following code snippet:

    loss = program_loss + short_answer_loss

    However, according to the paper: "We note that GraphVQA does not require any explicit supervision on how to solve the question step-by-step, and we only supervise on the final answer prediction." Is there any misunderstanding here?

  2. If you are indeed supervising program generation, what are the details of the program generation module being used, shown in the following code snippet?

    class TransformerProgramDecoder(torch.nn.Module):

    Could you please provide any reference for this?

  3. What are the programs_input, programs_output, full_answers_input and full_answers_output here? I also do not understand why programs & full answers inputs and targets are divided as follows:

    programs_input = programs[:-1]
    programs_target = programs[1:]
    full_answers_input = full_answers[:-1]
    full_answers_target = full_answers[1:]

    It seems that the inputs are almost equal to the targets, only shifted by one row. Could you please elaborate on this? Thank you!
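For readers hitting the same question: this is the standard teacher-forcing shift for sequence decoders, illustrated below with a tiny made-up program tensor (a sketch added here, not repo code). At step t the decoder reads tokens up to position t and is trained to predict the token at position t+1.

import torch

# e.g. a tokenized program: <sos> select ( ball ) <eos>, in sequence-first layout [seq_len, batch]
programs = torch.tensor([[1, 7, 4, 9, 5, 2]]).T
programs_input = programs[:-1]    # <sos> select ( ball )   -> what the decoder reads
programs_target = programs[1:]    # select ( ball ) <eos>   -> what it must predict
print(programs_input.squeeze().tolist())    # [1, 7, 4, 9, 5]
print(programs_target.squeeze().tolist())   # [7, 4, 9, 5, 2]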
