grover's People

Contributors

tencent-ailab, xuty-007

grover's Issues

Error reported when training GTransformer from scratch on fine-tuning datasets

Hi,

I find your work really interesting, and I want to implement the code myself. I want to see the performance of the GTransformer on BBBP without pre-training, so I removed the checkpoint path input.

python main.py finetune --data_path exampledata/finetune/bbbp.csv \
                        --features_path exampledata/finetune/bbbp.npz \
                        --save_dir model/finetune/bbbp/ \
                        --dataset_type classification \
                        --split_type scaffold_balanced \
                        --ensemble_size 1 \
                        --num_folds 3 \
                        --no_features_scaling \
                        --ffn_hidden_size 200 \
                        --batch_size 32 \
                        --epochs 10 \
                        --init_lr 0.00015
However, it reports the following error:
[error screenshot]

The incorrect implementation of multi-head attention!

Assume the number of attention heads is 4. I find that self-attention is computed between different heads rather than between different atoms. The attention-score tensor has shape (num_atoms, 4, 4, 4), whereas it should be (batch_size, max_num_atoms, max_num_atoms) per head. The flattened atom features of shape (num_atoms, node_fdim) should first be padded into batched form (batch_size, max_num_atoms, node_fdim).

For the details, I extract only the main code below to illustrate why self-attention is computed between different heads rather than between different atoms.

For the MTBlock class:

# in the __init__ function
self.attn = MultiHeadedAttention(h=num_attn_head,
                                 d_model=self.hidden_size,
                                 bias=bias,
                                 dropout=dropout)
for _ in range(num_attn_head):
    self.heads.append(Head(args, hidden_size=hidden_size, atom_messages=atom_messages))

# in the forward function
for head in self.heads:
    q, k, v = head(f_atoms, f_bonds, a2b, a2a, b2a, b2revb)
    queries.append(q.unsqueeze(1))
    keys.append(k.unsqueeze(1))
    values.append(v.unsqueeze(1))

queries = torch.cat(queries, dim=1)  # (num_atoms, 4, hidden_size)
keys = torch.cat(keys, dim=1)        # (num_atoms, 4, hidden_size)
values = torch.cat(values, dim=1)    # (num_atoms, 4, hidden_size)

x_out = self.attn(queries, keys, values, past_key_value)  # multi-headed attention

The queries, keys, and values are then fed into multi-head attention to produce new outputs.
For the MultiHeadedAttention class:

# in the __init__ function
self.attention = Attention()
self.d_k = d_model // h  # equals hidden_size // num_attn_head
self.linear_layers = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(3)])  # 3 projections: query, key, value

# in the forward function
# 1) Do all the linear projections in batch from d_model => h x d_k
query, key, value = [l(x).view(batch_size, -1, self.h, self.d_k).transpose(1, 2)
                     for l, x in zip(self.linear_layers, (query, key, value))]  # q, k, v shapes become (num_atoms, 4, 4, d_k)
x, _ = self.attention(query, key, value, mask=mask, dropout=self.dropout)

For the Attention class:

class Attention(nn.Module):
    """Compute scaled dot-product self-attention."""

    def forward(self, query, key, value, mask=None, dropout=None):
        scores = torch.matmul(query, key.transpose(-2, -1)) \
                 / math.sqrt(query.size(-1))  # scores shape is (num_atoms, 4, 4, 4)

        if mask is not None:
            scores = scores.masked_fill(mask == 0, -1e9)

        p_attn = F.softmax(scores, dim=-1)
        if dropout is not None:
            p_attn = dropout(p_attn)

        # the output is (num_atoms, 4, 4, d_k), later reshaped to (num_atoms, 4, hidden_size)
        return torch.matmul(p_attn, value), p_attn

As you can see, the scores tensor has shape (num_atoms, 4, 4, 4): attention is computed between different heads rather than between different atoms. That is, each atom's new representation is a combination of its own heads' information, which is meaningless.
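
To make the contrast concrete, here is a minimal sketch (my own illustration, not the repository's code) of attention computed between atoms: the flattened atom features are padded into a per-molecule batch, and the softmax runs over the atom axis, so the scores have shape (batch_size, num_heads, max_num_atoms, max_num_atoms). The scope argument, a list of (start, size) pairs per molecule, is an assumed batching convention; pad_atoms and attention_over_atoms are hypothetical helpers.

import math
import torch
import torch.nn.functional as F

def pad_atoms(f_atoms, scope):
    """Pad flattened (num_atoms, fdim) features into (batch_size, max_num_atoms, fdim).

    scope lists (start, size) per molecule -- an assumed convention for the
    flattened batch representation discussed above.
    """
    fdim = f_atoms.size(1)
    max_atoms = max(size for _, size in scope)
    batch = f_atoms.new_zeros(len(scope), max_atoms, fdim)
    mask = torch.zeros(len(scope), max_atoms, dtype=torch.bool, device=f_atoms.device)
    for i, (start, size) in enumerate(scope):
        batch[i, :size] = f_atoms[start:start + size]
        mask[i, :size] = True
    return batch, mask

def attention_over_atoms(q, k, v, mask, num_heads):
    """q, k, v: (batch_size, max_num_atoms, d_model); mask: (batch_size, max_num_atoms)."""
    b, n, d_model = q.shape
    d_k = d_model // num_heads
    # Split d_model into heads: (batch, heads, atoms, d_k).
    q, k, v = (t.view(b, n, num_heads, d_k).transpose(1, 2) for t in (q, k, v))
    # Scores relate atoms to atoms within each molecule: (batch, heads, atoms, atoms).
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
    scores = scores.masked_fill(~mask[:, None, None, :], -1e9)  # hide padding atoms
    p_attn = F.softmax(scores, dim=-1)
    # Recombine heads: (batch, atoms, d_model).
    return torch.matmul(p_attn, v).transpose(1, 2).reshape(b, n, d_model)

With num_heads = 4, the softmax here mixes information across atoms of the same molecule, whereas the repository's code mixes the four head outputs of a single atom.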

How to install the env with Python 3.7 and CUDA 11.1

install.sh

# Installing env with Python 3.7

conda create --name envgrover37 python=3.7
conda activate envgrover37

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html

(CUDA version: 11.1)

conda install -c conda-forge boost
conda install -c conda-forge boost-cpp
conda install -c rmg descriptastorus
conda install -c acellera rdkit=2019.03.4.0
conda install -c conda-forge tqdm
conda install -c anaconda typing
conda install -c anaconda scipy=1.3.0
conda install -c anaconda scikit-learn=0.21.2
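
To sanity-check the resulting environment (my own addition, not part of the original recipe):

import torch
print(torch.__version__)          # expect 1.8.0+cu111
print(torch.cuda.is_available())  # expect True with a working CUDA 11.1 driver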

How to fix the 'module grover not found' error

Add

import sys
sys.path.append('/user/grover/')

to the top of build_vocab.py, save_features.py, and split_data.py.

Package not found error

Sorry, I'm new to this, but can I install GROVER on my GPU-supported Windows laptop?

I added the conda channels in advance, but it still gives me a package-not-found error.

conda create --name pretrain --file requirements.txt

Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

  - torchvision==0.3.0=py36_cu9.0.176_1
  - python==3.6.8=h0371630_0
  - numpy-base==1.16.4=py36hde5b4d6_0
  - boost==1.68.0=py36h8619c78_1001
  - numpy==1.16.4=py36h7e9f1db_0
  - rdkit==2019.03.4.0=py36hc20afe1_1
  - pandas==0.25.0=py36hb3f55d8_0
  - pytorch==1.1.0=py3.6_cuda9.0.176_cudnn7.5.1_0
  - readline==7.0=h7b6447c_5
  - boost-cpp==1.68.0=h11c811c_1000
  - scikit-learn==0.21.2=py36hcdab131_1
  - scipy==1.3.0=py36h921218d_1

Current channels:

  - https://conda.anaconda.org/rmg/win-64
  - https://conda.anaconda.org/rmg/noarch
  - https://conda.anaconda.org/conda-forge/win-64
  - https://conda.anaconda.org/conda-forge/noarch
  - https://conda.anaconda.org/rdkit/win-64
  - https://conda.anaconda.org/rdkit/noarch
  - https://conda.anaconda.org/pytorch/win-64
  - https://conda.anaconda.org/pytorch/noarch
  - http://conda.anaconda.org/gurobi/win-64
  - http://conda.anaconda.org/gurobi/noarch
  - https://repo.anaconda.com/pkgs/main/win-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/r/win-64
  - https://repo.anaconda.com/pkgs/r/noarch
  - https://repo.anaconda.com/pkgs/msys2/win-64
  - https://repo.anaconda.com/pkgs/msys2/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.

Only random splitting?

[screenshot]
However, the paper claims that model validation adopts the commonly used scaffold splitting?

QM9

Hi, did you try the QM9 dataset? Thanks:)

File call issues

When I run

python scripts/build_vocab.py --data_path exampledata/pretrain/tryout.csv \
                              --vocab_save_folder exampledata/pretrain \
                              --dataset_name tryout

the following error is reported:

Traceback (most recent call last):
  File "scripts/build_vocab.py", line 6, in <module>
    from grover.data.torchvocab import MolVocab
ModuleNotFoundError: No module named 'grover'

I couldn't install Horovod in the grover conda environment

I want to use GROVER in a multi-GPU environment, so I tried to install Horovod in my grover conda env, but I got this error message.

Import Error

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/eung0/miniconda3/envs/grover_1/lib/python3.6/site-packages/horovod/torch/__init__.py", line 44, in <module>
    from horovod.torch.mpi_ops import allreduce, allreduce_async, allreduce_, allreduce_async_
  File "/home/eung0/miniconda3/envs/grover_1/lib/python3.6/site-packages/horovod/torch/mpi_ops.py", line 31, in <module>
    from horovod.torch import mpi_lib_v2 as mpi_lib
ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /home/eung0/miniconda3/envs/grover_1/lib/python3.6/site-packages/horovod/torch/mpi_lib_v2.cpython-36m-x86_64-linux-gnu.so)

command

HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_GPU_OPERATIONS=NCCL pip install --no-cache-dir --force-reinstall horovod[pytorch]==0.19.5

grover conda env

_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_kmp_llvm conda-forge
blas 1.0 mkl conda-forge
boost 1.68.0 py36h8619c78_1001 conda-forge
boost-cpp 1.68.0 h11c811c_1000 conda-forge
bzip2 1.0.8 h7f98852_4 conda-forge
ca-certificates 2020.6.20 hecda079_0 rmg
cairo 1.16.0 h18b612c_1001 conda-forge
descriptastorus 2.2.0 py_0 rmg
expat 2.4.8 h27087fc_0 conda-forge
fontconfig 2.14.0 h8e229c2_0 conda-forge
freetype 2.10.4 h0708190_1 conda-forge
gettext 0.19.8.1 hf34092f_1004 conda-forge
glib 2.66.3 h58526e2_0 conda-forge
icu 58.2 hf484d3e_1000 conda-forge
joblib 1.1.0 pyhd8ed1ab_0 conda-forge
jpeg 9e h166bdaf_1 conda-forge
lcms2 2.12 hddcbb42_0 conda-forge
lerc 3.0 h9c3ff4c_0 conda-forge
libblas 3.9.0 8_mkl conda-forge
libboost 1.67.0 h46d08c1_4
libcblas 3.9.0 8_mkl conda-forge
libdeflate 1.10 h7f98852_0 conda-forge
libffi 3.2.1 he1b5a44_1007 conda-forge
libgcc-ng 12.1.0 h8d9b700_16 conda-forge
libgfortran-ng 7.5.0 h14aa051_20 conda-forge
libgfortran4 7.5.0 h14aa051_20 conda-forge
libglib 2.66.3 hbe7bbb4_0 conda-forge
libiconv 1.16 h516909a_0 conda-forge
liblapack 3.9.0 8_mkl conda-forge
libpng 1.6.37 h21135ba_2 conda-forge
libstdcxx-ng 12.1.0 ha89aaad_16 conda-forge
libtiff 4.3.0 h0fcbabc_4 conda-forge
libuuid 2.32.1 h7f98852_1000 conda-forge
libwebp-base 1.2.2 h7f98852_1 conda-forge
libxcb 1.13 h7f98852_1004 conda-forge
libzlib 1.2.12 h166bdaf_0 conda-forge
llvm-openmp 14.0.4 he0ac6c6_0 conda-forge
lz4-c 1.9.3 h9c3ff4c_1 conda-forge
mkl 2020.4 h726a3e6_304 conda-forge
mkl_fft 1.0.10 py36_0 conda-forge
mkl_random 1.1.1 py36h830a2c2_0 conda-forge
ncurses 6.3 h27087fc_1 conda-forge
numpy 1.16.4 py36h7e9f1db_0
numpy-base 1.16.4 py36hde5b4d6_0
olefile 0.46 pyh9f0ad1d_1 conda-forge
openjpeg 2.4.0 hb52868f_1 conda-forge
openssl 1.1.1g h516909a_0 rmg
pandas 0.25.0 py36hb3f55d8_0 conda-forge
pcre 8.45 h9c3ff4c_0 conda-forge
pillow 8.3.2 py36h676a545_0 conda-forge
pip 21.3.1 pyhd8ed1ab_0 conda-forge
pixman 0.38.0 h516909a_1003 conda-forge
pthread-stubs 0.4 h36c2ea0_1001 conda-forge
py-boost 1.67.0 py36h04863e7_4
python 3.6.8 h0371630_0
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python_abi 3.6 2_cp36m conda-forge
pytz 2022.1 pyhd8ed1ab_0 conda-forge
rdkit 2020.03.3.0 py36hc20afe1_1 rmg
readline 7.0 h7b6447c_5
scikit-learn 0.21.2 py36hcdab131_1 conda-forge
scipy 1.3.0 py36h921218d_1 conda-forge
setuptools 58.0.4 py36h5fab9bb_2 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlite 3.28.0 h8b20d00_0 conda-forge
tk 8.6.12 h27826a3_0 conda-forge
torch 1.7.1+cu101 pypi_0 pypi
torchaudio 0.7.2 pypi_0 pypi
torchvision 0.8.2+cu101 pypi_0 pypi
tqdm 4.32.1 py_0 conda-forge
typing 3.6.4 py36_0 conda-forge
typing_extensions 3.10.0.2 pyha770c72_0 conda-forge
wheel 0.37.1 pyhd8ed1ab_0 conda-forge
xorg-kbproto 1.0.7 h7f98852_1002 conda-forge
xorg-libice 1.0.10 h7f98852_0 conda-forge
xorg-libsm 1.2.3 hd9c2040_1000 conda-forge
xorg-libx11 1.7.2 h7f98852_0 conda-forge
xorg-libxau 1.0.9 h7f98852_0 conda-forge
xorg-libxdmcp 1.1.3 h7f98852_0 conda-forge
xorg-libxext 1.3.4 h7f98852_1 conda-forge
xorg-libxrender 0.9.10 h7f98852_1003 conda-forge
xorg-renderproto 0.11.1 h7f98852_1002 conda-forge
xorg-xextproto 7.3.0 h7f98852_1002 conda-forge
xorg-xproto 7.0.31 h7f98852_1007 conda-forge
xz 5.2.5 h516909a_1 conda-forge
zlib 1.2.12 h166bdaf_0 conda-forge
zstd 1.5.2 h8a70e8d_1 conda-forge

Which version should I install?

Masking implementation confusion

In the paper, it is mentioned that you apply random masking to atoms and bonds; it also says that "GROVER randomly masks a local subgraph". However, in the GroverCollator class, atom_random_mask and bond_random_mask only select the atom/bond contexts used as prediction targets, without applying any masking to the input molecule representation. Am I missing something in the code? I do not see any molecule-masking implementation elsewhere.
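
For reference, this is the kind of input masking I expected to find (a minimal sketch of my own, not GROVER's implementation; mask_atoms and the 15% ratio are illustrative assumptions):

import random
import torch

def mask_atoms(f_atoms: torch.Tensor, mask_ratio: float = 0.15) -> torch.Tensor:
    """Randomly mask a subset of atom feature rows before encoding."""
    num_atoms = f_atoms.size(0)
    num_masked = max(1, int(num_atoms * mask_ratio))
    masked_idx = random.sample(range(num_atoms), num_masked)
    masked = f_atoms.clone()
    masked[masked_idx] = 0.0  # or substitute a learned [MASK] embedding
    return masked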

pretraining dataset

Hi,
Do you have the entire pre-training dataset, or did you randomly sample ZINC15 and ChEMBL? If so, which tranches?

dropout not available in the pretrained models

When using the pre-trained models (both grover_base.pt and grover_large.pt) and trying to predict fingerprints for the bbbp.csv example dataset, the following error is raised, saying that the args.dropout field is not found in state['args']:

[error screenshot]
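
A possible workaround (my own sketch, assuming the checkpoint stores an argparse-style namespace under state['args'], as the error suggests; the 0.0 default is a guess and should match the value used at training time):

import torch

state = torch.load("grover_large.pt", map_location="cpu")
if not hasattr(state["args"], "dropout"):
    state["args"].dropout = 0.0  # assumed default; not confirmed by the authors
torch.save(state, "grover_large_patched.pt")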

Reproduce BBBP result

Hi, thanks for your great work and clear documentation! I'm trying to reproduce your result on BBBP. However, although I followed the exact settings in the README, there is a sizeable gap between my result (89.4) and the reported number (93.6). I have listed all the steps below so they are fully reproducible. Could you check whether anything is wrong on my side? Thanks a lot for your help in advance!

  1. Create the conda environment:
git clone git@github.com:tencent-ailab/grover.git
cd grover
conda create --name chem --file requirements.txt
conda activate chem
  2. Download the model:
wget https://ai.tencent.com/ailab/ml/ml-data/grover-models/pretrain/grover_base.tar.gz
tar -xvf grover_base.tar.gz
  3. Feature extraction and fine-tuning:
python scripts/save_features.py --data_path exampledata/finetune/bbbp.csv \
                                --save_path exampledata/finetune/bbbp.npz \
                                --features_generator rdkit_2d_normalized \
                                --restart 

python main.py finetune --data_path exampledata/finetune/bbbp.csv \
                        --features_path exampledata/finetune/bbbp.npz \
                        --save_dir model/finetune/bbbp/ \
                        --checkpoint_path grover_base.pt \
                        --dataset_type classification \
                        --split_type scaffold_balanced \
                        --ensemble_size 1 \
                        --num_folds 3 \
                        --no_features_scaling \
                        --ffn_hidden_size 200 \
                        --batch_size 32 \
                        --epochs 10 \
                        --init_lr 0.00015

The training log (quiet.log) is:

Fold 0
Model 0 best val loss = 0.470996 on epoch 9
Model 0 test auc = 0.887339
Ensemble test auc = 0.887339
Fold 1
Model 0 best val loss = 0.476553 on epoch 7
Model 0 test auc = 0.891758
Ensemble test auc = 0.891758
Fold 2
Model 0 best val loss = 0.488360 on epoch 9
Model 0 test auc = 0.904175
Ensemble test auc = 0.904175
3-fold cross validation
Seed 0 ==> test auc = 0.887339
Seed 1 ==> test auc = 0.891758
Seed 2 ==> test auc = 0.904175
overall_scaffold_balanced_test_auc=0.894424
std=0.007127

Questions regarding datasets in the paper

Hi, I notice you tested your model on both QM7 and QM8 but not QM9. I am curious about your model's performance on QM9, since it is a more popular and commonly used dataset.

OSX compatibility & CPU training

Not sure if Mac/CPU support is a priority, but it would be nice to be able to run the example training on CPU on my Mac. It might be worth noting in the README that only Linux is currently supported (requirements.txt says so, but I was unsure whether it mattered, since most Linux-specific things work on Mac as well). Docker allowed me to set up the environment correctly, but I ran into issues due to not having an NVIDIA GPU:

c9072f79bbca:python -u /opt/.pycharm_helpers/pydev/pydevd.py --multiprocess --qt-support=auto --port 50115 --file /opt/project/main.py pretrain --data_path exampledata/pretrain/tryout --save_dir model/tryout --atom_vocab_path exampledata/pretrain/tryout_atom_vocab.pkl --bond_vocab_path exampledata/pretrain/tryout_bond_vocab.pkl --batch_size 32 --dropout 0.1 --depth 5 --num_attn_head 1 --hidden_size 100 --epochs 3 --init_lr 0.0002 --max_lr 0.0004 --final_lr 0.0001 --weight_decay 0.0000001 --activation PReLU --backbone gtrans --embedding_output_type both
Connected to pydev debugger (build 203.7148.72)
[WARNING] Horovod cannot be imported; multi-GPU training is unsupported
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1556653099582/work/torch/csrc/cuda/Module.cpp line=33 error=35 : CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
  File "/opt/.pycharm_helpers/pydev/pydevd.py", line 1477, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/opt/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/opt/project/main.py", line 43, in <module>
    pretrain_model(args, logger)
  File "/opt/project/task/pretrain.py", line 33, in pretrain_model
    run_training(args=args, logger=logger)
  File "/opt/project/task/pretrain.py", line 81, in run_training
    torch.cuda.set_device(local_gpu_idx)
  File "/softwares/miniconda3/lib/python3.6/site-packages/torch/cuda/__init__.py", line 265, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at /opt/conda/conda-bld/pytorch_1556653099582/work/torch/csrc/cuda/Module.cpp:33
python-BaseException

It seems the torch version required in requirements.txt is the CUDA build; maybe a separate Dockerfile or requirements.txt could be provided for CPU-based training?
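
For illustration, a guard of roughly this shape around the torch.cuda.set_device call in task/pretrain.py would allow a CPU fallback (my own sketch, not a tested patch; select_device is a hypothetical helper):

import torch

def select_device(local_gpu_idx: int) -> torch.device:
    """Fall back to CPU when no usable CUDA driver/GPU is present."""
    if torch.cuda.is_available():
        torch.cuda.set_device(local_gpu_idx)  # the call that crashes in the traceback above
        return torch.device("cuda", local_gpu_idx)
    return torch.device("cpu")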

As a side issue, the current requirements.txt does not seem to work correctly on Mac. I added the four additional channels specified in the README and still got the following errors (using conda 4.9.2):


ResolvePackageNotFound: 
  - boost-cpp==1.68.0=h11c811c_1000
  - python==3.6.8=h0371630_0
  - pytorch==1.1.0=py3.6_cuda9.0.176_cudnn7.5.1_0
  - scipy==1.3.0=py36h921218d_1
  - boost==1.68.0=py36h8619c78_1001
  - readline==7.0=h7b6447c_5
  - rdkit==2019.03.4.0=py36hc20afe1_1
  - numpy-base==1.16.4=py36hde5b4d6_0
  - scikit-learn==0.21.2=py36hcdab131_1
  - numpy==1.16.4=py36h7e9f1db_0
  - pandas==0.25.0=py36hb3f55d8_0
  - torchvision==0.3.0=py36_cu9.0.176_1

I did find a workaround by removing the build strings after the version numbers (e.g., py3.6_cuda9.0.176_cudnn7.5.1_0), but this later led to errors saying the numpy installation was broken. Here is the modified requirements.txt, which built successfully with conda on Mac:

boost-cpp=1.68.0
descriptastorus=2.2.0
numpy=1.16.4
numpy-base=1.16.4
pandas=0.25.0
python=3.6.8
pytorch=1.1.0
tensorboard=1.13.1
torchvision=0.3.0
rdkit=2019.03.4.0
readline=7.0
scikit-learn=0.21.2
scipy=1.3.0
tqdm=4.32.1
typing=3.6.4

I tried cleaning and reinstalling the environment but still ran into the same problem. Here's the error showing the broken installation:


IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the multiarray numpy extension module failed.  Most
likely you are trying to import a failed build of numpy.
Here is how to proceed:
- If you're working with a numpy git repository, try `git clean -xdf`
  (removes all files not under version control) and rebuild numpy.
- If you are simply trying to use the numpy version that you have installed:
  your installation is broken - please reinstall numpy.
- If you have already reinstalled and that did not fix the problem, then:
  1. Check that you are using the Python you expect (you're using /Users/elliottower/anaconda3/envs/grover/bin/python3),
     and that you have no directories in your PATH or PYTHONPATH that can
     interfere with the Python and numpy versions you're trying to use.
  2. If (1) looks fine, you can open a new issue at
     https://github.com/numpy/numpy/issues.  Please include details on:
     - how you installed Python
     - how you installed numpy
     - your operating system
     - whether or not you have multiple versions of Python installed
     - if you built from source, your compiler versions and ideally a build log

     Note: this error has many possible causes, so please don't comment on
     an existing issue about this - open a new one instead.

Original error was: dlopen(/Users/elliottower/anaconda3/envs/grover/lib/python3.6/site-packages/numpy/core/_multiarray_umath.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libopenblas.dylib
  Referenced from: /Users/elliottower/anaconda3/envs/grover/lib/python3.6/site-packages/numpy/core/_multiarray_umath.cpython-36m-darwin.so
  Reason: image not found
