edisonleeeee / greatx

A graph reliability toolbox based on PyTorch and PyTorch Geometric (PyG).

License: MIT License

Language: Python (100.00%)
Topics: adversarial-attacks, graph-convolutional-networks, graph-neural-networks, pytorch, graph-reliability-toolbox, distribution-shift, inherent-noise, pytorch-geometric

greatx's Introduction

GreatX: Graph Reliability Toolbox

banner

GreatX is great!

[Documentation] | [Examples]


❓ What is "Reliability" on Graphs?

threats

"Reliability" on graphs refers to robustness against the following threats:

  • Inherent noise
  • Distribution Shift
  • Adversarial Attacks

For more details, please refer to our paper Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack.

πŸ’¨ News

  • November 2, 2022: We are planning to release GreatX 0.1.0 this month, stay tuned!
  • June 30, 2022: GraphWar has been renamed to GreatX.
  • June 9, 2022: GraphWar v0.1.0 has been released. We also provide the documentation along with numerous examples .
  • May 27, 2022: GraphWar has been refactored with PyTorch Geometric (PyG), old code based on DGL can be found here. We will soon release the first version of GreatX, stay tuned!

NOTE: GreatX is still in the early stages and the API will likely continue to change. If you are interested in this project, don't hesitate to contact me or make a PR directly.

πŸš€ Installation

Please make sure you have installed PyTorch and PyTorch Geometric (PyG).

# Coming soon
pip install -U greatx

or

# Recommended
git clone https://github.com/EdisonLeeeee/GreatX.git && cd GreatX
pip install -e . --verbose

where -e means "editable" mode so you don't have to reinstall every time you make changes.
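
To sanity-check the install, importing the top-level names used in the Get Started section below should work (a minimal check, nothing official):

# Minimal import check -- these names are used in the Get Started example below
from greatx.nn.models import GCN
from greatx.training import Trainer
print(GCN.__name__, Trainer.__name__)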

⚑ Get Started

Assume that you have a torch_geometric.data.Data instance data that describes your graph.

How fast can you train and evaluate your own GNN?

Take GCN as an example:

from greatx.nn.models import GCN
from greatx.training import Trainer
from torch_geometric.datasets import Planetoid
# Any PyG dataset is available!
dataset = Planetoid(root='.', name='Cora')
data = dataset[0]
model = GCN(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0') # or 'cpu'
trainer.fit(data, mask=data.train_mask)
trainer.evaluate(data, mask=data.test_mask)
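
Other architectures listed under Implementations below should plug into the same Trainer loop. A sketch, assuming SGC (see the Standard GNNs table) shares GCN's (num_features, num_classes) constructor signature:

from greatx.nn.models import SGC  # constructor signature assumed to mirror GCN above
model = SGC(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0')
trainer.fit(data, mask=data.train_mask)
trainer.evaluate(data, mask=data.test_mask)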

A simple targeted manipulation attack

from greatx.attack.targeted import RandomAttack
attacker = RandomAttack(data)
attacker.attack(1, num_budgets=3) # attacking target node `1` with `3` edges
attacked_data = attacker.data()
edge_flips = attacker.edge_flips()

A simple untargeted (non-targeted) manipulation attack

from greatx.attack.untargeted import RandomAttack
attacker = RandomAttack(data)
attacker.attack(num_budgets=0.05) # attack the graph by perturbing 5% of its edges
attacked_data = attacker.data()
edge_flips = attacker.edge_flips()
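
To measure the effect of either attack, the perturbed graph can be fed back to the Trainer from the Get Started example. Evaluating the already-trained model on it corresponds to the evasion setting, while retraining on it corresponds to poisoning (a sketch, assuming fit/evaluate accept the attacked Data object unchanged):

# Evasion: evaluate the previously trained model on the perturbed graph
trainer.evaluate(attacked_data, mask=data.test_mask)

# Poisoning: retrain a fresh model on the perturbed graph, then evaluate
poisoned_trainer = Trainer(GCN(dataset.num_features, dataset.num_classes), device='cuda:0')
poisoned_trainer.fit(attacked_data, mask=data.train_mask)
poisoned_trainer.evaluate(attacked_data, mask=data.test_mask)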

πŸ‘€ Implementations

In detail, the following methods are currently implemented:

βš” Adversarial Attack

Graph Manipulation Attack (GMA)

Targeted Attack

Methods Descriptions Examples
RandomAttack A simple random method that chooses edges to flip randomly. [Example]
DICEAttack Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 [Example]
Nettack ZΓΌgner et al. Adversarial Attacks on Neural Networks for Graph Data, KDD'18 [Example]
FGAttack Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 [Example]
GFAttack Chang et al. A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20 [Example]
IGAttack Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 [Example]
SGAttack Li et al. Adversarial Attack on Large Scale Graph, TKDE'21 [Example]
PGDAttack Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 [Example]

Untargeted Attack

Methods Descriptions Examples
RandomAttack A simple random method that chooses edges to flip randomly [Example]
DICEAttack Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 [Example]
FGAttack Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 [Example]
Metattack ZΓΌgner et al. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19 [Example]
IGAttack Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 [Example]
PGDAttack Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 [Example]

Graph Injection Attack (GIA)

Methods Descriptions Examples
RandomInjection A simple random method that chooses nodes to inject randomly. [Example]
AdvInjection The 2nd place solution of KDD Cup 2020, team: ADVERSARIES. [Example]
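
Injection attacks presumably follow the same attacker interface as the manipulation attacks above. A sketch, where both the module path greatx.attack.injection and the meaning of num_budgets here are assumptions:

from greatx.attack.injection import RandomInjection  # module path assumed by analogy with greatx.attack.targeted/untargeted
attacker = RandomInjection(data)
attacker.attack(num_budgets=10)  # assumed: inject 10 new nodes into the graph
attacked_data = attacker.data()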

Graph Universal Attack (GUA)

Graph Backdoor Attack (GBA)

Methods Descriptions Examples
LGCBackdoor Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 [Example]
FGBackdoor Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 [Example]

Enhancing Techniques or Corresponding Defense

Standard GNNs (without defense)

Supervised

Methods Descriptions Examples
GCN Kipf et al. Semi-Supervised Classification with Graph Convolutional Networks, ICLR'17 [Example]
SGC Wu et al. Simplifying Graph Convolutional Networks, ICLR'19 [Example]
GAT VeličkoviΔ‡ et al. Graph Attention Networks, ICLR'18 [Example]
DAGNN Liu et al. Towards Deeper Graph Neural Networks, KDD'20 [Example]
APPNP Klicpera et al. Predict then Propagate: Graph Neural Networks meet Personalized PageRank, ICLR'19 [Example]
JKNet Xu et al. Representation Learning on Graphs with Jumping Knowledge Networks, ICML'18 [Example]
TAGCN Du et al. Topology Adaptive Graph Convolutional Networks, arXiv'17 [Example]
SSGC Zhu et al. Simple Spectral Graph Convolution, ICLR'21 [Example]
DGC Wang et al. Dissecting the Diffusion Process in Linear Graph Convolutional Networks, NeurIPS'21 [Example]
NLGCN, NLMLP, NLGAT Liu et al. Non-Local Graph Neural Networks, TPAMI'22 [Example]
SpikingGCN Zhu et al. Spiking Graph Convolutional Networks, IJCAI'22 [Example]

Unsupervised/Self-supervised

Methods Descriptions Examples
DGI VeličkoviΔ‡ et al. Deep Graph Infomax, ICLR'19 [Example]
GRACE Zhu et al. Deep Graph Contrastive Representation Learning, ICML'20 [Example]
CCA-SSG Zhang et al. From Canonical Correlation Analysis to Self-supervised Graph Neural Networks, NeurIPS'21 [Example]
GGD Zheng et al. Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination, NeurIPS'22 [Example]

Techniques Against Adversarial Attacks

Methods Descriptions Examples
MedianGCN Chen et al. Understanding Structural Vulnerability in Graph Convolutional Networks, IJCAI'21 [Example]
RobustGCN Zhu et al. Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19 [Example]
SoftMedianGCN Geisler et al. Reliable Graph Neural Networks via Robust Aggregation, NeurIPS'20; Geisler et al. Robustness of Graph Neural Networks at Scale, NeurIPS'21 [Example]
ElasticGNN Liu et al. Elastic Graph Neural Networks, ICML'21 [Example]
AirGNN Liu et al. Graph Neural Networks with Adaptive Residual, NeurIPS'21 [Example]
SimPGCN Jin et al. Node Similarity Preserving Graph Convolutional Networks, WSDM'21 [Example]
SAT Li et al. Spectral Adversarial Training for Robust Graph Neural Network, arXiv'22 [Example]
JaccardPurification Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 [Example]
SVDPurification Entezari et al. All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs, WSDM'20 [Example]
GNNGUARD Zhang et al. GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks, NeurIPS'20 [Example]
GUARD Li et al. GUARD: Graph Universal Adversarial Defense, arXiv'22 [Example]
RTGCN Wu et al. Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation, KDD'22 [Example]

More details of the literature and the official code can be found at Awesome Graph Adversarial Learning.
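
The defense models above live alongside the standard GNNs, so they should drop into the training loop from Get Started. A sketch using RobustGCN, assuming it shares the (num_features, num_classes) constructor and the greatx.nn.models import path:

from greatx.nn.models import RobustGCN  # defense model from the table above; path/signature assumed
model = RobustGCN(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0')
trainer.fit(data, mask=data.train_mask)
trainer.evaluate(attacked_data, mask=data.test_mask)  # check robustness on a perturbed graph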

Techniques Against Inherent Noise

Methods Descriptions Examples
DropEdge Rong et al. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification, ICLR'20 [Example]
DropNode You et al. Graph Contrastive Learning with Augmentations, NeurIPS'20 [Example]
DropPath Li et al. MaskGAE: Masked Graph Modeling Meets Graph Autoencoders, arXiv'22 [Example]
FeaturePropagation Rossi et al. On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs with Missing Node Features, LoG'22 [Example]
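
As a rough illustration of what DropEdge (first row above) does conceptually, i.e. randomly removing a fraction of edges at each training step, here is a standalone sketch in plain PyTorch rather than GreatX's own implementation:

import torch

def drop_edge(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    # Keep each edge independently with probability 1 - p (illustrative only)
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

# e.g., applied to the Cora graph from the Get Started example:
# perturbed_edge_index = drop_edge(data.edge_index, p=0.2)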

Miscellaneous

Methods Descriptions Examples
Centered Kernel Alignment (CKA) Nguyen et al. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth, ICLR'21 [Example]

❓ Known Issues

greatx's People

Contributors

edisonleeeee, jeongwhanchoi, jie-re, tosemml


greatx's Issues

SG Attack example cannot run as expected on cuda

Hello,
I got an error when running the SG Attack example code on a CUDA device:

Traceback (most recent call last):
  File "src/test.py", line 50, in <module>
    attacker.attack(target)
  File "/greatx/attack/targeted/sg_attack.py", line 212, in attack
    subgraph = self.get_subgraph(target, target_label, best_wrong_label)
  File "/greatx/attack/targeted/sg_attack.py", line 124, in get_subgraph
    self.label == best_wrong_label)[0].cpu().numpy()
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cpu!

I found that self.label is on the CUDA device, but best_wrong_label is on the CPU.

attacker_nodes = torch.where(
    self.label == best_wrong_label)[0].cpu().numpy()

If I remove the .cpu() at line 94, everything works fine and no error is reported:

self.logits = self.surrogate(self.feat, self.edge_index,
                             self.edge_weight).cpu()

I found there is a commit that adds .cpu() at the end of line 94, so I don't know whether it's a bug or intentional 🀨
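
A possible device-agnostic fix (a sketch against the snippet quoted above, not an official patch) is to move both tensors onto the same device before the comparison:

# Sketch only: `self.label` and `best_wrong_label` are the tensors from the traceback above
best_wrong_label = best_wrong_label.to(self.label.device)
attacker_nodes = torch.where(
    self.label == best_wrong_label)[0].cpu().numpy()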

problem with metattack

Thanks for this wonderful repo. However, when I run the Metattack example, the result is not promising.
Here is my result when attacking Cora with Metattack:
Training...
100/100 [====================] - Total: 520.68ms - 5ms/step- loss: 0.0713 - acc: 0.996 - val_loss: 0.574 - val_acc: 0.847
Evaluating...
1/1 [====================] - Total: 2.01ms - 2ms/step- loss: 0.522 - acc: 0.847
Before attack
╒═════════╀═══════════╕
β”‚ Names β”‚ Objects β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ loss β”‚ 0.521524 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ acc β”‚ 0.846579 β”‚
β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
Peturbing graph...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 253/253 [01:00<00:00, 4.17it/s]Evaluating...
1/1 [====================] - Total: 2.08ms - 2ms/step- loss: 0.528 - acc: 0.844
After evasion attack
╒═════════╀═══════════╕
β”‚ Names β”‚ Objects β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ loss β”‚ 0.528431 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ acc β”‚ 0.844064 β”‚
β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
Training...
32/100 [=====>..............] - ETA: 0s- loss: 0.212 - acc: 0.956 - val_loss: 0.634 - val_acc: 0.807
100/100 [====================] - Total: 407.58ms - 4ms/step- loss: 0.0601 - acc: 0.996 - val_loss: 0.704 - val_acc: 0.787
Evaluating...
1/1 [====================] - Total: 1.66ms - 1ms/step- loss: 0.711 - acc: 0.819
After poisoning attack
╒═════════╀═══════════╕
β”‚ Names β”‚ Objects β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ loss β”‚ 0.710625 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ acc β”‚ 0.818913 β”‚
β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›

TypeError

Hi.
I tried to run the metattack.py file, but got this error:

TypeError: fit() got an unexpected keyword argument 'mask'
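
This usually points to a mismatch between the installed package version and the example script: the Get Started section above calls trainer.fit(data, mask=data.train_mask), so an older installed release may expect a different fit signature. A hypothetical workaround, assuming the installed Trainer.fit still takes the training mask positionally (upgrading to the latest source install described in the Installation section is the more reliable fix):

# Hypothetical: pass the mask positionally instead of as a keyword argument
trainer.fit(data, data.train_mask)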

Questions about PGDAttack

Hi! Thanks for this great repo.
I have some questions about the implementation of PGDAttack.

  1. The learning rate of PGDAttack.
    GreatX uses lr = base_lr * num_budgets / math.sqrt(epoch + 1), whereas in the original PGDAttack paper and the DeepRobust implementation it seems to be lr = base_lr / math.sqrt(epoch + 1). Since the default value of base_lr is kept the same and num_budgets is often large, the resulting lr can be very different (see the short comparison after this list). Will this difference matter a lot?
  2. The choice of learning rate in PGDAttack.
    As suggested by the authors, PGDAttack prefers a different base_lr for each loss_type. I think it would be better if this distinction were included.
  3. In the PGD example, the same attacker is applied in both the poisoning and evasion settings. In the original implementation, there is a poisoning-specific version of PGDAttack (named MinMax in the DeepRobust repo). Will this version be included as well?

By the way, it is great to see that the repo is more PyG-styled. Does that mean we can attack more PyG models as surrogate or victim models? For example, could we attack GAT and APPNP using PGDAttack as long as they are written in the PyG message-passing framework (i.e., operating on edge_index)?
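
To make the scale difference in point 1 concrete, here is a quick back-of-the-envelope comparison (the numbers are illustrative, not GreatX's actual defaults):

import math

base_lr = 0.1        # illustrative value only
num_budgets = 253    # e.g., the 253-edge budget that appears in the Metattack log above
epoch = 0

lr_greatx = base_lr * num_budgets / math.sqrt(epoch + 1)  # budget-scaled variant
lr_paper = base_lr / math.sqrt(epoch + 1)                 # paper / DeepRobust variant

print(lr_greatx)  # 25.3
print(lr_paper)   # 0.1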

Benchmark Results of Attack Performance

Hi, thanks for sharing the awesome repo with us! I recently ran the attack example scripts pgd_attack.py and random_attack.py under examples/attack/untargeted, but the accuracies under both the evasion and poisoning attacks do not seem to decrease.

I'm pretty confused by the attack results. For CV models, PGD attack easily decreases the accuracy to nearly random guessing, but the GreatX results seem inconsistent with that. Is it because the number of perturbed edges is too small?

Here are the results of pgd_attack.py

Processing...
Done!
Training...
100/100 [==============================] - Total: 874.37ms - 8ms/step- loss: 0.0524 - acc: 0.996 - val_loss: 0.625 - val_acc: 0.815
Evaluating...
1/1 [==============================] - Total: 1.82ms - 1ms/step- loss: 0.597 - acc: 0.843
Before attack
 Objects in BunchDict:
╒═════════╀═══════════╕
β”‚ Names   β”‚   Objects β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ loss    β”‚  0.59718  β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ acc     β”‚  0.842555 β”‚
β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
PGD training...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 200/200 [00:02<00:00, 69.74it/s]
Bernoulli sampling...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 804.86it/s]
Evaluating...
1/1 [==============================] - Total: 2.11ms - 2ms/step- loss: 0.603 - acc: 0.842
After evasion attack
 Objects in BunchDict:
╒═════════╀═══════════╕
β”‚ Names   β”‚   Objects β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ loss    β”‚  0.603293 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ acc     β”‚  0.842052 β”‚
β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
Training...
100/100 [==============================] - Total: 535.83ms - 5ms/step- loss: 0.124 - acc: 0.976 - val_loss: 0.728 - val_acc: 0.779
Evaluating...
1/1 [==============================] - Total: 1.74ms - 1ms/step- loss: 0.766 - acc: 0.827
After poisoning attack
 Objects in BunchDict:
╒═════════╀═══════════╕
β”‚ Names   β”‚   Objects β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ loss    β”‚  0.76604  β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ acc     β”‚  0.826962 β”‚
β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›

Here are the results of random_attack.py

Training...
100/100 [==============================] - Total: 600.92ms - 6ms/step- loss: 0.0615 - acc: 0.984 - val_loss: 0.626 - val_acc: 0.811
Evaluating...
1/1 [==============================] - Total: 1.93ms - 1ms/step- loss: 0.564 - acc: 0.832
Before attack
 Objects in BunchDict:
╒═════════╀═══════════╕
β”‚ Names   β”‚   Objects β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ loss    β”‚  0.564449 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ acc     β”‚  0.832495 β”‚
β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
Peturbing graph...: 253it [00:00, 4588.44it/s]
Evaluating...
1/1 [==============================] - Total: 2.14ms - 2ms/step- loss: 0.585 - acc: 0.826
After evasion attack
 Objects in BunchDict:
╒═════════╀═══════════╕
β”‚ Names   β”‚   Objects β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ loss    β”‚  0.584646 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ acc     β”‚  0.826459 β”‚
β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
Training...
100/100 [==============================] - Total: 530.04ms - 5ms/step- loss: 0.0767 - acc: 0.98 - val_loss: 0.574 - val_acc: 0.791
Evaluating...
1/1 [==============================] - Total: 1.77ms - 1ms/step- loss: 0.695 - acc: 0.813
After poisoning attack
 Objects in BunchDict:
╒═════════╀═══════════╕
β”‚ Names   β”‚   Objects β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ loss    β”‚  0.695349 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ acc     β”‚  0.81338  β”‚
β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
