pytorch / ignite

High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

Home Page: https://pytorch-ignite.ai

License: BSD 3-Clause "New" or "Revised" License

pytorch neural-network python machine-learning deep-learning metrics hacktoberfest closember

ignite's Introduction


TL;DR

Ignite is a high-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

PyTorch-Ignite teaser

Click on the image to see the complete code

Features

  • Less code than pure PyTorch while ensuring maximum control and simplicity

  • A library approach with no inversion of control of your program - use Ignite where and when you need it

  • Extensible API for metrics, experiment managers, and other components

Why Ignite?

Ignite is a library that provides three high-level features:

  • Extremely simple engine and event system
  • Out-of-the-box metrics to easily evaluate models
  • Built-in handlers to compose training pipeline, save artifacts and log parameters and metrics

Simplified training and validation loop

No more coding for/while loops on epochs and iterations. Users instantiate engines and run them.

Example
from ignite.engine import Engine, Events, create_supervised_evaluator
from ignite.metrics import Accuracy


# Setup training engine:
def train_step(engine, batch):
    # Users can do whatever they need on a single iteration,
    # e.g. forward/backward passes for any number of models, optimizers, etc.
    ...

trainer = Engine(train_step)

# Setup single model evaluation engine
evaluator = create_supervised_evaluator(model, metrics={"accuracy": Accuracy()})

def validation():
    state = evaluator.run(validation_data_loader)
    # print computed metrics
    print(trainer.state.epoch, state.metrics)

# Run model's validation at the end of each epoch
trainer.add_event_handler(Events.EPOCH_COMPLETED, validation)

# Start the training
trainer.run(training_data_loader, max_epochs=100)

Power of Events & Handlers

The cool thing about handlers is that they offer unparalleled flexibility (compared to, for example, callbacks). Handlers can be any function: e.g. a lambda, a simple function, a class method, etc. Thus, you are not required to inherit from an interface and override its abstract methods, which would unnecessarily bloat your code and its complexity.

Execute any number of functions whenever you wish

Examples
trainer.add_event_handler(Events.STARTED, lambda _: print("Start training"))

# attach handler with args, kwargs
mydata = [1, 2, 3, 4]
logger = ...

def on_training_ended(data):
    print(f"Training has ended. mydata={data}")
    # Handlers can use variables from an outer scope
    logger.info("Training has ended")


trainer.add_event_handler(Events.COMPLETED, on_training_ended, mydata)
# call any number of functions on a single event
trainer.add_event_handler(Events.COMPLETED, lambda engine: print(engine.state.times))

@trainer.on(Events.ITERATION_COMPLETED)
def log_something(engine):
    print(engine.state.output)

Built-in events filtering

Examples
# run the validation every 5 epochs
@trainer.on(Events.EPOCH_COMPLETED(every=5))
def run_validation():
    ...  # run validation

# change some training variable once, on the 20th epoch
@trainer.on(Events.EPOCH_STARTED(once=20))
def change_training_variable():
    ...

# trigger a handler with a custom frequency;
# first_x_iters is a user-defined filter: (engine, event) -> bool
@trainer.on(Events.ITERATION_COMPLETED(event_filter=first_x_iters))
def log_gradients():
    ...

Stack events to share some actions

Examples

Events can be stacked together to enable multiple calls:

@trainer.on(Events.COMPLETED | Events.EPOCH_COMPLETED(every=10))
def run_validation():
    ...

Custom events to go beyond standard events

Examples

Custom events related to backward and optimizer step calls:

from ignite.engine import EventEnum


class BackpropEvents(EventEnum):
    BACKWARD_STARTED = 'backward_started'
    BACKWARD_COMPLETED = 'backward_completed'
    OPTIM_STEP_COMPLETED = 'optim_step_completed'

def update(engine, batch):
    # ...
    loss = criterion(y_pred, y)
    engine.fire_event(BackpropEvents.BACKWARD_STARTED)
    loss.backward()
    engine.fire_event(BackpropEvents.BACKWARD_COMPLETED)
    optimizer.step()
    engine.fire_event(BackpropEvents.OPTIM_STEP_COMPLETED)
    # ...

trainer = Engine(update)
trainer.register_events(*BackpropEvents)

@trainer.on(BackpropEvents.BACKWARD_STARTED)
def function_before_backprop(engine):
    ...

Out-of-the-box metrics

Example
from ignite.metrics import Precision, Recall

precision = Precision(average=False)
recall = Recall(average=False)
F1_per_class = (precision * recall * 2 / (precision + recall))
F1_mean = F1_per_class.mean()  # torch mean method
F1_mean.attach(engine, "F1")

Installation

From pip:

pip install pytorch-ignite

From conda:

conda install ignite -c pytorch

From source:

pip install git+https://github.com/pytorch/ignite

Nightly releases

From pip:

pip install --pre pytorch-ignite

From conda (note that this will install the PyTorch nightly release as a dependency instead of the stable version):

conda install ignite -c pytorch-nightly

Docker Images

Using pre-built images

Pull a pre-built docker image from our Docker Hub and run it with docker v19.03+.

docker run --gpus all -it -v $PWD:/workspace/project --network=host --shm-size 16G pytorchignite/base:latest /bin/bash
List of available pre-built images

Base

  • pytorchignite/base:latest
  • pytorchignite/apex:latest
  • pytorchignite/hvd-base:latest
  • pytorchignite/hvd-apex:latest
  • pytorchignite/msdp-apex:latest

Vision:

  • pytorchignite/vision:latest
  • pytorchignite/hvd-vision:latest
  • pytorchignite/apex-vision:latest
  • pytorchignite/hvd-apex-vision:latest
  • pytorchignite/msdp-apex-vision:latest

NLP:

  • pytorchignite/nlp:latest
  • pytorchignite/hvd-nlp:latest
  • pytorchignite/apex-nlp:latest
  • pytorchignite/hvd-apex-nlp:latest
  • pytorchignite/msdp-apex-nlp:latest

For more details, see here.

Getting Started

A few pointers to get you started:

Documentation

Additional Materials

Examples

Tutorials

Reproducible Training Examples

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

  • ImageNet - logs on Ignite Trains server coming soon ...
  • Pascal VOC2012 - logs on Ignite Trains server coming soon ...

Features:

Code-Generator application

The easiest way to create your training scripts with PyTorch-Ignite:

Communication

User feedback

We have created a form for "user feedback". We appreciate any type of feedback, and this is how we would like to see our community:

  • If you like the project and want to say thanks, this is the right place.
  • If you do not like something, please share it with us, and we will see how to improve it.

Thank you!

Contributing

Please see the contribution guidelines for more information.

As always, PRs are welcome :)

Projects using Ignite

Research papers
Blog articles, tutorials, books
Toolkits
Others

See other projects at "Used by"

If your project implements a paper, represents other use-cases not covered in our official tutorials, contains Kaggle competition code, or simply presents interesting results built with Ignite, we would like to add it to this list, so please send a PR with a brief description of the project.

Citing Ignite

If you use PyTorch-Ignite in a scientific publication, we would appreciate citations to our project.

@misc{pytorch-ignite,
  author = {V. Fomin and J. Anmol and S. Desroziers and J. Kriss and A. Tejani},
  title = {High-level library to help with training neural networks in PyTorch},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/pytorch/ignite}},
}

About the team & Disclaimer

PyTorch-Ignite is a NumFOCUS Affiliated Project, operated and maintained by volunteers in the PyTorch community in their capacities as individuals (and not as representatives of their employers). See the "About us" page for a list of core contributors. For usage questions and issues, please see the various channels here. For all other questions and inquiries, please send an email to [email protected].

ignite's People

Contributors

alykhantejani, anmolsjoshi, bibhabasumohapatra, devpranjal, erip, fco-dv, gruebel, gucifer, guptaaryan16, ishan-kumar2, jasonkriss, justusschock, kamalojasv181, kickitlikeshika, kzkadc, leej3, louis-she, moh-yakoub, priyansi, puhuk, sadra-barikbin, sdesrozis, toxa23, trsvchn, uribgp, vfdev-5, wrran, ydcjeff, ykumards, zasdfgbnm


ignite's Issues

License?

What is the license for this library?

[Feature Request] Timing data loading time

As data loading and preprocessing in DataLoader can be the bottleneck of training in many cases, it is always helpful to measure the time for data preparation and optimize when necessary. Also, since the data loading time may improve as the page cache warms up during training, it may be worthwhile to keep track of the data preparation time for a few epochs before drawing the conclusion.

Although in the current version of ignite we can perform this measurement manually, either through an event handler or in the update function, I think it would make life much easier if ignite just kept track of and logged the loading time for each batch/epoch in the trainer, especially considering that it already keeps track of the overall time for each epoch.
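
For what it's worth, a minimal sketch of the manual approach mentioned above, using the Timer helper and the batch-fetching event names of current Ignite versions (trainer is assumed to be an existing Engine):

from ignite.engine import Events
from ignite.handlers import Timer

# average time spent fetching each batch from the DataLoader
data_timer = Timer(average=True)
data_timer.attach(
    trainer,
    start=Events.EPOCH_STARTED,        # reset at the beginning of every epoch
    resume=Events.GET_BATCH_STARTED,   # start counting when a batch is requested
    pause=Events.GET_BATCH_COMPLETED,  # stop counting once the batch is ready
    step=Events.GET_BATCH_COMPLETED,   # one measurement per batch
)

@trainer.on(Events.EPOCH_COMPLETED)
def log_data_loading_time(engine):
    print(f"Epoch {engine.state.epoch}: avg data loading time {data_timer.value():.4f} s")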

Timer Exception: "reset() takes 1 positional argument but 2 were given"

I've just checked out the new timer handler and attached it to the trainer engine following the example. However, the above exception is produced when the engine fires the triggering event (which is EPOCH_STARTED in the example).

After looking into the code, I think the problem is that the engine's _fire_event method calls the handler by passing the engine instance as an argument:

func(self, *(event_args + args), **kwargs)

which results in two arguments when calling bound Timer methods such as reset, i.e.
reset(timer_instance, trainer_instance), and hence the above exception.
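
A possible workaround until this is fixed, assuming timer and trainer are the objects from the example, is to wrap the bound method in a lambda that absorbs the extra engine argument:

from ignite.engine import Events

# the engine instance passed by _fire_event is swallowed by the lambda
trainer.add_event_handler(Events.EPOCH_STARTED, lambda engine: timer.reset())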

Compare validation result between predictions and true labels

First of all thanks for the great framework!

For some cases, such as machine translation, we may want to see a comparison between our model's predictions and the real labels. However, I currently can't get this from the Trainer class, since the true label is neither stored in the History nor passed to the hook function. Do you have any ideas on how I could implement this?
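
One way to get at both the predictions and the targets, sketched here with the newer Engine API rather than the Trainer/History one (model and validation_loader are assumed to exist), is to have the evaluation step return the pair and read it from a handler:

import torch
from ignite.engine import Engine, Events

def eval_step(engine, batch):
    model.eval()
    x, y = batch
    with torch.no_grad():
        y_pred = model(x)
    return y_pred, y  # becomes engine.state.output for this iteration

evaluator = Engine(eval_step)

@evaluator.on(Events.ITERATION_COMPLETED)
def compare(engine):
    y_pred, y = engine.state.output
    # e.g. print a few predictions next to their references
    print(y_pred[0], y[0])

evaluator.run(validation_loader)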

Examples are not working

Tried to run scripts from /examples, but failed.
I got 2 errors:

  1. ImportError: No module named 'ignite.handlers.logging'
  2. ImportError: cannot import name 'Evaluate'

I tried to find the source code for them but didn't manage to find anything. Is this functionality just not implemented yet, or am I doing something wrong?

Communication between callbacks?

Hi developers, thanks for this nice library! I was wondering if you have ideas about how to communicate some state between callbacks?
For my specific use-case, I have 2 callbacks attached to the COMPLETED event of my Evaluator. One computes an average-precision value from the data gathered in the Evaluator's state during the epoch. The other callback saves a checkpoint. I would like to have this average-precision information available to the second callback, so that it can only checkpoint if the AP is the best so far.

Right now my checkpoint callback decides whether to checkpoint based on the validation loss, which is something I can include in the Evaluator state through the output of the function run in every iteration. But I don't know how to include the AP information in the Evaluator state, since it is calculated in a callback.

Hopefully I've explained my situation properly. I am also open to suggestions about changing my code organization, if it solves my problem.
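
One pattern that can work here, relying on the fact that handlers attached to the same event run in the order they were added (compute_average_precision and save_checkpoint are hypothetical user-defined helpers), is to stash the value on the engine's state and keep the running best outside of it:

from ignite.engine import Events

best = {"ap": float("-inf")}

@evaluator.on(Events.COMPLETED)
def compute_ap(engine):
    # user-defined computation from whatever was gathered during the run
    engine.state.average_precision = compute_average_precision(engine)

@evaluator.on(Events.COMPLETED)
def maybe_checkpoint(engine):
    ap = engine.state.average_precision
    if ap > best["ap"]:
        best["ap"] = ap
        save_checkpoint(model)  # user-defined checkpointing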

Log handlers are not up to date.

Functions like log_training_simple_moving_average use the training_data attribute of Trainer, which no longer exists.

...
iterations_per_epoch = len(trainer.training_data)
...

How to resume the best model saved and evaluate on the test dataset

Thanks for this excellent project. Since I'm new to Ignite, I have encountered a problem when trying to load the best saved model and run it on the test dataset at the end of training. I don't know how this can be achieved with the current framework. Can anyone help with this?
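
A minimal sketch of that flow, assuming the best model was saved to disk with torch.save / ModelCheckpoint (the path is illustrative) and that model and test_loader already exist:

import torch
from ignite.engine import create_supervised_evaluator
from ignite.metrics import Accuracy

# load the weights written by the checkpoint handler
model.load_state_dict(torch.load("checkpoints/best_model.pt"))

test_evaluator = create_supervised_evaluator(model, metrics={"accuracy": Accuracy()})
state = test_evaluator.run(test_loader)
print(state.metrics)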

Add transform function arguments to create_supervised_evaluator

I just checked out the new API and found the new create_supervised_trainer function really helpful. For most supervised learning tasks, this function makes trainer definition a one-liner. However, the evaluator counterpart, create_supervised_evaluator, doesn't seem to be as helpful. Since in most cases during validation we only care about the final loss/accuracy/distance rather than the raw predictions, and it is also much more memory efficient to keep the metrics than the raw predictions, I suggest adding a transform function argument to create_supervised_evaluator, which takes both the targets and the predictions as input; create_supervised_evaluator would then return the transform's output as the output for the given batch, just like create_supervised_trainer returns the loss.

BTW, I also suggest using the two factory functions in the examples so that new users can make use of them more easily without having to dig into the code.
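
For reference, later versions of the API address this concern by computing metrics incrementally from the (y_pred, y) pairs instead of storing raw predictions; a sketch (model and validation_loader are assumed to exist):

import torch.nn.functional as F
from ignite.engine import create_supervised_evaluator
from ignite.metrics import Accuracy, Loss

evaluator = create_supervised_evaluator(
    model,
    metrics={"accuracy": Accuracy(), "nll": Loss(F.nll_loss)},
)
state = evaluator.run(validation_loader)
print(state.metrics)  # {'accuracy': ..., 'nll': ...}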

[Feature Request] Optional Epoch Size arg for Engine.run()

Thanks for your work on the library. It contains a lot of helpful functions for cleaning out boilerplate code.

I have had some difficulty using ignite with infinite data iterators (in this case coming from Torchtext), as the run never ends and never fires epoch-end events. It would be nice if Engine.run() implementations like Trainer and Evaluator took an optional batches_per_epoch argument. This would be assigned on the state object. Engine._run_once_on_dataset() could check whether this attribute is not None and, in that case, only go through the next State.batches_per_epoch batches. Otherwise, the functionality would be the same.
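
For reference, later Ignite versions address exactly this with an epoch_length argument on Engine.run; a sketch with an infinite iterator (names are illustrative):

# an infinite iterator never raises StopIteration, so the epoch is capped explicitly
trainer.run(infinite_iterator, max_epochs=10, epoch_length=1000)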

Engine state object

We touched on this in #37 and #52 but I wanted to make a dedicated issue to discuss it.

Motivation

  • avoid the proliferation of current_* attributes on Engines
  • allow for handlers to pass state between each other

Proposal

  • similar to torchnet approach
  • we create a new state object at the start of Engine#run and it is updated throughout the run
  • Trainer state can have the following to start: dataloader, history, iteration, epoch, max_epochs
  • Evaluator state can have the following to start: dataloader, history, iteration
  • final state is returned by Engine.run

Open questions

  • do we still need to pass engine to event handlers or can we just pass state?
  • should state just be a dict or an actual object? (@elanmart I think you had some thoughts on this)

What do you all think?
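
A minimal sketch of what such a state object could look like, using only the attributes listed in the proposal (this is a strawman, not a final API):

class State:
    """Mutable run state, created at the start of Engine.run and returned at the end."""

    def __init__(self, dataloader=None, max_epochs=None):
        self.dataloader = dataloader
        self.max_epochs = max_epochs
        self.epoch = 0
        self.iteration = 0
        self.history = []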

Metrics computation during the training

Hi @alykhantejani

Do you plan to integrate some metrics computation into the trainer, or should this be done on the user side with event handlers (and logged in the same way ignite does it)?
In any case, it would be good to display the computed loss values.
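
For reference, one way this ended up being possible on the user side, assuming the update function returns the loss value, is the RunningAverage metric attached to the trainer:

from ignite.engine import Events
from ignite.metrics import RunningAverage

# exposes a smoothed loss as trainer.state.metrics["loss"]
RunningAverage(output_transform=lambda output: output).attach(trainer, "loss")

@trainer.on(Events.ITERATION_COMPLETED(every=100))
def log_running_loss(engine):
    print(f"iteration {engine.state.iteration}: loss={engine.state.metrics['loss']:.4f}")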

[Feature Request] Distributed training wrapper

Tensorflow has a feature called tf.estimator that nicely wraps distributed training, so the user doesn't have to know which node is the master, etc. Should we include something like that in ignite as well, or should it live under the pytorch repository?
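
For reference, later Ignite versions grew a helper in this spirit under ignite.distributed; a rough sketch (backend and process count are illustrative):

import ignite.distributed as idist

def training(local_rank, config):
    # per-process setup: model, optimizer, data loaders, trainer, ...
    ...

config = {}
with idist.Parallel(backend="nccl", nproc_per_node=2) as parallel:
    parallel.run(training, config)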

Add pairwise distance to Metrics

I think that in the evaluation of regression tasks, pairwise distance, especially the norm-2 distance as in torch.nn.functional.pairwise_distance, is at least as frequently used as MSE, which is mostly used as a loss rather than an evaluation metric. Therefore, I was wondering whether it is worth adding to the Metrics package as a commonly used metric.
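
For reference, a metric along these lines exists in current Ignite as MeanPairwiseDistance; attaching it to an evaluator whose output is the usual (y_pred, y) pair looks roughly like this:

from ignite.metrics import MeanPairwiseDistance

# norm-2 pairwise distance averaged over the validation set
MeanPairwiseDistance(p=2).attach(evaluator, "mean_pairwise_distance")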

Invalid cross-device link with ModelCheckpoint, atomic=True

I have an issue with ModelCheckpoint when atomic=True

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/ignite-0.1.0a1-py3.5.egg/ignite/engines/trainer.py", line 53, in run
    self._handle_exception(state, e)
  File "/usr/local/lib/python3.5/dist-packages/ignite-0.1.0a1-py3.5.egg/ignite/engines/engine.py", line 138, in _handle_exception
    raise e
  File "/usr/local/lib/python3.5/dist-packages/ignite-0.1.0a1-py3.5.egg/ignite/engines/trainer.py", line 40, in run
    hours, mins, secs = self._run_once_on_dataset(state)
  File "/usr/local/lib/python3.5/dist-packages/ignite-0.1.0a1-py3.5.egg/ignite/engines/engine.py", line 132, in _run_once_on_dataset
    self._handle_exception(state, e)
  File "/usr/local/lib/python3.5/dist-packages/ignite-0.1.0a1-py3.5.egg/ignite/engines/engine.py", line 138, in _handle_exception
    raise e
  File "/usr/local/lib/python3.5/dist-packages/ignite-0.1.0a1-py3.5.egg/ignite/engines/engine.py", line 123, in _run_once_on_dataset
    self._fire_event(Events.ITERATION_COMPLETED, state)
  File "/usr/local/lib/python3.5/dist-packages/ignite-0.1.0a1-py3.5.egg/ignite/engines/engine.py", line 106, in _fire_event
    func(self, state, *(event_args + args), **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/ignite-0.1.0a1-py3.5.egg/ignite/handlers/checkpoint.py", line 147, in __call__
    self._save(obj=obj, path=path)
  File "/usr/local/lib/python3.5/dist-packages/ignite-0.1.0a1-py3.5.egg/ignite/handlers/checkpoint.py", line 124, in _save
    os.rename(tmp.name, path)
OSError: [Errno 18] Invalid cross-device link: '/tmp/tmpe7shyxri' -> '/home/user/output/weights/_SSD300_1.pth'

Maybe we need to use shutil.move as suggested here
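
The suggested change, sketched against the _save method from the traceback (shutil.move falls back to copy-and-delete when source and destination live on different filesystems):

import shutil

# instead of: os.rename(tmp.name, path)
shutil.move(tmp.name, path)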

[Feature Request] Engine checkpointing

When training models, I usually want to save a checkpoint of the model every few epochs or minutes/hours, so that I can resume training from the latest checkpoint when there are errors either in the code or the hardware, or when I find it necessary to fine-tune the learning rate without losing progress.

In TensorFlow, there are some helper functions in the tf.train package to simplify checkpoint handling, such as looking up the latest checkpoint. I was wondering if ignite could also include some of those features so that we don't need to write so much boilerplate code.
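
For reference, later Ignite versions cover this with the Checkpoint handler, which can save and restore the trainer state alongside the model and optimizer; a sketch (directory, interval and file name are illustrative):

import torch
from ignite.engine import Events
from ignite.handlers import Checkpoint, DiskSaver

to_save = {"model": model, "optimizer": optimizer, "trainer": trainer}
checkpoint = Checkpoint(to_save, DiskSaver("checkpoints", create_dir=True), n_saved=2)
trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpoint)

# later, to resume training:
# ckpt = torch.load("checkpoints/checkpoint_10.pt")
# Checkpoint.load_objects(to_load=to_save, checkpoint=ckpt)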

Is it possible to have access to outputs?

I want the training_update_function to return the output of the network in addition to the loss so that I can visualize it.
The output is a one-time thing for visualization only, but it seems like it will be appended to the history, which is a waste of memory.
Ideally, the Engine would save the output and release it immediately afterward. Is there a way to do this in Trainer?
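
For reference, with the Engine-based API the update function's return value is only kept as engine.state.output for the current iteration (nothing is accumulated), so the network output can be returned purely for visualization; a sketch (model, criterion, optimizer and show_predictions are assumed to exist):

from ignite.engine import Engine, Events

def train_step(engine, batch):
    x, y = batch
    y_pred = model(x)
    loss = criterion(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # kept only as engine.state.output for this iteration, then overwritten
    return {"loss": loss.item(), "y_pred": y_pred.detach()}

trainer = Engine(train_step)

@trainer.on(Events.ITERATION_COMPLETED(every=50))
def visualize(engine):
    show_predictions(engine.state.output["y_pred"])  # user-defined visualization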

Evaluation/Metrics approach

This has been discussed a bit in other issues, but I wanted to make a dedicated issue for us to discuss this as I think it's very important we get this right.

Background

Some previous discussions here and here.

Throughout, I am going to use the motivating example of training a supervised model where you periodically want to compute some metrics against a validation set.

Current Setup

Currently, in order to accomplish this, you need to do the following:

  1. create an Evaluator
  2. register the Evaluate handler to run the Evaluator on the validation set and store the predictions in the history
  3. add another event handler to actually use this history to compute the metrics you care about
  4. log/plot these metrics however you choose

In code, this looks something like this:

model = ...
validation_loader = ...
trainer = ...
evaluator = create_supervised_evaluator(model, cuda=True)
trainer.add_event_handler(Events.EPOCH_COMPLETED, Evaluate(evaluator, validation_loader, epoch_interval=1))
@trainer.on(Events.EPOCH_COMPLETED)
def log(engine):
    print(engine.current_epoch, categorical_accuracy(evaluator.history))

Pros

  • keeps library code cleanly separated with minimal implicit dependencies
  • user doesn't have to write much code

Cons

  • can be confusing what happens where
    • we have both an Evaluator and an Evaluate and yet neither one computes any sort of metrics
  • there are a lot of ways this could go wrong
    • you have to understand the contract between what gets stored in the Evaluator's history and the metrics functions
    • you have to make sure you attach the Evaluate handler and any logging handlers to the same event

Goals

Evaluating a model is something that (essentially) everyone does, so I think we need to have a good story here. IMO, we should make the supervised case super easy while still keeping non-supervised cases possible. That being said, I think we want to accomplish this without removing flexibility and without adding a ton of code.

Ideas

Working backward from what I would like the api to be, it might be nice if you could just do something like this:

model = ...
validation_loader = ...
trainer = ...
@trainer.on(Events.EPOCH_COMPLETED)
def run_evaluation(engine):
    results = evaluate(model, {'acc': categorical_accuracy})
    # do something with those results

It'd be even nicer if I could do something like this:

model = ...
validation_loader = ...
trainer = ...
trainer.add_event_handler(Events.EPOCH_COMPLETED, Evaluate(model, {'acc': categorical_accuracy}))

But without making assumptions about how users want to plot/log their evaluation results, this isn't possible.

What do you all think? Anything here you take issue with? Any ideas on how we can best accomplish this? Do we need to make plotting/logging part of this discussion as well?

Road to 0.1.0...

@alykhantejani I wanted to start the discussion of what work is left to do before we can cut an official release (with conda and PyPI packages as well). We've used this alpha phase well to work out the fundamental APIs, but I think it's time to give users some guarantees of a stable API so they can have confidence that things aren't going to change/break underneath them.

Here is a rough sketch of what I feel is blocking us from cutting a 0.1.0 release. It's mostly documentation related.

Todo

  • convert all docstrings to google style
  • flesh out remaining docstrings
  • update README
    • I'd prefer to take the torchvision approach here and make the README very minimal. Just installation, link to docs, etc...
  • sphinx docs generation
  • sphinx docs hosting
  • release plan
    • where (pypi and conda?)
    • name (looks like ignite is taken on pypi)

What do you think? Do you agree with the overall thoughts? Any disagreements on the todo list or anything missing in your eyes?

Better name for events enum

Since TrainingEvents are not only used in training but in the whole loop, I suggest using the more general term Events.

I assume that Events is going to be used quite often, so having a shorter name would be more convenient.

General Roadmap?

Hey guys,

Loving the project so far. I have a handful of extensions, utilities that I'd love to discuss either adding to this project or putting in a separate library. Would it be possible to get some insight into the future directions of ignite? How are decisions being made? I'm happy to start opening PRs, but obviously wanted to discuss them first.

Some of the things I'm curious about:

  • adding some common update and inference callables (e.g. a supervised updater that essentially just does what tnt did by default)
  • support for multiple validation sets
  • a different approach to metrics that doesn't require coordination between the update/inference function and the event handlers
  • considering using a "state" based approach (like in tnt) rather than a "history" based approach (in order to minimize memory consumption for use cases like #20)
  • allow validation to happen every n iterations instead of being epoch-based (similar to what is suggested here)
  • a "callback" abstraction for cases where multiple event handlers need to coordinate and pass state around

If this is better discussed offline over email or something, just let me know.

Thanks a lot!

CONTRIBUTING questions

@alykhantejani before we start merging a bunch of new features, how do you feel about adding a CONTRIBUTING.md doc that spells things out a little more? The main thing I care about right now is settling on a docstring format. I think you said you were okay with going with the Google style (à la PyTorch), but it would be nice to get that down somewhere. I can then go back and update the existing docstrings.
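
For concreteness, a small example of the Google-style format being proposed (the function itself is only an illustration):

def clip_gradient_norm(parameters, max_norm):
    """Clip the gradient norm of an iterable of parameters in place.

    Args:
        parameters (Iterable[torch.nn.Parameter]): parameters whose gradients are clipped.
        max_norm (float): maximum allowed norm of the gradients.

    Returns:
        float: total norm of the gradients before clipping.
    """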

[Feature Request] Templates for logging/plotting

Although logging/plotting are straightforward to add right now using event handlers, as in the examples, I think this could be made even easier by adding some templates or a level of abstraction, since logging/plotting are usually fairly standard: most people plot mostly the same kinds of losses, accuracies, learning rates, weights, etc. for the same type of task.

For example, there could be a logging/plotting class that uses the low-level event handler API under the hood but abstracts away implementation details such as the type of logger (file/tensorboard/visdom) and the logging format, and provides standard implementations of common logging/plotting handlers like those in the ignite examples, or even for tasks like classification/detection/regression by bundling logging/plotting handlers as a package, while still allowing users to customize the behavior of certain handlers or add extra ones.
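
For reference, Ignite later grew handlers in exactly this spirit (TensorBoard, Visdom, MLflow, ...); a sketch with the TensorBoard logger as shipped in current versions (trainer and evaluator are assumed to exist):

from ignite.engine import Events
from ignite.handlers.tensorboard_logger import TensorboardLogger

tb_logger = TensorboardLogger(log_dir="tb-logs")

# log the training loss every 100 iterations
tb_logger.attach_output_handler(
    trainer,
    event_name=Events.ITERATION_COMPLETED(every=100),
    tag="training",
    output_transform=lambda loss: {"loss": loss},
)

# log all validation metrics at the end of each evaluation run,
# using the trainer's epoch as the global step
tb_logger.attach_output_handler(
    evaluator,
    event_name=Events.COMPLETED,
    tag="validation",
    metric_names="all",
    global_step_transform=lambda engine, event_name: trainer.state.epoch,
)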

remove docs from README

Reduce it to a simple README like torchvision's.

Notes will move into the docs folder and be available via a website
