alleninstitute / argschema

argschema's Introduction


argschema

This python module simplifies the development of modules that would like to define and check a particular set of input parameters, but be able to flexibly define those inputs in different ways in different contexts.

It will allow you to

Pass a command line argument pointing to the location of an input_json file which contains the input parameters

OR pass a json_dictionary directly into the module with the parameters defined

AND/OR pass parameters via the command line, in a way that will override the input_json or the json_dictionary given.

Upgrading to version 2.0

The major change in argschema 2.0 is compatibility with marshmallow 3, which changes many of the ways your schemas and schema modifications work. One notable difference is that schemas are now strict by default, so extra keys in your inputs or outputs that were previously ignored and stripped will now raise errors unless you configure how the schema handles unknown fields (for example, via marshmallow's `unknown` option). Please read this document for more guidance: https://marshmallow.readthedocs.io/en/stable/upgrading.html

Level of Support

We plan to update this tool occasionally, with no fixed schedule. Community involvement is encouraged through both issues and pull requests. Please make pull requests against the dev branch, as we will test changes there before merging into master.

Documentation

Continually built docs can be found here: http://argschema.readthedocs.io/en/master/

What does it do

argschema defines two basic classes, ArgSchemaParser and ArgSchema. ArgSchemaParser takes ArgSchema as an input, where ArgSchema is simply an extension of the marshmallow Schema class (http://marshmallow.readthedocs.io/en/latest/).

ArgSchemaParser then takes that schema and builds an argparse parser from it using a standard pattern. Nested elements of the schema are specified with a "."

so the json

{
    "nested":{
        "a":5
    },
    "b":"a"
}

would map to the command line arguments

$ python mymodule.py --nested.a 5 --b a

ArgSchemaParser then parses the command line arguments into a dictionary using argparse, and reformats that dictionary to have the proper nested structure to match the schema it was provided.
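The re-nesting step can be sketched in a few lines of plain Python (a simplified stand-in, not argschema's actual implementation):

```python
def unflatten(flat):
    """Turn {"nested.a": 5, "b": "a"} into {"nested": {"a": 5}, "b": "a"}."""
    nested = {}
    for key, value in flat.items():
        d = nested
        parts = key.split(".")
        for part in parts[:-1]:
            # walk/create intermediate dicts for each dotted component
            d = d.setdefault(part, {})
        d[parts[-1]] = value
    return nested

print(unflatten({"nested.a": 5, "b": "a"}))  # {'nested': {'a': 5}, 'b': 'a'}
```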

Next, ArgSchemaParser reads either the input_data if it was passed, or takes the path of the input_json and reads that in as a dictionary.

Given that input dictionary and the command line dictionary, ArgSchemaParser then merges the two dictionaries, where the command line dictionary takes precedence.
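A minimal sketch of such a merge, assuming command line values always win (argschema's real smart_merge is more involved):

```python
def merge(base, override):
    """Recursively merge two dicts; values from `override` win on conflict."""
    out = dict(base)
    for key, value in override.items():
        if key in out and isinstance(out[key], dict) and isinstance(value, dict):
            # both sides are dicts: descend and merge their contents
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# command line values override the input dictionary
print(merge({"nested": {"a": 5}, "b": "a"}, {"nested": {"a": 7}}))
# {'nested': {'a': 7}, 'b': 'a'}
```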

Next, that dictionary is parsed and validated using marshmallow to convert the raw dictionary into the types defined by the marshmallow fields.

The resulting dictionary is then stored in self.args available for use.

After that, the module does some standard things, such as using the parameter args['log_level'] to configure a logging module available at self.logger.
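The log_level handling amounts to something like this stdlib-only sketch (simplified; the real implementation may differ):

```python
import logging

def configure_logger(name, log_level="ERROR"):
    """Simplified stand-in for what ArgSchemaParser does with args['log_level']."""
    logger = logging.getLogger(name)
    # "DEBUG" / "INFO" / "ERROR" strings map to logging module constants
    logger.setLevel(getattr(logging, log_level))
    return logger

logger = configure_logger("mymodule", log_level="DEBUG")
```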

How should I use it

Subclass schemas.ArgSchema using the pattern found in template_module.py to define your module parameters. Define default values where you want them and help statements that argparse will display, and consider providing some example parameters for someone who is trying to figure out how to use your module (which is also a good way to rapidly test your module as you are developing it).

Look at the set of fields to understand how to build custom fields, or use the default marshmallow fields to construct your json schema. Note the use of InputDir and InputFile, two example custom marshmallow validators included in argschema.fields. They will ensure that those directories or files exist before your module runs, and provide errors to the user if they do not. Also of note is fields.NumpyArray, which will convert lists of lists directly into numpy arrays. More useful fields can be found in argschema.fields.

You can use the power of marshmallow to produce custom validators for any data type, and serialize/deserialize methods that will make loading complex parameters as python objects easy and repeatable.

For instance, this could allow you to have a parameter that is simply a string in json, but whose deserialization routine uses that string to look something up in a database and return a numpy array, or a string that is actually a filepath to an image file, deserialized as a numpy array of the image. This is the basic power of marshmallow.
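As an illustration of the idea (the field class below is hypothetical, not part of argschema), here is a field whose JSON value is a plain file path but whose deserialized value is the file's contents:

```python
import tempfile

class FileContentsField:
    """Hypothetical field, for illustration only: the JSON value is a plain
    string (a file path), but deserialization returns the file's contents."""
    def deserialize(self, value):
        with open(value) as f:
            return f.read()

# demo: write a small file and deserialize its path
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("pixel data")
    path = f.name

print(FileContentsField().deserialize(path))  # pixel data
```

A real implementation would subclass a marshmallow field and hook the same logic into its deserialization method.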

Why did we make this

You should consider using this module if this pattern seems familiar to you.

You start building some code in an ipython notebook to play around with a new idea where you define some variables as you go that make your code work, you fiddle with things for awhile and eventually you get it to a point that works. You then immediately want to use it over and over again, and are scrolling through your ipython notebook, changing variables, making copies of your notebook for different runs. Several times you make mistakes typing in the exact filepath of some input file, and your notebook breaks on cell 53, but no big deal, you just fix the filename variable and rerun it.

It's a mess, and you know you should migrate your code over to a module that you can call from other programs or notebooks. You start collecting your input variables to the top of the notebook and make yourself a wrapper function that you can call. However, now your mistake in filename typing is a disaster because the file doesn't exist, and your code doesn't check for the existence of the file until quite late. You start implementing some input validation checks to avoid this problem.

Now you start wanting to integrate this code with other things, including elements that aren't in python. You decide that you need to have a command line module that executes the code, because then you can use other tools to stitch together your processing, like maybe some shell scripts or docker run commands. You implement an argparse set of inputs and default values that make your python program a self-contained program, with some help documentation. Along the way, you have to refactor the parsed argparse variables into your function and strip out your old hacky validation code to avoid maintaining two versions of validation in the future.

This module starts becoming useful enough that you want to integrate it into more complex modules. You end up copying and pasting various argparse lines over to other modules, and then 5 other modules. Later you decide to change your original module a little bit, and you face a nightmare of find-and-replace to bring the other modules back in sync. You kick yourself for not having thought this through more clearly.

Your code is now really useful, but it's so useful that you start running it on larger and larger jobs, and you want to deploy it on a cluster in your group's pipeline workflow. Your pipeline framework needs to dynamically define and control the parameters, so it would like to simply write all the inputs to a file and pass your program that file, rather than having to parse the inputs into your argparse format. You have to refactor your inputs again to deal with this new pattern, by setting up a validation framework that works on json. Now what do you do with your argparse validators? Throw them away so you don't have to maintain them? If you do that, you've lost the ability to run this code on the command line and run test cases easily when things inevitably break. To avoid this, you decide to maintain two wrapper programs that call the same underlying function and basically do the same thing, just one with argparse and the other with json inputs. You are now stuck maintaining both versions of validation, and it feels pretty silly.

If only you had designed things from the beginning to allow for each of these use cases over the lifetime of your module.

This is what argschema is designed to do.

Copyright 2017 Allen Institute

argschema's People

Contributors

12at7, djkapner, dyf, fcollman, jfperkins, matthewaitken, nilegraddis, rhytnen, russtorres


argschema's Issues

python 2/3 compatibility

argschema.fields.Str() seems to return unicode instead of str in python 2.7.13

script:

import sys
from argschema import ArgSchema, ArgSchemaParser, fields

class MySchema(ArgSchema):
    string = fields.Str()

def test_string(string):
    if isinstance(string, str):
        print "PASSED!\ntype :: %s" % type(string)
    else:
        print "FAILED!\ntype :: %s" % type(string)

if __name__ == "__main__":
    print "python ::\n%s\n" % sys.version
    print "Test\n--------"
    mod = ArgSchemaParser(schema_type=MySchema)
    test_string(mod.args["string"])

output:

$ python argschema_issue.py --string test
python ::
2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:09:15) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]

Test
--------
FAILED!
type :: <type 'unicode'>

post_load error with marshmallow update 2.20.5 -> 3.0.0rc6

code that follows produces:

marshmallow 2.20.5
testing MyClass1
{'xid': 1, 'log_level': 'ERROR'}
MyClass1 passed
testing MyClass2
{'xid': 1, 'log_level': 'ERROR'}
MyClass2 passed

and:

marshmallow 3.0.0rc6
testing MyClass1
{'log_level': 'ERROR', 'xid': 1}
MyClass1 passed
testing MyClass2
MyClass2 failed
'NoneType' object has no attribute 'get'

import argschema
import marshmallow as mm


example1 = {
        'xid': 1,
        }


class MySchema1(argschema.ArgSchema):
    xid = argschema.fields.Int(required=True)


class MySchema2(argschema.ArgSchema):
    xid = argschema.fields.Int(required=True)
    
    @mm.post_load
    def my_post(self, data):
        pass


class MyClass1(argschema.ArgSchemaParser):
    default_schema = MySchema1
    def run(self):
        print(self.args)
    

class MyClass2(argschema.ArgSchemaParser):
    default_schema = MySchema2
    def run(self):
        print(self.args)
    

if __name__ == "__main__":
    print("marshmallow {}".format(mm.__version__))
    for myclass in [MyClass1, MyClass2]:
        print("testing {}".format(myclass.__name__))
        try:
            mb = myclass(input_data=example1, args=[])
            mb.run()
        except Exception as e:
            print("{} failed".format(myclass.__name__))
            print(e)
        else:
            print("{} passed".format(myclass.__name__))

Question about argschema logger

I'm finding argschema super useful! But I've had one question/issue with regards to logging come up.

Because the argschema_parser is one of the first things called in most main scripts, I was wondering if it is possible to prevent argschema from instantiating a logger? I couldn't find anything about it in the documentation pages.

The argschema logger caused another logger that I instantiated later in my main with basicConfig() to fail silently.
(see: https://docs.python.org/2/library/logging.html#logging.basicConfig)

Unfortunately, I couldn't move my logger instantiation before invoking the argschema parser, since the log filename and some other relevant variables needed to be obtained from argschema.

I eventually found this workaround:

# Remove root handler instantiated by argschema
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

Do let me know if there's a better way for me to handle the issue. I'm very new to using both the logger and argschema...

Test suite doesn't run in Windows

Currently the tests are configured to run with --boxed which explicitly calls os.fork() and results in pytest throwing an error because fork is not supported on Windows.

With the --boxed flag removed, test_files.test_output_file_no_write failed and test_files.test_output_dir_bad_permission just hung indefinitely.

output validation

We have setup the default arguments to include --output_json, but we haven't actually done anything with it by default.

I propose that we make an extension of ArgSchemaParser, ArgSchemaOutputParser, which standardizes how this would be done but makes its use optional for those who want it. Maybe even the output_json option should go with the schema_type, so that output_json isn't a default part of ArgSchema unless you are using ArgSchemaOutputParser... but I'm a bit confused on that point.

import json

import marshmallow as mm


class ArgSchemaOutputParser(ArgSchemaParser):

    def __init__(self, output_schema_type=None, *args, **kwargs):
        self.output_schema_type = output_schema_type
        super(ArgSchemaOutputParser, self).__init__(*args, **kwargs)

    def output(self, d):
        """outputs a dictionary to the output_json file path after
        validating it through the output_schema_type

        Parameters
        ----------
        d: dict
            output dictionary to write to the self.args['output_json'] location

        Raises
        ------
        mm.ValidationError
        """
        schema = self.output_schema_type()
        (output_json, errors) = schema.dump(d)
        if len(errors) > 0:
            raise mm.ValidationError(json.dumps(errors))
        with open(self.args['output_json'], 'w') as fp:
            json.dump(output_json, fp)

Command line overrides for boolean arguments will always result in a value of True

The command line options for booleans expect an argument like any other type (they are not store_true or store_false flags). Right now the type specification given to the argument parser for a boolean is bool. This results in the parser evaluating the argument by simply doing bool(value), which always evaluates to True, since calling bool on any non-empty string returns True.
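A common fix for this pattern in plain argparse is an explicit string-to-bool converter (a sketch, not argschema's code):

```python
import argparse

def str2bool(value):
    """bool("False") is True, so boolean-looking strings must be parsed explicitly."""
    if value.lower() in ("true", "t", "1", "yes"):
        return True
    if value.lower() in ("false", "f", "0", "no"):
        return False
    raise argparse.ArgumentTypeError("expected a boolean, got %r" % value)

parser = argparse.ArgumentParser()
parser.add_argument("--flag", type=str2bool)
print(parser.parse_args(["--flag", "False"]).flag)  # False
```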

smart_merge fails with NumpyArray

I'm getting an error on validation when I've added a NumpyArray to an existing schema.

  File "/allen/programs/celltypes/workgroups/em-connectomics/danielk/conda/rm_production_mod/lib/python2.7/site-packages/argschema/argschema_parser.py", line 171, in __init__
    args = utils.smart_merge(jsonargs, argsdict)
  File "/allen/programs/celltypes/workgroups/em-connectomics/danielk/conda/rm_production_mod/lib/python2.7/site-packages/argschema/utils.py", line 210, in smart_merge
    elif a[key] == b[key]:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

my schema has this new entry:

corner_mask_radii = NumpyArray(
     dtype = np.int, 
     required=False,
     default=[0, 0, 0, 0],
     missing=[0, 0, 0, 0],
     description="radius of image mask corners, "
     "order (0, 0), (w, 0), (w, h), (0, h)")


allow short name alternatives for setting fields from command line

Seems like it would be fairly straightforward to add the option of setting short names for fields to make them easier to specify from the command line ("-i" vs "--input"), unless I'm missing something and this is already possible?
If this would make sense to include I'd be glad to test something out and file a PR at some point.
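For reference, argparse itself already supports short aliases, so the change would be a matter of exposing them through argschema's field definitions:

```python
import argparse

parser = argparse.ArgumentParser()
# short alias "-i" alongside the long "--input" name, as the request describes
parser.add_argument("-i", "--input", type=str)
print(parser.parse_args(["-i", "in.json"]).input)  # in.json
```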

Compatibility for marshmallow >=3.0.0b7

Should be an easy fix, just wanted to get this up here in case anyone else gets stuck before a fix gets in....

Marshmallow changed the interface for the Schema.load() and Schema.dump() methods so argschema_parser's __init__ is broken. From mm documentation: "Changed in version 3.0.0b7: This method returns the deserialized data rather than a (data, errors) duple. A ValidationError is raised if invalid data are passed."

 line 178, in __init__
    if len(result.errors) > 0:
AttributeError: 'dict' object has no attribute 'errors'
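A version-tolerant wrapper along these lines could cope with both return conventions (a sketch; the fake schema classes below are stand-ins for real marshmallow schemas):

```python
def load_compat(schema, data):
    """Load `data` through `schema` under either marshmallow version:
    marshmallow 2 returns a (data, errors) tuple, while marshmallow 3
    returns the deserialized data and raises ValidationError itself."""
    result = schema.load(data)
    if isinstance(result, tuple):  # marshmallow 2 style return
        result, errors = result
        if errors:
            raise ValueError(errors)
    return result

# stand-ins for schemas under the two marshmallow versions
class Mm2Schema:
    def load(self, d):
        return (d, {})

class Mm3Schema:
    def load(self, d):
        return d

print(load_compat(Mm2Schema(), {"a": 1}))  # {'a': 1}
```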

`input_json` no longer an arg in 2.0.0a1 ?

considering the code that follows...

argschema-1.17.5 outputs:
{'input_json': 'example.json', 'myfile': 'tmp2.txt', 'log_level': 'ERROR'}
argschema-2.0.0a1 outputs:
{'myfile': 'tmp2.txt', 'log_level': 'ERROR'}

is this an intentional breaking change?

import argschema
import json


def write_example_json():
    example = {
            'myfile': "tmp2.txt"
            }
    json_fname = "example.json"
    with open(json_fname, "w") as f:
        json.dump(example, f)
    return json_fname


class MySchema(argschema.ArgSchema):
    myfile = argschema.fields.OutputFile(
        required=True,
        description="")


class MyClass(argschema.ArgSchemaParser):
    default_schema = MySchema

    def run(self):
        print(self.args)


if __name__ == "__main__":
    inj = write_example_json()
    args = {
            'input_json': inj
            }
    m = MyClass(args=['--input_json', inj])
    m.run()

Having NumpyArray in schema results in warning logs about invalid type.

NumpyArray inherits from marshmallow.fields.List, and sets the list type as marshmallow.fields.Field. When the argument parser is being built, the list handling logs a warning if it can't find a type to pass to argparse which means that all NumpyArrays result in a warning since Field isn't in the type map.

Marshmallow 3.0.0 just released and it breaks argschema

i just installed version 3.0.0 and it breaks argschema. After downgrading to 2.20.1 it works fine.

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-36-b32a4e809482> in <module>
      1 # compute transformation (only need to run once)
----> 2 s2 = s3.Solve3D(input_data=s3.example2, args=[])
      3 # s2.run()

/usr/local/lib/python3.6/dist-packages/argschema/argschema_parser.py in __init__(self, input_data, schema_type, output_schema_type, args, logger_name)
    173 
    174         # validate with load!
--> 175         result = self.load_schema_with_defaults(self.schema, args)
    176 
    177         self.args = result

/usr/local/lib/python3.6/dist-packages/argschema/argschema_parser.py in load_schema_with_defaults(self, schema, args)
    272 
    273         # load the dictionary via the schema
--> 274         result = utils.load(schema, args)
    275 
    276         return result

/usr/local/lib/python3.6/dist-packages/argschema/utils.py in load(schema, d)
    416     """
    417 
--> 418     results = schema.load(d)
    419     if isinstance(results, tuple):
    420         (results, errors) = results

/usr/local/lib/python3.6/dist-packages/marshmallow/schema.py in load(self, data, many, partial, unknown)
    682         """
    683         return self._do_load(
--> 684             data, many=many, partial=partial, unknown=unknown, postprocess=True
    685         )
    686 

/usr/local/lib/python3.6/dist-packages/marshmallow/schema.py in _do_load(self, data, many, partial, unknown, postprocess)
    783             try:
    784                 processed_data = self._invoke_load_processors(
--> 785                     PRE_LOAD, data, many=many, original_data=data, partial=partial
    786                 )
    787             except ValidationError as err:

/usr/local/lib/python3.6/dist-packages/marshmallow/schema.py in _invoke_load_processors(self, tag, data, many, original_data, partial)
   1012             many=many,
   1013             original_data=original_data,
-> 1014             partial=partial,
   1015         )
   1016         return data

/usr/local/lib/python3.6/dist-packages/marshmallow/schema.py in _invoke_processors(self, tag, pass_many, data, many, original_data, **kwargs)
   1133                     data = processor(data, original_data, many=many, **kwargs)
   1134                 else:
-> 1135                     data = processor(data, many=many, **kwargs)
   1136         return data
   1137 

TypeError: make_object() got an unexpected keyword argument 'many'

auto documentation

My adventures in documenting argschema led me to the thought that it should be possible to automate the writing of Sphinx style documentation for any module that uses argschema. It can describe the format of input parameters completely with human readable descriptions if provided in the Schemas. I think this can be implemented by using hooks available in sphinx-autodoc to catch all the Schema, ArgSchema and ArgSchemaParser schemas which pass through it.

CLI option --output_json is not an override flag

ArgSchemaParser.output(d, <path>) will write to <path> even if --output_json <different-path> is passed at the command line. This seems inconsistent with the way inputs are handled, where a command-line override will take precedence over the input dictionary that is passed to ArgSchemaParser, or even the inputs in a json file.

Unable to add additional parser arguments:

I have an entry point for which I want additional arguments, beyond those used by argschema:

> foo --version
> foo --input_json blah.json

Because the argument parser is instantiated in utils, and parsed immediately when ArgSchemaParser is constructed, I have no option to inject this arg.
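One stdlib pattern for this situation is to peel off the extra arguments with argparse's parse_known_args before handing the remainder to argschema (a sketch of the workaround, not current argschema behavior):

```python
import argparse

# add_help=False leaves -h/--help for the downstream parser to handle
extra = argparse.ArgumentParser(add_help=False)
extra.add_argument("--version", action="store_true")

# known holds our extra flags; remaining could be passed on to argschema
known, remaining = extra.parse_known_args(["--version", "--input_json", "blah.json"])
print(known.version, remaining)  # True ['--input_json', 'blah.json']
```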

nested schemas broken in latest version

import argschema as ags

class ModelFit(ags.schemas.DefaultSchema):
    fit_type = ags.fields.Str(description="")
    hof_fit = ags.fields.InputFile(description="")
    hof = ags.fields.InputFile(description="")

class PopulationSelectionPaths(ags.schemas.DefaultSchema):
    fits = ags.fields.Nested(ModelFit, description="", many=True)

class PopulationSelectionParameters(ags.ArgSchema):
    paths = ags.fields.Nested(PopulationSelectionPaths)

for example, can't be read in via --input_json in the latest version

Change how Lists and NumpyArrays are handled at the command line

Currently, the way that lists and arrays are handled at the command line is to set nargs to *, allowing 1 or more arguments for the variable at the command line. If we had a schema with an item mylist which was a list of ints, we would invoke it like so at the command line:

example_program --mylist 1 2 3 4 5 6

This handling is nice and pretty simple, and allows (requires) spaces between elements of the argument. The downside is that for any more complicated arrangement of lists or arrays (for example a list of lists or a multidimensional array), it is impossible to set them at the command line. For example, if I had a schema with a list of lists of ints, all of the following attempts at invoking result in validation errors because the elements are not lists:

example_program --mylist 1 2 3 4 5 6
example_program --mylist [1 2 3] [4 5 6]
example_program --mylist [[1 2 3] [4 5 6]]
example_program --mylist [[1,2,3],[4,5,6]]

I'd like to propose using ast.literal_eval for handling lists and arrays in the argument parser. This would result in setting arrays and lists at the command line like the last of the above examples.
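A sketch of the proposed parsing (the function name is illustrative):

```python
import ast

def parse_cli_list(text):
    """Parse a nested list passed as a single CLI token, e.g. '[[1,2,3],[4,5,6]]'.
    ast.literal_eval safely evaluates Python literal syntax, nothing more."""
    value = ast.literal_eval(text)
    if not isinstance(value, list):
        raise ValueError("expected a list, got %r" % (value,))
    return value

print(parse_cli_list("[[1,2,3],[4,5,6]]"))  # [[1, 2, 3], [4, 5, 6]]
```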

argschema not handling non-required Nested schemas with required fields well

import argschema
import marshmallow as mm


class MyNested(argschema.schemas.DefaultSchema):
    a = argschema.fields.Int(required=True)
    b = argschema.fields.Str(required=True)
    c = argschema.fields.Str(required=False,default='c')

class MySchema(argschema.ArgSchema):
    nested = argschema.fields.Nested(MyNested, only=['a', 'b'],
        required=False, default=mm.missing)

def test_nested_example():
    mod = argschema.ArgSchemaParser(schema_type=MySchema,
                                     args = [])
    assert('nested' not in mod.args)

def test_nested_marshmallow_example():
    schema = MySchema()
    (result,errors)=schema.load({})
    assert(len(errors)==0)

test_nested_example does not pass, but test_nested_marshmallow_example does. If the user wants to specify a Nested schema which is optional, but which has required fields when it is filled in, we aren't handling that well now.

command line overrides not working for all field types in all version of python

see PR #71 for example. This is due to the FIELD_TYPE_MAPPING being one-to-many, so when it's inverted it's many-to-one, and some of the field types aren't valid argparse parsing functions in python3 (i.e. bytes).

The solution is likely to systematically test all the field types and their command line overrides, and make a mapping that makes sense in a python-version-specific manner.

better argparse sorting/description

Argparse arguments currently stay together based upon nesting level, but their ordering within a nesting level remains somewhat random. I'd suggest that non-required arguments get pushed to the bottom, and Nested arguments get pushed to the top.

so if i had

    import argschema

    class NestedSchema(argschema.schemas.DefaultSchema):
        aint = argschema.fields.Int(required=False, default =5, description="a integer")
        bstr = argschema.fields.Str(required=True, description = "a string")

    class MySchema(argschema.ArgSchema):
        nest = argschema.fields.Nested(NestedSchema, required=True, description="nested schema")
        topint = argschema.fields.Int(required=True,description="a top integer")
        topintb = argschema.fields.Int(default=5,required=False,description="a top integer b")
        topintc = argschema.fields.Int(required=True,description="a top integer c")
        topintd = argschema.fields.Int(default=7,required=False,description="a top integer d")

    if __name__ == '__main__':
        mod = argschema.ArgSchemaParser(schema_type=MySchema)

presently i get

    $ python test_schema.py --help
    usage: test_schema.py [-h] [--log_level LOG_LEVEL] [--nest.aint NEST.AINT]
                        [--nest.bstr NEST.BSTR] [--input_json INPUT_JSON]
                        [--topintc TOPINTC] [--topintb TOPINTB]
                        [--topintd TOPINTD] [--topint TOPINT]
                        [--output_json OUTPUT_JSON]

    optional arguments:
    -h, --help            show this help message and exit
    --log_level LOG_LEVEL
                            set the logging level of the module
    --nest.aint NEST.AINT
                            a integer
    --nest.bstr NEST.BSTR
                            a string
    --input_json INPUT_JSON
                            file path of input json file
    --topintc TOPINTC     a top integer c
    --topintb TOPINTB     a top integer b
    --topintd TOPINTD     a top integer d
    --topint TOPINT       a top integer
    --output_json OUTPUT_JSON
                            file path to output json file

where I think i would prefer if the output were something like this

    $ python test_schema.py --help
        usage: test_schema.py [-h] [--log_level LOG_LEVEL] [--nest.aint NEST.AINT]
                            [--nest.bstr NEST.BSTR] [--input_json INPUT_JSON]
                            [--topintc TOPINTC] [--topintb TOPINTB]
                            [--topintd TOPINTD] [--topint TOPINT]
                            [--output_json OUTPUT_JSON]

        optional arguments:                       
        --nest.aint NEST.AINT      a integer (default = 5)                        
        --nest.bstr NEST.BSTR      a string (required)       
        --topint TOPINT            a top integer (required)                   
        --topintc TOPINTC          a top integer c (required)
        --topintb TOPINTB          a top integer b (default=5)
        --topintd TOPINTD          a top integer d (default=7)
        --input_json INPUT_JSON    file path of input json file 
        --output_json OUTPUT_JSON  file path to output json file
        --log_level LOG_LEVEL      set the logging level of the module (default=WARNING)
        -h, --help                 show this help message and exit 

argschema/argparse doesn't complain about unexpected arguments

In particular, I just got stuck for a little bit because I was sending in
python my_script.py path_to_json
rather than
python my_script.py --input_json path_to_json

and argschema was happy to continue because I had my script falling back to an example input, but I wasn't getting the right behavior. It seems to me that if the user is putting in extra arguments that argparse and/or argschema isn't handling, it should at least throw a warning if not an error.

What do other people think?
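For comparison, a bare argparse parser with no positionals defined does reject the stray argument:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--input_json")
try:
    # stray positional argument that no option consumes
    parser.parse_args(["path_to_json"])
except SystemExit:
    print("argparse rejected the unrecognized argument")
```

So the silent acceptance presumably comes from how argschema invokes the parser, not from argparse itself.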
