
genno's Introduction

genno: efficient, transparent calculation on N-D data


genno is a Python package for describing and executing complex calculations on labelled, multi-dimensional data. It aims to make these calculations efficient, transparent, modular, and easily validated as part of scientific research.

genno is built on high-quality Python data packages including dask, xarray, pandas, and pint; and provides (current or planned) compatibility with packages including plotnine, matplotlib, sdmx1, ixmp, and pyam.

A 玄能 (genno or gennoh) is a type of hammer used in Japanese woodworking. The package name is a warning, by reference, to the adage “When you hold a hammer, every problem looks like a nail”: you shouldn't hit everything with genno, but it is still a useful and versatile tool.

License

Copyright © 2018–2024 genno contributors.

Licensed under the GNU General Public License, version 3.0.


genno's Issues

Adjust for pytest 8.0

Pytest 8.0.0 was released on 2024-01-27.

Since then, some tests that use pytest.warns() have begun to fail, for instance here:

test_computer.py::TestComputer::test_deprecated_aggregate - Failed: DID NOT WARN.
test_computer.py::TestComputer::test_deprecated_disaggregate - Failed: DID NOT WARN.
test_key.py::TestKey::test_from_str_or_key0[:-None] - Failed: DID NOT WARN.
test_key.py::TestKey::test_from_str_or_key0[::-None] - Failed: DID NOT WARN.
test_key.py::TestKey::test_from_str_or_key0[::bar-None] - Failed: DID NOT WARN.
test_key.py::TestKey::test_from_str_or_key0[:a-b:bar-None] - Failed: DID NOT WARN.
test_key.py::TestKey::test_from_str_or_key0[foo:a-b--None] - Failed: DID NOT WARN.
test_key.py::TestKey::test_from_str_or_key0[42.1-None] - Failed: DID NOT WARN.

This appears not to be due to any change in code behaviour, but only to a change in how pytest handles these warnings.

  • Read the Pytest changelog.
  • Identify the specific change.
  • Adjust the genno test suite.

Strict vs. permissive handling of missing/dimensionless units

Consider these cases:

>>> from genno import Quantity, computations
# Case A
>>> computations.add(Quantity(1.0, units="kg"), Quantity(2.0, units="tonne"), Quantity(3.0))
ValueError: Units 'kg' and '' are incompatible
# Case B
>>> computations.add(Quantity(1.0, units="kg"), Quantity(2.0, units="tonne"), Quantity(3.0, units=""))
ValueError: Units 'kg' and '' are incompatible

In (A), collect_units() assigns dimensionless units to the last operand. In (B), it is explicitly dimensionless. This arose in iiasa/message_ix#441, where computations.add() is applied to two quantities: one with units, the other dimensionless (because the ixmp parameter handled by ixmp.reporting.computations.data_for_quantity() was empty).

What should the behaviour be?

Some possibilities:

  • In (A), infer that operand(s) with missing units are in the same units as the first/others. Maybe only if the units are consistent? (See the sketch after this list.)
  • In (B), infer that explicitly dimensionless operand(s) are in the same units as the first/others.
  • Add a (global?) configuration setting to toggle between different behaviours. (What should be the default?)
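A minimal sketch of the first possibility, assuming Quantity.units can be read and assigned as in current genno (the function name and its placement are illustrative only, not genno API):

from genno import Quantity

def infer_units(*quantities: Quantity) -> list:
    """Assume dimensionless operands share the units of the first with units."""
    ref = next((q.units for q in quantities if not q.units.dimensionless), None)
    result = list(quantities)
    if ref is not None:
        for q in result:
            if q.units.dimensionless:
                q.units = ref  # e.g. assume 'kg' in both cases (A) and (B)
    return result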

Transfer & refactor initial code from ixmp.reporting

  • Filter ixmp commits for only those that affect reporting code —done in #2
  • Set up packaging —#3
    • setup.{py,cfg}
    • Documentation using Sphinx
  • Set up CI using GitHub Actions
    • lint.yml —#3
    • pytest.yml —#7
    • Add badge
  • Ensure tests all pass —#3
  • Reorganize code into a coherent structure —#3
  • Set up RTD and add badge —#7
  • Set up Codecov and add badge —#7

Add SDMX input/output

This would add to .compat.sdmx operators like…

  • Convert sdmx.model.DataSet into Quantity.
  • Perform a specific SDMX query to retrieve data.
  • Convert Quantity into sdmx.model.DataSet.

Some issues to resolve here:

  1. Quantity.attrs map well to SDMX attributes attached at the level of an entire data set. However, one powerful feature of SDMX is the ability to attach attributes to individual observations. This does not have a natural analogue in the xarray (thus genno) data model.

Change term ‘computations’?

The dask graph specification uses ‘computation’ for any dict value in the graph. A ‘task’—tuple with a callable first element—is one of four kinds of ‘computation’.

In contrast, genno uses ‘computation’ for callables used as those first elements of tasks. This is a little inconsistent; also it's a long word.

Consider alternatives.

Advertise or remove .config.CALLBACKS

#16 added this code, adapted from message_data:

genno/genno/config.py, lines 102 to 103 at commit 91d906d:

# Also add the callbacks to the queue
queue.extend((("apply", cb), {}) for cb in CALLBACKS)

These "callbacks" are essentially the same as "handlers", simply without any arguments.
Perhaps the two can be merged, and the handles() decorator updated/renamed to cover both use-cases.

Also: add documentation!

Add tip re: keys with different dimensionality

With keys like:

  1. <foo:a-b-c:bar>
  2. <foo:a-d-e:bar>

—that is, distinct full dimensionality with ≥1 overlapping dimension and same tags—it becomes ambiguous whether <foo:a:bar> refers to:

  1. Partial sum over dimensions (b, c) of (1).
  2. Partial sum over dimensions (d, e) of (2).
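To illustrate with genno's Key class:

from genno import Key

k1 = Key("foo", ["a", "b", "c"], "bar")  # <foo:a-b-c:bar>
k2 = Key("foo", ["a", "d", "e"], "bar")  # <foo:a-d-e:bar>

# Dropping the non-overlapping dimensions from either key yields the same
# partial-sum key, so <foo:a:bar> alone cannot identify its source
assert k1.drop("b", "c") == k2.drop("d", "e")  # both <foo:a:bar>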

Some remedies:

  • Add a tip to the docs in an appropriate location to advise using different keys like <foo:a-b-c:bar> and <foo:a-d-e:baz>.
  • Actually check new keys against existing in Computer.add() and similar methods; warn users or raise an exception.

Update for xarray 2022.6.0

Nightly tests began to fail with the release of xarray 2022.6.0 e.g. here.

  • The failing tests are:
    genno/tests/test_computations.py::test_broadcast_map[SparseDataArray-map_values0-kwarg0]
    genno/tests/test_computations.py::test_index_to[SparseDataArray]
    genno/tests/test_computations.py::test_pow[SparseDataArray]
    genno/tests/test_computations.py::test_product0[SparseDataArray]
    genno/tests/test_computations.py::test_product[SparseDataArray-dims0-64]
    genno/tests/test_computations.py::test_product[SparseDataArray-dims1-8]
    genno/tests/test_computations.py::test_product[SparseDataArray-dims2-4]
    
  • These all appear to fail on the f-string formatting of a log message in genno.util.collect_units():
    log.debug(f"{arg} lacks units; assume dimensionless")
    which raises: “RuntimeError: Cannot convert a sparse array to dense automatically. To manually densify, use the todense method.”
  • This is an upstream regression: pydata/xarray#6822

As mitigation:

  • SparseDataArray is not the default genno.Quantity class currently. If using AttrSeries (the default), genno remains usable.
  • If using SparseDataArray, pin xarray < 2022.6.0.

To resolve:

  • Follow the response to the upstream issue.
  • Make any adjustment necessary in genno itself (one possible sketch below).
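One possible local adjustment (a sketch only; the appropriate fix depends on the upstream response):

# In genno.util.collect_units(), instead of:
#     log.debug(f"{arg} lacks units; assume dimensionless")
# log only metadata, using lazy %-style formatting, so that building the
# message never formats (and thus never densifies) the array itself:
log.debug(
    "%s lacks units; assume dimensionless",
    getattr(arg, "name", None) or type(arg).__name__,
)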

Make Quantity a full class

Currently Quantity() is a function with a name that makes it seem like a class.

This means it's not possible to do:

if isinstance(foo, Quantity)

…or to use it in type annotations for computation functions.

Using a metaclass like QuantityMeta should make it possible to do this.
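A minimal sketch of that approach (illustrative only; the actual names and configuration mechanism in genno may differ):

from genno.core.attrseries import AttrSeries
from genno.core.sparsedataarray import SparseDataArray

# The backing class actually instantiated; to be made configurable
CLASS = AttrSeries

class QuantityMeta(type):
    """Make isinstance() and type annotations work for either backing class."""

    def __instancecheck__(cls, obj) -> bool:
        return isinstance(obj, (AttrSeries, SparseDataArray))

class Quantity(metaclass=QuantityMeta):
    def __new__(cls, *args, **kwargs):
        # Return an instance of the configured backing class
        return CLASS(*args, **kwargs)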

Document Computer.visualize()

Include ≥1 example in the built documentation.
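For instance, building on the quickstart example from the genno docs (the exact visualize() signature should be confirmed against #92):

from genno import Computer

c = Computer()
c.add("a", 1)
c.add("b", 2)
c.add("a plus b", lambda a, b: a + b, "a", "b")

# Render the tasks needed to compute "a plus b" to an SVG file
c.visualize("graph.svg", key="a plus b")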

A separate issue is to use these extensively to illustrate graphs.

Adjust for pyam 1.7.0

pyam 1.7.0 was released on 2022-12-19. Per IAMconsortium/pyam#708, specifically here, keyword arguments to IamDataFrame are directly fed to pandas.DataFrame.to_excel(). (See also the blame for this method. It appears at some point pyam forced engine="openpyxl" and accepted but ignored the keyword arguments.)

This causes failures in genno.compat.pyam.write_report(), e.g. here:

 genno/compat/pyam/computations.py:109: in write_report
    obj.to_excel(path, merge_cells=False)
/opt/hostedtoolcache/Python/3.10.9/x64/lib/python3.10/site-packages/pyam/core.py:2382: in to_excel
    excel_writer = pd.ExcelWriter(excel_writer, **kwargs)

(snip)

>       self._book = Workbook(self._handles.handle, **engine_kwargs)
E       TypeError: Workbook.__init__() got an unexpected keyword argument 'merge_cells'

/opt/hostedtoolcache/Python/3.10.9/x64/lib/python3.10/site-packages/pandas/io/excel/_xlsxwriter.py:216: TypeError

This is because pyam is now allowing pandas to select xlsxwriter as the engine, and the merge_cells keyword argument is not understood by this engine.

The fix is likely to (a) remove the merge_cells argument and (b) specify a minimum version of pyam, so that genno need not handle the shift(s) in behaviour.
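For (a), the change in genno.compat.pyam.computations.write_report() would be minimal, for instance:

# Before: merge_cells reaches the xlsxwriter engine, which rejects it
obj.to_excel(path, merge_cells=False)

# After: omit the argument and accept the pyam/pandas defaults
obj.to_excel(path)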

Slim down AttrSeries, SparseDataArray

Since genno was created, some of the upstream packages have seen enhancements that may obviate some of the compatibility code in AttrSeries and SparseDataArray.

To do: investigate each of the following and, where possible, adjust to rely on the upstream functionality / slim down genno itself.

  • pandas.Series.attrs has existed since possibly as far back as pandas 1.0, but as an "experimental feature that may change without warning" (see the snippet after this list).
  • pint-pandas, which is still in a beta version (0.6).
  • sparse occasionally gets support for additional ufuncs.
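To illustrate the first item (the attribute key here is arbitrary; genno's AttrSeries uses its own internal scheme):

import pandas as pd

s = pd.Series([1.0, 2.0], name="mass")
s.attrs["units"] = "kg"  # metadata carried on the Series itself
print(s.attrs)           # {'units': 'kg'}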

Cache using Parquet, Feather, etc.

Reportedly, serializing pandas.Series to/from the parquet or feather formats using PyArrow can be much more performant; see e.g. here.

  • Adjust genno.caching to use these formats where possible, especially for Quantity objects (see the sketch after this list).
    • Continue to use pickle for other objects.
  • Allow configuration/selection of whether to use these vs. pickle.
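A minimal sketch of such format selection (all names hypothetical; Parquet output requires pyarrow):

from pathlib import Path
import pickle

import pandas as pd

def write_cache(obj, base: Path) -> Path:
    """Write `obj` under the stem `base`, choosing a format by type."""
    if isinstance(obj, pd.Series):
        # Parquet stores tables, so round-trip the Series via a DataFrame
        path = base.with_suffix(".parquet")
        obj.rename("value").reset_index().to_parquet(path)
    else:
        # Fall back to pickle for arbitrary Python objects
        path = base.with_suffix(".pkl")
        path.write_bytes(pickle.dumps(obj))
    return path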

Improve typing

This issue is to collect type errors seen in downstream code that uses genno. These can be addressed by changes like those in #53, with reference to the typing and mypy docs.

Addressed in #55:

error: "Quantity" has no attribute "shift"
error: Unsupported operand types for * ("float" and "Quantity")
error: Unsupported operand types for - ("int" and "Quantity")

Others:

  • error: "Quantity" has no attribute "ffill"
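Such errors are typically resolved by small annotation or method additions, for example (a sketch, not genno's actual code):

class Quantity:
    # Declaring methods and reflected operators resolves errors like the above
    def ffill(self, dim: str) -> "Quantity": ...         # no attribute "ffill"
    def __rmul__(self, other: float) -> "Quantity": ...  # float * Quantity
    def __rsub__(self, other: int) -> "Quantity": ...    # int - Quantity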

Add a `Computation` abstract class

This can be the location for:

  • add_task(c: Computer) or similar for describing computations in c.
  • __call__(): the actual callable to be executed.
  • __repr__(): a more readable string representation for Computer.describe().
  • etc.

These should be easier to maintain if they are collected, instead of kept as separate pairs like Computer.convert_pyam (for adding task(s)) and .compat.pyam.computations.as_pyam (the actual callable).

This will also allow reducing the complexity of this code in Computer.add():

elif isinstance(data, str) and self.get_comp(data):
    # *data* is the name of a pre-defined computation
    name = data

    if hasattr(self, f"add_{name}"):
        # Use a method on the current class to add. This invokes any
        # argument-handling conveniences, e.g. Computer.add_product()
        # instead of using the bare product() computation directly.
        return getattr(self, f"add_{name}")(*args, **kwargs)
    else:
        # Get the function directly
        func = self.get_comp(name)

        # Rearrange arguments: key, computation function, args, …
        func, kwargs = partial_split(func, kwargs)
        return self.add(args[0], func, *args[1:], **kwargs)
elif isinstance(data, str) and data in dir(self):
    # Name of another method, e.g. 'apply'
    return getattr(self, data)(*args, **kwargs)

The Computer can:

  • Look up the Computation class in Computer.modules.
  • If it has an add_task() method, call that directly; else, simply instantiate.
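A sketch of the class, using the items listed above (details to be worked out):

from abc import ABC, abstractmethod

class Computation(ABC):
    """Base class collecting the behaviours described above."""

    @abstractmethod
    def __call__(self, *args, **kwargs):
        """The actual callable to be executed."""

    def add_task(self, c, *args, **kwargs):
        """Describe task(s) for this computation in the Computer `c`."""
        raise NotImplementedError

    def __repr__(self) -> str:
        # More readable representation for Computer.describe()
        return f"<computation {type(self).__name__}>"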

`load_file()` ignores "Unit: …" comment

iiasa/message_data#529 appears to be due to a file that contains a "# Unit: …" comment in its header (here, related to iiasa/message_data#522), but which is ignored by a direct call to genno.operator.load_file(). The Quantity that results is dimensionless instead of having the stated units.

@ravitby and I suspect this could be due to some non-printing characters or other issues in the file which prevent the unit line from being recognized or parsed correctly.

As a mitigation, we passed the explicit keyword argument load_file(…, units="Gp km /a"), which seemed to resolve the error due to the missing units.

Cache based on function code / document caching based on file contents

(Transferred from the discussion of iiasa/message-ix-models#25.)

The main question is whether genno's caching features cover the following two items, which are handled by an implementation I recently wrote using joblib.Memory.

  1. joblib.Memory caches not only the input values but also the function code itself. This way, if the function code changes but the input stays the same, it won't be tricked into wrongly thinking that it already has the results cached.
  2. My use case involved reading data from a file, doing some computation, and providing the result as a pandas.DataFrame. I provide the function with a filename in the form of a pathlib.Path or str, so the function looks like: read_and_compute_some_data(file: Union[str, pathlib.Path], ...) -> pd.DataFrame. Here the joblib.Memory caching decorator would simply save a hash of the name of the input file. That is a problem, since I'm not actually interested in the name of my data file but in its contents. For this I created a small wrapper class InputFile for the filename, which stores a hash of the file's contents. Since joblib uses pickle to serialize the data to binary, I modified the way InputFile is serialized to consider only the contents of the file and not the name.

Minimum working example of caching of the content of input files using joblib.Memory:

from joblib import Memory
from pathlib import Path
import hashlib
import pandas as pd

# Set the cache directory in the parent folder of this file
memory = Memory(Path(__file__).parent / ".joblib_cache")

class InputFile:
    def __init__(self, file) -> None:
        self.file = file
        self.hash = self.calc_hash()

    def calc_hash(self) -> str:
        """Generate a hash from the contents of self.file.

        Returns
        -------
        str
            Hexadecimal representation of the file hash.

        Notes
        -----
        For details refer to https://stackoverflow.com/questions/1131220/get-md5-hash-of-big-files-in-python
        """
        with open(self.file, "rb") as f:
            file_hash = hashlib.md5()
            # Read the file in 8192-byte chunks, a multiple of the hash
            # block size; see the reference in the Notes above
            while chunk := f.read(8192):
                file_hash.update(chunk)
        return file_hash.hexdigest()

    def __getstate__(self) -> dict:
        """Custom __getstate__ for use with Memory.cache from joblib.Memory.

        Returns
        -------
        dict
            __dict__ minus the file name.
        """
        # This 'tricks' pickle into considering only the hash of the file's
        # contents, and not the filename itself, when checking for cached
        # results. It could be changed to include the filename as well; a good
        # compromise might be to use both the base name (not the entire path)
        # and the hash of the contents. That would also make the cache
        # independent of the user, since the directory structure where the
        # file is stored would no longer be hashed.

        state = self.__dict__.copy()
        # Remove the file name from the state; only the contents matter
        del state["file"]
        return state

    def __repr__(self) -> str:
        # Give a readable representation, since joblib.Memory also writes a
        # JSON file with the input parameters of the function call
        return f"{self.__class__}: {self.__dict__}"

# Add the decorator to make read_from_file() cache-able. Caching this
# particular function is a bit pointless, but it illustrates the general
# layout of such a function.
@memory.cache
def read_from_file(input_file):
    return pd.read_csv(input_file.file)

if __name__ == "__main__":
    # In the current configuration, the second call to read_from_file() hits
    # the cache if the contents of file1.csv and file2.csv are the same, even
    # though the names differ.
    read_from_file(InputFile("file1.csv"))
    read_from_file(InputFile("file2.csv"))

Additionally, joblib.Memory saves JSON files recording the input values, which is a nice feature for bookkeeping.

Add file-based caching

A caching pattern/task would:

  • Understand a configured cache directory.
  • Compute a hash of the arguments and inputs to a particular task.
  • If the corresponding cache file exists, load and return it.
  • Otherwise:
    • Execute the task that generates the data,
    • Cache the result, and
    • Return it.

Existing code, from e.g. khaeru/data or transportenergy/ipcc-wg4-ar6-ch10 could be adapted for this.
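A minimal sketch of the pattern (the hash here covers only the pickled arguments; a full implementation would also hash file contents and other inputs):

import hashlib
import pickle
from functools import wraps
from pathlib import Path

CACHE_DIR = Path(".cache")  # the configured cache directory

def cached(func):
    """Cache the return value of `func` based on a hash of its arguments."""

    @wraps(func)
    def wrapper(*args, **kwargs):
        args_hash = hashlib.sha1(
            pickle.dumps((func.__name__, args, sorted(kwargs.items())))
        ).hexdigest()
        path = CACHE_DIR / f"{func.__name__}-{args_hash}.pkl"
        if path.exists():
            # Cache hit: load and return the stored result
            return pickle.loads(path.read_bytes())
        # Cache miss: execute the task, cache the result, and return it
        result = func(*args, **kwargs)
        CACHE_DIR.mkdir(exist_ok=True)
        path.write_bytes(pickle.dumps(result))
        return result

    return wrapper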

Extend/override `dask.visualize()`

Because dask.visualize() is intended for use with dask's own collections/classes, it tries to generate labels suitable for that use-case. These end up being uninformative (e.g. blank) for genno graphs, e.g.:

(Screenshot: visualize() output for a genno graph, with blank node labels.)

This could be addressed by some combination of:

  1. Extend genno's classes and objects (cf #30) to present the information expected by dask's labeling and other utilities.
  2. Monkeypatch dask.base.* as necessary to get the desired behaviour.
  3. Copy and modify to get the desired behaviour.

Edit documentation

  • Ensure it is self-contained/standalone.
  • Incorporate text from message_ix reporting tutorial.

Handle symbol characters in `.visualize()` node labels

In trying to use #92, I find that node labels containing special characters like "->/" (for example, from iiasa/message-ix-models WorkflowStep.__repr__()) may lead to errors in graphviz/dot.

In principle, any string returned by .describe.label() should be valid for dot.

To resolve:

  • Test this behaviour.
  • Enforce quoting to avoid the errors.
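One way to enforce quoting (a sketch; the graphviz package's own quoting helpers may be preferable):

def quote(label: str) -> str:
    """Wrap `label` in double quotes, escaping characters special to dot."""
    return '"' + label.replace("\\", "\\\\").replace('"', '\\"') + '"'

print(quote("WorkflowStep(x -> y/z)"))  # "WorkflowStep(x -> y/z)"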

Document .add_queue()

Currently this is used internally by .config.parse_config(), but it could be further demonstrated on a documentation page.

Switch default Quantity: AttrSeries → SparseDataArray

Inherited from iiasa/ixmp#191:

xarray 0.13 includes support for converting pd.DataFrame to a pydata/sparse data structure.
This should mostly obviate the need for the custom AttrSeries class.
A PR should be opened to make the change, test performance, and make any necessary adjustments.
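For reference, the xarray feature mentioned (a minimal example; requires the sparse package):

import pandas as pd
import xarray as xr

idx = pd.MultiIndex.from_tuples([("a", "x"), ("b", "y")], names=["i", "j"])
s = pd.Series([1.0, 2.0], index=idx)

# The resulting data are backed by sparse.COO, not a dense numpy array
da = xr.DataArray.from_series(s, sparse=True)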


As of genno 1.0, all code is tested with both AttrSeries and SparseDataArray to minimize surprises on switching.

#27 should probably be done first.

Add 'replace', 'splice', and/or 'insert' operations

These would:

  • Duplicate an existing computation identified by a key k using a new key, k_new.
  • Replace k with a new computation that receives k_new as input.

This basic feature would support operations like:

  • Insert a pass-through step that logs a particular Quantity, makes an assertion, etc.
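A rough sketch of the 'insert' case, manipulating Computer.graph directly (names illustrative; a real implementation would use proper Key handling and checks):

def insert(c, key, operator):
    """Insert `operator` as a pass-through step that computes `key`.

    The existing computation at `key` moves to `key_new`; `key` is replaced
    by a task that applies `operator` to `key_new`.
    """
    key_new = f"{key}+pre"
    c.graph[key_new] = c.graph[key]     # duplicate the existing computation
    c.graph[key] = (operator, key_new)  # `key` now receives `key_new` as input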

Support consistent ordering in `Computer.add_queue()`

Discussing MESSAGEix-Transport with @measrainsey revealed that the semantics of Computer.add_queue() are unnecessarily confusing:

def func_x(input_1, input_2): ...

c.add_queue(
    [
        ("X:a-b-c:tag", func_x, "input 1:a-b", "input 2:b-c"),
        ("func_y", "Y:a-b-c:tag", "input 1:a-b", "input 2:b-c"),
    ]
)

In the first entry, the reference to the callable func_x is provided explicitly, i.e. a name in the current namespace. "X:a-b-c:tag" is the key assigned to the output of func_x.

In the second case, "func_y" is a string which identifies a function that the Computer can locate in one of the modules known to it (Computer.modules); this is a convenience via Computer.add(). The key ("Y:a-b-c:tag") assigned to the output of func_y appears in the second position.

Mixing the two makes for awkward user code in which the order of the first two elements is not consistent.

We should:

  • Allow for a consistent order, e.g. ("Y:a-b-c:tag", "func_y", "input 1:a-b", "input 2:b-c"), in the second case.
  • Guide users to migrate code.
  • Possibly deprecate the current behaviour.
  • Improve documentation to make clear the possible usages, both before and after migration.

Test failures with sparse 0.15.0

Comparing run A with run B, the update to sparse 0.15.0 caused numerous failures due to:

AttributeError: module 'sparse' has no attribute 'astype'
AttributeError: module 'sparse' has no attribute 'broadcast_to'
AttributeError: module 'sparse' has no attribute 'concat'
AttributeError: module 'sparse' has no attribute 'isnan'

…all via xarray.core.duck_array_ops.

These appear to be noted at pydata/xarray#8602 and pydata/sparse#622.

Fix
Upstream: pydata/sparse#623, which will presumably be released in the next version of sparse, e.g. 0.15.1.

Mitigation

  • Don't use genno with sparse 0.15.0; maybe add this constraint to pyproject.toml.
  • Temporarily exclude sparse 0.15.0 from genno CI.

v1.22.0 is incompatible with latest version of iiasa/message_ix/main

The recent release of v1.22.0 broke some CI tests for message-ix-models, but curiously only those that make use of the latest versions of iiasa/ixmp/main and iiasa/message_ix/main. Unfortunately, I can't quite pinpoint the exact origin of this error.

By comparing the last working test with the first failed one and the ones that continue to be successful, I found the following:

                   genno    message_ix                                 prompt_toolkit   using ixmp/message_ix version
last success       1.21.0   2d7b3f538d7a99edb8539fdc8831e5230e2ee23d   3.0.42           main
first fail         1.22.0   6b4f6304df99ed8aeef5102f894aa3ca64daca43   3.0.43           main
still successful   1.22.0   3.7.0                                      3.0.43           3.7.0

The commits for ixmp and message-ix-models remained exactly the same: 6cab755 for ixmp and 5afc9330c334dfbfa3c4fa2cea9b0061cc93f074 for message-ix-models.

I include prompt_toolkit only for completeness; I don't think it is the likely culprit, but its version also changed between the scenarios, while all other packages remained fixed.

In my view, the current genno version is not compatible with the latest changes of message_ix/main, but of course, you can flip this view around. Either way, here's the traceback:

=================================== FAILURES ===================================
_________________________________ test_compat __________________________________
[gw1] linux -- Python 3.11.7 /opt/hostedtoolcache/Python/3.11.7/x64/bin/python

tmp_path = PosixPath('/tmp/pytest-of-runner/pytest-0/popen-gw1/test_compat0')
test_context = <Context object at 139701973667856 with 3 keys>

    @to_simulate.minimum_version
    def test_compat(tmp_path, test_context):
        import numpy.testing as npt
    
        rep = ss_reporter()
        prepare_reporter(test_context, reporter=rep)
    
        rep.add("scenario", ScenarioInfo(model="Model name", scenario="Scenario name"))
    
        # Tasks can be added to the reporter
        callback(rep, test_context)
    
        # Select a key
        key = (
            "transport emissions full::iamc"  # IAMC structure
            # "Transport"  # Top level
            # "Hydrogen_trp"  # Second level
            # "inp_nonccs_gas_tecs_wo_CCSRETRO"  # Third level
            # "_26"  # Fourth level
        )
    
        # commented: Show what would be done
        # print(rep.describe(key))
        # rep.visualize(tmp_path.joinpath("visualize.svg"), key)
    
        # Calculation runs
>       result = rep.get(key)

message_ix_models/tests/report/test_compat.py:46: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <message_ix.report.Reporter object at 0x7f0ee5616a90>
key = <transport emissions full::iamc>

    def get(self, key=None):
        """Execute and return the result of the computation `key`.
    
        Only `key` and its dependencies are computed.
    
        Parameters
        ----------
        key : str, optional
            If not provided, :attr:`default_key` is used.
    
        Raises
        ------
        ValueError
            If `key` and :attr:`default_key` are both :obj:`None`.
        """
        if key is None:
            if self.default_key is not None:
                key = self.default_key
            else:
                raise ValueError("no default reporting key set")
        else:
            key = self.check_keys(key)[0]
    
        # Protect 'config' dict, so that dask schedulers do not try to interpret its
        # contents as further tasks. Workaround for
        # https://github.com/dask/dask/issues/3523
        self.graph["config"] = dask.core.quote(self.graph.get("config", dict()))
    
        # Cull the graph, leaving only those needed to compute *key*
        dsk, _ = cull(self.graph, key)
        log.debug(f"Cull {len(self.graph)} -> {len(dsk)} keys")
    
        try:
            result = dask_get(dsk, key)
        except Exception as exc:
>           raise ComputationError(exc) from None
E           genno.core.exceptions.ComputationError: computing <_Hydrogen_tot:nl-ya-m-yv-h> using:
E           
E           (functools.partial(<operator sum>, dimensions=['r', 'nr', 'yr', 't'], weights=None), <_Hydrogen_tot:r-nr-yr-nl-t-ya-m-yv-h>)
E           
E           Use Computer.describe(...) to trace the computation.
E           
E           Computation traceback:
E             File "/opt/hostedtoolcache/Python/3.11.7/x64/lib/python3.11/site-packages/genno/core/operator.py", line 50, in __call__
E               return self.func(*args, **kwargs)
E                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
E             File "/opt/hostedtoolcache/Python/3.11.7/x64/lib/python3.11/site-packages/genno/operator.py", line 987, in sum
E               "name", div(mul(quantity, _w).sum(dim=dimensions), w_total), quantity
E                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E             File "/opt/hostedtoolcache/Python/3.11.7/x64/lib/python3.11/site-packages/genno/core/attrseries.py", line 503, in sum
E               raise ValueError(
E           ValueError: {'r'} not found in array dimensions ['nr', 'yr', 'nl', 't', 'ya', 'm', 'yv', 'h']

/opt/hostedtoolcache/Python/3.11.7/x64/lib/python3.11/site-packages/genno/core/computer.py:646: ComputationError

It looks like somewhere in the definition of the dimensions of _Hydrogen_tot, either a dimension r was erroneously added, or it was dropped somewhere by mistake. Unfortunately, I can't find the exact location where either happens.
