bvobart / mllint

`mllint` is a command-line utility to evaluate the technical quality of Python Machine Learning (ML) projects by means of static analysis of the project's repository.

Home Page: https://bvobart.github.io/mllint/

License: GNU General Public License v3.0

Languages: Go 97.36%, Python 2.12%, Shell 0.43%, Dockerfile 0.09%
Topics: machine-learning, machinelearning, python, machinelearning-python, ai, artificial-intelligence, golang-application, pip, static-analysis, best-practices

mllint's Introduction


Attention! This tool is no longer maintained

As detailed below, I wrote mllint during my MSc thesis in Computer Science between February and October of 2021. I have since graduated and am now no longer developing or actively maintaining this package.

mllint does still work, so feel free to use it! If you find any bugs, feel free to create an issue; I still receive notifications of new issues and there's a good chance that I'll look at them in my free time, but I can't guarantee a timely response or a fix for your issue.

For those interested in the research output produced in my MSc thesis:

mllint is a command-line utility to evaluate the technical quality of Machine Learning (ML) and Artificial Intelligence (AI) projects written in Python by analysing the project's source code, data and configuration of supporting tools. mllint aims to ...

  • ... help data scientists and ML engineers in creating and maintaining production-grade ML and AI projects, both on their own personal computers as well as on CI.
  • ... help ML practitioners inexperienced with Software Engineering (SE) techniques explore and make effective use of battle-hardened SE-for-ML tools in the Python ecosystem.
  • ... help ML project managers assess the quality of their ML and AI projects and receive recommendations on what aspects of their projects they should focus on improving.

mllint does this by measuring the project's adherence to ML best practices, as collected and deduced from SE4ML and Google's Rules for ML. Note that these best practices are rather high-level, while mllint aims to give practical, down-to-earth advice to its users. mllint may therefore be somewhat opinionated, as it tries to advocate specific tools that best fit these practices. However, mllint aims to only recommend open-source tooling and publicly verifiable practices. Feedback is of course always welcome!

mllint was created during my MSc thesis in Computer Science at the Software Engineering Research Group (SERG) at TU Delft and ING's AI for FinTech Research Lab on the topic of Code Smells & Software Quality in Machine Learning projects.

See docs/example-report.md for a full example report generated by mllint.

See also the mllint-example-projects repository to explore the reports of an example project using mllint to measure and improve its project quality over several iterations.

See also mllint's website for online documentation of all of its linting rules and categories.


Installation

mllint is compiled for Linux, MacOS and Windows, in both 64-bit and 32-bit x86 variants (MacOS: 64-bit only), as well as 64-bit ARM on Linux and MacOS (Apple M1).

mllint is published to PyPI, so it can be installed globally or in your current environment using pip:

pip install --upgrade mllint

Alternatively, to add mllint to an existing project, if your project uses Poetry for its dependencies:

poetry add --dev mllint

Or if your project uses Pipenv:

pipenv install --dev mllint

Tools

mllint has a soft dependency on several Python tools that it uses for its analysis. While mllint will recommend that you place these tools in your project's development dependencies, these tools are listed as optional dependencies of mllint and can be installed along with mllint using:

pip install --upgrade mllint[tools]

Docker

There are also mllint Docker containers available on Docker Hub at bvobart/mllint for Python 3.6, 3.7, 3.8 and 3.9. These may be particularly helpful when running mllint in CI environments, such as GitLab CI or GitHub Actions. See Docker Hub for a full list of available tags.

The Docker containers require that you mount the folder with your project onto the container as a volume on /app. Here is an example of how to use this Docker container, assuming that your project is in the current folder. Replace $(pwd) with the full path to your project folder if it is somewhere else.

docker run -it --rm -v $(pwd):/app bvobart/mllint:latest
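
For example, a minimal GitLab CI job using this image could look like the following sketch (the job name and artifact handling are illustrative; it assumes the runner checks out the project into the working directory and that mllint is on the image's PATH):

mllint:
  image: bvobart/mllint:latest
  script:
    - mllint --output report.md
  artifacts:
    paths:
      - report.md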

Usage

mllint is designed to be used both on your personal computer as well as on CI systems. So, open a terminal in your project folder and run one of the following commands, or add it to your project's CI script.

To run mllint on the project in the current folder, simply run:

mllint

To run mllint on a project in another folder, simply run:

mllint path/to/my-ml-project

mllint will analyse your project and create a Markdown-formatted report of its analysis. By default, this will be pretty printed to your terminal.

If you instead prefer to export the raw Markdown text to a file, which may be particularly useful when running on CI, use the --output or -o flag and provide a filename. mllint does not overwrite the destination file if it already exists, unless --force or -f is used. For example:

mllint --output report.md

Using - (a dash) as the filename prints the raw Markdown directly to your terminal:

mllint -o -

In CI scripts, such raw markdown output (whether as a file or printed to the standard output) can be used to e.g. make comments on pull/merge requests or create Wiki pages on your repository.

See docs/example-report.md for an example of a report that mllint generates, or explore those generated for the example projects.

Of course, feel free to explore mllint help for more information about its commands and to discover additional flags that can be used.

Linters, Categories and Rules

mllint analyses your project by evaluating several categories of linting rules. Each category, as well as each rule, has a 'slug', i.e., a lowercase, dash-separated piece of text, with slashes separating category and rule names, e.g., code-quality/pylint/no-issues. This slug identifies a rule and is often (if not always) displayed next to the category or rule that it references.

Command-line

To list all available (implemented) categories and linting rules, run:

mllint list all

To list all enabled linting rules, run (optionally providing the path to the project's folder):

mllint list enabled

By default, all of mllint's rules are enabled. See Configuration to learn how to selectively disable certain rules.

To learn more about a certain rule or category, use mllint describe along with the slug of the category or rule:

# Describe the Version Control category. This will also list the rules that it checks.
mllint describe version-control

# Use the exact slug of a rule to describe one rule,
# e.g., the rule on DVC usage in the Version Control category
mllint describe version-control/data/dvc

# Use a partial slug to describe all rules whose slug starts with this snippet, 
# e.g., all rules about version controlling data
mllint describe version-control/data

Online Documentation

Alternatively, visit the Categories and Rules pages on mllint's website to view the latest online documentation of these rules.

Custom linting rules

It is also possible to define your own custom linting rules by implementing a script or program that mllint will run while performing its analysis. These custom rules need to be defined in mllint's configuration. For more information on how to do this, see mllint describe custom or view the online documentation on mllint's website. A sketch of such a configuration follows below.
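
As a sketch, a custom rule in a .mllint.yml could look as follows (the exact fields are documented by mllint describe custom; this example's name, slug and script are hypothetical):

rules:
  custom:
    - name: Project complies with our internal checklist
      slug: custom/internal-checklist
      details: Markdown description of what this rule checks and why.
      weight: 1
      run: python check_project.py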


Configuration

mllint can be configured either using a .mllint.yml file or through the project's pyproject.toml. This allows you to:

  • selectively disable specific linting rules or categories using their slug
  • define custom linting rules
  • configure specific settings for various linting rules.

See the code snippets and commands provided below for examples of such configuration files.

Commands

To print mllint's current configuration in YAML format, run (optionally providing the path to the project's folder):

mllint config

To print mllint's default configuration in YAML format, run (unless there is a folder called default in the current directory):

mllint config default

To create a .mllint.yml file from mllint's default configuration, run:

mllint config default -q > .mllint.yml

YAML

An example .mllint.yml that disables some rules looks as follows:

rules:
  disabled:
    - version-control/code/git
    - dependency-management/single

Similar to the describe command, this also matches partial slugs. So, to disable all rules regarding version controlling data, use version-control/data.

TOML

If no .mllint.yml is found, mllint searches the project's pyproject.toml for a [tool.mllint] section. TOML has a slightly different syntax, but the structure is otherwise the same as the config in the YAML file.

An example pyproject.toml configuration of mllint is as follows. Note that it is equivalent to the YAML example above.

[tool.mllint.rules]
disabled = ["version-control/code/git", "dependency-management/single"]

Getting Started (development)

While mllint is a tool for the Python ML ecosystem and distributed through PyPI, it is actually written in Go, compiled to a static binary and published as platform-specific Python wheels.

To run mllint from source, install the latest version of Go for your operating system, then clone this repository and run go run . in the root of this repository. Use go test ./... or execute test.sh to run all of mllint's tests.

To test compiling and packaging mllint into a Python wheel for your current platform, run test.package.sh.


mllint's Issues

Reduce the amount of console output in reports with lots of linter warnings

Intermediate results from the mllint survey mentioned that mllint sometimes produces an overwhelming amount of console output, especially when the report contains loads of linter warnings. One line is generated for each warning, so that can make a report hundreds of lines long, which all get printed to the terminal at the same time when analysis finishes.

We can use dynamic hide/show boxes for all the code quality linter warnings with <details> sections in the Markdown file output, but in terminal-rendered Markdown HTML elements are simply ignored, so we need to come up with a substitute for that.
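
For reference, such a collapsible section in the Markdown output would look something like this (the summary text is illustrative):

<details>
<summary>Pylint reported 123 issues</summary>

... one line per linter warning ...

</details>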

The simplest option would be to omit the linter warnings and only print the number of messages, perhaps with a summary of their severity. A flag to re-enable full linter output would then also be useful.

Another option would be to have mllint's output open in a pager such as less (or some Golang equivalent like https://github.com/tigrawap/slit) when the output has more lines than the current terminal height. This would reduce the shock of the output suddenly appearing on screen. It would still need to be disabled if stdout is not a TTY, such as on CI (in which case it should still just print the entire output). We want to prevent opening a pager in an automated script, as that causes the pager to endlessly wait for user input, blocking the script.

Allow users to specify custom linting rules

Let's face it: neither I, nor all the developers who will ever work on this project, will have the time to implement linting rules for every single tool under the sun. Also, for closed-source tools or business-specific practice enforcement, users may want to build their own custom checks to enforce on their projects through mllint configurations.

The api.Linter interface has only three methods (Name, Rules and LintProject), which all return serialisable data, so it should be pretty easy to allow a user to define a custom linter in the mllint config, with a simple console command for LintProject. mllint would then provide the project details (and possibly the mllint config?) to the command via stdin. The command is then expected to print a YAML or JSON linter result to the standard output. This linter result consists of what LintProject usually returns, i.e. a report and an error if there was any.

What is probably easier to configure, though, is to have each custom rule define a script to lint it, which must then only return a score and details as YAML or JSON (e.g. { score: 100, details: "" }). Any non-YAML/JSON output is interpreted as an error, obviously automatically failing the rule. Having a linter that checks a single rule is different from api.Linter in mllint, as an api.Linter may (and often should) check multiple rules in one run. However, this is probably easier to define.

Idea for how it will look in the config:

rules:
  custom:
    - name: Project complies to custom rule 1
      slug: custom/rule-1
      details: |
        Go all out with a full `markdown` description of this rule, what it checks, why it's there and how to implement it for a project.
        Multiline too.
      weight: 1 # importance within the Custom category of rules.
      run: python check_project.py # can be any command that can be passed to Golang's `os/exec`
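
A matching check_project.py could then be as simple as this sketch (the check itself is hypothetical; it merely follows the { score, details } shape proposed above):

import json
import os

# Hypothetical custom check: pass if the project has a README.
passed = os.path.isfile("README.md")

# Print the rule's result as JSON on stdout; any non-YAML/JSON output
# would be interpreted as an error, automatically failing the rule.
print(json.dumps({
    "score": 100 if passed else 0,
    "details": "README.md found." if passed else "README.md is missing.",
}))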

To implement it all:

  • Determine and implement proper config structure for implementing custom linter
  • Create a Custom (custom) category and implement a linter to run all these custom linting rules and aggregate the results.
  • Test this custom linter runner thing.
  • Display the output of these custom linters in the report nicely.
  • Write some documentation and perhaps some examples as to how users should set up these custom rules.

Including and excluding files and/or folders

Context / Problem to solve

Who

The developer

As a developer I want to be able to configure mllint instead of the underlying linters.

What

Currently, by default the mllint tool includes all files/folders, even virtual environments.
To exclude the virtual environment you have to create a configuration setting for each specific linter.

Why

This is an issue since as a developer I want to use mllint without thinking about the underlying linters.

Feature

It would be nice if for at least some settings we can simply configure mllint instead of the underlying linters.
One such setting would be including and excluding project files.

Create Wiki with documentation of rules and categories

Currently, you have to run mllint list all and mllint describe some-slug to get information about a rule. It would be better if mllint's CI auto-generated content for a website for this repo on every release.

Idea:
- Implement generating Markdown version of Wiki page containing docs on every rule and category in mllint.
- Use this Wiki page creator action to generate a Wiki page from the mllint generated Markdown code.

Tasks:

  • Look into and set up Github Pages
  • Create a basic website (using Hugo)
  • Find a good theme (PaperMod)
  • Write some content for any other pages.
  • Implement a script to generate Hugo content from rules & categories descriptions
  • Set up CI pipeline to automatically deploy the website on pushes to master or on tagged releases.
  • Update ReadMe with link to website.
  • Merge to master

Formatting not working on PowerShell

I'm working on Windows.
Formatting is not working properly in PowerShell.
Is PowerShell supported?

This is what mllint looks like in PowerShell.
[screenshot: garbled mllint output in PowerShell]

For comparison, this is what mllint looks like in git bash on the same computer (everything OK).
[screenshot: correctly rendered mllint output in git bash]

Crash (panic: runtime error) if Poetry is installed on system

If run on a system with Poetry installed, the application crashes.

» mllint -o mllint-report.md                                                                                                                                   
Linting project at  /home/hielke/repos/REMLA-project-group-7                                                                                                   
Using configuration from .mllint.yml (default: false)                                                                                                          
---                                                                                                                                                            
                                                                                                                                                               
panic: runtime error: invalid memory address or nil pointer dereference                                                                                        
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x62e632]                                                                                         
                                                                                                                                                               
goroutine 1 [running]:                                                                                                                                         
github.com/pelletier/go-toml.(*Tree).GetPath(0x0, 0xc0011af870, 0x1, 0x1, 0x0, 0x0)                                                                            
        /home/runner/go/pkg/mod/github.com/pelletier/[email protected]/toml.go:118 +0xf2                                                                          
github.com/pelletier/go-toml.(*Tree).HasPath(...)                                                                                                              
        /home/runner/go/pkg/mod/github.com/pelletier/[email protected]/toml.go:66                                                                                 
github.com/pelletier/go-toml.(*Tree).Has(0x0, 0xc5935e, 0x4, 0x0)                                                                                              
        /home/runner/go/pkg/mod/github.com/pelletier/[email protected]/toml.go:61 +0xa5                                                                           
github.com/bvobart/mllint/setools/depmanagers.Poetry.HasDevDependency(...)                                                                                     
        /home/runner/work/mllint/mllint/setools/depmanagers/poetry.go:47                                                                                       
github.com/bvobart/mllint/setools/depmanagers.Poetry.HasDependency(0xc0011af7b0, 0xc5935e, 0x4, 0xc000d334c0)                                                  
        /home/runner/work/mllint/mllint/setools/depmanagers/poetry.go:43 +0x85                                                                                 
github.com/bvobart/mllint/setools/cqlinters.DetectLinter(0x13330b8, 0x1bbb1d8, 0xc00002c034, 0x28, 0xc000d334c0, 0x36, 0xc00110c270, 0x28, 0xc001110080, 0x6, ...)
        /home/runner/work/mllint/mllint/setools/cqlinters/detect.go:32 +0xfd
github.com/bvobart/mllint/setools/cqlinters.Detect(0xc00002c034, 0x28, 0xc000d334c0, 0x36, 0xc00110c270, 0x28, 0xc001110080, 0x6, 0x0, 0xc0010c73e0, ...)
        /home/runner/work/mllint/mllint/setools/cqlinters/detect.go:15 +0x13f
github.com/bvobart/mllint/commands.(*runCommand).runPreAnalysisChecks(0xc00103fbd0, 0x1, 0x1)
        /home/runner/work/mllint/mllint/commands/run.go:57 +0x198
github.com/bvobart/mllint/commands.(*runCommand).RunLint(0xc00103fbd0, 0xc000feea00, 0xc0010b6fe0, 0x0, 0x2, 0x5, 0xc0010b3f80)
        /home/runner/work/mllint/mllint/commands/run.go:98 +0x2c5
github.com/bvobart/mllint/commands.runRoot(0xc000feea00, 0xc0010b6fe0, 0x0, 0x2, 0x0, 0x0)
        /home/runner/work/mllint/mllint/commands/root.go:51 +0x12d
github.com/spf13/cobra.(*Command).execute(0xc000feea00, 0xc00001e0a0, 0x2, 0x2, 0xc000feea00, 0xc00001e0a0)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x472
github.com/spf13/cobra.(*Command).ExecuteC(0xc000feea00, 0x2f7bf60, 0x1b8bd20, 0x7fc5cab55a98)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:960 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:897
github.com/bvobart/mllint/commands.Execute(0xb8c5c0, 0xc0000380b8)
        /home/runner/work/mllint/mllint/commands/root.go:13 +0x5b
main.main()
        /home/runner/work/mllint/mllint/main.go:10 +0x25

OS: Debian 11
Python: 3.9.2
mllint version: 0.12.1
mllint commit: 7e2a29d

New Linter - Testing practices

This linter will need to check for correct testing practices.

  • Check whether the project has tests --> either *_test.py or test_*.py.
  • Check whether the tests pass, through a JUnit-compatible XML report. All must pass; the score is the percentage of tests passed. An example of generating such a report follows below.
  • Check whether the project's tests are all in a tests folder.
  • Check whether the project provides a test coverage report and whether it passes. The default test coverage target is 80%, but this can be configured.
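
For context, such a JUnit-compatible XML report can be generated by, for example, pytest (assuming the project uses pytest as its test runner; the report filename is illustrative):

pytest --junitxml=tests-report.xml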

Adjust weights for rules according to feedback from survey

When there are more results from the survey, we can start tweaking the weights on each of the rules, as these are currently all 1.

One thing was already noted in the Reddit discussion: the weight for the rule for how many code quality linters are detected / how many are being used, should be weighted 0. The rule is meant to inform users of the linters that are out there and that we recommend, but should prevent users from adding linters to their project just for the sake of adding linters. We may still score the rules for the analysis that those linters do (e.g. code-quality/pylint/no-issues) at 0% when that linter is not installed in the environment they are running mllint in.

Issue: No open issues :-)

Dear Bart,

I would like to contribute to your project, but I don't see what would be the most useful place to start. :-)

Suggestions would be highly welcome. :-)

I'm an ML engineer primarily using Python, btw. :-)

Regards,

Nemanja

Provide an aggregated score per category, as a weighted average of the scores of the rules within and their weights

This was actually one of the first objectives for mllint: to provide a quality score for the entire project, based on the scores of the categories and how important each of those categories is. The scores of the categories are calculated in much the same way, as a weighted average of the rules' scores with their weights. While it is still rather difficult to say which categories weigh more than others, let's at least implement the part where we calculate a score for each category as a weighted average of the rules within.

Note that in the beginning, the score will probably be equivalent to a normal average of all of a category's rule scores, as the weights given to each rule are currently still 1.
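
Concretely, with rule scores s_i and weights w_i, a category's score would be sum(w_i * s_i) / sum(w_i). As an illustrative sketch (not mllint's actual implementation):

def category_score(rule_scores, weights):
    """Weighted average of the rules' scores; with all weights at 1
    (the current default), this reduces to a plain average."""
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(rule_scores, weights)) / total_weight

# e.g. category_score([100, 80, 0], [1, 1, 1]) == 60.0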

New Linter - Data Quality

The goal for this issue is to build a linter for the Data Quality category to check whether and how a project is using tools to assert quality standards on its input data, e.g. GreatExpectations, TFDV and (maybe) Bulwark, the successor of Engarde.

Firstly, we should figure out, primarily for GreatExpectations and TFDV:

  1. How to apply these tools to an ML project? What generally needs to change about a project in order to implement such a tool? How much effort does this take? The latest branch of the basic project in the mllint-example-projects repo can be used as a base for this.
  2. What constitutes effective use of these tools? What kind of checks should / would ML engineers want to implement on their data? Are there any default checks that should always be enabled, or should the user create a certain set of their own checks somehow?
  3. How could mllint measure and assess whether a project is making effective use of these tools?
  4. How could mllint measure and assess whether the checks made by these tools passed? This could entail running GreatExpectations or TFDV in a similar way to what we do for Code Quality linters (Pylint, Mypy, etc.) and parsing the output (bonus points if this output can be formatted in a machine-readable way such as JSON or YAML).

Then, to implement it:

  • Figure out the answers to the above questions.
  • Determine which linting rules mllint will use to check whether a project is using GreatExpectations correctly
  • Determine which linting rules mllint will use to check whether a project is using TFDV correctly
  • Implement the linter to check these rules (just copy the template linter and start editing that)
  • Implement tests for the linter
  • Write the documentation to go with those rules
  • Write the documentation for the category

How to ignore python virtual environment?

My virtual environment is stored in my project in a directory called env. mllint doesn't seem to ignore this and I can't find a configuration option to exclude it. This results in a very slow analysis, gives me suggestions for all of my dependencies, and drowns out anything to do with my own project.

No wheel package for Python 3.10

In a Python 3.10 environment, mllint cannot be installed: pip only finds the source package, and building it from source fails. Considering the project's current state, the build from source is not correctly set up for such situations.

Profiles - Support different sets of rule weights depending on the project's use-case and maturity

ML projects come in many shapes and sizes; this heterogeneity also makes mllint an ambitious endeavour. Some rules are not equally applicable to every project, and some rules may want to give different recommendations based on what kind of project they are looking at. Productionised or production-ready ML applications have higher requirements on SE quality standards than proof-of-concept projects. For example, proof-of-concept projects don't need to be entirely lint-warning free or have highly advanced deployment setups ready yet. Similarly, different tooling may be necessary or desired for a TensorFlow project versus a PyTorch project versus an sklearn project.

mllint should therefore allow the user to specify a profile for the project in their mllint configuration. Such a profile will contain the maturity, i.e., the current stage in the project's lifecycle (e.g. proof-of-concept, production-ready, production), as well as some way of defining the high-level kind of tooling that is used in the project (e.g. tensorflow, pytorch, ...). The default would be a proof-of-concept project with auto-detected tooling.

The project's desired maturity level will be used to determine the weights of each rule. The auto-detectable high-level tooling setting can be used to steer recommendations towards or away from certain tools, e.g. TFDV instead of GreatExpectations for a TensorFlow project. One possible shape for such a profile is sketched below.
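
One possible shape for such a profile in a .mllint.yml (all field names are hypothetical; this merely sketches the idea):

profile:
  maturity: proof-of-concept   # or production-ready, production
  tooling: auto                # or e.g. tensorflow, pytorch, sklearn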

Perhaps in the far future, this could also be used for linting ML projects in different languages such as R or Julia.

panic: runtime error: invalid memory address or nil pointer dereference

System Information

OS Name:                   Microsoft Windows 10 Pro
OS Version:                10.0.19043 N/A Build 19043
Python 3.7.5
mllint version: 0.12.0
commit: 400f497aaf361481c6a650464767c52afecd7081
date: 2021-08-04T20:54:27Z
took: 8.7563ms

Steps to Reproduce

I run mllint src (src is my source code directory) and get the following error

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x18 pc=0x6b28a6]

The full output is below

Linting project at  C:\Development\computer-vision-papers/src
No .mllint.yml or pyproject.toml found in project folder, using default configuration
---

✔️ Done - Custom Rules (0.00 ms)
⏳ Waiting - Version Control (0.00 ms)
✔️ Done - Dependency Management (0.56 ms)
🏃 Running - Continuous Integration (CI) (0.00 ms)
🏃 Running - Code Quality (0.00 ms)
✔️ Done - Testing (0.56 ms)
Analysing project... (4/8)

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x18 pc=0x6b28a6]

goroutine 72 [running]:
github.com/bvobart/mllint/utils.FileExists(0xc0015812c0, 0x50, 0x2)
        /home/runner/work/mllint/mllint/utils/files.go:18 +0xa6
github.com/bvobart/mllint/setools/ciproviders.Azure.Detect(0xc000fc9050, 0x29, 0x40)
        /home/runner/work/mllint/mllint/setools/ciproviders/azure.go:19 +0xb3
github.com/bvobart/mllint/setools/ciproviders.Detect(0xc000fc9050, 0x29, 0x1afe3c0, 0xc0002da708, 0xb41648)
        /home/runner/work/mllint/mllint/setools/ciproviders/providers.go:37 +0xc9
github.com/bvobart/mllint/linters/ci.(*CILinter).LintProject(0x1b57af0, 0xc000fc9050, 0x29, 0xc000a3e440, 0x3c, 0xc0000e9f80, 0x28, 0xc0000acba0, 0xb, 0x1, ...)
        /home/runner/work/mllint/mllint/linters/ci/linter.go:27 +0xaf
github.com/bvobart/mllint/commands/mllint.(*MLLintRunner).runTask.func1(0xc000504d80, 0xc000a30480)
        /home/runner/work/mllint/mllint/commands/mllint/queueworker.go:89 +0xdf
created by github.com/bvobart/mllint/commands/mllint.(*MLLintRunner).runTask
        /home/runner/work/mllint/mllint/commands/mllint/queueworker.go:83 +0x74
