nolanlawson / optimize-js

Optimize a JS file for faster parsing (UNMAINTAINED)

Home Page: https://nolanlawson.github.io/optimize-js

License: Apache License 2.0

JavaScript 99.66% HTML 0.31% Shell 0.04%

optimize-js's Introduction


Optimize a JavaScript file for faster initial execution and parsing, by wrapping all immediately-invoked functions or likely-to-be-invoked functions in parentheses.

⚠️ Maintenance note ⚠️ This project is unmaintained. I consider it an interesting experiment, but I have no intention to keep updating the benchmark results with every new browser release, or to add new features. I invite folks to keep using it, but to be aware that they should heavily benchmark their own websites to ensure it's actually a significant performance improvement in their target browsers.

Update: The V8 team wrote a blog post about why you probably shouldn't use optimize-js anymore.

Install

npm install -g optimize-js

Usage

optimize-js input.js > output.js

Example input:

!function (){}()
function runIt(fun){ fun() }
runIt(function (){})

Example output:

!(function (){})()
function runIt(fun){ fun() }
runIt((function (){}))

Benchmark overview

Browser Typical speed boost/regression using optimize-js
Chrome 55 20.63%
Edge 14 13.52%
Firefox 50 8.26%
Safari 10 -1.04%

These numbers are based on a benchmark of common JS libraries. For benchmark details, see benchmarks.

To test on your own JavaScript bundle, check out test-optimize-js.

CLI

Usage: optimize-js [ options ]

Options:
  --source-map  include source map                                     [boolean]
  -h, --help    Show help                                              [boolean]

Examples:
  optimize-js input.js > output.js    optimize input.js
  optimize-js < input.js > output.js  read from stdin, write to stdout

JavaScript API

var optimizeJs = require('optimize-js');
var input = "!function() {console.log('wrap me!')}";
var output = optimizeJs(input); // "!(function() {console.log('wrap me!')})()"

You can also pass in arguments:

var optimizeJs = require('optimize-js');
var input = "!function() {console.log('wrap me!')}";
var output = optimizeJs(input, {
  sourceMap: true
}); // now the output has source maps

Why?

Modern JavaScript engines like V8, Chakra, and SpiderMonkey have a heuristic where they pre-parse most functions before doing a full parse. The pre-parse step merely checks for syntax errors while avoiding the cost of a full parse.

This heuristic is based on the assumption that, on the average web page, most JavaScript functions are never executed, or are executed only lazily. So pre-parsing can speed up startup by checking only for what the browser absolutely needs to know about the function (i.e. whether it's syntactically well-formed or not).

Unfortunately this assumption breaks down in the case of immediately-invoked function expressions (IIFEs), such as these:

(function () { console.log('executed!') })();
(function () { console.log('executed Crockford-style!') }());
!function () { console.log('executed UglifyJS-style!') }();

The good news is that JS engines have a further optimization, where they try to detect such IIFEs and skip the pre-parse step. Hooray!

The bad news, though, is that these heuristics don't always work, because they're based on a greedy method of checking for a '(' token immediately to the left of the function. (The parser avoids anything more intricate because it would amount to parsing the whole thing, negating the benefit of the pre-parse). In cases without the paren (which include common module formats like UMD/Browserify/Webpack/etc.), the browser will actually parse the function twice, first as a pre-parse and second as a full parse. This means that the JavaScript code runs much more slowly overall, because more time is spent parsing than needs to be. See "The cost of small modules" for an idea of how bad this can get.

Luckily, because the '(' optimization for IIFEs is so well-established, we can exploit this during our build process by parsing the entire JavaScript file in advance (a luxury the browser can't afford) and inserting parentheses in the cases where we know the function will be immediately executed (or where we have a good hunch). That's what optimize-js does.
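As a small runnable sketch of the transformation (the `root` object and property names here are placeholders for illustration, not actual optimize-js output):

```javascript
var root = {};

// Before: an UglifyJS-style IIFE without a leading paren. Engines
// pre-parse the function, then fully parse it again when it runs.
!function (r) { r.before = true; }(root);

// After optimize-js: the added parentheses match the engines' IIFE
// heuristic, so the function skips the pre-parse and is parsed once.
!(function (r) { r.after = true; })(root);
```

Both forms behave identically at runtime; only the engine's parsing strategy differs.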

More details on the IIFE optimization can be found in this discussion. Some of my thoughts on the virtues of compile-time optimizations can be found in this post.

FAQs

How does it work?

The current implementation is to parse to a syntax tree and check for functions that:

  1. Are immediately-invoked via any kind of call statement (function(){}(), !function(){}(), etc.)
  2. Are passed in directly as arguments to another function

The first method is an easy win – those functions are immediately executed. The second method is more of a heuristic, but tends to be a safe bet given common patterns like Node-style errbacks, Promise chains, and UMD/Browserify/Webpack module declarations.

In all such cases, optimize-js wraps the function in parentheses.
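For the heuristic case, a minimal sketch reusing the `runIt` example from above: the wrapped and unwrapped callbacks are semantically identical, and only the engine's pre-parse behavior differs:

```javascript
function runIt(fun) { return fun(); }

// Unwrapped: the callback hits the pre-parse path, then is parsed again.
var unwrapped = runIt(function () { return 42; });

// Wrapped by optimize-js: the '(' sends the callback down the fast path.
var wrapped = runIt((function () { return 42; }));
```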

But... you're adding bytes!

Yes, optimize-js might add as many as two bytes (horror!) per function, which amounts to practically nil once you take gzip into account. To prove it, here are the gzipped sizes for the libraries I use in the benchmark:

Script                                         Size (bytes)  Difference (bytes)
benchmarks/create-react-app.min.js                   160387
benchmarks/create-react-app.min.optimized.js         160824                +437
benchmarks/immutable.min.js                           56738
benchmarks/immutable.min.optimized.js                 56933                +195
benchmarks/jquery.min.js                              86808
benchmarks/jquery.min.optimized.js                    87109                +301
benchmarks/lodash.min.js                              71381
benchmarks/lodash.min.optimized.js                    71644                +263
benchmarks/pouchdb.min.js                            140332
benchmarks/pouchdb.min.optimized.js                  141231                +899
benchmarks/three.min.js                              486996
benchmarks/three.min.optimized.js                    487279                +283

Is optimize-js intended for library authors?

Sure! If you are already shipping a bundled, minified version of your library, then there's no reason not to apply optimize-js (assuming you benchmark your code to ensure it does indeed help!). However, note that optimize-js should run after Uglify, since Uglify strips extra parentheses and also negates IIFEs by default. This also means that if your users apply Uglification to your bundle, then the optimization will be undone.

Also note that because optimize-js optimizes for some patterns that are based on heuristics rather than known eagerly-invoked functions, it may actually hurt your performance in some cases. (See benchmarks below for examples.) Be sure to check that optimize-js is a help rather than a hindrance for your particular codebase, using something like:

<script>
var start = performance.now();
</script>
<script src="myscript.js"></script>
<script>
var end = performance.now();
console.log('took ' + (end - start) + 'ms');
</script>

Note that the script boundaries are actually recommended, in order to truly measure the full parse/compile time. If you'd like to avoid measuring the network overhead, you can see how we do it in our benchmarks.

You may also want to check out marky, which allows you to easily set mark/measure points that you can visually inspect in the Dev Tools timeline to ensure that the full compile time is being measured.

Also, be sure to test in multiple browsers! If you need an Edge VM, check out edge.ms.

Shouldn't this be Uglify's job?

Possibly! This is a free and open-source library, so I encourage anybody to borrow the code or the good ideas. :)

Why not paren-wrap every single function?

As described above, the pre-parsing optimization in browsers is a very good idea for the vast majority of the web, where most functions aren't immediately executed. However, since optimize-js knows when your functions are immediately executed (or can make reasonable guesses), it can be more judicious in applying the paren hack.

Does this really work for every JavaScript engine?

Based on my tests, this optimization seems to work best for V8 (Chrome), followed by Chakra (Edge), followed by SpiderMonkey (Firefox). For JavaScriptCore (Safari) it seems to be basically a wash, and may actually be a slight regression overall depending on your codebase. (Again, this is why it's important to actually measure on your own codebase, on the browsers you actually target!)

In the case of Chakra, Uglify-style IIFEs are actually already optimized, but using optimize-js doesn't hurt because a function preceded by '(' still goes into the fast path.

Benchmarks

These tests were run using a handful of popular libraries, wrapped in performance.now() measurements. Each test reported the median of 251 runs. optimize-js commit da51013 was tested. Minification was applied using uglifyjs -mc, Uglify 2.7.0.

You can also try a live version of the benchmark.

Chrome 55, Windows 10 RS1, Surfacebook i5

Script Original Optimized Improvement Minified Min+Optimized Improvement
Create React App 55.39ms 51.71ms 6.64% 26.12ms 21.09ms 19.26%
ImmutableJS 11.61ms 7.95ms 31.50% 8.50ms 5.99ms 29.55%
jQuery 22.51ms 16.62ms 26.18% 19.35ms 16.10ms 16.80%
Lodash 20.88ms 19.30ms 7.57% 20.47ms 19.86ms 3.00%
PouchDB 43.75ms 20.36ms 53.45% 36.40ms 18.78ms 48.43%
ThreeJS 71.04ms 72.98ms -2.73% 54.99ms 39.59ms 28.00%
Overall improvement: 20.63%

Edge 14, Windows 10 RS1, SurfaceBook i5

Script Original Optimized Improvement Minified Min+Optimized Improvement
Create React App 32.46ms 24.85ms 23.44% 26.49ms 20.39ms 23.03%
ImmutableJS 8.94ms 6.19ms 30.74% 7.79ms 5.41ms 30.55%
jQuery 22.56ms 14.45ms 35.94% 16.62ms 12.99ms 21.81%
Lodash 22.16ms 21.48ms 3.05% 15.77ms 15.46ms 1.96%
PouchDB 24.07ms 21.22ms 11.84% 39.76ms 52.86ms -32.98%
ThreeJS 43.77ms 39.99ms 8.65% 54.00ms 36.57ms 32.28%
Overall improvement: 13.52%

Firefox 50, Windows 10 RS1, Surfacebook i5

Script Original Optimized Improvement Minified Min+Optimized Improvement
Create React App 33.56ms 28.02ms 16.50% 24.71ms 22.05ms 10.76%
ImmutableJS 6.52ms 5.75ms 11.80% 4.96ms 4.58ms 7.47%
jQuery 15.77ms 13.97ms 11.41% 12.90ms 12.15ms 5.85%
Lodash 17.08ms 16.63ms 2.64% 13.11ms 13.22ms -0.80%
PouchDB 19.23ms 16.77ms 12.82% 13.77ms 12.89ms 6.42%
ThreeJS 38.33ms 37.36ms 2.53% 33.01ms 30.32ms 8.16%
Overall improvement: 8.26%

Safari 10, macOS Sierra, 2013 MacBook Pro i5

Script Original Optimized Improvement Minified Min+Optimized Improvement
Create React App 31.60ms 31.60ms 0.00% 23.10ms 23.50ms -1.73%
ImmutableJS 5.70ms 5.60ms 1.75% 4.50ms 4.50ms 0.00%
jQuery 12.40ms 12.60ms -1.61% 10.80ms 10.90ms -0.93%
Lodash 14.70ms 14.50ms 1.36% 11.10ms 11.30ms -1.80%
PouchDB 14.00ms 14.20ms -1.43% 11.70ms 12.10ms -3.42%
ThreeJS 35.60ms 36.30ms -1.97% 27.50ms 27.70ms -0.73%
Overall improvement: -1.04%

Note that these results may vary based on your machine, how taxed your CPU is, gremlins, etc. I ran the full suite a few times on all browsers and found these numbers to be roughly representative. In our test suite, we use a median of 151 runs to reduce variability.

Plugins

See also

Thanks

Thanks to @krisselden, @bmaurer, and @pleath for explaining these concepts in the various GitHub issues. Thanks also to astexplorer, acorn, and magic-string for making the implementation so easy.

Thanks to Sasha Aickin for generous contributions to improving this library (especially in v1.0.3) and prodding me to improve the accuracy of the benchmarks.

Contributing

Build and run tests:

npm install
npm test

Run the benchmarks:

npm run benchmark # then open localhost:9090 in a browser

Test code coverage:

npm run coverage

Changelog

  • v1.0.3
    • Much more accurate benchmark (#37)
    • Browserify-specific fixes (#29, #36, #39)
    • Webpack-specific fixes (#7, #34)
  • v1.0.2
    • Use estree-walker to properly parse ES6 (#31)
  • v1.0.1:
    • Don't call process.exit(0) on success (#11)
  • v1.0.0
    • Initial release

optimize-js's People

Contributors

aickin, austinkelleher, hhnr, hugoabonizio, jfsiii, lattsi, michelgotta, mlrawlings, nolanlawson, sergejmueller, vigneshshanmugam, ymichael


optimize-js's Issues

Not as optimized for Browserify as it could be.

In the vein of #7, I think optimize-js also doesn't work optimally for Browserify.

Whereas Webpack wraps modules in function expressions that are elements in an array, Browserify wraps modules in function expressions that are elements in an array which is in turn a value in an object with numeric keys, something like this:

!function(o){
  /* loader code */
}({
  1:[function(o,r){/*module 1 code */}, {}],
  2:[function(o,r){/*module 2 code */}, {}],
  3:[function(o,r){/*module 3 code */}, {}]
})

I noticed in making my patch for #7 and testing on The Cost of Small Modules benchmark repo that optimize-js had essentially no effect on the Browserify bundles, and I'm pretty sure this is why.
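A sketch of what this issue is asking for (the loader here is a stand-in that eagerly invokes each factory, the way require() effectively does at startup; it is not real Browserify runtime code): each module factory inside the object would also be paren-wrapped:

```javascript
var moduleCount = 0;

!(function (modules) {
  // Stand-in loader: invoke every module factory, as require() would.
  for (var key in modules) {
    modules[key][0]();
    moduleCount++;
  }
})({
  1: [(function () { /* module 1 code */ }), {}],
  2: [(function () { /* module 2 code */ }), {}],
  3: [(function () { /* module 3 code */ }), {}]
});
```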

Feature Request or missing Documentation?

Hi, thanks for optimize-js. I would like to include it in my webpack workflow. Question:

  • does a Webpack plugin already exist?
  • is #7 an issue with creating one, if it does not already exist?

Do minifiers undo this?

I'd just like to have it clarified (and preemptively warned about) what happens when using a minifier after this. I'm worried they might detect the parentheses as unnecessary and remove them.

Clarify what is being faster

The README mentions several phrases, e.g. "for faster execution", "speed boost", "wrapped in performance.now() measurements". I think it is helpful for the reader (without the need to look at the benchmark code) to be very specific as to what timing metric is being measured and compared.

walk.js aborts with error if input contains ES6 code

Perhaps because walk.js does not support ES6?

Example: start.js.txt

TypeError: types[type] is not a function
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.skip (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:293:3)
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.exports.BinaryExpression.exports.LogicalExpression.exports.AssignmentExpression (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:243:3)
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.skip (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:293:3)
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.exports.ExpressionStatement (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:32:3)
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.skip (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:293:3)

Encoding problem

Hi,

I have a JS file which contains the following line:

var char = '×××××××'

After I run optimize-js, I receive the following result:

var char = '´┐Ż´┐Ż´┐Ż´┐Ż´┐Ż´┐Ż´┐Ż'

I am using the following powershell script:

$PSDefaultParameterValues = @{ '*:Encoding' = 'utf8' } 
 <#npm install -g optimize-js #>
optimize-js 'c:\!Publish\js\test.js' > 'c:\!Publish\js\test2.js'

Example.zip

Made a Ruby gem

Hi.

I made a Ruby gem which allows you to invoke optimize-js from Ruby.
The gem also works well with Sprockets and Rails.
I am interested in this project and will maintain the gem until uglifyjs implements optimize-js's functionality.

Maybe someone will find this useful.

Repository lives here: https://github.com/yivo/optimize-js

I made a Web UI for optimize-js

Hi,

Since I had made a React interface [1] for the package javascript-obfuscator [2], I decided to re-use the same codebase to make a similar interface for optimize-js.

You can see it running here: https://optimizejs.herokuapp.com/ and the source code here: https://github.com/slig/optimize-js-ui

Not sure how useful it can be, but online demos are nice, I guess.

[1] https://github.com/javascript-obfuscator/javascript-obfuscator-ui
[2] https://github.com/javascript-obfuscator/javascript-obfuscator

Use benchmark.js or similar, report standard deviation

Right now I'm taking the median of 251 runs, but I'm a bit unsatisfied with this because I have not confirmed that 251 is enough to eliminate unreasonable variance, nor does it tell me the standard deviation or any of those niceties. As pointed out in #2 (comment), we should really have a more rigorous benchmark in place, especially as we start to potentially make changes to "improve" performance, which need to be proven to actually improve it.

I looked at benchmark.js but TBH couldn't figure out how to nicely integrate it with the current benchmark design; e.g. currently I request each script with a random query param to force the browser to re-parse and re-evaluate it, and the timings have to be inside the script itself, so it's very tricky to implement.

Your math is wrong

You list 9.42ms --> 5.27ms as a 128.73% improvement, but list 12.76ms --> 1.75ms as an 86.29% improvement!?

The actual correct percentages are "78.75% and 629.14% increases in speed (respectively)", or equivalently, "44.06% and 86.29% decrease in runtime".

The formulas are:
Increase in speed: old/new - 1
Decrease in runtime: 1 - new/old
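The two formulas can be sanity-checked with a couple of throwaway helpers (these are illustrative, not part of optimize-js):

```javascript
// "Increase in speed": old/new - 1, expressed as a percentage.
function speedIncrease(oldMs, newMs) {
  return (oldMs / newMs - 1) * 100;
}

// "Decrease in runtime": 1 - new/old, expressed as a percentage.
function runtimeDecrease(oldMs, newMs) {
  return (1 - newMs / oldMs) * 100;
}

console.log(speedIncrease(9.42, 5.27).toFixed(2));   // 78.75
console.log(runtimeDecrease(9.42, 5.27).toFixed(2)); // 44.06
console.log(speedIncrease(12.76, 1.75).toFixed(2));  // 629.14
console.log(runtimeDecrease(12.76, 1.75).toFixed(2)); // 86.29
```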

Reaches parse error with async functions

Repro:

example.js:

!function (){}()
async function runIt(fun){ fun() }
runIt(function (){})

optimize-js www/example.js

{
 SyntaxError: Unexpected token (2:6)
    at Parser.pp$4.raise (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:2223:15)
    at Parser.pp.unexpected (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:605:10)
    at Parser.pp.semicolon (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:583:61)
    at Parser.pp$1.parseExpressionStatement (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:968:10)
    at Parser.pp$1.parseStatement (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:732:24)
    at Parser.pp$1.parseTopLevel (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:640:25)
    at Parser.parse (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:518:17)
    at Object.parse (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:3100:39)
    at optimizeJs (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/lib/index.js:9:19)
    at /Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/lib/bin.js:26:15 pos: 23, loc: Position { line: 2, column: 6 }, raisedAt: 31 }

make a babel-plugin ?

A Babel plugin could parse more ES6 code, and would be easier to integrate with gulp/webpack.

Bad Readme example?

Seems the before & after are the same? Or should I not report bugs at 1am? ;)

Readme

Example input:

!function (){}()
function runIt(fun){ fun() }
runIt(function (){})

Example output:

!(function (){})()
function runIt(fun){ fun() }
runIt((function (){}))

Smarter function-passed-to-function heuristics?

Good candidates for wrapping:

  • forEach(function() {})
  • map(function() {})
  • then(function() {})
  • UMD/Browserify/Webpack definitions

Less-good candidates:

  • addEventListener('foo', function() {})
  • on('foo', function() {})
  • once('foo', function() {})
  • catch('foo', function() {}) (maybe?)

We can check for the function name as a hint to avoid the paren-wrapping, but it ought to be justified by a benchmark. Maybe a UI library that adds a lot of event listeners would be a good test case for this.

Not as optimized for Webpack as it could be

Looking at the Webpack output, it appears to me that those modules that are passed in as an array to another function (and are therefore not wrapped) should probably be wrapped. As with Browserify, I'd wager it's rare for a module not to be immediately require()d, meaning that wrapping all of them would provide a perf boost.

Need a reasonably-sized Webpack bundle in order to test this and confirm, though. TODO: find some large-ish library that is built with Webpack.
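A sketch of the shape in question (a stand-in loader over an array of module factories; real Webpack runtime code is more involved): wrapping each array element would put every factory on the fast path:

```javascript
var executed = 0;

(function (modules) {
  // Stand-in loader: invoke every module factory immediately,
  // mirroring the assumption that most modules are require()d at startup.
  modules.forEach(function (factory) {
    factory();
    executed++;
  });
})([
  (function () { /* module 0 code */ }),
  (function () { /* module 1 code */ })
]);
```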

Add an option to read sourcemap

Adding an option to access the source map would ease the integration with build tools.
e.g.:

const {code, map} = optimizeJs(input, {
  sourceMap: true,
  extractSourceMap: true,
})

Memory leak

Using optimize-js 1.0.3 somehow introduces a memory leak:
(screenshot: capture)

If I use 1.0.2, everything is OK.

Thanks, NPCRUS

Benchmark unfairly excludes compilation time.

As I've looked at Chrome timelines and thought about it, I have some concerns about the benchmark. I think it's somewhat biased in favor of optimize-js because optimize-js moves some CPU time from execution to compilation, but the benchmark only measures execution.

Basically, unoptimized code does a quick parse during the compilation phase, and then does a slow/complete parse during the execution phase (along with the actual execution of the code, obviously). After optimize-js has been run, the compilation phase does a slow/complete parse, and the execution phase just runs the code. But since the benchmark measures the time between executing the first line and the last line of the script, it is measuring only the execution phase, which means that the time increase in the compilation phase gets lost. I confirm this by looking at Chrome timelines; after optimize-js runs, the compilation phase goes up considerably, but the benchmark just reports the execute time.

I think the fairest benchmark is compilation + first execution, as this is what most pages care about for first load. What I don't know is how to measure that. Here are some ideas, all problematic:

  1. Start measurement from the moment that the script element is added to the DOM. This is what the cost-of-small-modules benchmark does, and it clearly shows time moving from the execution phase to the compilation/loading phase when you use optimize-js. The downside, of course, is that it includes loading time as well in the compilation phase. If all the files are served locally, this probably isn't a huge issue, but it is a source of error in the measurements.
  2. Start measurement from the moment that the script element is added to the DOM, but subtract the time from the Resource Timing API. This is the same as 1, except that you would use the Resource Timing API to see how long it took to load the script from the network and subtract that amount from the measurement. This would reduce the network-based error in 1, but it may not work perfectly, because browsers might start the compilation phase before receiving the last byte of the script. If this is the case, then subtracting the load time of the script would hide some of the compilation phase. More conservatively, you could just subtract TTFB from the loading/compilation phase. Also, Resource Timing isn't available on Safari.
  3. Download the script with XHR/fetch, and call eval() on it. The other possibility is to download the code and then eval it. The benefit here is that you definitely are capturing compilation + execution without getting any network load time mixed in. The downside is that I could totally believe that browsers disable some perf optimizations for eval, so it's possible the numbers will be misleading.

Does this make sense? Do you have other ideas how to measure this (or other thoughts)?
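Idea 3 could be sketched roughly like this (measureCompileAndRun is a hypothetical helper for illustration; as noted above, engines may treat eval'd code differently from a classic script tag, so the numbers are suspect):

```javascript
// Measure compile + first execution of a script's source text together.
function measureCompileAndRun(source) {
  var start = performance.now();
  // Indirect eval evaluates in global scope, like a classic script tag.
  (0, eval)(source);
  return performance.now() - start;
}

// Browser usage (hypothetical URL):
// fetch('/myscript.js')
//   .then(function (r) { return r.text(); })
//   .then(function (src) {
//     console.log('compile+execute: ' + measureCompileAndRun(src) + 'ms');
//   });
```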
