nolanlawson / optimize-js

Optimize a JS file for faster parsing (UNMAINTAINED)

Home Page: https://nolanlawson.github.io/optimize-js

License: Apache License 2.0

JavaScript 99.66% HTML 0.31% Shell 0.04%

optimize-js's Issues

Not as optimized for Webpack as it could be

Looking at the Webpack output, it appears to me that those modules that are passed in as an array to another function (and are therefore not wrapped) should probably be wrapped. As with Browserify, I'd wager it's rare for a module not to be immediately require()d, meaning that wrapping all of them would provide a perf boost.

Need a reasonably-sized Webpack bundle in order to test this and confirm, though. TODO: find some large-ish library that is built with Webpack.
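To illustrate the idea (this is a hypothetical loader shape, not Webpack's actual runtime code): if every module function expression in the bundle's array is paren-wrapped, the parser gets an eager-compile hint for code the loader invokes right away anyway.

```javascript
// Hypothetical sketch of a Webpack-like bundle: modules are function
// expressions in an array passed to a loader. Paren-wrapping each one
// hints to the parser to compile it eagerly, which pays off because the
// loader typically require()s them immediately on startup.
var modules = [
  (function (module) { module.exports = 'module 0'; }),
  (function (module) { module.exports = 'module 1'; })
];

function load(id) {
  var module = { exports: {} };
  modules[id](module); // invoked immediately, so eager parsing is not wasted
  return module.exports;
}

console.log(load(0)); // -> 'module 0'
```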

I made a Web UI for optimize-js

Hi,

Since I had done a React interface [1] for the package javascript-obfuscator [2], I decided to re-use the same codebase to make a similar interface to optimize-js.

You can see it running here: https://optimizejs.herokuapp.com/ and the source code here: https://github.com/slig/optimize-js-ui

Not sure how useful it can be, but online demos are nice, I guess.

[1] https://github.com/javascript-obfuscator/javascript-obfuscator-ui
[2] https://github.com/javascript-obfuscator/javascript-obfuscator

Not as optimized for Browserify as it could be.

In the vein of #7, I think optimize-js also doesn't work optimally for Browserify.

Whereas Webpack wraps modules in function expressions that are elements of an array, Browserify wraps modules in function expressions that are the first elements of arrays, which in turn are values in an object with numeric keys, something like this:

!function(o){
  /* loader code */
}({
  1: [function(o, r){ /* module 1 code */ }, {}],
  2: [function(o, r){ /* module 2 code */ }, {}],
  3: [function(o, r){ /* module 3 code */ }, {}]
})

I noticed in making my patch for #7 and testing on The Cost of Small Modules benchmark repo that optimize-js had essentially no effect on the Browserify bundles, and I'm pretty sure this is why.

Do minifiers undo this?

I'd just like to have it clarified (and preemptively warned about) whether it's safe to run a minifier after this tool. I'm worried minifiers might detect the parentheses as unnecessary and remove them.

Smarter function-passed-to-function heuristics?

Good candidates for wrapping:

  • forEach(function() {})
  • map(function() {})
  • then(function() {})
  • UMD/Browserify/Webpack definitions

Less-good candidates:

  • addEventListener('foo', function() {})
  • on('foo', function() {})
  • once('foo', function() {})
  • catch('foo', function() {}) (maybe?)

We can check for the function name as a hint to avoid the paren-wrapping, but it ought to be justified by a benchmark. Maybe a UI library that adds a lot of event listeners would be a good test case for this.
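The name-based check could be sketched roughly like this (the function and list names below are hypothetical, not optimize-js's actual implementation):

```javascript
// Hypothetical heuristic: decide whether a function expression passed as a
// callback is likely to be invoked immediately, and therefore worth
// paren-wrapping for an eager-parse hint.
var LIKELY_IMMEDIATE = ['forEach', 'map', 'then'];        // usually called right away
var LIKELY_DEFERRED = ['addEventListener', 'on', 'once']; // usually called later

function shouldWrapCallback(calleeName) {
  if (LIKELY_IMMEDIATE.indexOf(calleeName) !== -1) return true;
  if (LIKELY_DEFERRED.indexOf(calleeName) !== -1) return false;
  return true; // default to wrapping, matching the current behavior
}
```

As the issue says, whether the deferred list actually helps would need to be justified by a benchmark before shipping anything like this.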

Use benchmark.js or similar, report standard deviation

Right now I'm taking the median of 251 runs, but I'm a bit unsatisfied with this because I have not confirmed that 251 is enough to eliminate unreasonable variance, nor does it tell me the standard deviation or any of those niceties. As pointed out in #2 (comment), we should really have a more rigorous benchmark in place, especially as we start to potentially make changes to "improve" performance, which need to be proven to actually improve it.

I looked at benchmark.js but TBH couldn't figure out how to nicely integrate it with the current benchmark design; e.g. currently I request each script with a random query param to force the browser to re-parse and re-evaluate it, and the timings have to be inside the script itself, so it's very tricky to implement.
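Even without benchmark.js, the reporting half is straightforward; a minimal sketch of median plus sample standard deviation over the collected run timings:

```javascript
// Minimal statistics helpers the benchmark could report alongside the
// median of its runs.
function median(samples) {
  var sorted = samples.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function stddev(samples) {
  var mean = samples.reduce(function (a, b) { return a + b; }, 0) / samples.length;
  var variance = samples.reduce(function (acc, x) {
    return acc + Math.pow(x - mean, 2);
  }, 0) / (samples.length - 1); // sample variance (Bessel's correction)
  return Math.sqrt(variance);
}
```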

make a babel-plugin ?

A Babel plugin could parse more ES6 code,
and it would be easier to integrate with gulp/webpack.

Memory leak

When using optimize-js 1.0.3, it somehow introduces a memory leak:

[screenshot: capture]

If I use 1.0.2, everything is OK.

Thanks, NPCRUS

Benchmark unfairly excludes compilation time.

As I've looked at Chrome timelines and thought about it, I have some concerns about the benchmark. I think it's somewhat biased in favor of optimize-js because optimize-js moves some CPU time from execution to compilation, but the benchmark only measures execution.

Basically, unoptimized code does a quick parse during the compilation phase, and then does a slow/complete parse during the execution phase (along with the actual execution of the code, obviously). After optimize-js has been run, the compilation phase does a slow/complete parse, and the execution phase just runs the code. But since the benchmark measures the time between executing the first line and the last line of the script, it is measuring only the execution phase, which means that the time increase in the compilation phase gets lost. I confirm this by looking at Chrome timelines; after optimize-js runs, the compilation phase goes up considerably, but the benchmark just reports the execute time.

I think the fairest benchmark is compilation + first execution, as this is what most pages care about for first load. What I don't know is how to measure that. Here are some ideas, all problematic:

  1. Start measurement from the moment that the script element is added to the DOM. This is what the cost-of-small-modules benchmark does, and it clearly shows time moving from the execution phase to the compilation/loading phase when you use optimize-js. The downside, of course, is that it includes loading time as well in the compilation phase. If all the files are served locally, this probably isn't a huge issue, but it is a source of error in the measurements.
  2. Start measurement from the moment that the script element is added to the DOM, but subtract the time from the Resource Timing API. This is the same as 1, except that you would use the Resource Timing API to see how long it took to load the script from the network and subtract that amount from the measurement. This would reduce the network-based error in 1, but it may not work perfectly, because browsers might start the compilation phase before receiving the last byte of the script. If this is the case, then subtracting the load time of the script would hide some of the compilation phase. More conservatively, you could just subtract TTFB from the loading/compilation phase. Also, Resource Timing isn't available on Safari.
  3. Download the script with XHR/fetch, and call eval() on it. The benefit here is that you definitely are capturing compilation + execution without getting any network load time mixed in. The downside is that I could totally believe that browsers disable some perf optimizations for eval, so it's possible the numbers will be misleading.

Does this make sense? Do you have other ideas how to measure this (or other thoughts)?

Reaches parse error with async functions

Repro:

example.js:

!function (){}()
async function runIt(fun){ fun() }
runIt(function (){})

optimize-js www/example.js

{
 SyntaxError: Unexpected token (2:6)
    at Parser.pp$4.raise (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:2223:15)
    at Parser.pp.unexpected (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:605:10)
    at Parser.pp.semicolon (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:583:61)
    at Parser.pp$1.parseExpressionStatement (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:968:10)
    at Parser.pp$1.parseStatement (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:732:24)
    at Parser.pp$1.parseTopLevel (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:640:25)
    at Parser.parse (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:518:17)
    at Object.parse (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/node_modules/acorn/dist/acorn.js:3100:39)
    at optimizeJs (/Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/lib/index.js:9:19)
    at /Users/lundfall/.nvm/versions/node/v7.10.0/lib/node_modules/optimize-js/lib/bin.js:26:15 pos: 23, loc: Position { line: 2, column: 6 }, raisedAt: 31 }

Add an option to read sourcemap

Adding an option to access the source map would ease the integration with build tools.
e.g.:

const {code, map} = optimizeJs(input, {
  sourceMap: true,
  extractSourceMap: true,
})

Feature Request or missing Documentation?

Hi, thanks for optimize-js. I would like to include it in my webpack workflow. Question:

  • does a Webpack plugin already exist?
  • if not, is #7 an obstacle to creating one?

Clarify what is being faster

The README uses several phrases, e.g. "for faster execution", "speed boost", and "wrapped in performance.now() measurements". It would help the reader (without having to look at the benchmark code) to be very specific about which timing metric is being measured and compared.

Made a Ruby gem

Hi.

I made a Ruby gem that allows invoking optimize-js from Ruby.
The gem also works well with Sprockets and Rails.
I am interested in this project, and I will maintain the gem until uglifyjs implements optimize-js's functionality.

Maybe someone will find this useful.

Repository lives here: https://github.com/yivo/optimize-js

Your math is wrong

You list 9.42ms --> 5.27ms as a 128.73% improvement, but list 12.76ms --> 1.75ms as an 86.29% improvement!?

The actual correct percentages are "78.75% and 629.14% increases in speed (respectively)", or equivalently, "44.06% and 86.29% decrease in runtime".

The formulas are:
Increase in speed: old/new - 1
Decrease in runtime: 1 - new/old
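The arithmetic from the two formulas can be checked directly:

```javascript
// Worked check of the two formulas from this issue, applied to the
// README's numbers (9.42ms -> 5.27ms and 12.76ms -> 1.75ms).
function speedIncrease(oldMs, newMs) {
  return (oldMs / newMs - 1) * 100; // "increase in speed", percent
}
function runtimeDecrease(oldMs, newMs) {
  return (1 - newMs / oldMs) * 100; // "decrease in runtime", percent
}

console.log(speedIncrease(9.42, 5.27).toFixed(2));   // 78.75
console.log(runtimeDecrease(9.42, 5.27).toFixed(2)); // 44.06
console.log(speedIncrease(12.76, 1.75).toFixed(2));  // 629.14
console.log(runtimeDecrease(12.76, 1.75).toFixed(2)); // 86.29
```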

Encoding problem

Hi,

I have a JS file which contains the following line:

var char = '×××××××'

After I use optimize-js then I received the following result:

var char = '´┐Ż´┐Ż´┐Ż´┐Ż´┐Ż´┐Ż´┐Ż'

I am using the following powershell script:

$PSDefaultParameterValues = @{ '*:Encoding' = 'utf8' } 
 <#npm install -g optimize-js #>
optimize-js 'c:\!Publish\js\test.js' > 'c:\!Publish\js\test2.js'

Example.zip

Bad Readme example?

Seems the before & after are the same? Or should I not report bugs at 1am? ;)

Readme

Example input:

!function (){}()
function runIt(fun){ fun() }
runIt(function (){})

Example output:

!(function (){})()
function runIt(fun){ fun() }
runIt((function (){}))
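For what it's worth, the before and after snippets are not identical: the output adds parentheses around the function expressions, which act as a parser hint without changing behavior. A quick check:

```javascript
// The README's before/after differ only in the added parentheses; the
// observable behavior of both snippets is identical.
var before = '!function (){}()\n' +
             'function runIt(fun){ fun() }\n' +
             'runIt(function (){})';
var after  = '!(function (){})()\n' +
             'function runIt(fun){ fun() }\n' +
             'runIt((function (){}))';

eval(before); // runs without error
eval(after);  // runs without error, same effect

console.log(before === after); // false: the parens are the only difference
```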

walk.js aborts with error if input contains ES6 code

Perhaps because walk.js does not support ES6?

Example: start.js.txt

TypeError: types[type] is not a function
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.skip (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:293:3)
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.exports.BinaryExpression.exports.LogicalExpression.exports.AssignmentExpression (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:243:3)
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.skip (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:293:3)
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.exports.ExpressionStatement (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:32:3)
    at walk (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/walk.js:17:16)
    at Object.skip (/usr/local/lib/node_modules/optimize-js/node_modules/walk-ast/lib/types.js:293:3)
