Forking for every test file is slow (avajs/ava, closed)

avajs commented on June 19, 2024

Comments (44)

sindresorhus commented on June 19, 2024

We're not, and tbh it's annoying that we'll probably have to. Why can't the system just pool the load for us instead of us doing it manually? I hate computers so much... Let's experiment with what kind of number is optimal. I'm thinking (require('os').cpus().length || 1) * 2;.
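
A minimal sketch of that default, also clamped to the number of pending test files (pendingTestFiles is a hypothetical variable):

var os = require('os');

// twice the core count, falling back to 1 if cpus() reports nothing
var concurrency = (os.cpus().length || 1) * 2;
var poolSize = Math.min(concurrency, pendingTestFiles.length);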

jprichardson commented on June 19, 2024

I have a very big project that I wanted to try ava on. With Activity Monitor (OS X 10.11) open I saw 50+ node processes, and it brought my system to its knees. It'd be cool if there was some kind of limit.

jamestalmage commented on June 19, 2024

I think the biggest improvement will come from moving Babel to the main thread.

kevva commented on June 19, 2024

Aren't we setting some kind of limit on forks? We should probably not create more forks than there are CPUs.

ariporad commented on June 19, 2024

@sindresorhus: Or, a slightly more sane idea: Make a map of every file in our dependency tree mapped to its contents, then hook require to use those instead of the file system. Example:

var files = new Map([
    ["./lib/test.js", "… the entire contents of lib/test.js, inlined as one string …"],
    // etc.
]);

var actualRequire = require.extensions['.js'];
require.extensions['.js'] = function (module, filename) {
    // compile from the in-memory map instead of reading from disk
    if (files.has(filename)) return module._compile(files.get(filename), filename);
    return actualRequire.apply(this, arguments);
};

require('./lib/babel.js');

If we uglified the files with "whitespace only" mode, it could totally* work!

*Probably

jamestalmage commented on June 19, 2024

@spudly - I actually have a start on this already. I will post some code soon.

Basically, it needs to help identify test.only, test.serial.only, etc.
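
A deliberately naive sketch of what that detection might look like (this is not the code referenced above; a real version would need an AST pass to handle aliased imports, as discussed later in the thread):

// flag transpiled test files that appear to declare exclusive tests
function seemsExclusive(transpiledSource) {
    return /\btest(\.serial)?\.only\s*\(/.test(transpiledSource);
}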

jamestalmage commented on June 19, 2024

> So have we actually profiled AVA to determine that babel is indeed the culprit? Or is it speculation?

Definitely not speculation. Moving Babel to the main thread for tests created significant speed improvements. It stands to reason the same improvements could be realized by moving source transpilation out of the forks.

> Something else to consider - forking for every test at once is going to cause problems.

Already considered. See #791.

sindresorhus commented on June 19, 2024

We should also look into why forking is so slow. Maybe there's something we can optimize there.

floatdrop commented on June 19, 2024

@sindresorhus forking is fast; spinning up Node and Babel is quite slow.

sindresorhus commented on June 19, 2024

Node.js isn't inherently slow to spin up; it's the requiring of things that is slow. Babel caches the compilation, so that should only be an issue on a cold load.

sindresorhus commented on June 19, 2024

We could use https://github.com/jaguard/time-require to measure whether any of our requires are slowing down the startup. Anyone interested in looking into this?
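
Usage is minimal, assuming time-require works the way its readme describes: load it before everything else so it can instrument every subsequent require() call.

// first line of the entry file, or preloaded via `node -r time-require entry.js`
require('time-require');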

vadimdemedes commented on June 19, 2024

I checked and babel takes the most time to require. I'm planning to migrate AVA to Babel 6 (#117), will see if there are any time wins.

sindresorhus commented on June 19, 2024

I ran time-require on a forked test file and here are the results:

 #  module                           time  %
 1  has-color (../...lor/index.js)   20ms  ▇ 2%
 2  chalk (../node...alk/index.js)   26ms  ▇ 3%
 3  ./util.js (../...main/util.js)   12ms  ▇ 1%
 4  bluebird (../n.../bluebird.js)  107ms  ▇▇▇▇ 11%
 5  esprima (../no...a/esprima.js)   78ms  ▇▇▇ 8%
 6  array-foreach...ach/index.js)    15ms  ▇ 1%
 7  escallmatch (....tch/index.js)  109ms  ▇▇▇▇ 11%
 8  ./lib/decorato...decorator.js)  113ms  ▇▇▇▇ 11%
 9  define-propert...ies/index.js)   12ms  ▇ 1%
10  empower (../no...wer/index.js)  127ms  ▇▇▇▇ 13%
11  ./strategies (...trategies.js)   14ms  ▇ 1%
12  stringifier (....ier/index.js)   18ms  ▇ 2%
13  esprima (../no...a/esprima.js)   53ms  ▇▇ 5%
14  ./traverse (...../traverse.js)   59ms  ▇▇ 6%
15  ./javascript/d...ompressed.js)   29ms  ▇ 3%
16  googlediff (.....iff/index.js)   29ms  ▇ 3%
17  ./udiff (../no...lib/udiff.js)   31ms  ▇ 3%
18  ./lib/create (...ib/create.js)  119ms  ▇▇▇▇ 12%
19  power-assert-f...ter/index.js)  120ms  ▇▇▇▇ 12%
20  ./assert (../n...ib/assert.js)  289ms  ▇▇▇▇▇▇▇▇▇ 29%
21  ./test (../nod.../lib/test.js)  292ms  ▇▇▇▇▇▇▇▇▇ 29%
22  ./lib/runner (...ib/runner.js)  400ms  ▇▇▇▇▇▇▇▇▇▇▇▇ 40%
23  https (https)                    27ms  ▇ 3%
24  ./lib/_stream_..._readable.js)   17ms  ▇ 2%
25  readable-strea.../readable.js)   26ms  ▇ 3%
26  once (../node_...once/once.js)   13ms  ▇ 1%
27  end-of-stream...eam/index.js)    15ms  ▇ 1%
28  duplexify (../...ify/index.js)   44ms  ▇▇ 4%
29  ./_stream_read...eam_readable)   22ms  ▇ 2%
30  ./lib/_stream_...am_duplex.js)   23ms  ▇ 2%
31  readable-strea.../readable.js)   35ms  ▇▇ 3%
32  read-all-strea...eam/index.js)   39ms  ▇▇ 4%
33  node-status-co...des/index.js)   18ms  ▇ 2%
34  ../ (../index.js)               361ms  ▇▇▇▇▇▇▇▇▇▇▇ 36%
Total require(): 327
Total time: 1s

1 second is way too much for just requiring dependencies. We should look into optimizing that somehow.

Qix- commented on June 19, 2024

Nice tool, I'll add that to my toolbox. 💃

sindresorhus commented on June 19, 2024

We could also cache the "require resolving", so every test file doesn't have to do that.

Maybe by using https://github.com/bahmutov/cache-require-paths by @bahmutov.

sindresorhus commented on June 19, 2024

Node.js 5.2.0 is also ~45% faster to start up! :) https://twitter.com/rvagg/status/674775383967858688

jamestalmage commented on June 19, 2024

> Maybe by using https://github.com/bahmutov/cache-require-paths by @bahmutov.

I'm not so sure about that. npm 3's automatic flattening and deduping of the dependency tree really sped up resolve times. Also, I don't see if or how it decides to invalidate the cache. That seems iffy to me, especially for a testing framework, where it's fairly reasonable to expect the dependency tree to change frequently.

> Node.js 5.2.0 is also ~45% faster to start up!

That is huge.

sindresorhus commented on June 19, 2024

> I think the biggest improvement will come from moving Babel to the main thread.

Definitely. Was just looking for other minor wins.

jamestalmage commented on June 19, 2024

I guess it's worth a shot. I just have doubts it is going to be effective with us spawning so many processes (it doesn't coordinate across processes), and dealing with a relatively volatile dependency tree. Seems better suited to servers than dev boxes.

sindresorhus commented on June 19, 2024

Agree. Ignore that.

ariporad commented on June 19, 2024

@sindresorhus: I have a really stupid idea: What if we webpacked (or something similar) AVA and all of its dependencies (except the CLI), and had the child processes use that? Then they wouldn't have to do any requiring. If we uglified that, it would eliminate requiring and greatly reduce parsing time.

Qix- commented on June 19, 2024

Isn't the overhead of forking the actual forking bit? Like, the allocation the OS does to fork processes is where the overhead really comes from, unless it's also doing babel compilation for each fork...

jamestalmage commented on June 19, 2024

> unless it's also doing babel compilation for each fork

That is currently the case; #349 fixes it. Currently, assuming a cache miss, we spend 700ms just requiring stuff in each child process. In #349 that drops to 150ms. Those numbers are from my laptop with super speedy SSDs; people using traditional hard drives will see significantly worse performance in either case, but benefit a lot more from #349.

I don't think we should be concatenating files. That is a lot of complexity, for what I think is going to be minimal gains.

Qix- commented on June 19, 2024

> Why can't the system just pool the load for us instead of us doing it manually?

Because that would put a lot of scheduling engineers out of jobs 💃 /s


Agreed with @jamestalmage. Concatenation is a fix for something that isn't broken. I wish it were that easy.

novemberborn commented on June 19, 2024

I don't know enough about modern scheduling of forks across CPU cores, but presumably we want to max out the CPU so tests run faster, and spinning up as many forks as possible achieves that. I'd be more concerned with available RAM and causing swap. That's probably heavily dependent on the code under test though.

There might be an argument for specifying the number of concurrent forks if this is a concern; however, it would limit us in other ways. Currently we gather tests from all forks before running any of them, and things like .only are supposed to apply across forks. We might have to carefully spin forks up and down to do this analysis and then start the actual test run, assuming the test files don't change in between. (I guess we could copy them to a temporary directory?)

We've worked hard to reduce Babel overhead for the test files themselves, but users probably still run into it via custom --require hooks. Conceivably we could provide a framework where even those hooks are loaded in the main process, similarly to the test files. An alternative approach would be to transpile in a separate build process (this is my preferred approach).

novemberborn commented on June 19, 2024

Tracking my last suggestion in #577.

novemberborn commented on June 19, 2024

It's become clear recently (#604 #669) that AVA's current forking behavior is problematic. React projects especially end up with a lot of test files and consequently a lot of forks. This can make tests impossible to run on dev machines, let alone CI systems.

Currently AVA forks for each test file. Stats are collected (number of tests, whether any tests are exclusive). Once all stats have been received all tests from all files are run. Within a test file the tests may execute serially, and tests may execute serially across all files. Regardless the fork persists until it's finished running its tests.

The simplest approach would be to only run tests from a few forks at a time, but still fork for all test files before any tests are run. However I doubt this will be sufficient, especially for projects with dozens of test files and a babel-register dependency.

Unfortunately we can't fork just a few test files and run their tests since we won't know whether other test files contain exclusive tests. I see two options:

  1. fork, exit, fork again: fork for each test file to gather stats, then exit that fork. Fork again to run the tests. We could keep the last X forks alive and immediately run the tests from those forks.
  2. use static analysis to detect exclusive tests. We're transpiling the test files anyway prior to forking. We can't really use this to compute other test stats but I don't think we need to know those prior to running the tests.

Static analysis seems preferable as we won't have to fork unnecessarily.

The next issue is how many forks we should run concurrently. The simplest approach would be to pick a per-core number. That number could probably be larger than 1, allowing the OS to context-switch while processes are blocked on IO. We'll probably bump into memory limits though, and if the number gets too large there may be too much context-switching overhead.
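
For illustration, a minimal pool along those lines (runFork is a hypothetical function that forks a test file and resolves when it finishes):

var os = require('os');

function runPool(files, runFork, perCore) {
    var limit = (os.cpus().length || 1) * (perCore || 2);
    var index = 0;

    // each worker pulls the next file off the shared queue until it's empty
    function next() {
        if (index >= files.length) return Promise.resolve();
        return runFork(files[index++]).then(next);
    }

    var workers = [];
    for (var i = 0; i < Math.min(limit, files.length); i++) {
        workers.push(next());
    }

    return Promise.all(workers);
}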

(Please note that I'm far from an expert on this subject!)

If we make this configurable then that config should work on CI machines and the various machines of a project's collaborators.

I wonder if instead we can observe the system impact of the forks and dynamically ensure the best utilization.

Thoughts?

spudly commented on June 19, 2024

> Currently we gather tests from all forks before running any of them, and things like .only are supposed to apply across forks.

I don't think you should support this behavior across forks in AVA. I think you're going to have to choose between performance and all the cross-fork magic. Personally, I would choose performance, especially because it takes me 2 mins to run my AVA tests right now. If people really need to run just one test, they can still use --match on the command-line. You could still leave the .only() API as is, but make it apply to a single fork and not the entire test suite.

As far as statistics go, I favor the static analysis idea.

jamestalmage commented on June 19, 2024

Static analysis should get us all the "cross-fork magic" without a lot of downside.

spudly commented on June 19, 2024

What information would the static analysis code need to gather about each test? I'm new to using AST parsers and such but this issue is becoming more and more troublesome (I have hundreds of test files), so I'm really motivated to get this fixed. If I can get a clear idea of what needs to be done, I'll take a stab at it.

jamestalmage commented on June 19, 2024

Technically, people don't need to use test as their variable name when importing ava:

import ava from 'ava';

ava(t => t.is(1 + 1, 2));

Also, I have done this in the past:

// now all the tests are serial:
var test = require('ava').serial;

test(t => t.is(1 + 1, 2));

I have since decided that is bad form, and that it's just better to do test.serial each time (clarity is more important than brevity). Still, I think we could support it.


More problematic is when the use of only is wrapped in a conditional:

// fictional module that analyzes math expressions in strings:
import calculator from 'calculator-parser';
import ava from 'ava';

function test(input, expected, only) {
  const method = only ? ava.only : ava;
  method(input + ' === ' + expected, t => {
    const actual = calculator.calculate(input);
    t.is(actual, expected);
  });
}

test('3 + 4', 7);
test('5 - 2', 3);
test('6 * 3', 18, true); // causes this to be interpreted as an exclusive test
// ...

I do not see any way to reliably do static analysis on this. That is a bit of a bummer.

novemberborn commented on June 19, 2024

Tracking the management of the forks in #718.

ORESoftware commented on June 19, 2024

Why not transpile tests first, then execute all child processes with plain node? :) Maybe you already do that?

I am willing to bet that this would speed things up tremendously, and you would catch transpile errors early too. But you'd have to have a temp directory somewhere, which most people think is unsexy.

novemberborn commented on June 19, 2024

@ORESoftware yup, that's how it works. The problem is executing all child processes at once.

ORESoftware commented on June 19, 2024

@novemberborn not from what I gather. On this thread the main complaint is that requiring babel is slow. TMK you wouldn't need to require babel if all test files were already transpiled. Am I right? What am I missing?

Forking child processes is not slow at all in my experience; it's more or less the standard ~50ms of waiting per fork. It's babel that is really slowing you down.

jamestalmage commented on June 19, 2024

@ORESoftware

We transpile tests in the main process; the forks don't transpile tests at all. The remaining problem is how to handle transpiling sources. Currently, you have two choices:

  1. Transpile your sources before running the tests. This results in probably the best performance, but requires a good deal of boilerplate - especially to support watch mode.
  2. Use babel-register to transpile sources on the fly. This is much easier to set up (just --require babel-register; see the sketch after this list), but your per-fork cost is significant.
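
For reference, a hypothetical package.json wiring for option 2, using AVA's "require" option:

// in package.json:
//   "ava": {
//     "require": ["babel-register"]
//   }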

> Forking child processes is not slow at all in my experience; it's more or less the standard ~50ms of waiting per fork. It's babel that is really slowing you down.

Agreed, but some users have hundreds of test files, and they have reached the point where they are experiencing memory/disk thrashing that is impacting performance. #791 will address that.

Also, we intend to add support for reusing forked processes at some point (#789). That will have some major downsides (mainly that it breaks test isolation), so it will be opt-in. 50ms per file is not insignificant when dealing with hundreds of files.

Qix- commented on June 19, 2024

So have we actually profiled AVA to determine that babel is indeed the culprit? Or is it speculation?

Something else to consider - forking for every test at once is going to cause problems. Not sure if that's what's happening, but that causes hella context switches between the child processes if it is.

Qix- commented on June 19, 2024

Perfect, fair enough @jamestalmage :)

ORESoftware commented on June 19, 2024

@jamestalmage sorry, I don't know what you mean by transpiling sources vs transpiling tests - I frankly have no idea what that means - can you briefly explain? It seems like the best chance of hitting the speed target is to not require an iota of babelness in your child_processes.

jamestalmage commented on June 19, 2024

We use Babel to transpile your test files. We do this in the main process, then write the transpilation result out to disk. When we fork a child process, we don't actually load your test file, we load that transpilation result. (We use node_modules/.cache to store the transpilation).

Doing the same for sources is much more difficult, because we can't tell what needs to be transpiled without attempting static analysis.
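
A simplified sketch of the test-file flow described above (not AVA's exact code; the Babel options and cache layout here are assumptions):

var path = require('path');
var fs = require('fs');
var mkdirp = require('mkdirp');
var babel = require('babel-core');

function precompileTestFile(testFile) {
    // transpile in the main process...
    var result = babel.transformFileSync(testFile, {presets: ['es2015']});

    // ...write the result to an on-disk cache...
    var cacheDir = path.join('node_modules', '.cache', 'ava');
    mkdirp.sync(cacheDir);
    var cachePath = path.join(cacheDir, path.basename(testFile));
    fs.writeFileSync(cachePath, result.code);

    // ...and fork the cached file instead of the original.
    return cachePath;
}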

ORESoftware commented on June 19, 2024

OK, by sources you mean the source code that the test file may require/reference? Sounds like it. Yeah, IDK.

dcousineau commented on June 19, 2024

@jamestalmage ava could 'cheat' and use the --source flag as a hint to pre-compile everything that matches (even if it doesn't catch everything and compiles more than it has to).
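
A sketch of that 'cheat', assuming the source globs are expanded with a standard glob library (the helper name is hypothetical):

var glob = require('glob');

// expand the --source patterns into a concrete pre-compile list
function filesToPrecompile(sourcePatterns) {
    return sourcePatterns.reduce(function (files, pattern) {
        return files.concat(glob.sync(pattern));
    }, []);
}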

novemberborn commented on June 19, 2024

@dcousineau yup, see #577.

jamestalmage commented on June 19, 2024

I am closing this; it's just too general. We are tracking specific improvements in the following issues:

  • #789 Reuse test-worker processes
  • #594 Investigate use of cachedData support to improve require times.
  • #577 Transpile sources in the main thread
  • #720 Transpile helpers in the main thread
  • #719 Cold start forks in watch mode
  • #718 Implement pool for managing forks (solved in #791)
  • Almost everything else labelled performance
