Comments (8)

nohwnd avatar nohwnd commented on June 15, 2024

Interesting. That would be a bit difficult (and inefficient) to provide, because Pester relies on execution that is "top-down": if we mix tests that come from two different files, we have to "restore" the state for each test and run all setups and teardowns again, breaking some of the life cycles that people probably rely on. This would be especially painful for mocks, I think.

I am not against it completely, I would just like a bit more data showing why doing this is worth it.

from pester.

fflaten avatar fflaten commented on June 15, 2024

Yes, it would be limited to shuffling tests (It) inside the current block (Context/Describe). Is that enough to be useful?

Shuffling containers/files is also an option, though that can already be achieved manually.
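For reference, shuffling at the file level can already be done manually by randomizing the file list before handing it to Pester. A minimal sketch (the `./tests` path is hypothetical):

```powershell
# Collect the test files and shuffle them; using Get-Random as a
# Sort-Object key produces a random permutation of the list.
$files = Get-ChildItem -Path ./tests -Filter *.Tests.ps1 |
    Sort-Object { Get-Random }

# Pester runs the containers in the (now randomized) order given.
Invoke-Pester -Path $files.FullName
```

Note this only randomizes across files; the order of tests inside each file is unchanged, which is exactly the gap this issue is about.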

twhiting avatar twhiting commented on June 15, 2024

Yes, Google Test shuffles within a "test suite", which is the behavior that @fflaten describes.

twhiting avatar twhiting commented on June 15, 2024

It is hard to explain why it's worth it without being overly abstract. I, for example, have a PowerShell module that interfaces with a Windows service. It is possible that test C causes a state within the service that test A exposes. It is VERY hard to track down these cases without a reliable randomizer.

The second part of this is being able to run the tests again in the same shuffled order.

Google Test can shuffle based on a seed. So say in CI a test fails when shuffled with seed X; I can then manually trigger a new run with seed X and get the exact same test-run order for all tests.
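The seed mechanism described here can be sketched in plain PowerShell: seeding the session's random number generator makes every subsequent shuffle deterministic, so rerunning with the same seed reproduces the same order. (Pester has no seed parameter today; this is only an illustration of the idea, with a made-up seed value.)

```powershell
# Illustration: a seeded shuffle that yields the same order on every run.
$seed = 12345

# Get-Random -SetSeed seeds the RNG for the rest of the session;
# it also returns a value, which we discard here.
Get-Random -SetSeed $seed | Out-Null

$tests = 'A', 'B', 'C', 'D', 'E'
$shuffled = $tests | Sort-Object { Get-Random }

# Rerunning this script with the same $seed produces the identical order,
# which is what makes a CI failure reproducible locally.
$shuffled -join ','
```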

twhiting avatar twhiting commented on June 15, 2024

For a good description of the seed mechanism, see the Google Test link I posted above.

nohwnd avatar nohwnd commented on June 15, 2024

Makes sense. If we randomize within the same container in a top-down way (that is, we randomize on each level but don't jump up and down), then it is hardly any change from the current way of running tests; all you need to do is shuffle the tests in the discovered tree (in a deterministic way). I think there is even an "order" list already in the discovered tree.
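The top-down randomization described here could be sketched as a recursive shuffle over the discovered tree: siblings are shuffled at each level, but a test never leaves its block, so setups and teardowns still run in their normal life cycle. (The tree shape below is hypothetical and not Pester's internal object model.)

```powershell
# Hypothetical tree node: a block with .Tests (leaves) and .Blocks (children).
function Invoke-ShuffleTree {
    param($Block)

    # Shuffle the leaf tests within this block only.
    $Block.Tests = @($Block.Tests | Sort-Object { Get-Random })

    # Shuffle the child blocks among themselves, then recurse into each,
    # so order is randomized per level but never across levels.
    $Block.Blocks = @($Block.Blocks | Sort-Object { Get-Random })
    foreach ($child in $Block.Blocks) {
        Invoke-ShuffleTree -Block $child
    }
}
```

Seeding the RNG once before calling this (e.g. with `Get-Random -SetSeed`) would make the whole shuffle deterministic, matching the reproducibility requirement above.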

@twhiting do you want to make a PR for this? Even if it's just a proof of concept.

twhiting avatar twhiting commented on June 15, 2024

@nohwnd I'd love to get to the feature myself, but realistically it'd be weeks/months. So if this is something you are interested in, feel free to jump on it!

Thanks.

majkinetor avatar majkinetor commented on June 15, 2024

I came here to see if there is a flag to do this.

This is not hard to explain at all. I have a lot of experience with this, across many people of all levels of seniority and thousands of Pester tests. It is the norm, rather than the exception, that people depend on test order within the same file. They may do it intentionally or by accident. Writing good tests is hard, for sure.

When tests execute in the same order, you are prone to behave as if previous tests are seeding the environment, or to treat them as direct prerequisites of the subsequent tests. I have seen it all:

  1. Test A creating an object, test A+1 doing something with that object
  2. Test A changing the fixture, test A+N depending on that change
  3. Test A editing objects in a fuzzy manner, test B failing when it encounters such objects because its filtering is not specific enough

This almost always happens with the test cases within the same file, although it does happen across files too.

One usually discovers this by accident, when some tests are changed, skipped, or removed, or simply by the passage of time if the tests are fuzzy in nature (my typical case). To fight some of this, I have a Grafana dashboard of all previous runs, and I can select a particular test and see how often it failed during previous months.

[Screenshot: Grafana dashboard highlighting one suspicious test failure in the last 30 days]

This is, IMO, mandatory stuff to have. Randomization will either solve those problems or make them detectable quicker. I ask that devs, before committing tests to master, provide proof of at least hundreds of runs by using something similar to 1..100 | % { Invoke-Pester ... }, which would work much better with randomization included.
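The repeated-run proof described above can be scripted directly; a sketch that records which iterations failed (the `./tests` path is hypothetical, and `-PassThru` makes `Invoke-Pester` return a result object instead of only writing to the host):

```powershell
# Run the suite 100 times and collect the iteration numbers that failed.
$failures = foreach ($i in 1..100) {
    $result = Invoke-Pester -Path ./tests -PassThru
    if ($result.FailedCount -gt 0) { $i }
}
"Failed iterations: $($failures -join ', ')"
```

With order randomization built in, each iteration would exercise a different ordering, making hidden inter-test dependencies surface much faster than 100 identical runs do today.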
