
continuous performance testing (jump.jl, open, 12 comments)

jump-dev commented on May 8, 2024
continuous performance testing


Comments (12)

tkf commented on May 8, 2024

FYI, there's a setting to run the benchmark only when the PR has a given label. Take a look at the setting with if: contains(github.event.pull_request.labels.*.name, 'run benchmark') in https://github.com/tkf/BenchmarkCI.jl#create-a-workflow-file-required (thanks to @johnnychen94; ref tkf/BenchmarkCI.jl#65).

As for my recent approach, I've mostly moved to setting up the benchmark suite as a smoke test (e.g., taking only one sample) and invoking it from the test suite. It's not actually continuous performance testing, but rather just avoids breaking the benchmark code. I still find it useful, though.
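A minimal sketch of that smoke-test pattern, assuming a BenchmarkTools.jl suite that defines a SUITE object in benchmark/benchmarks.jl (the file layout and the SUITE name follow the PkgBenchmark convention and are assumptions here, not something tkf specified):

```julia
# test/runtests.jl (excerpt): run the benchmark suite as a smoke test.
# One sample and one eval per benchmark only verifies that the
# benchmark code still executes; it measures nothing meaningful.
using Test
using BenchmarkTools

# Assumed location and name of the suite (PkgBenchmark convention).
include(joinpath(@__DIR__, "..", "benchmark", "benchmarks.jl"))

@testset "benchmark smoke test" begin
    results = run(SUITE; samples = 1, evals = 1)
    @test results isa BenchmarkTools.BenchmarkGroup
end
```

The trade-off is the one tkf describes: this catches benchmarks broken by API changes on every test run, while leaving actual timing to a separate, less frequent job.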


odow commented on May 8, 2024

Made progress here: https://github.com/jump-dev/benchmarks

Dashboard is available at https://jump.dev/benchmarks/
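For a dashboard like that to have data to plot, each benchmark run needs to persist its measurements somewhere. A hedged sketch using BenchmarkTools.jl's built-in JSON (de)serialization; the file names, suite, and layout here are illustrative assumptions, not how jump-dev/benchmarks actually works:

```julia
# Run a suite and save the results so a later job can plot them over time.
using BenchmarkTools
using Dates

include("benchmarks.jl")  # assumed to define a BenchmarkGroup named SUITE

tune!(SUITE)                          # pick evals/samples per benchmark
results = run(SUITE; verbose = true)  # returns a BenchmarkGroup of Trials

# BenchmarkTools ships JSON serialization for results.
filename = "results-$(Dates.format(now(), "yyyy-mm-dd-HHMMSS")).json"
BenchmarkTools.save(filename, results)

# A plotting script can reload them later; `load` returns a vector of
# the objects that were saved:
# results = BenchmarkTools.load(filename)[1]
```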


odow commented on May 8, 2024

This came up on Gitter today, so I did some investigating:

I don't think we want to run the benchmarks on every commit. That'd get a bit painful. We probably just want a run on each commit to master, plus the ability to run on-demand for a PR.

For the benchmarks, we probably want: [itemized list not preserved]

This could all sit in a new repository (JuMPBenchmarks.jl) and push to a GitHub page with plots. [example plot not preserved]

So in summary, I think we have a lot of what is needed; it just needs some plumbing to put it together. There is also the question of dedicated hardware for this. But I can probably be persuaded to get a small PC to sit in the corner of my office as a space-heater during winter.
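For the on-demand PR runs, one plausible building block is PkgBenchmark.jl's judge, which benchmarks two refs of a package and reports regressions. A sketch under that assumption (the branch names are placeholders; this is not a mechanism the thread settled on):

```julia
# Compare a PR branch against master with PkgBenchmark.jl.
using PkgBenchmark

# Runs the package's benchmark/benchmarks.jl on both refs and judges
# the target against the baseline.
judgement = judge("JuMP", "my-pr-branch", "master")

# Write a human-readable report that a bot could post on the PR.
export_markdown("judgement.md", judgement)
```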


IainNZ commented on May 8, 2024

Would be nice! More to detect errant Julia changes than our own, perhaps.


joehuchette commented on May 8, 2024

Could we incorporate this into the Travis builds somehow?


mlubin commented on May 8, 2024

Not really; Travis runs on shared VMs, so it would be hard to get consistent results.


mlubin commented on May 8, 2024

Ping @jrevels; JuMP would benefit a lot from this.


jrevels commented on May 8, 2024

I was literally just talking to folks at Julia Central about CI perf testing today, and I'm going to be experimenting with writing webhooks for this in the coming week(s). I'll definitely keep you posted.


pkofod commented on May 8, 2024

Pinging @mlubin @jrevels: did you ever figure out how to do this in a clever way?


mlubin commented on May 8, 2024

@pkofod, there was never any substantial effort put into this.


ericphanson commented on May 8, 2024

https://github.com/jump-dev/Convex.jl/tree/master/benchmark

This may have bitrotted, unfortunately; we used to run the benchmarks in CI, but I never remembered to look at the results (hidden in the Travis logs, at the time), so I removed it (or perhaps just didn’t replace it when we switched to GitHub Actions). It also slowed down CI a lot. That code was based off of @tkf’s, and he likely has better versions these days (maybe https://github.com/JuliaFolds/Transducers.jl/tree/master/benchmark).

So I also agree with not running it per-commit. It could be useful for it to be runnable on-demand in a PR, like nanosoldier for Julia Base, so that if you suspect a change could cause a regression, you can trigger it.

It might also be useful to look at how SciML does their benchmarks: https://github.com/SciML/SciMLBenchmarks.jl. It also looks like there’s some “juliaecosystem” hardware; perhaps JuMP can get access too: https://github.com/SciML/SciMLBenchmarks.jl/blob/bda2ca650fd4fbd25e3bcdc0ddb4b43535bcd7b6/.buildkite/run_benchmark.yml#L50 (I’ve got no idea, though).


odow commented on May 8, 2024

Ideally, once JuMP 1.0 is released, we wouldn't have to worry about breaking any benchmarks. (And if we did, that would be an indication that we've done something wrong!)

There are some Julia servers for the GPU and SciML stuff that host jobs on Buildkite (we use one for running the SCS GPU tests). Their benchmarks are pretty heavy, though. I'm envisaging much smaller runs, so we don't need a beefy machine.

