Comments (12)
FYI, there's a setting to run the benchmark only when the PR carries a given label. Take a look at the setting with if: contains(github.event.pull_request.labels.*.name, 'run benchmark')
in https://github.com/tkf/BenchmarkCI.jl#create-a-workflow-file-required (thanks to @johnnychen94; ref tkf/BenchmarkCI.jl#65)
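For reference, a label-gated job along those lines might look like the following sketch (the step details follow the linked BenchmarkCI.jl README; action versions are illustrative):

```yaml
# Sketch: run BenchmarkCI only on PRs carrying the "run benchmark" label.
name: Run benchmarks
on:
  pull_request:
    types: [labeled, opened, synchronize, reopened]
jobs:
  benchmark:
    # Skip the whole job unless the label is present.
    if: contains(github.event.pull_request.labels.*.name, 'run benchmark')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@latest
        with:
          version: '1'
      - name: Install BenchmarkCI
        run: julia -e 'using Pkg; Pkg.add("BenchmarkCI")'
      - name: Run judge and post the result to the PR
        run: julia -e 'using BenchmarkCI; BenchmarkCI.judge(); BenchmarkCI.postjudge()'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```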
As for my recent approach, I've mostly moved to setting up a benchmark suite as a smoke test (e.g., taking only one sample) and then invoking it from the test suite. It's not actually continuous performance testing, but rather just avoids breaking the benchmark code. But I still find it useful.
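A minimal sketch of that smoke-test idea, assuming a BenchmarkTools-based suite (the `SUITE` contents here are illustrative; in practice it would be loaded from something like `benchmark/benchmarks.jl`):

```julia
# Smoke-test a benchmark suite from the tests: take a single sample per
# benchmark so the run is fast. The goal is only to catch benchmarks that
# error, not to measure performance.
using BenchmarkTools

SUITE = BenchmarkGroup()
SUITE["sum"] = @benchmarkable sum($(rand(100)))

results = run(SUITE; samples = 1, evals = 1)
@assert results isa BenchmarkTools.BenchmarkGroup
```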
from jump.jl.
Made progress here: https://github.com/jump-dev/benchmarks
Dashboard is available at https://jump.dev/benchmarks/
from jump.jl.
This came up on Gitter today, so I did some investigating:
- @ericphanson did an excellent job on the Convex.jl benchmarks
- JuliaCI has packages to help
- PowerSimulationsDynamics uses CI
I don't think we want to run the benchmarks on every commit; that would get a bit painful. We probably just want each commit to master, plus the ability to run on-demand for a PR.
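That trigger policy maps naturally onto GitHub Actions events; a sketch of just the `on:` block, assuming GitHub Actions is the CI in question:

```yaml
# Run benchmarks on every push to master, plus on demand via the
# "Run workflow" button (workflow_dispatch) for a chosen branch.
name: benchmarks
on:
  push:
    branches: [master]
  workflow_dispatch:  # manual on-demand trigger from the Actions tab
```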
For the benchmarks, we probably want:
- JuMP- and MOI-specific benchmarks
  - time of `using JuMP` and `using MathOptInterface`
  - time to build simple models
  - time for various expression manipulations
  - https://github.com/jump-dev/MathOptInterface.jl/blob/master/src/Benchmarks/Benchmarks.jl
- Solver integration benchmarks
  - How long to build and solve an LP from scratch?
  - https://github.com/jump-dev/MathOptInterface.jl/tree/master/perf/time_to_first_solve
  - See also Convex.jl
  - Another source
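As one illustration of the "time to build simple models" item, a hedged sketch using BenchmarkTools (the model, sizes, and suite layout are illustrative, not JuMP's actual benchmark code):

```julia
# Sketch: benchmark the time to build a simple JuMP model at two sizes.
using BenchmarkTools, JuMP

function build_model(n)
    model = Model()
    @variable(model, x[1:n] >= 0)
    @constraint(model, sum(x) <= n)
    @objective(model, Min, sum(i * x[i] for i in 1:n))
    return model
end

SUITE = BenchmarkGroup()
SUITE["build"] = BenchmarkGroup()
SUITE["build"]["n=100"] = @benchmarkable build_model(100)
SUITE["build"]["n=1000"] = @benchmarkable build_model(1000)
```

Running `run(SUITE; samples = 1, evals = 1)` on such a suite also doubles as the smoke test mentioned earlier in the thread.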
This could all sit in a new repository (JuMPBenchmarks.jl) and push to a GitHub page with plots.
So in summary, I think we have a lot of what is needed. It just needs some plumbing to put together. There is also the question of dedicated hardware for this. But I can probably be persuaded to get a small PC to sit in the corner of my office as a space-heater during winter.
from jump.jl.
Would be nice! More to detect errant Julia changes than our own, perhaps
from jump.jl.
Could we incorporate this into the Travis builds somehow?
from jump.jl.
Not really; Travis runs on shared VMs, so it will be hard to get consistent results.
from jump.jl.
Ping @jrevels, JuMP would benefit a lot from this
from jump.jl.
Literally was just talking to folks at Julia Central about CI perf testing today, going to be experimenting with writing webhooks to do this in the coming week(s). I'll definitely keep you posted.
from jump.jl.
pinging @mlubin @jrevels did you ever figure out how to do this in a clever way?
from jump.jl.
@pkofod, there was never any substantial effort put into this
from jump.jl.
This may have bitrotted, unfortunately; we used to run benchmarks in CI, but I never remembered to look at the results (hidden in the Travis logs at the time), so I removed it (or perhaps just didn't replace it when we switched to GitHub Actions). It also slowed down CI a lot. That code was based on @tkf's, and he likely has better versions these days (maybe https://github.com/JuliaFolds/Transducers.jl/tree/master/benchmark).
So I also agree with not running it per-commit. It could be useful for it to be runnable on-demand in a PR, like nanosoldier for Julia Base, so that if you suspect a change could cause a regression you can trigger it.
It might be useful to look at how SciML does their benchmarks too: https://github.com/SciML/SciMLBenchmarks.jl. It looks also like there’s some “juliaecosystem” hardware; perhaps JuMP can get access too: https://github.com/SciML/SciMLBenchmarks.jl/blob/bda2ca650fd4fbd25e3bcdc0ddb4b43535bcd7b6/.buildkite/run_benchmark.yml#L50 (I’ve got no idea though).
from jump.jl.
Ideally once JuMP 1.0 is released, we wouldn't have to worry about breaking any benchmarks. (And if we did, that's an indication that we've done something wrong!)
There are some Julia servers for the GPU and SciML stuff that host jobs on Buildkite (we use one for running the SCS GPU tests). Their benchmarks are pretty heavy, though. I'm envisaging much smaller runs, so we don't need a beefy machine.
from jump.jl.