Comments (8)
@sbryngelson I found a workaround to make this work! It's now under /bench on our website. Here it is on my fork: https://henryleberre.github.io/MFC/bench/. Which cases should we include in the benchmark? The ones on the website are just there as a proof of concept.
To make publishing a lot easier, I'm using a new GitHub feature that lets you publish to GitHub Pages directly from a workflow, without needing another branch or repository. This removes the need for the DOC_PUSH_URL secret.
`./mfc.sh bench` runs the benchmark(s) and saves the results to a JSON file (used by the workflow).
from mfc.
For this specific issue, I would consider a selection of 2D and 3D cases. Two-component (`num_fluids = 2`) problems will probably be sufficiently general, and the problem size (`m`, `n`, `p`) will mostly depend on the resources/runner. I think we can make do with just a few cases, which we can discuss.
So, where does it (or can it) run the benchmark? We probably want a CPU and GPU benchmark for each "test" case.
Also, consider making smaller pull requests so we can iterate more quickly.
I agree. However, in this case it would have been difficult to make neat Pull Requests given the scope of my changes. Adding support for Cray/CCE required significant changes to the Fortran code, the toolchain code, the workflows (GitHub & ORNL), the build system, and the documentation. They all tie together, and since adding Cray support is a single feature, there weren't many discrete steps at which the code would have been in an appropriate state to upstream. I'll make sure none of my changes broke anything before submitting a Pull Request, and I'll finish this feature once that is done!
reference: https://github.com/illinois-ceesd/timing
Update: we now have self-hosted runners to do this properly, without relying on GitLab.
Adding @belericant to assignees.
The first step is to benchmark a couple of cases from the `examples/` directory on the self-hosted runner (let's start in CPU mode) and append those numbers to a file that can be turned into a graph. The graph then grows with every CI run. Some care will be needed in selecting which examples to use, but that will be easy, so let's worry about it later. Once a prototype is working, getting the cases going will be simple.
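The append-to-a-file idea above can be sketched as follows. The file name, column layout, commit SHA, case names, and timings here are all illustrative assumptions, not the actual CI implementation; in the real workflow the values would come from the runner environment and the `./mfc.sh bench` output.

```shell
# Hypothetical history file that a plotting step could consume.
RESULTS=bench_history.csv
# Write the header only on the first CI run.
[ -f "$RESULTS" ] || echo "commit,case,seconds" > "$RESULTS"
# Each CI run appends one row per benchmarked case (values are made up here).
echo "abc1234,2D_shockbubble,42.0" >> "$RESULTS"
echo "abc1234,3D_sphbubcollapse,128.5" >> "$RESULTS"
```

A CSV like this is easy to turn into a per-case timing graph with any plotting tool on the website side.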
@belericant be aware that @henryleberre introduced a new call, `./mfc.sh bench`, that benchmarks a few cases and writes the results somewhere.
Note that the "first" call of this benchmark should go back and basically "do CI benchmarking" on many old commits, so we have a history. Subsequent calls won't need this, so this will be a one-time thing.
@belericant there is actually a straightforward way to submit CI batch jobs via Slurm on Phoenix. I documented it all in this repo: https://github.com/sbryngelson/test-CI
I set up a self-hosted runner on Phoenix, and the key is that the `test.sh` script submitted via `sbatch test.sh` has the option `#SBATCH -W` enabled, which means the job is submitted and then the shell waits until it completes before proceeding.
This strategy should be enough for us, and we can use Phoenix until we have Slurm on other benchmark hardware of interest.
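A minimal version of such a batch script might look like the sketch below. Only the `#SBATCH -W` (wait) flag is taken from the comment above; the job name, node count, walltime, and the `./mfc.sh test` payload are illustrative guesses.

```shell
# Write a minimal batch script of the kind described above (contents are
# illustrative except for -W, which makes `sbatch test.sh` block until the
# job finishes -- exactly what a CI step needs).
cat > test.sh <<'EOF'
#!/bin/bash
#SBATCH -J mfc-ci           # job name (illustrative)
#SBATCH -N 1                # one node (illustrative)
#SBATCH -t 01:00:00         # walltime (illustrative)
#SBATCH -W                  # wait: sbatch does not return until the job ends
./mfc.sh test               # the CI work performed inside the batch job
EOF
# On Phoenix, the self-hosted runner would then execute:  sbatch test.sh
```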
Some updates from a recent discussion between @henryleberre and me:
- Want to maintain @belericant's introduction of a CI that runs and auto-comments on PRs with benchmark results.
- Want a nearly "one-script-fits-all" approach where we reuse a Slurm script for CI and benchmarking on clusters that require Slurm submissions, but call it in different ways.
- A simple way to do this is via a `sed` program that puts the appropriate arguments into the Slurm batch script, though we can probably do it more nicely via the existing Python in the toolchain. Notably, we will need the Slurm `-W` ("wait") flag so that jobs don't start before others finish.
- Want to be able to run on different hardware by passing a flag into the `./mfc.sh bench` call.
- Deploy benchmark data for a set of test problems to a separate repo that the website can grab from.
- May need to be rather flexible in accommodating other Slurm batch script flags, since different setups will expect different types of flags.
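The sed-templating idea can be sketched as below. The template file name and the `@...@` placeholder convention are assumptions made for illustration; only the use of `sed`, the `-W` flag, and the `./mfc.sh bench` call come from the discussion.

```shell
# A reusable Slurm template with placeholders for the cluster-specific bits.
cat > bench.sbatch.in <<'EOF'
#!/bin/bash
#SBATCH -J @JOB_NAME@
#SBATCH -t @WALLTIME@
#SBATCH -W
./mfc.sh bench
EOF
# Render a concrete batch script for one target by substituting placeholders.
sed -e 's/@JOB_NAME@/mfc-bench/' \
    -e 's/@WALLTIME@/02:00:00/' \
    bench.sbatch.in > bench.sbatch
# sbatch bench.sbatch   # submitted on a Slurm cluster; not executed here
```

Extra `#SBATCH` flags that a particular machine needs could simply be added as further placeholders in the same template.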