currently software engineer at sourcegraph.
previously wrote code at riot games, sumus, rtrade, the bc genome sciences centre, and nwplus. studied mathematics at ubc, where i helped run ubc launch pad.
Run Go benchmarks, publish results to an interactive web app, and check for performance regressions in your pull requests
Home Page: https://gobenchdata.bobheadxi.dev
License: MIT License
I'm not sure what the best course of action is, but panic doesn't seem right in any case:
go test ./internal/langserver/handlers \
-bench=InitializeFolder_basic \
-run=^# \
-benchtime=60s \
-timeout=30m | tee /home/runner/work/_temp/benchmarks.txt
goos: linux
goarch: amd64
pkg: github.com/hashicorp/terraform-ls/internal/langserver/handlers
cpu: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz
BenchmarkInitializeFolder_basic/local-single-module-no-provider-2 2354 29887759 ns/op 1466327 B/op 23874 allocs/op
BenchmarkInitializeFolder_basic/local-single-submodule-no-provider-2 471 147021517 ns/op 2709140 B/op 40481 allocs/op
BenchmarkInitializeFolder_basic/local-single-module-random-2 510 143119210 ns/op 1911226 B/op 27656 allocs/op
BenchmarkInitializeFolder_basic/local-single-module-aws-2 51 1371407234 ns/op 105128302 B/op 745133 allocs/op
BenchmarkInitializeFolder_basic/aws-consul-2 42 1447751904 ns/op 127841282 B/op 1055052 allocs/op
BenchmarkInitializeFolder_basic/aws-eks-2 28 2296038135 ns/op 199662055 B/op 1481714 allocs/op
BenchmarkInitializeFolder_basic/aws-vpc-2 46 1530870496 ns/op 123258121 B/op 862732 allocs/op
BenchmarkInitializeFolder_basic/google-project-2 38 2780695563 ns/op 148429337 B/op 1076845 allocs/op
BenchmarkInitializeFolder_basic/google-network-2 28 2585281011 ns/op 133152685 B/op 878487 allocs/op
BenchmarkInitializeFolder_basic/google-gke-2 15 4587726688 ns/op 131472710 B/op 844753 allocs/op
BenchmarkInitializeFolder_basic/k8s-metrics-server-2 56 1301099738 ns/op 60277646 B/op 378735 allocs/op
BenchmarkInitializeFolder_basic/k8s-dashboard-2 SIGQUIT: quit
PC=0x46ae5c m=0 sigcode=0
goroutine 181042 [running]:
runtime.memclrNoHeapPointers()
See whole log at https://github.com/hashicorp/terraform-ls/runs/6137664889?check_suite_focus=true#step:7:10
BENCHDATA="go run go.bobheadxi.dev/gobenchdata"
BENCH_PATH="/home/runner/work/_temp/benchmarks.txt"
DATA_PATH="/home/runner/work/_temp/benchdata.json"
RESULTS_PATH="/home/runner/work/_temp/benchdata-results.json"
CHECKS_CONFIG_PATH="/home/runner/work/terraform-ls/terraform-ls/.github/gobenchdata-checks.yml"
cat $BENCH_PATH | $BENCHDATA --json ${DATA_PATH} -v "${GITHUB_SHA}" -t "ref=${GITHUB_REF}"
$BENCHDATA checks eval \
${DATA_PATH} \
${DATA_PATH} \
--checks.config ${CHECKS_CONFIG_PATH}
panic: BenchmarkInitializeFolder_basic/k8s-dashboard-2: could not parse run: strconv.Atoi: parsing "SIGQUIT: quit": invalid syntax (line: BenchmarkInitializeFolder_basic/k8s-dashboard-2 SIGQUIT: quit)
goroutine 1 [running]:
main.main()
/home/runner/go/pkg/mod/go.bobheadxi.dev/[email protected]/main.go:230 +0x135d
exit status 2
the only "test" is currently the github workflow: https://github.com/bobheadxi/gobenchdata/actions
If you use gobenchdata, I'd love to hear from you! Please feel free to respond to this issue or reach out to me at [email protected]
Previews are now available! See #38 for more details. Replace bobheadxi/[email protected] with bobheadxi/gobenchdata@v1.
Better charts, more control over what gets charted (chart groups, more places to put descriptions, better ways to select groups of benchmarks)
Current charting library isn't really cutting it, e.g. #39
unsure if this should be done in this repo, but the goal is to make a continuous benchmarking visualizer, i.e.:
https://deno.land/benchmarks.html
source:
https://github.com/denoland/deno/blob/master/website/app.js#L189
for example, zapx has import path go.bobheadxi.dev/zapx.
So at https://zapx.bobheadxi.dev/, all links are currently busted
Right now config is purely by command-line flags - would a yml-based configuration file make sense to introduce more options?
as of v0.4.1:
$ gobenchdata-web help
gobenchdata-web generates a simple website for visualizing gobenchdata benchmarks.
usage:
gobenchdata-web [flags]
other commands:
version show gobenchdata version
help show help text
flags:
--benchmarks-file string path to file where benchmarks are saved (default "benchmarks.json")
-i, --canonical-import string canonical import path for package, eg 'go.bobheadxi.dev/gobenchdata' (used to generate links to source code)
-c, --charts-types string additional chart types to generate (comma-separated, in addition to 'ns/op') (default "bytes/op,allocs/op")
--desc string a description to include in the generated web app
-o, --out string directory to output website in (default ".")
-s, --source string source repository for package, eg 'github.com/bobheadxi/gobenchdata'
--title string title for generated website (default "gobenchdata continuous benchmarks")
for visualizing the benchmark data in gh-pages
The per-package graphs work well for scenarios such as the one used for the demo web app. However, in cases where there are many more benchmarks, the graphs don't scale well, as shown in this screenshot:
I'd like to be able to generate a web app to display a graph for each benchmark individually. If you're happy for this to be an option I'd be happy to work on a PR.
Ah, yeah if you don't use the GitHub Actions actor (GITHUB_ACTOR), which is available in an Action run, new commits can trigger new builds.
Unfortunately this can't really be resolved by gobenchdata - you'll have to either:
gh-pages
Originally posted by @bobheadxi in https://github.com/bobheadxi/gobenchdata/issues/43#issuecomment-647562271
make performance trends more noticeable, and make fluctuations more readable
First of all thanks for this amazing GitHub action!
I'm trying to publish benchmarks results to a separate repository. I already fixed the problem with the standard GitHub token and I'm using a personal one with write permissions.
I'm having a problem when executing a second run of benchmarks. If there is a benchmark.json file already on the remote repository, the process fails with exit status 128: fatal: destination path '.' already exists and is not an empty directory.
These are my workflow config files:
Main benchmark.yml file:
name: run benchmarks
on:
workflow_call:
inputs:
publish:
required: true
type: boolean
test-flags:
required: true
type: string
jobs:
benchmarks:
runs-on: self-hosted
steps:
- name: checkout
uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: "1.20"
- name: "gobenchdata publish: ${{ inputs.publish }}"
run: go run go.bobheadxi.dev/gobenchdata@v1 action
env:
GITHUB_TOKEN: ${{ secrets.WRITE_TOKEN }}
INPUT_PRUNE_COUNT: 30
INPUT_GO_TEST_FLAGS: ${{ inputs.test-flags }}
INPUT_PUBLISH: ${{ inputs.publish }}
INPUT_PUBLISH_REPO: ajnavarro/gno-benchmarks
INPUT_PUBLISH_BRANCH: gh-pages
INPUT_BENCHMARKS_OUT: benchmarks.json
INPUT_CHECKS: ${{ !inputs.publish }}
INPUT_CHECKS_CONFIG: .benchmarks/gobenchdata-checks.yml
benchmark-check.yml that is running on every PR:
name: run benchmarks on every PR
on:
pull_request:
jobs:
check:
uses: ./.github/workflows/benchmark.yml
secrets: inherit
with:
publish: false
test-flags: "-short -run=^$"
benchmark-publish.yml file that will run every day with main changes:
name: run benchmarks on main branch every day
on:
workflow_dispatch:
schedule:
- cron: '0 0 * * *' # run on default branch every day
jobs:
publish:
uses: ./.github/workflows/benchmark.yml
secrets: inherit
with:
publish: true
test-flags: "-short -run=^$" # TODO: remove short flag
I really appreciate any help you can provide. Thanks!
Right now, it pulls: https://github.com/bobheadxi/gobenchdata/blob/master/Dockerfile#L14
While reading /r/Golang I noticed a new post about cob, a continuous benchmark tool focused on performing checks during PR or commit builds with TravisCI, GitHub Actions, and CircleCI to see if benchmark performance regresses outside of an acceptable range. If the regression exceeds that range, the CI build fails. It might be useful to integrate this kind of functionality within gobenchdata. If so I can take a stab at implementing it.
So on every merge, we'll update and commit benchmarks.json in the benchmarks branch of the repo. It works well, I can see one new entry for every merged commit. ✅
For every PR, we're running checks against the recorded benchmark.json file. The checks run, but they always use the first recorded commit as the base, which doesn't seem right to me:
Running benchmarks...
detected 2 benchmark suites
successfully output results as json to '/tmp/gobenchdata/benchmarks.json'
Checking out StyraInc/yadda@benchmarks...
branch 'benchmarks' set up to track 'origin/benchmarks'.
Evaluating results against base runs...
report output written to /tmp/gobenchdata/checks-results.json
Warning: The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
Generating checks report...
| | BASE | CURRENT | PASSED CHECKS | FAILED CHECKS | TOTAL |
|----|------------------------------------------|------------------------------------------|---------------|---------------|-------|
| ✅ | 2cb3161f3907093c4a5cbd9cf2641b92243051ce | 8f83bf07bd43eb463b5488f30dea5ea5fa098c78 | 3 | 0 | 3 |
Am I holding the tool wrong? I'm quite confused about this one
MBPerS is a metric according to https://github.com/golang/tools/blob/master/benchmark/parse/parse.go#L28, though I haven't seen it yet - would be good to include in Mem
machine specs somewhere (per-run? or just assume all runs equal?)
GitHub Action complains: go: cannot find GOROOT directory: /opt/hostedtoolcache/go/1.14.1/x64.
Log:
2020-06-22T13:06:46.8623176Z ========================
2020-06-22T13:06:46.8626203Z /bin/gobenchdata
2020-06-22T13:06:46.8631710Z gobenchdata version unknown
2020-06-22T13:06:46.8654639Z INPUT_GO_BENCHMARKS=.
2020-06-22T13:06:46.8655465Z INPUT_GO_TEST_FLAGS=-cpu 1,2
2020-06-22T13:06:46.8655599Z INPUT_PUBLISH_BRANCH=dev
2020-06-22T13:06:46.8655716Z INPUT_PRUNE_COUNT=30
2020-06-22T13:06:46.8655836Z INPUT_CHECKS_CONFIG=.github/gobench.yml
2020-06-22T13:06:46.8655952Z INPUT_SUBDIRECTORY=.
2020-06-22T13:06:46.8656066Z INPUT_CHECKS=true
2020-06-22T13:06:46.8656328Z INPUT_PUBLISH_REPO=ironpeakservices/micro-domain
2020-06-22T13:06:46.8656458Z INPUT_GIT_COMMIT_MESSAGE=chore: add new benchmark
2020-06-22T13:06:46.8656576Z INPUT_PUBLISH=true
2020-06-22T13:06:46.8656689Z INPUT_GO_TEST_PKGS=./...
2020-06-22T13:06:46.8656798Z INPUT_BENCHMARKS_OUT=.github/benchmarks.json
2020-06-22T13:06:46.8658780Z GITHUB_ACTOR=hazcod
2020-06-22T13:06:46.8658913Z GITHUB_WORKSPACE=/github/workspace
2020-06-22T13:06:46.8659367Z GITHUB_REPOSITORY=ironpeakservices/micro-domain
2020-06-22T13:06:46.8659496Z GITHUB_SHA=90100fcb04f70612ac97ed374841de48e4d279df
2020-06-22T13:06:46.8659619Z GITHUB_REF=refs/pull/690/merge
2020-06-22T13:06:46.8659747Z ========================
2020-06-22T13:06:46.8701051Z
2020-06-22T13:06:46.8702245Z Running benchmarks...
2020-06-22T13:06:46.8754991Z go: cannot find GOROOT directory: /opt/hostedtoolcache/go/1.14.1/x64
2020-06-22T13:06:46.8786763Z detected 0 benchmark suites
2020-06-22T13:06:46.8791656Z successfully output results as json to '/tmp/gobenchdata/benchmarks.json'
2020-06-22T13:06:46.8801012Z
2020-06-22T13:06:46.8803094Z Checking out ironpeakservices/micro-domain@dev...
2020-06-22T13:06:46.8813842Z Cloning into '.'...
2020-06-22T13:06:48.0722298Z Switched to a new branch 'dev'
2020-06-22T13:06:48.0728086Z Branch 'dev' set up to track remote branch 'dev' from 'origin'.
2020-06-22T13:06:48.0728191Z
2020-06-22T13:06:48.0728784Z Evaluating results against base runs...
2020-06-22T13:06:48.0803675Z panic: unexpected end of JSON input
2020-06-22T13:06:48.0803787Z
2020-06-22T13:06:48.0803903Z goroutine 1 [running]:
2020-06-22T13:06:48.0804030Z main.load(0xc0000bfde8, 0x2, 0x2, 0x0, 0x0, 0x0)
2020-06-22T13:06:48.0804461Z /tmp/build/io.go:143 +0x275
2020-06-22T13:06:48.0804566Z main.main()
2020-06-22T13:06:48.0804713Z /tmp/build/main.go:162 +0xb4d
I've just observed this locally:
$ go test -run=- -bench=BenchmarkMonster ./pkg/rego_vm
--- FAIL: BenchmarkMonster
monster_test.go:125: open monster.rego: no such file or directory
FAIL
exit status 1
FAIL github.com/some/monster 0.230s
FAIL
$ echo $?
1
$ go test -run=- -bench=BenchmarkMonster ./pkg/rego_vm | gobenchdata --json test.json -v version
detected 0 benchmark suites
successfully output results as json to 'test.json'
So one obvious workaround is to add a test stage that'll run the benchmarks before the gobenchdata run. It would mean we run the benchmarks twice.
An alternative might be to add pipefail to the set line in entrypoint.sh?
Another alternative would be to have the action consume a file, so we could run the benchmarks once manually, save the result, and feed the result into gobenchdata.
What do you think?
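The pipefail option can be demonstrated in isolation; this only illustrates plain shell behavior, independent of gobenchdata's actual entrypoint.sh:

```shell
# Without pipefail, a pipeline's exit status is that of the LAST command,
# so a failing `go test` piped into gobenchdata is silently swallowed:
false | cat
echo "without pipefail: $?"    # prints: without pipefail: 0

# With pipefail, a failure in any stage of the pipeline is propagated:
set -o pipefail
false | cat
echo "with pipefail: $?"       # prints: with pipefail: 1
```

Combined with `set -e`, the first failing pipeline would then abort the script instead of letting the run continue with zero detected suites.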
Might want to consider switching charting libraries... again... but the v1 web app is now a lot more modularized and this should not be too much of a task #38 (comment)
Hi,
Is there a way to specify the Go version to use when using gobenchdata?
e.g. in my case the Go version is pinned down, and I'd like to see performance changes between Go versions. Thank you!
e.g. https://github.com/ironPeakServices/iron-go-project/blob/master/.github/workflows/build.yml#L20
https://github.com/bobheadxi/gobenchdata/blob/gh-pages/benchmarks.json
more than 40 entries now
I've updated my template to version 1.1.0 by re-running generate:
gobenchdata web generate .
However, clicking points on the graph no longer launches the repo URL. Running serve exhibits the same behavior:
gobenchdata web serve
Are changes to the config required to upgrade the web template?
Allow the action to check for regressions and fail the build if there are any
e.g. leveraging flags like -cpu, -benchtime, etc.
implementing #13 could help too for labelling runs with their respective hardware
After enabling gobenchdata in a GitHub action with the out-of-the-box example yml file, the benchmark failed with the following error:
Running benchmarks...
panic: cpu: Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz: could not parse run: strconv.Atoi: parsing "": invalid syntax (line: cpu: Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz)
goroutine 1 [running]:
main.main()
/tmp/build/main.go:230 +0x19b3
For a minute I was wondering if this was the repo under test invoking panic, or caused by error in my github action yml file, or a bug in gobenchdata. I can see now this error is returned by this line of code:
Line 102 in 45436a7
Instead of the current code used to parse the line,
split := strings.Split(line, "\t")
how about using a regex that matches the expected benchmark output? Maybe skip lines that don't match the expected output? Define a config parameter in the yml file to decide whether to skip or not? Extra output may be generated by go test:
#go test -bench=.
goos: linux
goarch: amd64
pkg: github.com/CiscoM31/godata
cpu: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
The cpu line above causes the panic.
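The regex approach could look something like this grep sketch; the pattern is an assumption about the usual `go test -bench` output shape, not gobenchdata's actual parser:

```shell
# Benchmark result lines share a fixed shape:
#   Benchmark<Name>[-procs] <iterations> <value> ns/op ...
# so a pattern anchored on that shape keeps result lines and drops headers
# such as "goos:", "pkg:", and the "cpu:" line that triggers the panic.
printf '%s\n' \
  'goos: linux' \
  'cpu: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz' \
  'BenchmarkParse-8 1000000 1234 ns/op' \
  | grep -E '^Benchmark[^[:space:]]+[[:space:]]+[0-9]+'
# prints only: BenchmarkParse-8 1000000 1234 ns/op
```

The same anchoring could be applied inside the Go parser before splitting on tabs, so unrecognized lines are skipped (or warned about) rather than panicking.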
Hi, I have an issue in ironpeakservices/iron-go-project#11 where it will keep committing new benchmarks, which will commit a new benchmark, which triggers the Action again.
The following check tests for a regression in Mem.BytesPerOp
. However, the result is always 0.
checks:
- name: BytesPerOp
package: .
benchmarks: [ '.' ]
diff: (current.Mem.BytesPerOp - base.Mem.BytesPerOp) / base.Mem.BytesPerOp * 100
thresholds:
max: 25
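As a sanity check on the diff expression itself, here is the same percent-change formula evaluated with made-up BytesPerOp numbers (not taken from this report) — it is nonzero whenever the values actually differ:

```shell
# Percent change as written in the check's diff expression, with
# hypothetical base/current Mem.BytesPerOp values:
awk 'BEGIN {
  base = 14284630; current = 15000000
  printf "%.2f\n", (current - base) / base * 100
}'
# prints 5.01
```

A constant 0 therefore suggests base and current resolve to the same run (or the Mem fields aren't populated), rather than a problem with the formula.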
Results:
{
"Status": "pass",
"Checks": {
"BytesPerOp": {
"Status": "pass",
"Diffs": [
{
"Status": "pass",
"Package": "...",
"Benchmark": "...",
"Value": 0
}
],
"Thresholds": {
"Min": null,
"Max": 25
}
}
}
}
Sample input:
[
{
"Date": 1651034107,
"Suites": [
{
"Goos": "linux",
"Goarch": "amd64",
"Pkg": "...",
"Benchmarks": [
{
"Name": "...",
"Runs": 5,
"NsPerOp": 111754445,
"Mem": {
"BytesPerOp": 14284630,
"AllocsPerOp": 503144,
"MBPerSec": 0
}
}
]
}
]
}
]
json could get real large real quick
See #41 - using the action directly is a good way to get started, but doesn't give you a lot of control. Effectively re-implementing it (hashicorp/terraform-ls#840) is a bit involved
Hello! This is a fine tool, and I've enjoyed using it so far, thank you!
What I didn't get to work is the publishing of the processed benchmark results into another git branch of a private github repo. The workflow run ends in
Cloning into '.'...
remote: Invalid username or password.
fatal: Authentication failed for 'https://github.com/cant/tell.git/'
exit status 1
My workflow definition is this:
- name: gobenchdata publish
run: go run go.bobheadxi.dev/gobenchdata@v1 action
env:
INPUT_GO_TEST_FLAGS: -timeout=120m
INPUT_PUBLISH: true
INPUT_PUBLISH_BRANCH: benchmarks
GITHUB_TOKEN: ${{ secrets.ACCESS_TOKEN }}
timeout-minutes: 120
I also got this in the yaml workflow file:
permissions:
# contents permission to update benchmark contents in 'benchmarks' branch
contents: write
and I'm not entirely sure if "write" implies "read" or if the problem is something else.
FWIW looking at entrypoint.sh, I've found that it's making a call like
git clone https://${GITHUB_ACTOR}:${GITHUB_TOKEN}@github.com/${INPUT_PUBLISH_REPO}.git .
which doesn't seem entirely correct to me -- the GITHUB_ACTOR is the committer of the last commit, but the GITHUB_TOKEN isn't belonging to them. Is that a problem?
Thanks!
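For what it's worth, GitHub's documented pattern for token-authenticated clones uses the literal username x-access-token rather than pairing GITHUB_ACTOR with a token that isn't theirs; a sketch with placeholder values:

```shell
# All values here are placeholders, not real credentials or repos.
GITHUB_TOKEN="placeholder-token"
INPUT_PUBLISH_REPO="cant/tell"
# GitHub documents "x-access-token" as the username for installation and
# Actions tokens, so the actor's login never needs to match the token:
url="https://x-access-token:${GITHUB_TOKEN}@github.com/${INPUT_PUBLISH_REPO}.git"
echo "$url"
```

Whether this resolves the "Invalid username or password" failure for a classic personal access token is a separate question, but it removes the actor/token mismatch from the equation.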
@bobheadxi: i've made progress with this, but it's complaining about my empty benchmarks.json file: https://github.com/ironpeakservices/iron-go-project/runs/795465395?check_suite_focus=true#step:7:41
Originally posted by @hazcod in #41 (comment)
SUBDIRECTORY is useful when we have a project in another directory. However, it doesn't work with multiple directories. It'd be nice to add something like SUBDIRECTORIES to use this tool in monorepo projects.
Example Layout: https://github.com/gofiber/template
Hi, I think it would be nice if we could generate an image from the JSON file to e.g. automatically display & replace the image on README.md on merge.
go: 1.14.2
gobenchdata: v0.5.1
I'm using gobenchdata in a private github repository, and as far as I'm aware, github actions have problems integrating with private github repositories. As such my process for using gobenchdata is a bit different than normal, and consists of the following process during release builds:
go test -bench=. --benchtime 1000x ./benchmarks/releasemark/... -benchmem > bench.txt
cat bench.txt | gobenchdata --json release/tex-gobenchdata-$(git describe --tags).json
Up until today this has been working without issues beyond manual labour, however today this failed with the error:
panic: could not parse runs: strconv.Atoi: parsing "2020-04-25T22:43:13.377Z": invalid syntax
go test -bench=. --benchtime 1000x ./benchmarks/releasemark/... -benchmem > bench.txt
cat bench.txt | gobenchdata --json release/tex-gobenchdata-`git describe --tags`.json
panic: BenchmarkTemporalX_DownloadMulti-2: could not parse runs: strconv.Atoi: parsing "2020-04-25T22:43:13.377Z": invalid syntax
goroutine 1 [running]:
main.main()
/home/travis/gopath/pkg/mod/go.bobheadxi.dev/[email protected]/main.go:66 +0xa37
Makefile:152: recipe for target 'release-bench' failed
make: *** [release-bench] Error 2
The command "if [ "$TRAVIS_OS_NAME" = "linux" ]; then make release-bench ; fi" failed and exited with 2 during .
Your build has been stopped.
I tried downgrading to v0.5.0 and got the same error:
go test -bench=. --benchtime 1000x ./benchmarks/releasemark/... -benchmem > bench.txt
cat bench.txt | gobenchdata --json release/tex-gobenchdata-`git describe --tags`.json
panic: BenchmarkTemporalX_DownloadMulti-2: could not parse runs: strconv.Atoi: parsing "2020-04-25T23:45:42.353Z": invalid syntax
goroutine 1 [running]:
main.main()
/home/travis/gopath/pkg/mod/go.bobheadxi.dev/[email protected]/main.go:66 +0xa37
Makefile:152: recipe for target 'release-bench' failed
make: *** [release-bench] Error 2
The command "if [ "$TRAVIS_OS_NAME" = "linux" ]; then make release-bench ; fi" failed and exited with 2 during .