mozilla-frontend-infra / firefox-performance-dashboards
Firefox's performance dashboard
Home Page: https://arewefastyet.com
License: Mozilla Public License 2.0
For instance, next to Kraken there would be a drilldown icon that would take the user to //kraken
I was considering this icon:
https://materialdesignicons.com/icon/arrow-down-thick
When switching from one graph to another, I thought something had gone wrong and that an error had happened, because nothing appeared on screen for several seconds. It was just the data loading. Maybe it would be nice to display a spinner while the data is being fetched?
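A minimal sketch of the suggested behavior (names are hypothetical, not the dashboard's actual code): flip a loading flag around the fetch and render a spinner while it is set.

```javascript
// Hypothetical sketch: show a spinner while benchmark data is in flight.
// `fetchData` and `render` stand in for the dashboard's real fetching and
// rendering code.
async function loadBenchmark(fetchData, render) {
  render({ loading: true, data: null });   // spinner visible
  const data = await fetchData();
  render({ loading: false, data });        // spinner replaced by the graph
}
```

In a React component this would typically map to a `loading` field in state toggled around the fetch in `componentDidMount`.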
Visit https://arewefastyet.com/win10/tp6-amazon?numDays=36, and scroll down
Notice that the X axes don't seem to agree:
We switched our Chrome test runs from per-commit on m-c to the nightly scheduler. This means that they post under a different config than previously. I assume we can add this to the same graph.
@armenzg tips?
It would be nice if there was a direct link to this repo in the footer, so that it's easy to determine where to open issues.
See for instance: http://js-perf-dashboard.netlify.com/mac/motionmark-animometer
In particular, in the Design benchmark chart, the Firefox and Chrome lines are very close to one another, but there could be a lower-than-unit scale that would better display the differences and evolution of each vendor's score over time. Does it make sense to use a smaller scale, in this case?
Just visited the new site (which looks like a great improvement!) - can't see webkit any longer. Is it no longer being tested?
It has a lot of features that metrics-graphics does not have at the moment.
This would cut the development cost of fixing some of the other issues already filed, such as the plotting features that the original AWFY already had.
Page title is currently "JavaScript Performance Dashboard" but the results are not all related to JavaScript. We should make the title more generic such as "Firefox Performance Dashboard".
The platform name has changed from 'windows10-64-ux' to 'windows10-64-ref-hw-2017'. We should pull in the datapoints from the new name, and update the label to 'Windows 10 64bit (2017 reference laptop)'.
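One possible shape for this (a sketch only; the field names are hypothetical and not the dashboard's actual config schema) is to list both Perfherder platform names under a single entry so datapoints recorded under either name merge onto the same graph:

```javascript
// Hypothetical config entry: both signature platform names map to one label,
// so old and new datapoints land on the same graph.
const WINDOWS_10_2017_REF = {
  label: 'Windows 10 64bit (2017 reference laptop)',
  platforms: ['windows10-64-ux', 'windows10-64-ref-hw-2017'],
};
```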
We currently only show Amazon, Docs, Facebook, Google, Sheets, Slides, and YouTube page load results. We should extend this to show all page load results available on each platform.
We currently have hyperlinks with the text “Perfherder link”, let’s change that up for some icons.
I think this icon will go well:
https://material.io/tools/icons/?search=link&icon=link&style=baseline
Let's place the icon to the right of the benchmarks title.
It appears that due to various platform and test name changes, we are no longer showing some results since the end of March:
Here's an example where the Chromium results are no longer showing:
In this case, the platform changed from linux64-nightly to linux64-shippable, and then later the test name changed from a suffix of 'chrome' to 'chromium'. The complete data can be seen in this Perfherder graph:
Another example is the macOS overview, which shows no results since the end of March. This is due to the platform name changing from osx-10-10 to osx-10-10-shippable. These graphs should also show the Chromium results, but they do not.
The Windows platforms show similar issues.
In prepareData.js we did not invert these two benchmarks based on the Perfherder data, since it seems to be incorrect.
d557e0f#diff-8c587180a3856b9aacf5305f1013296fR78
If you visit the Linux64 page (the issue does not occur on the win10 page):
https://arewefastyet.com/linux64/overview?numDays=90
If you click on any of the drilldowns (see the downward arrow),
it will take you here:
https://arewefastyet.com/win10/overview?numDays=90
instead of here:
https://arewefastyet.com/linux64/ares6
If you enter this url directly into the browser:
https://arewefastyet.com/linux64/ares6
the same will happen. However, if you enter the following:
https://arewefastyet.com/linux64/ares6?numDays=90
it will work as expected.
I think this has to do with this logic:
https://github.com/mozilla-frontend-infra/firefox-performance-dashboard/blob/master/src/App/index.jsx#L27-L30
const timeRange = Math.round(searchParams.get('numDays'));
if (!validCombination(platform, benchmark, timeRange)) {
return <Redirect to={CONFIG.default.landingPath} />;
}
Let's only validate platform and benchmark and forget about timeRange. We will be adding more parameters like timeRange in the future and I don't want to complicate the logic as we add more.
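A sketch of the relaxed check (assuming a `config.views[platform].benchmarks` shape for illustration; the real CONFIG structure lives in config.js and may differ):

```javascript
// Hypothetical: validate only platform and benchmark; timeRange and any
// future query parameters are simply ignored by the routing check.
const validCombination = (config, platform, benchmark) => {
  const view = config.views[platform];
  if (!view) return false;
  return benchmark === 'overview' || Boolean(view.benchmarks[benchmark]);
};
```

With this, a URL missing numDays would no longer be redirected away.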
Routing works locally, however, we need to adjust Netlify to redirect to '/'.
This will allow sending a link to a specific platform/benchmark.
For now:
[1] defined in config.js
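On the Netlify side, the issue asks for a redirect to '/'; the standard single-page-app rewrite below achieves the same effect while keeping the requested URL intact, by serving index.html for every path so the client-side router can take over (this goes in a _redirects file at the publish root, or the equivalent netlify.toml rule):

```
/*    /index.html    200
```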
Currently, we have one single file that wrangles the various platforms we have with the various benchmarks we have.
In some sense, it bears a little resemblance to the buildbot-configs configuration files (which is a horrible thought) and I want to refactor it into multiple platform config files (more à la Taskcluster).
The changes on this issue will require modifications at least in these two places:
https://github.com/mozilla-frontend-infra/js-perf-dashboard/blob/master/src/utils/prepareData.js#L53-L91
https://github.com/mozilla-frontend-infra/js-perf-dashboard/blob/master/src/utils/fetchData.js
Proposal to have these files under src/configuration
have:
import DEFAULT_DESKTOP_BENCHMARKS from '../desktopDefaults.js';
// Deep-copy DEFAULT_DESKTOP_BENCHMARKS so per-platform tweaks don't mutate the shared defaults
const BENCHMARKS = JSON.parse(JSON.stringify(DEFAULT_DESKTOP_BENCHMARKS));
// Make modifications specific to this platform
BENCHMARKS['six-speed'] = {
compare: {
'six-speed-sm': {
color: '#e55525',
label: 'SpiderMonkey',
frameworkId: JSBENCH_FRAMEWORK_ID,
suite: 'six-speed-sm',
buildType: 'opt',
},
'six-speed-v8': {
color: '#ffcd02',
label: 'Chrome v8',
frameworkId: JSBENCH_FRAMEWORK_ID,
suite: 'six-speed-v8',
buildType: 'opt',
},
},
labels: ['SpiderMonkey', 'Chrome v8'],
label: 'Six Speed (JS shell)',
};
const CONFIG = {
label: 'Linux 64bit',
platform: 'linux64',
benchmarks: BENCHMARKS,
};
export default CONFIG;
Octane: "Geometric mean of Score", higher is better
Sunspider: "Arithmetic mean of ms", lower is better
Kraken: "Arithmetic mean of ms", lower is better
Speedometer: "Arithmetic mean of runs/min", higher is better (not sure about individual benchmarks)
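For reference, the two aggregations mentioned above can be sketched as plain helpers (not the dashboard's actual code):

```javascript
// Arithmetic mean: sum of values divided by their count (used for ms results).
const arithmeticMean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;

// Geometric mean: nth root of the product, computed via logs to avoid
// overflow (used for Octane-style scores).
const geometricMean = xs =>
  Math.exp(xs.reduce((a, b) => a + Math.log(b), 0) / xs.length);
```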
The data represented as 'Chrome' is actually running on Chromium. We should update all labels to reflect this. We are planning to also run tests on Chrome, so it will be important to be able to distinguish and accurately reflect what we're testing.
Currently there's no way to share a URL that will draw the viewer to a specific graph.
In the current version we have no idea what the numbers are. Some are arbitrary scores and others are more meaningful, such as: arithmetic average of the execution time (ms) [Sunspider, Assorted DOM], geometric average of the inverse of the execution time [Octane].
I currently labeled all benchmarks with 'Execution time' by default and manually set certain benchmarks with 'Score'.
I'm going to make the assumption that:
I researched whether Treeherder exposes the "unit"; however, that would only distinguish 'ms' vs 's', both of which would still be 'Execution time'.
I will allow for a way to overwrite my assumption above.
You can read some documentation about it in here:
https://docs.sentry.io/platforms/javascript/react/?_ga=2.113302979.599574481.1537974632-1717276547.1536588410&platform=javascript
Here's some example code that can be used as inspiration (no copy/paste; think about it and name it properly):
import React, { Component } from 'react';
import * as Sentry from '@sentry/browser';

// Sentry.init() must have been called somewhere before this component is used
class ExampleBoundary extends Component {
  constructor(props) {
    super(props);
    this.state = { error: null };
  }

  componentDidCatch(error, errorInfo) {
    this.setState({ error });
    Sentry.configureScope(scope => {
      Object.keys(errorInfo).forEach(key => {
        scope.setExtra(key, errorInfo[key]);
      });
    });
    Sentry.captureException(error);
  }

  render() {
    if (this.state.error) {
      // render fallback UI
      return (
        <a onClick={() => Sentry.showReportDialog()}>Report feedback</a>
      );
    }
    // when there's no error, render children untouched
    return this.props.children;
  }
}
This Outreachy project will in itself be difficult, and the number of good first issues is limited.
For the purposes of your applications you can still make contributions (which will count toward this Outreachy application) to one of these two projects:
The technologies used there are very similar, so they should work nicely as contributions for your application.
You can still contact @djmitche and me for mentoring on the #frontend-infra IRC channel.
We now run our Raptor tests on mozilla-central on lower-end reference laptops:
https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=64-ux&revision=073045259e75e0c8f7b8ffcd5e4bf72570f98f3e
We should plot this line in addition to Firefox/Chrome.
@armenzg can you help make sure this issue is annotated correctly or filed in the right places so we don't forget about it?
By default the dropdown menu should select "Windows 10 64-bit".
For now, the dropdown should be:
This will require passing to MetricsGraphics multiple lines instead of a single one.
SunSpider seems to plot "lower is better", which is correct; however, the metadata for these two benchmarks is marked as "lower_is_better: false".
https://treeherder.mozilla.org/perf.html#/graphs?timerange=1209600&series=mozilla-central,1730689,1,10&series=mozilla-central,1730582,1,10
https://treeherder.mozilla.org/api/project/mozilla-central/performance/signatures/?framework=10&id=1730582
The PR that inverts the Y axis takes lower_is_better: false into consideration and inverts improperly.
See this PR:
https://deploy-preview-46--js-perf-dashboard.netlify.com/win10/overview
Handling ?platform=&benchmark=<benchmarkName|overview>.
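Parsing those parameters could look like this (a sketch using the standard URLSearchParams API; the fallback to 'overview' mirrors the <benchmarkName|overview> notation above):

```javascript
// Hypothetical parser for the proposed query string, e.g.
// "?platform=win10&benchmark=ares6"
const parseQuery = (search) => {
  const params = new URLSearchParams(search);
  return {
    platform: params.get('platform'),
    // an empty or missing benchmark falls back to the overview page
    benchmark: params.get('benchmark') || 'overview',
  };
};
```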
We’re currently using the default font, however, I would like us to use Roboto Sans.
You can see how a different font got added in here:
mozilla-frontend-infra/firefox-health-dashboard@9ff6b2e
People would probably like to see the final result of Speedometer, that is, the value named score (higher is better). Could we display it for the Speedometer benchmark?
That functionality only landed on Firefox health.
This will require a new perf-goggles release.
If you open the network tab when loading the page you will see that the .js code alone is 700+ KB.
If you run |yarn build| you will notice webpack warns us about this [1].
We can optimize the build size by following some of the recommendations mentioned here:
You can limit the size of your bundles by using import() or require.ensure to lazy load some parts of your application.
For more info visit https://webpack.js.org/guides/code-splitting/
[1]
armenzg@armenzg-mbp js-perf-dashboard$ yarn build
yarn run v1.10.1
$ webpack --mode production
Hash: bea53e863567bcb39bcf
Version: webpack 4.20.2
Time: 19516ms
Built at: 10/04/2018 9:00:31 AM
              Asset       Size  Chunks             Chunk Names
  index.dc5a259c.js   16.1 KiB       0  [emitted]  index
      1.bfef2dde.js    745 KiB       1  [emitted]  [big]
runtime.1ca11f96.js   1.42 KiB       2  [emitted]  runtime
         index.html  345 bytes          [emitted]
WARNING in asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).
This can impact web performance.
Assets:
1.bfef2dde.js (745 KiB)
WARNING in entrypoint size limit: The following entrypoint(s) combined asset size exceeds the recommended limit (244 KiB). This can impact web performance.
Entrypoints:
index (762 KiB)
runtime.1ca11f96.js
1.bfef2dde.js
index.dc5a259c.js
WARNING in webpack performance recommendations:
You can limit the size of your bundles by using import() or require.ensure to lazy load some parts of your application.
For more info visit https://webpack.js.org/guides/code-splitting/
✨ Done in 22.30s.
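The webpack recommendation above boils down to fetching the heavy chunk only when it is first needed. A sketch of the pattern (the loader is passed in as a parameter so it can be seen in isolation; './chart' is a hypothetical module name, not a real file in this repo):

```javascript
// Hypothetical sketch of on-demand loading: `loadChartModule` is the dynamic
// import, e.g. () => import('./chart'), which webpack splits into its own
// lazily fetched chunk so it stays out of the initial bundle.
async function renderGraph(loadChartModule, container, data) {
  const { default: drawChart } = await loadChartModule();
  drawChart(container, data);
}
```

In a React app the same idea is usually expressed with React.lazy plus Suspense around the route components.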
We now have results for platform windows10-aarch64, which we should add to the dashboard.
It is hard to spot issues in the graphs on arewefastyet, because small variations are masked by the fact that they are packed horizontally.
Could we have taller graphs instead?
Note: there is a resizing bug; opening and closing the devtools (which forces a resize) can be used as a work-around.
... following the work that happened in bug 1493648. Maybe we should wait a bit until it sticks :)
@nbp I'm afraid the performance of some of the drilldowns could be slowed down.
Should we do 90 days for the 'overview' and 30 days for the drilldowns?
We might also want to add the dropdown to change the timerange as part of this.
This depends on a new release of perf-goggles:
mozilla-frontend-infra/perf-goggles#13
They will want to see the data for inbound on the -sm ones, and show the v8 ones from m-c.
Switching to benchmarks with more graphs to plot (e.g. Speedometer) can be rather slow.
Sometimes the Perfherder metadata about subtests can be incorrect:
https://bugzilla.mozilla.org/show_bug.cgi?id=1502036#c2
Over here is where we calculate if we should reverse the graphs or not:
https://github.com/mozilla-frontend-infra/firefox-performance-dashboard/blob/master/src/utils/prepareData.js#L42-L50
We could pass an extra parameter in prepareData and fetchData:
https://github.com/mozilla-frontend-infra/firefox-performance-dashboard/blob/master/src/utils/prepareData.js#L53
https://github.com/mozilla-frontend-infra/firefox-performance-dashboard/blob/e274db75ed490598e0708dbae7df8336f096becd/src/utils/fetchData.js#L6
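One way the override could look (a sketch only; the real prepareData signature and series shape are in the linked files and may differ): an optional argument that wins over the Perfherder metadata when set.

```javascript
// Hypothetical helper: let callers force the "lower is better" direction when
// the Perfherder lower_is_better metadata is known to be wrong for a benchmark.
const applyDirection = (series, override) => ({
  ...series,
  lowerIsBetter: override !== undefined ? override : series.lower_is_better,
});
```

prepareData would then thread such an override through to the code that decides whether to invert the graph.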
We should show the performance results for our mobile platforms.