w3c / navigation-timing
Navigation Timing
Home Page: https://w3c.github.io/navigation-timing/
License: Other
The NT2 spec does not explicitly mention the TAO opt-in check for unloadEventStart/unloadEventEnd; this should be stated explicitly.
The security section does say "Resource providers can explicitly allow all timing information to be collected for a current document by adding the Timing-Allow-Origin HTTP".
But it should also be mentioned inline for the unloadEvent* attributes.
When timing allow check fails redirectCount is explicitly set to zero (https://w3c.github.io/navigation-timing/#dom-performancenavigationtiming-redirectcount)
This makes it impossible to determine whether there actually were no redirects or whether the information is simply unavailable due to security restrictions.
Is it possible to have an explicit signal for when information like this isn't available due to security restrictions?
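To make the ambiguity concrete, here is a consumer-side sketch (the helper name is ours, not from the spec): under the current behavior, a zero redirectCount from a failed timing allow check is indistinguishable from a genuinely redirect-free navigation.

```javascript
// Hypothetical helper, not part of the spec, illustrating the issue.
// The spec zeroes redirectCount (and redirectStart/redirectEnd) when the
// timing allow check fails, so no signal remains to tell the cases apart.
function describeRedirects(entry) {
  if (entry.redirectCount > 0) {
    return `followed ${entry.redirectCount} redirect(s)`;
  }
  return 'no redirects, OR timing info withheld by the TAO check';
}

// A navigation with two same-origin redirects:
console.log(describeRedirects({ redirectCount: 2, redirectStart: 12.5 }));
// No redirects, and redirects hidden by a failed TAO check, look identical:
console.log(describeRedirects({ redirectCount: 0, redirectStart: 0 }));
```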
The WebIDL test verifies that the method exists, but historically we've seen cases where properties were not serialized properly. We need a test for this.
Travis CI sent a message this week alerting of a bug in the way they used to encrypt environment variables. Owners of this repo should have received it.
Depending on the way encryption was done in this repo using the Travis CLI, those values might have leaked.
This is a reminder for maintainers of this repository to make sure they discard the old values and re-encrypt new ones with the latest version of the Travis CLI, now that the bug has been fixed.
Please close this issue if action has already been taken. Feel free to ping me or sysreq if you need help.
Otherwise it's somewhat unclear when these measurements are supposed to be made.
[[
I see nothing anywhere in
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming2/Overview.html
that creates instances of PerformanceNavigationTiming. Should there be
something? There are examples that claim that such instances should
exist and that performance.getEntriesByType should be able to find them,
but I see no normative basis for that claim.
]]
https://lists.w3.org/Archives/Public/public-web-perf/2014Jun/0015.html
Much of the navigation-timing spec is the same as resource-timing. To avoid copy/paste issues, perhaps we can define the resource section in the resource-timing spec and extend it?
(based on discussion at TPAC..) /cc @slightlyoff
fetchStart is when we send the request to the worker, so SW startup time should be captured before that. How about [swStartupStart, swStartupEnd], between "Negotiate Link" and "redirect"? See the processing model.
P.S. Once we agree on the language, same logic should be replicated in Resource Timing - the worker can be killed between requests; we can't rely on SW being there. Also, UT and RT will be exposed in SW (see w3c/ServiceWorker#553), so I assume we will simply report 0's for startup time when inside a SW.
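If the proposed pair landed, consumers could split SW startup cost out of the navigation like this (sketch only; swStartupStart and swStartupEnd are the names proposed in this thread, not shipped attributes):

```javascript
// Sketch: swStartupStart/swStartupEnd are hypothetical, per the proposal
// above. Per the thread, navigations handled from inside a SW (or with
// no SW involved at all) would report zeros for the pair.
function swStartupTime(entry) {
  if (!entry.swStartupEnd) return 0; // no SW startup recorded
  return entry.swStartupEnd - entry.swStartupStart;
}

const entry = { swStartupStart: 105.0, swStartupEnd: 148.3, fetchStart: 150.1 };
console.log(swStartupTime(entry)); // startup cost in ms, incurred before fetchStart
```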
How does an interface "participate" something else?
Does it just mean that some PerformanceEntry objects are queued? If so, when?
[[
I see nothing anywhere in
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationTiming2/Overview.html
that creates instances of PerformanceNavigationTiming. Should there be
something? There are examples that claim that such instances should
exist and that performance.getEntriesByType should be able to find them,
but I see no normative basis for that claim.
]]
http://lists.w3.org/Archives/Public/public-web-perf/2014Jun/0015.html
(and follow up)
Continuing from w3c/performance-timeline#32 (comment)
In some ad-hoc testing it looks like Performance.timing and Performance.navigation are widely implemented. Is the hope that those will be removed from browsers, or should both forms of the API be supported at the same time?
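In practice, RUM code already straddles both forms. A rough feature-detection sketch (helper name ours; a performance-like object is injected so the sketch stays testable outside a browser):

```javascript
// Prefer the NT2 PerformanceNavigationTiming entry, falling back to the
// legacy NT1 PerformanceTiming object when the timeline API is missing.
function navigationEntry(perf) {
  if (typeof perf.getEntriesByType === 'function') {
    const entries = perf.getEntriesByType('navigation');
    if (entries.length > 0) return entries[0]; // NT2 entry
  }
  return perf.timing || null; // NT1 fallback (or null if neither exists)
}

// Browser usage (sketch): navigationEntry(window.performance)
```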
If we don't have tests for the TAO update, we need them.
The prerenderSwitch attribute was added back in 2013. There is a subtle gotcha with it as far as interop with PerformanceObserver is concerned: our processing model says it should be set at step 26, before the entry is queued to notify any observers...
Does that mean we should delay steps 27 and 28 until the visibilityChange transition fires? If so, we need to change the wording in that step to account for cases where the page is not being prerendered. It also means the nav timing entry would be delayed until the page is made visible, which seems a bit odd.
I'm wondering if we should simply drop prerenderSwitch from the spec? It's trivial to get the timestamp via the existing Page Visibility and User Timing APIs:
document.addEventListener('visibilitychange', function() {
performance.mark(document.visibilityState);
});
Thoughts, objections?
Navigation Timing doesn't contain any timings for the beforeUnload event.
As beforeUnload is triggered at the 'start of the navigation' from one page to the next, its execution time affects users' experience.
Some sites hook the event to determine whether it's safe to navigate, e.g. will unsaved changes be lost; others use it to gather, compress, and beacon back UX-type data, e.g. mPulse Resource Timing data, Tealeaf with DOM nodes.
I've observed beforeUnload event handlers take over 2s on mobile, so our current lack of timings leaves a blind spot, and page timings may not reflect actual users' experience.
I'm less sure about how we should record the timings for them though - moving the time origin seems a non-starter, as does having negative values relative to the time origin.
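Absent spec support, sites can only self-instrument their own handlers. A generic wrapper sketch (names ours; the clock is injected so the sketch runs outside a browser, where it would be () => performance.now()):

```javascript
// Wrap a beforeunload handler so its execution time is reported, e.g.
// stashed in sessionStorage for the next page, or beaconed immediately.
function timed(handler, clock, report) {
  return function (event) {
    const start = clock();
    try {
      return handler(event);
    } finally {
      report(clock() - start);
    }
  };
}

// Browser usage (sketch):
// addEventListener('beforeunload',
//   timed(myHandler, () => performance.now(), d => { sessionStorage.beforeUnloadMs = d; }));
```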
NT2 "name" attribute is currently set to "document".
Should this be set to the URL instead (as in RT2) ?
Related discussion in: https://crbug.com/675039
It seems the abort conditions (under the processing model) are missing a bullet point for user-initiated cancellation of page loads.
I quickly tried this in Chrome and Safari (both latest stable): they both cancel gathering of navigation timing data and report the remaining data points with their initial values.
The last successful publication of navigation-timing with Echidna was on 22 Apr; after that, all jobs submitted have failed.
One step that is erroring is HTML validation. In particular:

"div not allowed as child of element ul in this context"
"div not allowed as child of element dl in this context"
"100% for attribute width on element img: expected a digit but saw % instead"

And also one Echidna-specific check:

"...https://www.w3.org/2004/01/pp-impl/NNNNN/status)" (eg, labs.w3.org/echidna/api/status?id=019c95fb-4fa1-4f5d-98e4-c03610c8b4e0)
--
(Assigning all contributors who have committed since the last successful publication, and pinging @marcoscaceres. Let me know if you need help!)
Processing model chart - https://w3c.github.io/navigation-timing/#processing-model - is missing a marker for domLoading.
HTTP/2.0 (draft-14) is now shipping in stable Chrome and Firefox; IE 12 has a preview HTTP/2 implementation; Apple is shipping a SPDY v3 implementation in the latest Safari; Chrome is also actively experimenting with QUIC... The Navigation Timing and Resource Timing interfaces should expose the negotiated protocol to enable RUM-based logging, measurement, and optimization.
Previous discussions:
@mnot proposed:
readonly attribute DOMString protocol;
This optional attribute reflects the protocol used to fetch the resource, as identified by the ALPN Protocol Identifier https://tools.ietf.org/html/rfc7301. Its value is a DOMString representing the Protocol Identifier.
Because ALPN Protocol Identifiers are specified as arrays of bytes, this specification assumes that they will be encoded in UTF-8 or a compatible encoding; if the Protocol Identifier is not valid UTF-8, the attribute's value should be an empty string.
Note that this attribute is intended to identify the protocol in use for the fetch regardless of how it was actually negotiated; that is, even if ALPN is not used to negotiate the protocol, this attribute still uses the ALPN Protocol Identifier to indicate the protocol in use.
My only concern with above is hard dependency on ALPN TLS extension. QUIC doesn't use ALPN, and I think we should make above more flexible:
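As a consumer-side illustration of why flexibility matters, RUM code will likely bucket whatever identifier ends up in nextHopProtocol. A sketch (the helper and its bucketing are our assumptions; only "http/1.1" and "h2" here are registered RFC 7301 identifiers):

```javascript
// Group nextHopProtocol values for aggregate reporting. Unknown IDs
// (e.g. experimental QUIC strings, which are not ALPN-negotiated) pass
// through unchanged rather than being dropped.
function protocolBucket(alpnId) {
  switch (alpnId) {
    case 'http/1.0':
    case 'http/1.1': return 'http/1.x';
    case 'h2':       return 'http/2';
    case '':         return 'unknown'; // per @mnot's text: empty when not valid UTF-8
    default:         return alpnId;
  }
}
```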
http://www.w3.org/TR/2017/WD-navigation-timing-2-20170509/#extensions-to-the-performance-interface
partial interface Performance {
[SameObject]
readonly attribute PerformanceTiming timing;
[SameObject]
readonly attribute PerformanceNavigation navigation;
};
The partial interface needs a new [Exposed=Window] attribute, since the original Performance interface has [Exposed=(Window,Worker)] and both PerformanceTiming and PerformanceNavigation are only exposed to Window.
So says http://www.w3.org/TR/navigation-timing/#sec-navigation-info-interface
Follow-up to #22
For real-world navigations, a hyperlink's activation may be handled by JavaScript click/touch events, but the latest Processing Model doesn't cover that use case.
See this sample: http://furoshiki.github.io/sample/perf02/index.html
This example navigation takes more than 1 second, but the nav-timing API (and nav-timing 2) can't measure it.
So says http://www.w3.org/TR/navigation-timing/#sec-navigation-info-interface
It's really weird, but changing it does not seem useful. The constant is never actually used in Blink, so if any change here is worthwhile, it would be to remove TYPE_RESERVED entirely.
Follow-up to #22
In Nav Timing Level 2, the processing model defers reporting of any nav timing data until the load event has fired.
This means that pages that reach earlier events, such as dom content loaded, but that are aborted before the load event fires, will fail to report the earlier events even though those events were reached during the page load. This leads to bias in aggregate data. Consider a few examples:
Example 1:
Consider 2 pages:
page 1: typically reaches domcontentloaded quickly, but onload very late
page 2: typically reaches domcontentloaded at the same time as page 1, and onload immediately after DCL fires
Because we only get to see data if the page reaches onload, page 1 is more likely to lose DCL samples for slower page loads that get aborted in the period between DCL and onload firing. This means
(a) we'll receive fewer DCL stats than actually happened - bias
(b) the lost data will be biased towards slower page loads, which means the aggregate DCL measured for this page will be artificially lower than the true DCL
The end result would be that even if the 2 pages have the same DCL, the aggregated stats will likely suggest that page 2 is slower due to it losing fewer samples than page 1, and thus it being more likely to report DCL samples for page loads that are slow.
Example 2:
Is there any opportunity to fix this issue and allow for reporting metrics as they happen, rather than once when the onload event is fired?
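The bias in Example 1 is easy to simulate (all numbers invented): loads aborted between DCL and onload never report, so slow samples are disproportionately dropped from the page with the long DCL-to-onload gap.

```javascript
// Only loads whose onload fired before the abort point are ever reported
// under the current processing model.
function reportedMeanDcl(samples, abortAt) {
  const reported = samples.filter(s => s.onload <= abortAt);
  if (reported.length === 0) return null;
  return reported.reduce((sum, s) => sum + s.dcl, 0) / reported.length;
}

// Identical true DCLs {1s, 3s}; users abort slow loads at 4s.
const page1 = [{ dcl: 1, onload: 2 }, { dcl: 3, onload: 10 }]; // late onload
const page2 = [{ dcl: 1, onload: 1.1 }, { dcl: 3, onload: 3.1 }];
console.log(reportedMeanDcl(page1, 4)); // 1: the slow sample is lost, page looks faster
console.log(reportedMeanDcl(page2, 4)); // 2: the true mean survives
```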
Please consider using https://github.com/tabatkins/bikeshed which handles this and much more automatically.
At TPAC 2017, we realized that the steps in Navigation Timing 2 often are copies of PerformanceResourceTiming. Can we point to Resource Timing rather than having copy/paste?
The purpose of this issue is to document the currently agreed upon proposal for computing nextHopProtocol and track any changes:
Currently it is difficult to understand the timing of cross origin redirects. For example, if https://facebook.com redirects to https://www.facebook.com, any information regarding that redirect is obscured from navigation timing.
Could we use the Timing-Allow-Origin header from resource timing to address this? A request that returned a redirect to another origin could include a Timing-Allow-Origin for that origin to specify that the final document is allowed to see timing information for the cross origin redirect.
The Processing Model diagram shows only a subset of possible situations, i.e. only the case where a beforeunload dialog is present (in the case where the beforeunload handler ran but there was no confirmation dialog, startTime should mark the point before the "Prompt for unload" square).
It must be made clear to the readers, or (probably better) the diagram should be changed to show two possible situations.
Also the steps described in Processing Model do not specify at which point "prompting to unload" the previous document happens.
(see the discussion in https://groups.google.com/a/chromium.org/forum/?utm_medium=email&utm_source=footer#!topic/loading-dev/W5MbNq5ScLs)
Source: https://lists.w3.org/Archives/Public/public-web-perf/2015Aug/0013.html
domLoading attribute:

This attribute must return the time immediately before the user agent sets the current document readiness to "loading".

or in the Processing Model:

Record the time as domLoading immediately before the user agent sets the current document readiness to "loading".

The "current document readiness" of what? I'm willing to accept that a
PerformanceTiming object has an implicit reference to the Window
through the Performance object from which it was retrieved (though
that should be clarified too), but even if there is a 1-to-1 mapping
from the Window object to the Document object in general, there are
exceptions.
Hi there.
I'm really surprised by the absence of a first paint time in Navigation Timing 1/2.
Can you add a new timing value for that? It's really important to know when the user sees the first pixel of the site (rather than a white screen).
By "first paint time" I mean (I suppose) the timestamp of the visibilityState change from "hidden" to "visible". You can also see it as a green vertical line in the Timeline panel of Chrome DevTools.
Thanks in advance!
PS: Why is there something strange and useless like "prerenderSwitch" but no "renderSwitch"? http://www.w3.org/TR/navigation-timing-2/
Per @DLehenbauer who investigated this recently using dev tools of various browsers.
Pretend that all of the browsers are reporting the time of the document/parser association for ‘domLoading’ today. If so, the current numbers would mean that:
Those are all reasonable implementation choices per the current wording of domLoading. It’s also information about the internals of the browser that is of no use to a page author.
Should we consider deprecating this value or are we missing the value of this specific entry?
The intent behind w3c/resource-timing#21 will break:
[[
Only the current document resource gets included as the only PerformanceNavigationTiming object in the Performance Timeline of the relevant context.
]]
Assuming we land w3c/resource-timing#21, there may be multiple PerformanceNavigationTiming records (e.g. one or more redirect requests plus final 200 OK req/resp).
requestStart is defined as: "This attribute MUST return a DOMHighResTimeStamp with a time value equal to the time immediately before the user agent starts requesting the current document from the server, or from relevant application caches or from local resources."
Current processing model:
(Step 12): If the resource is fetched from the relevant application cache or local resources, including the HTTP cache, go to step 18.
(Step 17): Immediately before a user agent starts sending request for the document, record the current time as requestStart, and set the value of nextHopProtocol to the ALPN ID used by the transport connection.
(Step 18): Record the time as responseStart immediately after the user agent receives the first byte of the response.
Step 12 skips to 18 and requestStart is not initialized if fetch is coming from cache.
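A consumer-side sketch of the consequence (helper name ours): with the current steps, a cache-served navigation leaves requestStart at its initial value even though responseStart is recorded.

```javascript
// Compute the request phase duration, guarding the cache case where the
// processing model (step 12 jumping to step 18) never records requestStart.
function requestPhase(entry) {
  if (entry.requestStart === 0 && entry.responseStart > 0) {
    return null; // cache hit under the current steps: no request time recorded
  }
  return entry.responseStart - entry.requestStart;
}

console.log(requestPhase({ requestStart: 100, responseStart: 180 })); // 80
console.log(requestPhase({ requestStart: 0, responseStart: 5 }));     // null
```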
Needed for whatwg/webidl#365.
NavigationType uses 'back_forward' as an enum value; the convention for enum values is to use hyphenated names.
Hi!
We recently deprecated WebIDL serializers. You can now directly specify toJSON operations instead, which you previously weren't allowed to do.
To deal with common cases, we added a new [Default] extended attribute which triggers the default toJSON operation, behaving similarly to how serializers={attributes} or serializers={attributes, inherit} used to. That is, it serializes all attributes that are of a JSON type into a vanilla JSON object.
It seems the following interfaces in this spec are impacted by this change:
All of these seem like good candidates for the default toJSON operation, so the below should be all you need, provided you also similarly update PerformanceResourceTiming and PerformanceEntry in their relevant specifications:
interface PerformanceNavigationTiming : PerformanceResourceTiming {
readonly attribute DOMHighResTimeStamp unloadEventStart;
// etc...
[Default] object toJSON();
};
[Exposed=Window]
interface PerformanceTiming {
readonly attribute unsigned long long navigationStart;
// etc...
[Default] object toJSON();
};
[Exposed=Window]
interface PerformanceNavigation {
const unsigned short TYPE_NAVIGATE = 0;
// etc...
[Default] object toJSON();
};
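Roughly, the [Default] toJSON copies every attribute whose value is of a JSON type into a plain object. A simplified sketch of that behavior (real Web IDL walks the interface's declared attributes, including inherited ones, not arbitrary object keys):

```javascript
// Approximation of the default toJSON behavior: keep only JSON-typed
// values (numbers, strings, booleans, null); skip functions and objects.
function defaultToJSON(obj) {
  const out = {};
  for (const key of Object.keys(obj)) {
    const v = obj[key];
    if (v === null || ['number', 'string', 'boolean'].includes(typeof v)) {
      out[key] = v;
    }
  }
  return out;
}

console.log(defaultToJSON({ navigationStart: 1500000000000, toJSON: () => ({}) }));
// -> { navigationStart: 1500000000000 }
```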
I'm sorry for the inconvenience this causes, but our hope is that this ultimately makes things a lot simpler and clearer for everybody.
Please feel free to reach out if you have any questions.
Thanks!
Start at either https://www.w3.org/TR/navigation-timing-2/#sec-PerformanceNavigationTiming or https://w3c.github.io/navigation-timing/#sec-PerformanceNavigationTiming and try to walk the ancestor interface chain for the PerformanceNavigationTiming interface.
What, you say "PerformanceResourceTiming" is not linkified in the IDL? Why would an implementor ever need to know silly things like what this interface actually inherits from?
OK, search around in the document, find section 4.2 which has a link to https://www.w3.org/TR/resource-timing/#idl-def-PerformanceResourceTiming. Oops, that's a broken link: no such anchor.
OK, search in the document, find https://www.w3.org/TR/resource-timing/#performanceresourcetiming which says it inherits from PerformanceEntry. Which is of course not linkified. Search in the document some more, find two prose bits that link to PerformanceEntry. Both those links point to https://www.w3.org/TR/hr-time-2/#dom-performanceentry which is awesome because that document doesn't even mention PerformanceEntry. Same thing for the editors' drafts of both resource-timing and hr-time, by the way. The editor's draft of resource-timing at https://w3c.github.io/resource-timing/ instead links to https://www.w3.org/TR/performance-timeline-2/#performanceentry which is also a broken link: no such anchor. At least it's the right document, finally, in the sense of defining a PerformanceEntry interface. Whether it defines the PerformanceEntry interface that the editor's draft of navigation-timing expects, or whether that's https://w3c.github.io/performance-timeline/, I dunno.
So here's my question: how is anyone ever expected to implement this pile of broken links?
N.B.: I haven't even found the part I was looking for, which is the action at a distance about PerformanceEntry instances having magic happen about adding them to the performance entry buffer when they get constructed. Presumably that's defined in some yet other spec which is carefully not linked from anywhere that anyone might be able to find it.
It looks like this cannot be implemented.
See
[[
https://lists.w3.org/Archives/Public/public-web-perf/2015Jun/0067.html
https://lists.w3.org/Archives/Public/public-web-perf/2015Jun/0090.html
]]
Based on my reading of the spec, performance.timing.navigationStart seems to be the same as the time origin specified in the hr-time spec. The wording is not exactly the same, so I may be mistaken.
If my interpretation is correct, then I think we should spell out this relationship explicitly in this spec and perhaps align the texts. Otherwise, I think they should be made the same. :)
If the global object [HTML51] is a Window object, the time origin must be equal to:
This attribute must return the time immediately after the user agent finishes prompting to unload the previous document. If there is no previous document, this attribute must return the time the current document is created.
where
current document = Window object's newest Document object.
Previous discussions:
Highlights of raised issues and questions:
Philippe: While trying to write a proposal, I realize that we have two different byte sizes that could be returned: 1. the byte size of the response; 2. the byte size of the response's body - source

Boris: My question was what "response body" means. Is it the entity/payload body or the message body? If we're counting the size of the headers, then using the size of the entity/payload body would be the most sensible thing, since that's what goes on the wire with the headers. But other comments in this thread suggest that's not what Chrome does... - source

Boris: Specifically, given some data that is sent with Content-Encoding: gzip and Transfer-Encoding: chunked over an SSL connection, are we measuring the size of the HTTP "message body" (which in this case is gzipped, note), or the "payload body" (which has the extra bytes for the chunked encoding), or the actual HTTP messages (which include headers in addition to the data), or the size of the TCP packets (which include SSL overhead bits and the HTTP messages in encrypted form, plus the TCP handshake itself), or something else? More interesting is what the status code should be reported as if the browser did a conditional request and got back a 304. Is the status code 304 or whatever the status code for the cached response is? - source

Hiroshi: What's the value when the download failed? It's really important. - source

Alois: We would have to check whether size over the wire is available in all browser implementations. Some use abstraction layers that do not allow getting this information. Most likely the number to get is the size of the content when the browser gets it. - source
Investigating what the different browser developer tools report, and how, shows the following...
Chrome (docs)
Firefox (docs)
Safari (docs)
IE (docs)
tl;dr: no consistency. Safari is arguably the most useful, consistent, and sane.
Taking a step back, I believe we should surface two metrics:

transferSize: this attribute must return the size, in octets received by the client, consumed by the response header fields and the response message body. This SHOULD include HTTP overhead (such as HTTP/1.1 chunked encoding and whitespace around header fields, including newlines, and HTTP/2 frame overhead, along with other server-to-client frames on the same stream), but SHOULD NOT include lower-layer protocol overhead (such as TLS or TCP).

decodedSize: this attribute must return the size, in octets, of the message body used, after removing any applied content-codings.

decodedSize is always present, regardless of whether the resource is fetched from the network or comes from cache; decodedSize does not reflect the "on the wire" size. transferSize allows us to identify compression savings based on GZIP, SDCH, and/or other compression/delta-update mechanisms.

The values are reported as follows:

200 response: transferSize = request/response headers size + request/response body size; decodedSize = decoded response body size
304 response: transferSize = request/response headers size (body size = 0); decodedSize = decoded response body size (from cache, same value as 200)
Served from cache: transferSize = 0; decodedSize = decoded response body size (from cache, same value as 200)

In short, decodedSize always indicates the decoded response body size, and transferSize is the total number of bytes transferred over the network.
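The pair enables the compression-savings computation mentioned above. A sketch (helper name ours) that guards the 304 and cache cases, where transferSize no longer reflects a transferred body:

```javascript
// Approximate compression/delta savings from the two proposed sizes.
// Slightly approximate, since transferSize also counts header bytes.
function compressionSavings(entry) {
  if (entry.transferSize === 0) return null; // served from cache: nothing on the wire
  if (entry.decodedSize === 0) return null;  // e.g. failed or empty download
  return 1 - entry.transferSize / entry.decodedSize;
}

// A 100 KB document delivered as 28 KB on the wire:
console.log(compressionSavings({ transferSize: 28000, decodedSize: 100000 })); // ~0.72
```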
Other notes:
[[
Small edit, the spec does make one reference to
window.performance.timing in Step 20.f of the Processing Model, which
should probably be updated.
]]
https://lists.w3.org/Archives/Public/public-web-perf/2015Jun/0056.html
Chrome provides window.chrome.loadTimes().firstPaintTime and Internet Explorer provides window.performance.timing.msFirstPaint. It would be helpful to include a first paint event in the Navigation Timing API, so that all browsers provide this info in a consistent way. Thanks.
The spec should make it clear that NavTiming (and Resource Timing) are async.