wicg / attribution-reporting-api

Attribution Reporting API

Home Page: https://wicg.github.io/attribution-reporting-api/

License: Other

Python 4.44% Makefile 0.24% Bikeshed 55.94% JavaScript 0.11% HTML 0.66% TypeScript 38.61%

attribution-reporting-api's Introduction

Attribution Reporting API

The Attribution Reporting API supports measurement of clicks and views with event-level and aggregate reports.

This repository hosts multiple technical explainers that specify various features of the API. This document offers an overview of the API and its explainers.

Get started

This repository hosts detailed technical explainers. Before diving into these, check out this introductory article. If you're looking to experiment with the API, head over to this guide.

All developer resources for this API are listed on developer.chrome.com.

Participate

This API is being incubated and developed in the open. Here are ways to participate:

  • 🗓 Join the bi-weekly meetings (every second week). In these calls, participants discuss API design proposals and how the API could support various measurement use cases. You can add topics to the next meeting's agenda at any time. Everyone is welcome to join these discussions; just make sure to join the WICG first.

If you have implementation questions, for example about your origin trial in Chrome, see how to get support.

Overview

The Attribution Reporting API makes it possible to measure when an ad click or view leads to a conversion on an advertiser site, such as a sale or a sign-up. The API doesn't rely on third-party cookies or mechanisms that can be used to identify individual users across sites.

The API enables two types of attribution reports:

  • Event-level reports associate a particular event on the ad side (a click, view, or touch) with coarse conversion data. To preserve user privacy, the conversion-side data is coarse, reports are noised and not sent immediately, and the number of attributed conversions is limited.
  • Aggregatable reports provide a mechanism for rich metadata to be reported in aggregate, to better support use-cases such as campaign-level performance reporting or conversion values.

These two report types are complementary and can be used simultaneously.
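
As a rough sketch of the flow (the attribute and header names below follow the explainers at the time of writing and may change as the proposals evolve): an ad click or view registers an attribution source, and a later action on the advertiser site registers a trigger that the browser matches against stored sources.

<!-- Publisher page: the ad element asks the browser to register an attribution source. -->
<a href="https://advertiser.example/landing"
   attributionsrc="https://adtech.example/register-source">Buy the toaster</a>
<!-- adtech.example responds to the attributionsrc request with a header such as:
     Attribution-Reporting-Register-Source: {"source_event_id":"12345","destination":"https://advertiser.example"} -->

<!-- Advertiser page: a pixel (or fetch) to the same reporting origin registers the trigger. -->
<img src="https://adtech.example/conversion" attributionsrc>
<!-- adtech.example responds with:
     Attribution-Reporting-Register-Trigger: {"event_trigger_data":[{"trigger_data":"1"}]} -->

The browser then sends the resulting event-level and/or aggregatable reports to adtech.example on its own schedule.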

API features (proposals)

🕙 Last updated: Jan 2023

All the features below are proposals under incubation. This list evolves over time.

Event-level reports (clicks and views)

Attribute cross-site click-through or view-through conversions with reports at a per-event level.

See details in the Explainer.

Implementation status: Available in Chrome as an origin trial

Aggregatable reports (clicks and views)

Attribution reports for aggregated conversions (both clicks and views). Complements the event-level reports.

See details in the Explainer.

Implementation status: Available in Chrome as an origin trial

App-to-web (clicks and views)

Attribution reports for web conversions for ad clicks (touches) or views that occurred within an Android app.

See details in the web explainer and Android explainer.

Implementation status: Expected in Chrome and Android for origin trial in Q2 2023.

External documentation

Chrome developer resources for the Attribution Reporting API are available on developer.chrome.com.

Android has documentation on developer.android.com.

attribution-reporting-api's People

Contributors

agarant, akashnadan, alexmturner, alextcone-google, apasel422, appascoe, csharrison, cwilso, heyawhite, hidayetaksu, hostirosti, johnivdel, jurjendewal, k-o-ta, lbdvt, linnan-github, logicad, maudnals, miketaylr, palenica, paul-ki, paulgoldbaum, pm-nitin-nimbalkar, remysaissy, shigeki, subhagam, thomasquesadilla, vikassahu29, yanzhangnyc, yoavweiss

attribution-reporting-api's Issues

Alternative idea for a discussion

I'd like to suggest a different point of view based on a few different fundamental principles.

  1. Many advertisers are ready to give up user identity altogether and only need correct event attribution. However, there are too many variations in how attribution happens, at both the technical and the business level. Advertisers cannot and should not be constrained by limitations other than not using identity. This also means staying as simple as possible and introducing as few new specifications as possible.
  2. The main privacy goal of the API is to make linking identity between two different top-level sites difficult.
    I think the goal should be fundamentally different and not mixed up with identity protection. The goal of the API should be solely to provide an attribution procedure, while one of the implementation requirements is to not compromise or interfere with browser-specific identity-protection policies and practices. In other words, the attribution API should not solve any identity-focused cases that already exist and can be used (or misused, depending on your viewpoint) regardless of the API; that is a task for other components. As long as it does not introduce new cases, it should be allowed to do anything required for the attribution process.
  3. Instead of reducing the 'entropy' and the number of permutations in the data passed along with attributed events, the policy should focus on increasing it, i.e. making identification very complicated because the data is too diverse.
  4. Unlike user identity, attribution information can be freely shared via server-to-server calls between vendors. It is beyond the browser's capabilities (or responsibilities) to interfere with this process. On the other hand, providing a simpler, working alternative makes it possible for the browser to remain an active part of the process and to introduce privacy-focused features that present users with options to influence it. In other words, the attribution API should not be policing forced upon vendors but a policy welcomed and appreciated, because the only path to mass adoption is not punishing legal and legitimate vendors for crimes they have not committed.

From the practical side, I suggest entertaining the following scenario.

Step 1. Publisher.com creates a friendly frame with a special attribute to inform the browser that it should be handled differently, and loads the SSP framework into it: <iframe allowattribution="true" src="https://ssp.com/serve/foo/bar"></iframe>. Such a frame becomes stateless and does not provide access to cookies, localStorage, or any other data storage, regardless of the origin of the resources. For the time being, let's assume this means the ad vendors are not able to establish or persist user identity. (I know that, strictly speaking, this is not quite true; it's just a different point of the discussion.)

Step 2. Instead of data storage, scripts in said frame can access window.transactionID, a random read-only token generated by the browser at the moment of the frame's creation, e.g. 3RAmJKyIdcdmfOZg9TpUTl9a9BP4gJHqwyneVdwCh4r8RiwSgnHSHznl3mKXcENDX.

Step 3. SSP, DSP, and other ad vendors register the tracking endpoints associated with the current ad and the type of conversion, e.g. window.trackingService.register('purchase', 'https://dsp.com/tracking?foo=bar&transaction={transactionID}');. There are no limitations on the data that can be passed in the trackers. {transactionID} is a macro that will be rendered by the browser later on.

Step 4. Ad vendors render the creative, the user clicks on it and gets redirected to advertiser.com/landing.page. The transaction ID from the original frame is now assigned to advertiser.com, but it is not accessible to the advertiser, i.e. there is no API method or any other way for any script or resource to obtain the transaction ID (or even the fact that one is assigned). (Alternatively, the transaction can be assigned to the browser tab or session.) It can optionally be restricted to only one associated transaction, meaning attribution happens to the last click, preventing attribution fraud. It can also have an expiration period.

Step 5. The user can freely navigate between different pages on advertiser.com and at some point performs an action that needs to be tracked. At this point a script on advertiser.com (regardless of its origin) makes a simple call to the API, window.trackingService.fire('purchase'), and the previously registered trackers are fired with the {transactionID} macro rendered to the transaction ID of the current domain (tab, session). Conversion names can be limited to a fixed short list; a few hundred options should be enough for any reasonable case. It can also be amended with a limited conversion value, e.g. a single byte. Such restrictions mean there is a limit to the amount of information that can be shared between advertiser and ad vendors by means of this attribution API, but not to the way attribution providers work and execute their tasks.
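
Pulled together, a hypothetical end-to-end sketch of the proposed surface (every name here, allowattribution, window.transactionID, and window.trackingService, is part of this suggestion, not the existing explainers):

// Step 1 (publisher.com markup): a stateless attribution frame.
// <iframe allowattribution="true" src="https://ssp.com/serve/foo/bar"></iframe>

// Steps 2-3 (scripts inside the frame): read the browser-generated token and
// register tracking endpoints for the conversion types of interest.
console.log(window.transactionID); // e.g. "3RAmJKyIdcdmfOZg9TpUTl9a..."
window.trackingService.register(
  'purchase',
  'https://dsp.com/tracking?foo=bar&transaction={transactionID}'
);

// Step 4: the user clicks the creative and lands on advertiser.com; the browser
// silently associates the transaction ID with that site (or tab/session).

// Step 5 (any script on advertiser.com): fire the conversion by name only.
// The browser expands {transactionID} and calls the registered endpoints itself.
window.trackingService.fire('purchase');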

Now, I am not saying that this is a bullet-proof solution against identity sharing, but I am saying that this solution does not introduce any new loopholes and the browsers can freely apply any restriction they consider necessary to prevent identity sharing, such as restricting cookies/localStorage access for any vendor at any step, preventing URL decoration, etc. There are workarounds to circumvent them, but they exist outside of the scope of the event attribution.

To keep this note short I'm skipping interesting and more complicated cases and scenarios, but I'd be happy to discuss them in depth.

Attribution windows in conversion reports

Hi @csharrison and @michaelkleber,

A few weeks ago on the W3C web-adv call, we discussed the possibility of adding random reporting delays. During that call, I asked if we could include the actual window a particular conversion came from (e.g. conversion-window=7d) in the conversion report.

Advertisers want to know the window to which their conversions were attributed. They also optimize their campaigns for conversions within a particular window. While a small random delay (on the order of an hour or two) relative to a large attribution window (on the order of days or weeks) may have a small impact on accuracy, a large random delay relative to a small attribution window (on the order of a day or two) would have a significantly detrimental impact on accuracy.

@csharrison suggested we follow up on GitHub to iron out any potential privacy concerns. I'm fairly confident that revealing just the conversion window would not leak personally identifiable information if we round it to a granularity we are comfortable with (days?).

Do you see any potential privacy concerns? How do we make progress here? Thanks

Limiting the delivery of reports to untrusted third-parties using Feature Policy seems to go against the agreement on the scope of the Feature Policy specification

I believe https://github.com/w3c/webappsec-feature-policy/issues/282#issuecomment-486267212 describes the latest agreement between Google and Mozilla on splitting Feature Policy into three separate pieces, where the existing allow attribute would be used for delegating permissions for powerful features.

https://github.com/csharrison/conversion-measurement-api#third-party-reporting refers to using feature policy's delegation of permissions to address restricting delivery of conversion reports to untrusted third-parties, without going into too much detail on how that would be done. But at any rate, since this isn't a powerful feature, this seems to be incompatible with this latest agreement, and as such I believe feature policy isn't a good candidate for addressing this use case.

CCing @annevk from the Mozilla side for visibility.

Modify the reporting feature policy to enable/disable the entire API

Currently, this explainer proposes using a feature policy to enable/disable reporting for specific third parties in cross origin contexts (https://github.com/csharrison/conversion-measurement-api#permission-delegation).

To simplify this mechanism, we could instead gate the entire conversion measurement API behind a feature policy, with reporting to third parties explicitly allowed. The feature could be allowed in the top level context by default, but not in cross-origin children. Without the feature allowed, a cross origin child would not be able to declare an impression anchor tag.

This is more in line with the purpose of feature policy and better classifies as a powerful feature per w3c/webappsec-permissions-policy#252, a concern raised on #1.

A top-level browsing context and a cooperating iframe could recreate the API functionality by postMessaging the impression data, reporting domain, and ad destination to the top-level context, which can then wrap the iframe in an anchor tag, with some additional complexity around handling clicks on the iframe. Exposing this via feature policy simplifies this process.

If the top level page only partially trusts an iframe, they can still utilize this postMessage configuration to sanitize inputs or exercise other arbitrary controls over the impression declaration.
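
A rough sketch of that postMessage fallback (illustrative only; the message shape is invented here, and the attribute names follow the old conversion-measurement-api explainer):

// In the ad iframe (ad-tech.example): hand the impression parameters to the embedder.
window.parent.postMessage({
  type: 'declare-impression',
  impressionData: '12345678',
  reportingDomain: 'https://adtech.example',
  conversionDestination: 'https://advertiser.example'
}, 'https://publisher.example');

// In the top-level page (publisher.example): validate the inputs, then wrap the
// ad frame in an impression-declaring anchor so that a click registers the impression.
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://adtech.example') return;
  const { type, impressionData, reportingDomain, conversionDestination } = event.data;
  if (type !== 'declare-impression') return;
  // (Sanitize / policy-check the values here if the iframe is only partially trusted.)
  const anchor = document.createElement('a');
  anchor.href = conversionDestination;
  anchor.setAttribute('impressiondata', impressionData);
  anchor.setAttribute('reportingdomain', reportingDomain);
  anchor.setAttribute('conversiondestination', conversionDestination);
  const frame = document.getElementById('ad-frame');
  frame.replaceWith(anchor);
  anchor.appendChild(frame); // note: re-inserting an iframe reloads it, which is part of
  // the "additional complexity" around click handling mentioned above
});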

This also gives the publisher site the ability to disable the API for the entire page if they do not want to entrust 3P scripts in the top level context to use this API.

Consider simplifying reporting mechanism

Currently, the API as proposed has some complexities around reporting. That is, third party reporting origins are declared by impression / conversion tags.

This could probably be simplified. Here is one alternative: just send reports directly to the advertiser or publisher (via a choice at the impression side). The target site can optionally fan-out to third party reporting endpoints via Reporting API / server-to-server / redirects / etc. This could be done for example via an <a> attribute: <a report-to={source,dest}>.

This has a few nice benefits:

  1. It reduces the need for using Feature Policy to control reporting configurations (see issue #1).
  2. It simplifies the API substantially, conceptually reducing the number of pertinent parties involved from 3 to 2.
  3. It aligns better with the Reporting API, which cannot be configured via JS.

However, it has a major downside: it makes deployment harder, since it can't be done by the ad-tech on behalf of the advertiser / publisher. To integrate with their ad-tech, advertisers / publishers will need to make intentional server-side changes. We should weigh this trade-off against the benefits.

A few questions about attribution

I have a few questions about attribution mechanisms resulting from this proposal:

  • Today advertisers often rely on several marketing channels, often operated by different ad-tech companies. One scenario is a customer acquisition campaign (metric: cost of acquisition of new buyers) running alongside a re-engagement campaign (metric: cost per targeted buyer). In this case a conversion can be attributed to clicks from several different ad-tech companies, so in the framework of this proposal there would be several clicks with a score of 100 for a conversion, each belonging to a different reporting domain. Could you clarify whether the constraint that attribution scores sum to 100 applies to each reporting domain separately or across all reporting domains?

  • In the proposal it seems attribution is completely owned by the browser. Could you clarify how you handle cases where advertisers have their own special attribution mechanism (which happens quite often), and more importantly what happens when several browsers implement different attribution schemes? This will almost surely break the ML models of ad-tech companies.

  • About multi-touch attribution and information leaks: the proposal says that models other than last-click potentially leak more cross-site information, but I fail to understand this claim, since even with last-click attribution negative reports are still sent (reports from impressions with an attribution score of 0).

Impossibility to model conversion flow

The proposal mentions the possibility of modeling the whole conversion flow (checkout, purchase, etc.). I would like to point out that, with the currently proposed level of noise (5%), if we use the 3 bits of metadata to encode steps in this flow it actually becomes impossible to learn models for each step, since the per-step noise level (5%/8, about 0.6%) will be higher than the typical conversion rate observed on advertiser sites. Actually, it seems any level of noise would render this task very difficult or impossible. The only option with noise is that conversions would be registered only at the final step of the flow, and the 3 bits of metadata would only encode the conversion value (for example, a bucket of the sales amount), which would severely limit the utility of this proposal.

Respect example domains

There are a lot of domains in the explainer used to help understanding, but these could be real domains even if they have no owner right now.

  • toaster.com
  • ad-tech.com

etc.

Example domains exist for this purpose: .example is free and recommended for use in documentation such as explainers. It seems better to respect that convention.

(same as WICG/trust-token-api#24)

Register only viewed impression

There are currently third-party vendors who use third-party cookies to check that impressions have actually been viewed before doing post-view attribution. With the removal of third-party cookies, and as we clean up the ecosystem, we should make sure that only ads that have been seen can register impressions.

I suggest adding to the spec: "In order to prevent ads that have not been seen by the user from registering impressions, the Conversion Measurement API should check whether the ad has been seen by the user before registering the impression."

Consider renaming "last-clicked" to "credit"

The explainer currently annotates all conversion reports with a "last-clicked" query param. Consider changing this to "credit" to make the API agnostic to the attribution model implemented.

A last-click model could operate within this API by setting credit=1 for the last-clicked impression and 0 for the rest.
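
For illustration, following the report URL shape used elsewhere in the explainer, a conversion preceded by two clicked impressions might then produce (values made up):

https://adtech.example/.well-known/register-conversion?impression-data=1111&conversion-data=5&credit=1
https://adtech.example/.well-known/register-conversion?impression-data=2222&conversion-data=5&credit=0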

Poisoning Inputs In Order to Cripple or Discard a Competitor's Reports

#40 describes an attack where "Cheater.pub wants to make it look like they are driving more conversions than they actually are." Most of the conversation in that issue is on how to ensure the integrity of the report with what I interpret as the intent to discard reports if they contain poisoned data.

You can also imagine an attack where EvilAdNetwork wants to cripple or destroy its competitor BenignAdNetwork's aggregate data reports. That can be done very similarly to what Cheater.pub is doing in #40, for instance by reporting large sets of low-value conversions, but EvilAdNetwork could also deliberately inject bad data that invalidates the whole report and denies BenignAdNetwork its measurement altogether. A denial-of-service attack, if you will.

(Charlie Harrison and I recently discussed this issue in the W3C Privacy CG Slack channel.)

ROAS Optimization

We understand that restricting conversion metadata to 3 bits is purposeful; however, this doesn't allow the specific conversion value to be specified for the conversion. This effectively prevents any ROAS optimization, either by ad tech or by advertisers. This would be severely limiting from the Microsoft Ads perspective, as ROAS optimization (e.g. Target ROAS auto-bidding) is one of the most popular optimization tools used by advertisers.

It would be good to understand more clearly what the associated privacy risks are. We would also like to understand what Google Ads' position on this is, since it would impact Google Ads advertisers similarly. For the sake of completeness: although the aggregate API can allow conversion values in aggregate, it does not work for ROAS optimization scenarios such as auto-bidding.

Scheduled Reporting Window for Spend

Thank you for the API proposal. I found it helpful to read, especially with the Sample Usage.

I am curious whether there have been updates on the scheduled reporting window with regard to spend. For campaign pacing reasons, I can imagine it would be helpful if the delay could be shortened.

Also, please correct me if I'm wrong, but the spend metric in the conversion-measurement-api context would be stored under impressiondata, as it would be metadata from a given impression. Is that correct?

Poisoning Inputs to Aggregate Service

Thanks for the detailed explainer, Charlie.

I think there may be a gap in the methods you describe to combat false inputs ruining the aggregate calculation. The Authenticated Inputs section nicely describes how you can use a trust token to authenticate, in a similar manner to previous discussions about event-level report APIs. There is an additional problem when the browser is creating secret shares instead of a plaintext report: the value the browser sneaks into the secret shares can't be easily verified, might be implausible, and can have an outsized impact on the final report.

Example Attack

Advertiser.shop is using the aggregate measurement API with many publishers to track conversions attributed to certain ad campaigns. Cheater.pub wants to make it look like they are driving more conversions than they actually are. They use a few genuine accounts to look at ads on cheater.pub and then make small purchases on advertiser.shop; since these transactions are all real and have real identities attached to them, they will pass the token procedure and submit reports with genuine tokens. However, cheater.pub slightly modifies the browsers they use for this exercise so that they record a wildly inflated value in the RawAggregateReport, near the maximum for the domain. Using the 16-bit domain example, they might be able to turn a $5 real purchase into a $655 purchase report. Even if the total revenue numbers don't add up, it will be very difficult to discover which reports led to the discrepancy, let alone which transactions on the website were used to get the tokens or which publisher got the bad credit.
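
To make the attack concrete, here is a toy illustration (numbers and helper model purely illustrative) of additive secret sharing over the 16-bit example domain, with one modified browser inflating its contribution:

// Toy additive secret sharing over the 16-bit domain: value = (share1 + share2) mod 2^16.
const MOD = 1 << 16;

function makeShares(value) {
  const share1 = Math.floor(Math.random() * MOD);
  const share2 = ((value - share1) % MOD + MOD) % MOD;
  return [share1, share2]; // one share goes to each helper server
}

// Honest browser: a real $5 purchase.
const honest = makeShares(5);

// Modified browser: also backed by a real purchase (so its trust token is valid),
// but it secretly encodes a wildly inflated value.
const poisoned = makeShares(655);

// Each helper sees only one share per report, and every share looks like a uniformly
// random 16-bit number, so the inflated report is indistinguishable from the honest one.
// The reconstructed aggregate, however, is 660 instead of 10.
const total = (honest[0] + poisoned[0] + honest[1] + poisoned[1]) % MOD;
console.log(total); // 660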

Mitigation

You mention that this work is inspired by PRIO, which tries to mitigate this problem with its novel SNIP construction for efficient zero-knowledge proofs that the sum of the secret shares falls within a range. I don't believe that exact approach is feasible here, because it relies on some minimal interactivity, and the EncryptedAggregateReport is held in escrow for a while by the reporting origin before being delivered to the helper servers. I believe the limited domain size (16 bits in the example) is intended to address this issue, but it is a bit tricky.

If you want the aggregate report to be able to capture sums larger than the range available to individual RawAggregateReports, you will need a way to translate those secret shares into a larger domain. I'm not aware of a general way to do this (but would love to hear if there is one!), but I can think of some specific tricks for specific cases that don't have great behavior. Example: treat the values in the EncryptedAggregateReport as "signed ints" and translate them into your new domain. This "works", but it inadvertently allows negative RawAggregateReport values and allows for overstatement larger than the domain if there are more than 2 helpers!

Questions

Do you have additional thoughts or information about preventing this type of report poisoning?

Do you plan to translate the values into a larger domain for reporting?

Do you envision helper servers enforcing domain bit length, or does the encryption method enable the reporting origin to enforce length based on the EncryptedAggregateReport payload?

Clarifying attribution logic when multiple publishers are involved

I have read the proposal a few times, and I am not exactly sure how things work in a situation where there are clicks from multiple publishers prior to a conversion.

So let's take an example. I see an ad on instagram.com for a back-rest. I'm interested. I click on it. I browse the website for a bit, then decide to do some more research. I go to Google and search for similar products. I click on a few search results (that happen to be sponsored). Eventually I decide to buy the original one. I do a Google search for the original product again, as a means of navigating back to it. I click the (sponsored) link back to the website. I buy the product. The website fires a conversion event.

So in this example, there are 2 clicks from publisher websites that both direct to the same destination website. How does this work? My read of the proposal is: "Both instagram.com and google.com will receive an attributed conversion report. The Google one will have an 'attribution credit' of 100. The Instagram one will have an 'attribution credit' of 0." Is this correct?

Tracking pixel + redirect mechanism is ugly

Tracking pixels have always been ugly hacks, and this feature's got further ugliness in checking for a redirection to a well-known URL and then parsing a query parameter to extract the conversion data. Further, it entangles conversion reporting with having a visual user agent that fetches images.

I don't understand this text:

This redirect is useful, because this mechanism enables the reporting origin to make server-side decisions about when attribution reports should trigger.

Why should that be useful? Isn't the conversion reporting triggered by conversions - in which case, how can the reporting origin make any kind of decision before the conversion report itself? I note that 'attribution report' is a hapax legomenon in the explainer, so maybe something got lost during revision.

I also don't understand why the reporting origin is the one that's setting conversion data via the redirect-parsing method; isn't the only reasonable thing to pass through what the advertiser tells it, since only the advertiser knows what the conversion actually consisted of? In which case, doesn't it make more sense for the advertiser to provide the conversion data directly to the UA?

Since this feature already requires UA complaisance, I'm not sure why an all-around more direct route can't be pursued here. #22 is asking for a JS API, which might be part of the answer, but I wonder about a more declarative solution, especially since the tracking pixel mechanism doesn't require any JS either.

As a strawman, here's a design that I think is much clearer in intent: when a conversion occurs, the advertiser's website can include a meta element with conversion-data and reporting-origin attributes, e.g., <meta name=conversion-report conversion-data=5 reporting-origin=adtech.example>. When the UA sees such a meta element (which could be script-added!), it will then schedule a conversion report to the reporting origin with the conversion data.
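
For instance, the advertiser's conversion page could add that element from script (same strawman attribute names; not part of any current proposal):

// On the advertiser's page, once the purchase completes:
const meta = document.createElement('meta');
meta.name = 'conversion-report';
meta.setAttribute('conversion-data', '5');
meta.setAttribute('reporting-origin', 'adtech.example');
document.head.appendChild(meta);
// The UA would then schedule a (delayed, possibly noised) report to adtech.example.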

If that strawman wouldn't suffice, the reasons why should probably go into the explainer to keep people like me from being confused and to show that the tracking pixel + redirect mechanism really is load-bearing.

Question for Multi-touch scenarios

Hi team, I have a few questions regarding these multi-touch scenarios:

  1. "The default attribution model will be last-click attribution, giving the last-clicked impression for a given conversion event all of the credit." can we confirm this last-click attribution is finalized by default? If we choose this approach, does that means the last impression take attribution credit 100 and rest of them are 0?

  2. "If multiple impressions were clicked and led to a single conversion, send conversion reports for all of them."
    According to the delay window, all the events in the same report will be sent at the same time (same window). We're wondering how long this time range is: 5 minutes, 10 minutes, or does it depend on how many events we have? We ask because we need to set up timers downstream.

  3. "To provide additional utility, the browser can choose to provide additional annotations to each of these reports, attributing credits for the conversion to them individually. " Do we have more clarification on how to do the annotations?

Thanks for help!

Unnecessary API?

I may be missing something, but I am not convinced that this API is necessary? Let’s assume that third-party cookies are gone, fingerprinting is blocked, redirects are not permitted, and the ad-tech industry has given up on trying to track users across domains. Even then, the ad-tech industry could implement better conversion measurement using the remaining, currently existing APIs than will be provided by this new API (or the Apple alternative). We have the following ad-tech players:
• The publisher (P)
• The advertiser (A)
• The user (U)
• The ad platform/supply side platform (SSP)
• The demand side platform (DSP)
P uses SSP to place ads on their site (nothing changes from today, except that SSP doesn’t have a cross-site identifier for U, so ad personalization is based solely on U’s interactions with P, which P knows because of their first party cookie, UP, containing P’s ID for U). SSP holds an auction for the ad. DSP wins bid for A’s ad, X (summer promotion for widget W). SSP serves X to P (recording ad instance AI). DSP and SSP record AI, X, A, P, DSP/SSP (and perhaps details about interactions derived from UP that influenced the bid) to backend databases.

X is viewed on P by U, then clicked by U. Link takes U directly to a landing page L on A. A generates a first-party ID, UA, for U, if U does not already have one, storing UA in a first-party cookie. The ID for X is embedded in L (as a query parameter or in the path or a sub-domain). A has the browser fire a pixel to DSP and SSP that notifies them of the click and includes UA, X, and the eTLD+1 of P (obtained from the stripped-down referrer). Note this does require changes to A's site compared to their implementation today, changes which are not required with your proposal. Smaller advertisers may not be able to implement this, but in most cases their DSP should be able to provide the necessary code.

There is also nothing stopping L from containing AI, which could then be included in the pixels sent to the DSP and SSP. This would allow them to link UA and UP, but we'll assume A doesn't provide AI to them, because these parties are not attempting cross-site tracking. (However, in practice I don't see any way to prevent AI or X from being embedded in L, where L leads the user to the correct landing page on A. Safari attempts to make this harder by reducing the lifetime of UA if it is created client-side after arriving via a link with query parameters.)

When a conversion occurs on A, A can directly pass the conversion to the DSP and/or SSP, including the type of conversion, the value of the conversion and UA (all values with zero fuzziness or randomness). If A does this via the browser using a pixel, then A may encode the value of the conversion in such a way as to hide it from the user, but that is completely up to A and DSP/SSP. If A does a server-to-server call, then that conversion report is completely out of the control of the browser. The DSP and SSP can then associate this conversion with all ads clicked on by UA, including X. They can then compute attribution based on last-click, first-click, all recent clicks, or more advanced techniques. DSP/SSP can use the full ad history for this user, across multiple conversions, rather than having an arbitrary limit on the number of conversions that an ad click can contribute to.

Note that if the user has opted out of first-party cookies on P, then UP isn't created and the user gets less-personalized ads (only those related to the site and current page), but everything else works. If the user has opted out of cookies on A, then they may find it difficult to place an order, as a cookie might be needed for their cart. However, GDPR and similar laws might still allow A to use cookies that don't uniquely identify the user (block storage of UA, but not other cookies), such that A could still do conversion reporting. To handle this situation, A could store in a first-party cookie the list of ads the user has clicked on to arrive at A, including X. When a conversion occurs, A would report this conversion to the DSP/SSP with the list of ads that contributed to the conversion, but this would now be anonymous.

SSPs will dislike the above solution, because their revenue is based partially on clicks and if redirects are disallowed, then they have to rely on A to report clicks and it will be cheaper for A if they don’t. So as long as redirects are supported, I would expect that SSPs will use them. In this use case, though, the SSP only knows that X was clicked by UP, but still needs the pixel callback from A to associate X with UA for attribution purposes.

As long as the above is feasible, I don't see the ad tech industry adopting your more restrictive proposal, and the current proposal from Apple offers so little value to ad-tech that I don't think they would ever adopt it. I'm not trying to be pessimistic and I would really like to come up with better solutions than we have today. However, I am worried that the current browser privacy efforts are going to drive many in ad-tech into even more privacy compromising alternatives. Obviously fingerprinting is an example that you're also working to eliminate and I fully support that effort.

What I am seeing though is that publishers are moving toward requiring a "free" login in order to access content/services. When logging in with Facebook, Google, or an email address, they then take a hash of the email address (or of an ID from a DB of all accounts known to be associated with the individual) and store it in a first party cookie. The corresponding first-party cookie ID on all sites visited by the user will have the same value, so tracking continues as it does today with third-party cookies. In some ways, it is even worse, because now the ID will be consistent across all of the user's devices and browsers (including private browsing if they have to login). The one benefit is that at least the user has to consent to provide the cross-site ID, and may not be willing to provide it on sites they rarely visit or don't trust.

As a side-note, I am working on a new proposal for privacy preserving ads (including personalization and conversion reporting). I hope to share it soon.

Move reporting endpoint to be origin based

Having this endpoint be domain-based provides very little concrete benefit, since we enforce that the conversion registration needs to land precisely on the URL https://<reportingdomain>/.well-known/register-conversion?... to stick (i.e. it can't be a subdomain).

Therefore to provide more flexibility, I propose making the following changes:

  • Make the reporting endpoint an origin
  • Enforce that the conversion registration is the result of a same-domain redirect
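
For example (illustrative): if the endpoint were declared as the origin https://reports.adtech.example, the report would be sent to https://reports.adtech.example/.well-known/register-conversion?..., and the registration request would only count if it was reached via a redirect from within the same domain (adtech.example).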

@johnivdel what do you think?

Conversions coming from a different domain than landing page

Hi @csharrison and @michaelkleber !

Context

In past W3C web-adv calls, we've discussed the case where conversions come from a different domain than the landing page's domain. This could happen for a number of reasons: localized websites (e.g. shop.com taking you to shop.co.uk), commerce platforms (e.g. my-store.com taking you to my-store.shopify.com), or others (e.g. gap.com taking you to gapfactory.com).

My understanding is that the current draft specification would lose the conversion. From the calls, it seems that people believe this is a legitimate use case, so I'm eager to work with you all to come up with a privacy-safe way of supporting this use case.

I'm not attached to any particular solution if we can solve these problems, but to kick off the discussion, here are a few thoughts:

Delegation of trust

This could be either implicit or explicit. When my-store.com (the "landing page domain") takes you to my-store.shopify.com (the "destination domain"), it could be annotated (in the anchor tag or redirect) that it's delegating the reporting permissions to the destination. When conversions finally happen on the destination domain, they are reported to the publisher as if they came from the landing page domain.

This introduces more complexity because the browser has to track the pathway to the conversion. I don't think this requires significant amounts of new data but I appreciate that it has its costs.

Do people see privacy concerns with this?

Support of multiple domains in the original ad annotation

Instead of listing just one domain, perhaps the publisher can list multiple domains. There would have to be some limit to the number of domains, with perhaps an exception for country-level TLD variations. This could also be used in conjunction with the above proposal to add more protections, so that both the publisher side and the advertiser side need to certify the delegation.

Quick-redirect through the conversion domain

This was brought up by @csharrison during a call. It has a few downsides:

  1. It presumes that the publisher knows where the conversion will happen. This is the case for commerce platforms but not the case for multi-national domains.
  2. It adds another redirect overhead to the initial page load, which is a bad user experience
  3. I'm not sure that it's compatible with PCM's privacy model. My understanding is that the ad must land on the adDestination domain, not merely traverse through it.

Next steps

How do we move this discussion forward? Are there other ideas for addressing this major use case without compromising privacy? I'm open to feedback on how we could build upon or improve any of the ideas above, or any other ones that can solve this problem.

Thanks!

Support choosing Aggregate / Event-level / both at conversion time

For the event-level API, we may need to implement smaller numbers of conversions per click than with the aggregate API. Also, since we impose noise on the metadata it might not be suitable for as many use-cases as the aggregate API. As such we should have some API surface for advertisers to choose which (or both) APIs they want to use.

The simplest strawman proposal is to make the presence of the conversion-metadata field indicate that the conversion should be considered for event-level reporting, along with agg-conversion-metadata for aggregate reporting. Possibly, conversion-metadata should be renamed to capture this new behavior.
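
Under that strawman, the conversion registration might look like (illustrative URLs; attribute names as proposed above):

Event-level only: https://adtech.example/.well-known/register-conversion?conversion-metadata=5
Aggregate only: https://adtech.example/.well-known/register-conversion?agg-conversion-metadata=<aggregate payload>
Both: https://adtech.example/.well-known/register-conversion?conversion-metadata=5&agg-conversion-metadata=<aggregate payload>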

@johnivdel cc for thoughts.

Hex-encoded values for metadata adds unnecessary complexity

We originally chose this to make the IDs shorter when appearing in markup, but I don't think that's a good enough reason to add the complexity on callers to hex-encode their IDs. We should just change this to be a normal unsigned long.

Converting multiple times from the same impression

There are a couple potential ways to solve this issue:

  1. Allow any given impression to convert some small number of times - this reveals more information from the advertiser.

  2. Allow multiple conversions within the same reporting epoch via a method of local aggregation.

  3. Give every conversion a priority (similar to https://wicg.github.io/ad-click-attribution/index.html#terminology), so high-priority conversions will supersede low-priority ones. This is similar to (2) if your aggregation function is something like MAX.

Session-related metrics

Hi guys!
Considering that performance-based ads are here to prove that a campaign has a real impact on the activity of an advertiser, could it be possible to include in this API the measurement of session-related metrics?
Knowing that a user who clicked on an ad has spent a lot of time on your website and/or visited a lot of pages is considered a good proxy for measuring the performance of that ad.
It is obviously a weaker signal than a conversion, but it is still a good indication for advertisers.
Those metrics are often provided by tools like Google Analytics or directly by ad-tech companies themselves, using tricks like URL decoration (for example UTM parameters) and third-party scripts embedded on advertisers' pages.

So what about sending a report which would give, at minimum, the time spent on a website? This value doesn't have to be precise. We can imagine having a restricted set of values:

  • 0 => the user spent very few seconds on the website and/or visited only one page (what is called a bounce today)
  • 30 => the user spent more than 30 seconds
  • 60 => the user spent more than 1 minute
  • 120 => the user spent more than 2 minutes
  • 300 => the user spent more than 5 minutes

This reporting would be subject to the same rules as the conversion report (e.g. only sent in reporting windows every 2 days or so, potentially affected by some noise?) and should be sent even if no conversion happened during the session itself.

To sum up, something like:

  • A user clicks on an ad displayed on fancy-publisher.com
  • He lands on the homepage of fancy-advertiser.com; the browser records the timestamp of the landing
  • Approx. 3 minutes later, the user leaves fancy-advertiser.com to visit another-publisher.com
  • The browser detects that the domain of the website has changed, computes the time spent on the freshly left domain, and schedules a report.
  • 2 days later, a report is sent to the reporting origin: https://ad-tech.com/.well-known/user-session?impression-data=12345678&duration=120

Publisher opt-in through an HTML attribute

https://github.com/csharrison/conversion-measurement-api#permission-delegation mentions as a goal designing a publisher opt-in mechanism for the destination of the conversion reports; however, it uses an HTML attribute for that.

Besides the issues around reusing the allow attribute (#1), what mechanism do we have to ensure that the iframe's allow attribute isn't controlled by ad-tech.com's third-party scripts running inside the top-level page? It seems to me that an effective publisher opt-in mechanism needs to enforce that ad-tech.com doesn't control its own access to conversion reports.

Scheduled Reports vs. Real-time Reporting

Can you confirm and/or clarify when both "publisher" and "advertiser" would receive the event and/or aggregate data? Are there mechanisms to provide data to their reporting systems in real-time or near-to-real-time?

Both buyside and sellside platforms use real-time data (which includes impressions, clicks, and conversion data) for many reasons:

  1. QA: verifying that conversion tags are firing correctly, that the proper data or parameters are being passed through correctly, and that registered counts are appearing in the appropriate platform.
  2. Optimization: buyers look at reporting multiple times throughout the day to optimize campaigns and manually adjust settings, bids, targeting, and so on. If things aren't performing well, we need to make decisions.
  3. Pacing reports: many companies need to monitor the pacing of media campaigns to see if a campaign is delivering at its intended pace. If things are either under-delivering or over-delivering, decisions will be made about the bid or the targeting parameters. Buyers monitor this daily, weekly, and monthly, and sometimes even hourly. I've had several campaigns where publishers said the campaign was live and running, but when we pulled reports we sometimes saw little or no data, which means that either something is wrong technically/logistically, or the publisher is not telling the truth.
  4. Budget management / reconciliation: related to pacing reports, there are financial implications. If there are delays in getting the event and conversion data, and the browser can't send the data at its intended "scheduled" time, then the publisher loses out on revenue. Buyers make budget shifts constantly and need to know at any given time how much has been spent. Contracted agreements and payment terms require that all delivery for a given month be reported as accurately as possible. Publishers typically send out invoices on the 1st of the month for the previous month's activity, and buyers are required to pay within 30-60 days. If delivery information is not fully reported and data rolls in afterwards, how do we reconcile the spend across all parties?

These are four use cases for real-time reporting. There are more, but these are the priority issues if there is no real-time data.

FYI, I brought this up at the end of the 4/9 call.

Handling web conversions that started in-app

As shown in the data we recently shared with the web-adv group, 69% of conversions happen on the same device and same user agent where the impression happened; this includes cases where the ad impression was shown in a native app but the conversion happened in our in-app web view.

With the current measurement proposal described here, these conversions would not be measured, since the ad click didn't occur on an HTML element rendered in the browser. From our data, and from a recent study commissioned by Apple estimating that $45B of $519B is enabled by in-app advertising, we see that this is not an uncommon scenario.

We would like to open up a conversation on the topic of addressing this significant share of conversions, specifically focusing on journeys where the ad was served in an app with a subsequent conversion on the web.

We believe that the majority of these conversions could be measurable if it were possible to register an ad click that occurred within a native application, much like a click on an <a> tag in current proposals will register that click within the measurement and reporting framework as described in this repository. If this were possible, subsequent conversions could be tracked in either in-app web views, or web browsers on the same device, giving a fuller picture to publishers and advertisers.

Can you help us find a solution for mobile app publishers, who want to find a privacy-preserving way of measuring conversions? Specifically where a conversion occurs on a website that was a result of traffic sent from an ad click within a native application?

Conversion registration should be gated on a Feature Policy

Currently only impression declaration is gated on a feature policy being enabled. This allows sites to prevent third-parties from using the API if they desire.

We should apply the same restriction on the conversion side; that way, the conversion site also has control over preventing third parties from using the API.

Attribute naming and terms

A few terms and attribute names in the API and explainer may (or may not) create confusion.

  1. "Impression" (term) refers to both the tag and the event.

  2. impression... (attribute) may refer to both click and view in this spec. But in adtech lingo, "impression" usually means "view" as opposed to "click" (see the definition from the IAB: "Each time an advertisement loads onto a user’s screen, the ad server may count that loading as one impression" and this one). This means that impression... does not reflect the duality of "click or view" and that the API may be misunderstood as a view-through measurement API only.

  3. Both the terms "data" and "metadata" are used; similarly, we have impression-data and conversion-metadata attributes. This may be hard to remember and confusing for API users; it's not necessarily clear why one qualifies as data while the other as metadata. Note: I think the API already reflects this, and this is just about updating the explainer.

=> Ideas to solve these / proposals:

  1. In the explainer, clarify with "... tag" when referring to the tag - and stick to "..." when referring to the event.

  2. Rename impression (@johnivdel suggests: "naming needs to be agnostic to click, since some API variants may support viewed impressions").
    Ideas:

  • impression -> event, conversion -> conversion // may be confusing to differentiate between eventdata and conversiondata.
  • impression -> impact, conversion -> conversion
  • (by @johnivdel ) measurement-data
  • (by @johnivdel ) click-measurement-data / view-measurement-data
  • (by @johnivdel ) link-measurement-data
  • click-data / view-data
  • click-event-data / view-event-data
  • click-view-event-data
  • click-view-data
  • impression -> trigger, conversion -> conversion. trigger-data, conversion-data.
  • impression -> trigger-event, conversion -> conversion. trigger-event-data, conversion-data.
  3. Unify: use data everywhere, both as terms and in the attributes.

Scheduled report windows

Hi @csharrison and @michaelkleber !

A few weeks ago, we discussed the default attribution windows of 2 days, 7 days, and a configurable window of up to 30 days. You mentioned that you only wanted the last window to be configurable and wanted to keep the first two fixed.

I mentioned that 1 day and 7 days would be much more useful, because 1-day attribution windows are much more common in reporting products than 2-day windows.

@csharrison mentioned that the 2-day window was arbitrary and would be open to revising it, and suggested we discuss here, so here I am! I'd love to understand whether you have any concerns with replacing the default 2-day window with a 1-day window and how to make progress on this.

Thanks!

Account for the number of entropy bits exposed in reportingdomain

Let's say on the impression side we have a link like:

<a href="..." reportingdomain="https://uniqueid.adtech.example/" ...>...</a>

https://github.com/csharrison/conversion-measurement-api#conversion-reports mentions the conversion report will be sent to:

https://reportingdomain/.well-known/register-conversion?impression-data=&conversion-data=&last-clicked=

The number of unique bits sent along with this request seems to be more than what's mentioned there, since uniqueid may be a string that is unique e.g. per user (like a hash of their email address) or per device (like a fingerprint).

This could be solved for example by submitting the conversion report to:

https://reportingrootdomain/.well-known/register-conversion?impression-data=&conversion-data=&last-clicked=

Where reportingrootdomain is the eTLD+1 of reportingdomain.
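
For example, with reportingdomain=https://uniqueid.adtech.example, the eTLD+1 is adtech.example, so the report would go to https://adtech.example/.well-known/register-conversion?... and the per-user uniqueid subdomain would never reach the reporting endpoint.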

TPAC Meeting: Tues Oct 13

As part of W3C's annual TPAC meeting, WICG incubations like this one have an opportunity for a meeting that would be face-to-face in a normal year. The meeting for conversion measurement will happen tomorrow, October 13, 16:00 UTC (noon US East Coast), per the TPAC schedule here.

Feel free to add agenda items here, but some initial ideas to get conversation started:

  • Plug our new blog post
  • Discuss the new Origin Trial for event-level conversions and questions related to it
  • Privacy measures we could add to the API to satisfy every browser vendor's privacy goals (inc. discussion of an aggregation service)
  • Discussion of 3p measurers
  • Resolving open issues on the issue tracker
  • E.g. #40 or #69 about authentication

The meeting will be hosted on Google meet: meet.google.com/oju-aurm-vqa
@yoavweiss FYI

Preventing conversion fraud: trust token integration w/ event-level API

Today on the w3c web advertising call Ben Savage from Facebook mentioned an interesting case where fraud might occur in the event-level API:

  1. User clicks on an ad on publisher.com, and publisher.com scrapes the impression ID from the tag
  2. User does not convert
  3. Some time later publisher.com sends a fake conversion report

Publishers are incentivized to show that they are converting more users than they actually are, so this case seems plausible.

The suggestion would be to augment the API to, at conversion time, have the reporting domain also issue the browser a token attesting that this conversion was legitimate. This token would be included in the subsequent conversion report.

Privacy implications

Since the browser can just drop conversions that have invalid tokens, the presence of a token does not reveal any extra information about the conversion metadata.

However, there are implications to how much data the token can sign over. In particular, we can't sign over the conversion metadata because it makes it clear when the browser sends a noise value.

@dvorak42, @michaelkleber FYI.

Interaction with URL decoration

Simplifying a bit, under the current setup (with third-party cookies already blocked), post-click attribution would work this way:

  1. The user (uvw) clicks on an ad by advertiser.com on the publisher site www.publisher.com
  2. He is redirected to the URL www.retailer.com/product_id?click_id=abc&advertiser=advertiser.com
  3. First party JavaScript (retailer.com and/or advertiser.com) has access to the URL decoration and can log the click (abc) with the local web id (lwib = xyz) stored in the first-party cookie of retailer.com.
  4. The user (xyz) converts on retailer.com (conversion_id = 123)
  5. retailer.com and/or advertiser.com can attribute 123 to abc (and thus to advertiser.com).

If I understand correctly, this process would still be valid under this proposal (and also TurtleDove and other privacy sandbox related proposals).

Thus, what is the incentive for the advertiser and publisher to use the "event-level API"?

Support conversion deduplication in Event-Level API

Currently, the aggregate API explainer proposes a new attribute, local-dedup-key, for conversion registration that allows the reporting origin to deduplicate conversions client side.

In the Event-level API it is already possible for the reporting origin to do server side de-duping by looking at conversion-metadata, assuming that those bits are used in a way that de-duping is possible.

It may be possible to support this attribute in the event-level API, as it should not introduce any new information leakage on the conversion side. This would allow for a unified way of de-duping conversions client side, as well as allow the conversion metadata to be used for things other than de-duping.
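
For illustration (hypothetical URL; local-dedup-key borrowed from the aggregate explainer), a conversion registration in the event-level API might then look like:

https://adtech.example/.well-known/register-conversion?conversion-metadata=5&local-dedup-key=8675309

Repeated registrations carrying the same local-dedup-key for the same impression would be collapsed client-side, leaving conversion-metadata free for other uses.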

Conversion Filters Proposal

Hi @csharrison and @michaelkleber!

This is an idea that we brought up on the web-adv W3C call. @michaelkleber indicated it seemed consistent with Chrome's privacy model and suggested bringing it to an issue on this repo.

Conversion Filters Proposal

Many ads shown on publisher websites direct people to large e-Commerce websites that sell a wide variety of unrelated products. Think of Amazon, Walmart, Target, Wish, etc.

Both Webkit’s “Private Click Measurement” proposal, as well as Chrome’s “Conversion Measurement” proposal currently suggest that all conversion events on a given domain could match up with any ad click that directed a browser to that domain. This poses a significant problem for such commerce websites. It would actually be desirable to collect less information here.

Use-Case 1: Collaborative Ads

Many producers do not sell directly to consumers via their own website. Instead, they sell their goods in stores and through large e-Commerce websites. As an example, let’s imagine a producer ShaveCo that manufactures shaving supplies. They sell these shaving supplies on e-Commerce platform MegaStore’s domain megastore.com. If they want to run ads promoting their shaving supplies, the destination of the ads would be megastore.com.

In their current form, the private click measurement APIs would not tell ShaveCo how many people purchased shaving products after clicking on their ads; they would tell ShaveCo how many people purchased anything at all on megastore.com after seeing their ads. Since people commonly buy a wide variety of products on these large e-Commerce websites, it’s very likely that these APIs would count totally unrelated transactions for products made by other producers, in totally different categories!

This information would be interesting to MegaStore (the other participant in this collaborative ads campaign) but not useful to ShaveCo. ShaveCo is only interested in conversion events that they consider relevant to a subset of the products available on megastore.com - their shaving supplies.

Use-Case 2: Dynamic Product Ads

The large e-Commerce website also wants to run their own ads promoting their website. They might run ads promoting a particular set of products (e.g. School Supplies in a “Back to School” campaign). While it might be interesting to know how many people ever bought anything at all on their website after clicking on an ad, it is also valuable to ask “how many people bought school-supplies after clicking on the ad promoting school supplies?”. They may additionally wish to know “How often was the exact product advertised purchased as a result of the advertisement?”

Proposal

Both conversion measurement APIs propose the addition of new attributes to the tag representing the ad in order to invoke the API. We propose the addition of another optional attribute, which would specify some type of “filter” for the set of conversions this particular ad wants to count.

We are not suggesting any specific protocol for how these “filters” would be implemented, but here is a short list of common use-cases that will drive value:

  • “Only count conversions where the product_id is 12345”
  • “Do not count conversions where the product_id is 12345”
  • “Only count conversions where the product is one of [12345, 23456, 34567, …]”
  • “Only count conversions which occur on the sub-domain electronics.megastore.com”
  • “Only count conversions that occur within the directory megastore.com/store/electronics”
  • “Only count conversions where the product category is ‘school-supplies’”
  • “Only count conversions where the producer is ‘ShaveCo’”

Both conversion measurement APIs propose the introduction of certain conversion metadata associated with the conversion event. The WebKit proposal also suggests a “priority” attribute. We propose that advertisers can additionally specify various metadata about the conversion event (e.g. producer: “ShaveCo”, product_id: “12345”, category: “Shaving Supplies”).
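
Purely as an illustration of the shape of the proposal (neither API defines a filter attribute today, so the attribute name 'conversionfilter' and the filter/metadata syntax below are invented), the click-side filter and the conversion-side metadata might be declared like this:

    // Click side, on the publisher page: a hypothetical filter attached to the ad
    // element alongside the existing attribution attributes.
    const ad = document.querySelector('a.shaveco-ad');
    ad.setAttribute('conversiondestination', 'https://megastore.com');
    ad.setAttribute('conversionfilter', JSON.stringify({ producer: 'ShaveCo' }));

    // Conversion side, on megastore.com: hypothetical metadata attached when the
    // purchase is registered, for the browser to compare against stored filters.
    const conversionMetadata = {
      producer: 'ShaveCo',
      product_id: '12345',
      category: 'Shaving Supplies',
    };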

When the browser looks into its storage of clicks requesting attribution, it could compare the metadata on the conversion with the filter specified on that ad click. These would either match or not match. Two approaches come to mind for what to do when they do not match:

  1. Do not attribute this conversion event to this click. Continue checking the other clicks that requested attribution, and if none match, do not generate an anonymous conversion report at all.
  2. Attribute the conversion to the click, but set the first “conversion-metadata” bit to zero to indicate “did not match the specified filter”.
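
A minimal sketch of the browser-side comparison and of the two non-match behaviors above (again, field names and filter semantics are invented for illustration):

    // Hypothetical state: clicks stored by the browser, each with the filter that
    // was declared on its ad element, plus the metadata of the new conversion.
    const storedClicks = [
      { impressionData: '42', filter: { producer: 'ShaveCo' } },
      { impressionData: '7', filter: { product_id: ['12345', '23456'] } },
    ];
    const conversionMetadata = { producer: 'ShaveCo', product_id: '12345' };

    // A conversion matches a click if every key in the click's filter is satisfied
    // by the conversion metadata (single value or allow-list).
    function matchesFilter(filter, metadata) {
      return Object.entries(filter).every(([key, allowed]) =>
        Array.isArray(allowed) ? allowed.includes(metadata[key]) : metadata[key] === allowed
      );
    }

    const match = storedClicks.find((click) => matchesFilter(click.filter, conversionMetadata));
    if (match) {
      // Normal case: attribute the conversion to the matching click.
    } else {
      // Approach 1: generate no anonymous conversion report at all.
      // Approach 2: attribute to a click anyway, but set the first
      // conversion-metadata bit to zero ("did not match the specified filter").
    }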

Privacy Considerations

This proposal does not rely on any change to the total entropy contained in anonymous conversion reports, or how the bits are distributed. Platform vendors have already taken stances about how many bits to allow for representing both campaign_id as well as conversion_metadata, and this proposal should not affect those choices. Since this does not propose to change the bit entropy in conversion reports, we do not believe this proposal meaningfully impacts user privacy.

What this does do is allow websites with complex advertising campaigns and large product catalogs to more effectively utilize those bits of entropy to support common use cases.

MEETING: Attribution Reporting API calls

Next meeting details:
Call-in link: https://meet.google.com/jnn-rhxv-nsy
Notes / Queue / Agenda doc. Comment on the agenda to add items
Date: Every other Monday, from 8-9am PT

Join the Google group here to get added automatically to the Google Meet.

If you want to participate in the call, please make sure you join the WICG:
https://www.w3.org/community/wicg/

Minutes are posted here


At TPAC there was some consensus that folks would like more time to discuss this API through the WICG. Let's use this issue for scheduling and planning. I was planning on using the same basic technology to host the meeting (Google Meet), with a shared Google doc for queuing and note taking.

Possible time slots (in GMT) for a recurring / semi-recurring 30-60 minute meeting:

  • Monday: 3-3:30pm, 4-6pm, 8-9pm
  • Tuesday: 3-3:30pm, 6:30-7:30pm, 10:30-11pm
  • Thursday: 7:30-8:30pm

Unfortunately the list of interested people was not captured in the TPAC notes, so feel free to cc folks that you think would be interested. We can use this issue to align on a time slot, duration, etc.
@benjaminsavage, @kdeqc, @ajknox, @johnivdel, @michaelkleber, @johnwilander, @maudnals, @dialtone, @BasileLeparmentier, @bslassey, @erik-anderson

Multiple reporting domains in ad impression

In case the advertiser works with multiple measurement platforms, will it be possible to define multiple reporting domains in the ad impression?
In any case, who will have control over the reporting domain? Will it be the advertiser setting the reporting domain when the impression is defined on the publisher site, or is it solely the publisher?

Consider renaming "addestination" to "conversiondestination"

Consider making the HTML attribute names agnostic to specific use-cases for conversion measurement. While ads are certainly the most prolific use-case, other content may be able to benefit from measuring conversions.

One example is surveys, where a site wants to measure whether a user completed a survey after clicking through to a survey site. As the web evolves, there may be more use cases that could leverage the API.
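
For example, under use-case-agnostic naming, a survey invitation might register an attribution source exactly the way an ad would (sketch only; 'conversiondestination' is the rename under discussion, and the other attribute name and values are illustrative):

    // A link inviting the user to an external survey site.
    const link = document.querySelector('a#survey-invite');
    link.setAttribute('conversiondestination', 'https://survey.example');
    link.setAttribute('impressiondata', '17');   // which invitation was clicked

    // Completing the survey on survey.example would then register the conversion
    // just as a purchase would for an ad.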

Post-click measurement and link decoration for non-targeted advertising

Hi Michael,
(this is a repost of issue 31 in FLOC as I understand this was not the right place to discuss it WICG/floc#31 )

Do you mean that somehow the browser would clean / filter any 'link decoration'?

UTM_source, one of the most frequently used pieces of 'link decoration' today, is a backbone of advertising measurement, championed most notably by Google Analytics ("GA"), the largest website traffic monitoring tool in the world. It is used by advertisers to compare performance and shift budget between different marketing channels, exposing a full range of signals, from clicks to bounce rate to sessions to conversions.

UTM_source is not so much a way to declare where the user came from as a way to declare how they arrived on the website ('Facebook_video_ad', 'criteo_consideration', 'google_retargeting', etc.). This usage does not seem to infringe any of the Privacy Sandbox principles. Sure, nothing enforces that UTM_source is used in this fashion, but it is an important usage nonetheless, if only for accountability. Rather than killing it, it would be better to make it more robust.
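
Concretely, a UTM-decorated landing URL and the trivial first-party read of it look something like this (utm_source is a de-facto convention popularized by Google Analytics, not part of any proposal here):

    // Example landing URL decorated by the advertiser:
    //   https://www.retailer.com/landing?utm_source=facebook_video_ad&utm_medium=cpc
    // First-party analytics JavaScript on the landing page reads the channel label:
    const utmSource = new URLSearchParams(location.search).get('utm_source');
    // e.g. "facebook_video_ad" -- describing how the user arrived; it is then tied to
    // later on-site signals (sessions, bounce rate, conversions) in first-party data.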

Losing UTM_source would mean that GA becomes obsolete for the open web, as many of these signals, crucial for advertisers (especially those relying on contextual advertising), would be gone. Do you plan on giving enough flexibility in the Conversion Measurement API to cover all current GA metrics?

GA also extends beyond open-web display and video impressions to encompass a full suite of Google first-party assets like YouTube, Gmail, Search, Shopping, etc. The 'level playing field' that you mentioned a few weeks ago during our weekly calls certainly means that GA wouldn't use link decoration or anything of the sort to track performance outside of the open web and thus give these proprietary channels an unfair advantage, correct?

Extending the argument, will these channels be subject to noise, delays, and high-level aggregation as well?
