
turtledove's Introduction

TURTLEDOVE

Some online advertising has been based on showing an ad to a potentially-interested person who has previously interacted with the advertiser or ad network. Historically this has worked by the advertiser recognizing a specific person as they browse across web sites, a core privacy concern with today's web.

The TURTLEDOVE effort is about offering a new API to address this use case while offering some key privacy advances:

  • The browser, not the advertiser, holds the information about what the advertiser thinks a person is interested in.
  • Advertisers can serve ads based on an interest, but cannot combine that interest with other information about the person — in particular, with who they are or what page they are visiting.
  • Web sites the person visits, and the ad networks those sites use, cannot learn about their visitors' ad interests.

Chrome expects to build and ship a first experiment in this direction during 2021. For details of the current design, see FLEDGE.

The FLEDGE design draws on many discussions and proposals published during 2020.

Many additional contributions came from Issues opened in this repo, and from discussion in the W3C Web Advertising Business Group.

turtledove's People

Contributors

michaelkleber, appascoe, yoavweiss, zerth, brodrigu, erik-anderson, eriktaubeneck, erjanmx, jonasz, lknik, samdutton, shigeki, shivanigithub

turtledove's Issues

Performance of running FLEDGE auctions

We would like to share our first observations regarding the performance of running FLEDGE auctions. The topic has been addressed before (including in issue #215), but mainly in the context of running JS in a bidding worklet environment. This time we would like to discuss potential latency bottlenecks of an end-to-end runAdAuction call.

We have managed to run FLEDGE auctions in a production environment (meaning real publishers, real advertisers, and our own bidding infrastructure with our own bidding logic).

In this scenario, we run a Chromium browser with FLEDGE enabled, then visit an advertiser page (which adds us to 3 interest groups), and finally visit a publisher page that runs an auction for these IGs. To measure performance, we use the trace event profiling tool (chrome://tracing).
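For reference, a minimal sketch of the two steps in this scenario, using the FLEDGE JavaScript API (the origins, URLs, and names below are illustrative placeholders, not our production setup):

    // On the advertiser page: join an interest group (in our test, 3 such groups).
    await navigator.joinAdInterestGroup({
        owner: 'https://dsp.example',                        // illustrative buyer origin
        name: 'example-ig',
        biddingLogicUrl: 'https://dsp.example/bid.js',       // defines generateBid()
        trustedBiddingSignalsUrl: 'https://dsp.example/tbs', // fetched during bidding
        trustedBiddingSignalsKeys: ['key1'],
        ads: [{renderUrl: 'https://dsp.example/ad.html'}],
    }, 30 * 24 * 3600 /* membership duration in seconds */);

    // On the publisher page: run the auction and time the end-to-end call.
    const t0 = performance.now();
    const adUrn = await navigator.runAdAuction({
        seller: 'https://ssp.example',
        decisionLogicUrl: 'https://ssp.example/score.js',    // defines scoreAd()
        interestGroupBuyers: ['https://dsp.example'],
    });
    console.log(`runAdAuction took ${(performance.now() - t0).toFixed(0)} ms`);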

Benchmark 1 (Intel Core i7-6820HQ 2.7 GHz, Linux, fast internet connection):

[trace screenshot, 2022-10-20]

Benchmark 2 (Intel Core i7-4600U 2.1GHz, Windows, slow internet connection):

[trace screenshot, 2022-10-18]

Our observations:

  • an auction consists of 3 sequential phases:
    • loading interest groups: reading data from the SQLite database
    • bidding and scoring: running bidder and seller worklets with generateBid and scoreAd calls
    • reporting: running bidder and seller worklets with reportWin and reportResult calls
  • executing JS is only a small part of the entire invocation
  • most of the time goes to loading interest groups, initializing worklets, and requesting trusted bidding signals; in our benchmark 2, an auction runs in ~2.5 s, which includes:
    • loading interest groups: ~350 ms
    • initializing worklets: ~550 ms (during the bidding and scoring phase) and ~250 ms (during the reporting phase)
    • requesting trusted bidding signals: ~1.2 s

Bearing in mind that runAdAuction would be run for all participating buyers, and that fetching contextual signals and rendering the ad will take additional time, we are afraid that such latency would not be acceptable for an end user.

The latency of trusted bidding signals (TBS) requests could be reduced by replacing the BYOS server with a key-value TBS service and/or by the improvements proposed in #333 (TBS prefetching, caching, etc.), but it is not clear what can be done about the other latencies.

Are you aware of the mentioned issues? Do you have plans to address them in the future?

FLEDGE auctions e2e latency (on-device)

At RTB House, we are aware that the higher latency of ads in Fledge compared to the Classic RTB system could have a significant impact on final metrics and the overall success of the transition to Fledge. This topic was previously discussed in issues #215 and #385.

Today, thanks to the disabling of third-party cookies for a portion of Chrome browsers (Mode B testing), event-level reporting, and the fact that we have managed to integrate our Fledge implementation with various SSPs, we are able to compare Fledge and Classic end-to-end bidding latency. In this issue, we would like to share our internal findings.

Scope:

  • On-device only
  • Bidding and Auction Services are out of scope
  • We're interested in various environments, devices, and network types

We define end-to-end bidding latency as the time elapsed from the bid (start of bid request processing) to the impression (start of ad rendering). This means that we compare:

  • Fledge: rendering request ts (server-side) - contextual bid request ts (server-side)
  • Classic: rendering request ts (server-side) - classic RTB bid request ts (server-side)
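
For illustration, a minimal sketch of the metric and percentile computation (the record shape, field names, and sample values are illustrative, not our actual pipeline):

    // Each impression record carries two server-side timestamps (in ms).
    const impressions = [
        {bidRequestTs: 0, renderingRequestTs: 2999}, // illustrative values
        {bidRequestTs: 0, renderingRequestTs: 993},
    ];

    // Nearest-rank percentile over the measured latencies.
    function percentile(values, p) {
        const sorted = [...values].sort((a, b) => a - b);
        const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
        return sorted[idx];
    }

    const bidToRenderingMs = impressions.map(
        imp => imp.renderingRequestTs - imp.bidRequestTs);
    for (const p of [50, 80, 90]) {
        console.log(`p${p}: ${percentile(bidToRenderingMs, p)} ms`);
    }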

We conduct measurements on the following segment of Internet traffic:

  • Fledge: treatment traffic with third-party cookies disabled - which means bidding for users with treatment_1.X (X=1,2,3) labels from Mode B
  • Classic: legacy traffic with third-party cookies enabled - which means bidding for users without Mode A and Mode B labels

Each of the following diagrams consists of:

  • A histogram showing the distribution of latencies
  • A table of latency percentiles (p50, p80, p90)

Comparison between Fledge (treatment traffic) and Classic (legacy traffic)

The dataset used to generate the diagram includes impressions from both Fledge and Classic, i.e. winning auctions that resulted in ad rendering, across various SSPs and ad slots.

[histogram: all-ssps]

       fledge_imps.bid_to_rendering_time   legacy_imps.bid_to_rendering_time
p50    2999 ms                             993 ms
p80    5453 ms                             2254 ms
p90    6630 ms                             4140 ms

Comparison between Fledge and Classic: split by device type

The same dataset, segmented by device type (PC and PHONE).

[histogram: all-ssps-pc]

       fledge_imps.bid_to_rendering_time   legacy_imps.bid_to_rendering_time
p50    1919 ms                             688 ms
p80    4721 ms                             1683 ms
p90    5937 ms                             3375 ms

[histogram: all-ssps-phone]

       fledge_imps.bid_to_rendering_time   legacy_imps.bid_to_rendering_time
p50    3833 ms                             1053 ms
p80    5745 ms                             2336 ms
p90    6954 ms                             4224 ms

Comparison between Fledge-over-RTB and Prebid-over-RTB

In this part, we use results from our own tests. For Fledge-over-RTB and Prebid-over-RTB, we buy impressions via direct integration or Classic RTB on real publishers' pages. Then, depending on the test scenario, we perform one of two actions:

  • run our own Fledge auction (Fledge-over-RTB)
  • run a Classic auction using Prebid (Prebid-over-RTB)

These tests let us compare the latency of Fledge (Fledge-over-RTB) and Classic (Prebid-over-RTB) impressions independently of Fledge's integration with SSPs and other buyers, since no other buyers participate in the auction in either scenario.

[histogram: fledge-over-rtb]

       fledge_over_rtb.bid_to_rendering_time   prebid_over_rtb.bid_to_rendering_time
p50    1562 ms                                 189 ms
p80    2961 ms                                 336 ms
p90    4499 ms                                 531 ms

To rule out the possibility that our implementation of the bidding logic in Fledge is the source of the problem, we repeated the Fledge-over-RTB experiment, this time completely removing the part responsible for model evaluation, both server-side (in contextual and TBS request processing) and client-side (in the bidding function). In our case, this means reducing the size of contextual and TBS responses by more than half, as well as cutting server-side and client-side computation by more than half. After this intervention, latency did not decrease.

[histogram: dummy-fledge-over-rtb]

       fledge_over_rtb_dummy.bid_to_rendering_time   fledge_over_rtb_regular.bid_to_rendering_time
p50    1576 ms                                       1562 ms
p80    3056 ms                                       2961 ms
p90    4928 ms                                       4499 ms

Conclusions

To sum up our results: in the Classic system, 50% of auctions take less than 1 second, while in Fledge more than 50% of auctions for Mode B users take over 3 seconds, roughly three times longer. The situation is even worse on mobile devices: when we limit the comparison to auctions on phones, the latency gap grows from 3x to 4x.

Results from our internal tests comparing Fledge-over-RTB and Prebid-over-RTB point to overhead in the Fledge stack itself. In a very simple setup where we are the only buyer, 50% of Classic auctions take less than 200 ms; in Fledge, with the same setup, auctions can last up to 8 times longer. Additionally, the fact that latency did not decrease after we significantly reduced computation and the size of contextual and TBS responses, together with our observation that the bidding function is fetched from the cache in 95% of auctions, suggests that neither processing nor fetching the bidding function is the bottleneck here.

At RTB House, we still believe that migrating to the Protected Audience API is feasible without losing retargeting potential. Our remaining concern is the current on-device implementation, where the resources dedicated to Fledge auctions are limited and shared by multiple buyers and SSPs, which significantly impacts e2e latency. It is important to resolve such fundamental concerns before support for third-party cookies is removed. Additionally, we believe that transitioning early to Bidding and Auction Services could be a solution, although we have not yet been able to perform similar measurements due to insufficient traffic.

Bidding worklet performance limitations

Hi,

We have started experimenting with the current FLEDGE implementation in Chromium. As part of this, we have built end-to-end functional and performance tests.

In this issue, we would like to discuss the bidding worklet's performance limitations in the context of realistic bidding logic. As an example, our production generateBid() implementation might evaluate a feed-forward neural network with 3-4 layers (repeated for 5 different ML models), and would look roughly like this:

function generateBid(interestGroup, auctionSignals, perBuyerSignals, trustedBiddingSignals, browserSignals) {

   const nn_model_1_weights = [
       [[1.23, 3.14, 2.7, ...], [100.1, 100.2, ...], ...], // 200x200 matrix
       [...], // 200x100 matrix
       [...], // 100x50 matrix
       [...], // 50x1 matrix
   ]; // hard-coded weights for the 1st model (e.g. CTR, CR, CV)

   const nn_model_2_weights = [...]; // hard-coded weights for the 2nd model
   const nn_model_3_weights = [...]; // hard-coded weights for the 3rd model
   const nn_model_4_weights = [...]; // hard-coded weights for the 4th model
   const nn_model_5_weights = [...]; // hard-coded weights for the 5th model

   // vector of 200 floats
   let input = extractFeatures(interestGroup, auctionSignals, perBuyerSignals,
                               trustedBiddingSignals, browserSignals);

   // the bid is the product of the 5 model predictions
   let bid = nn_forward(input, nn_model_1_weights) * nn_forward(input, nn_model_2_weights)
                * nn_forward(input, nn_model_3_weights) * nn_forward(input, nn_model_4_weights)
                * nn_forward(input, nn_model_5_weights);

   let ad = ...

   let renderUrl = ...

   return {'ad': ad, 'bid': bid, 'render': renderUrl};
}

where extractFeatures() extracts a vector of 200 features (from the signals and the interest group's data), and nn_forward() is:

function nn_forward(input, nn_model_weights) {
    let X = input; // vector of 200 floats
    X = relu(multiply(nn_model_weights[0], X)); // nn_model_weights[0] - 200x200 matrix
    X = relu(multiply(nn_model_weights[1], X)); // nn_model_weights[1] - 200x100 matrix
    X = relu(multiply(nn_model_weights[2], X)); // nn_model_weights[2] - 100x50 matrix
    X = relu(multiply(nn_model_weights[3], X)); // nn_model_weights[3] - 50x1 matrix
    return X[0]; // single scalar output of the network
}
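
For completeness, minimal sketches of the two helpers assumed above (they are not part of the original snippet; each weight matrix is assumed to be stored as an array of rows, one row per output unit):

function multiply(weights, x) {
    // dense matrix-vector product: one dot product per output unit
    return weights.map(row => {
        let acc = 0;
        for (let i = 0; i < row.length; i++) acc += row[i] * x[i];
        return acc;
    });
}

function relu(v) {
    // element-wise ReLU activation
    return v.map(x => Math.max(0, x));
}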

This is an extremely simplified version of generateBid() that focuses on multiplying input values by hard-coded model weights. We can expect a lot of additional boilerplate code around this (choosing the best ad, model feature extraction, capping & targeting logic, brand safety, etc.), but even this simple example is enough to illustrate the performance limitations of the current implementation.

We have results from benchmarks for two different environments running the same generateBid() function:

no.   test environment                                        code run as                   time spent on generateBid()
1     V8 engine with JIT                                      tight loop with a warm-up     1.12 ms
2     bidding worklet (with its limitations: jitless etc.)    buyer's JS                    55.68 ms
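
For environment no. 1, the harness might look roughly like this (a sketch only; the iteration counts and placeholder inputs are illustrative, not our actual benchmark):

    // Tight loop with a warm-up, run in plain V8 (e.g. d8 or Node) with the JIT enabled.
    const signals = {};                  // placeholder inputs, illustrative only
    const WARMUP = 100, RUNS = 1000;
    for (let i = 0; i < WARMUP; i++) {   // let the JIT compile the hot path
        generateBid(signals, signals, signals, signals, signals);
    }
    const t0 = performance.now();
    for (let i = 0; i < RUNS; i++) {
        generateBid(signals, signals, signals, signals, signals);
    }
    console.log(`${((performance.now() - t0) / RUNS).toFixed(2)} ms per generateBid() call`);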

In conclusion, we see a significant performance drop (almost 50x) for the bidding worklet compared to an optimal environment. What is more, we can easily exceed the worklet's 50 ms timeout for the use case described above.

Do you have any thoughts on how to optimize generateBid() code in such an execution environment? Are there any plans to provide a more effective bidding worklet?

Best regards,
Bartosz
