googlechrome / crux

The place to share queries, ideas, or issues related to the Chrome UX Report

Home Page: https://developers.google.com/web/tools/chrome-user-experience-report/

License: Apache License 2.0

Languages: Jupyter Notebook 99.10%, JavaScript 0.90%

crux's People

Contributors

fili, nathanbower, powdercloud, rviscomi, tunetheweb


crux's Issues

Count FID = NULL as passing CWV assessment

According to the discussion with Annie:

If a site does not have data for FID, it is counted as passing the FID part of the assessment. It's only for FID.

There's an issue with the CWV assessment script: it only counts origins that have FID data: https://github.com/GoogleChrome/CrUX/blob/main/sql/core-web-vitals-compliance-rates.sql#L20

As a result, the % of origins passing the CWV assessment is off. I recalculated with FID treated as optional, and in March 2021 27.1% of origins pass the CWV assessment, which is almost 3% growth (the most significant change since it has been tracked).
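A minimal sketch of how the compliance-rate query could treat a missing FID as passing, assuming p75 metric columns like those in the materialized summary tables (the table and column names here are illustrative assumptions, not the exact script):

SELECT
  date,
  -- An origin with no FID data (p75_fid IS NULL) counts as passing the FID part.
  COUNTIF(
    p75_lcp <= 2500
    AND p75_cls <= 0.1
    AND (p75_fid IS NULL OR p75_fid <= 100)
  ) / COUNT(0) AS pct_passing_cwv
FROM
  `chrome-ux-report.materialized.device_summary`
WHERE
  device = 'phone'
GROUP BY
  date
ORDER BY
  date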

Google Chrome causes the modem to reboot

Sorry if I created this issue in the wrong repository; please point me to the correct one.

For the last month or two, there has been a problem with my modem (Verizon Netgear Jetpack AC791L) rebooting when I work with Google Chrome. The reboots do not always happen, only under certain conditions:

  1. It most often happens when I go to https://drive.google.com/drive/u/0/my-drive (Google Drive) and then open one of the files (no matter which).
  2. It also sometimes happens when I am on one of the youtube.com pages.
  3. The same thing sometimes happens when I just launch Google Chrome, which has 4 pinned tabs (translate.google.com, music.youtube.com, angular.io, material.angular.io).

As far as I understand, this happens when Chrome downloads data in multiple streams simultaneously.

At first I thought the problem was in the modem itself, so I reset it to factory settings, but the problem did not disappear. Then I turned off all Chrome extensions, but that didn't help either. I even installed a fresh Ubuntu 20.04 on my laptop (it had Ubuntu 18.04) and then purchased a new Netgear Aircard AC815S modem. With the new modem, the reboots became much less frequent, but they still occurred.

Finally, when I started working with Mozilla Firefox (81.0, 64-bit), I didn't see any modem reboots, even with the old modem. Yet when I opened the pages mentioned above in Google Chrome, the modem rebooted again.

I have:

  • Google Chrome Version 86.0.4240.75 (Official Build) (64-bit)
  • Ubuntu 20.04.1 LTS 64-bit (Memory 16 GB, Processor Intel® Core™ i5-7200U CPU @ 2.50GHz × 4)

Multiple Metrics in one Query

When creating a query in the cloud for CrUX, I want to get results not just for one metric, like CLS or FCP, but for all of them.
Is it possible to do this? The expected result would be one query that requests all of these metrics:
first_paint (FP)
first_contentful_paint (FCP)
dom_content_loaded (DCL)
onload (OL)
experimental.first_input_delay (FID)
(all of the metrics listed in this article: https://developer.chrome.com/blog/chrome-ux-report-bigquery/)
The metrics should be available for a page, and likewise for an origin.

Is there a way to do this? Can you give me an example? Thank you.
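Not an official answer, but a sketch of how this could look as a single origin-level query: each metric's histogram is unnested in its own subquery and the densities below a "fast" threshold are summed (densities are normalized per origin, so the sums are already fractions). The month, origin, and thresholds below are placeholders:

SELECT
  origin,
  SUM(fast_fp) AS fast_fp,
  SUM(fast_fcp) AS fast_fcp,
  SUM(fast_dcl) AS fast_dcl,
  SUM(fast_ol) AS fast_ol,
  SUM(fast_fid) AS fast_fid
FROM (
  SELECT
    origin,
    (SELECT SUM(bin.density) FROM UNNEST(first_paint.histogram.bin) AS bin
      WHERE bin.start < 1000) AS fast_fp,
    (SELECT SUM(bin.density) FROM UNNEST(first_contentful_paint.histogram.bin) AS bin
      WHERE bin.start < 1000) AS fast_fcp,
    (SELECT SUM(bin.density) FROM UNNEST(dom_content_loaded.histogram.bin) AS bin
      WHERE bin.start < 1500) AS fast_dcl,
    (SELECT SUM(bin.density) FROM UNNEST(onload.histogram.bin) AS bin
      WHERE bin.start < 2500) AS fast_ol,
    (SELECT SUM(bin.density) FROM UNNEST(experimental.first_input_delay.histogram.bin) AS bin
      WHERE bin.start < 100) AS fast_fid
  FROM
    `chrome-ux-report.all.202103`
  WHERE
    origin = 'https://example.com'
)
GROUP BY
  origin

Note that the CrUX BigQuery dataset is origin-level only; for individual pages, the CrUX API (or PageSpeed Insights) is the place to look.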

Automatically Close Unused Tabs

Hi,
This is an idea from a Chrome browser fan.
I think it would be nice if the Chrome browser automatically closed unused tabs. The user could turn the feature on or off and set a timer for auto-closing tabs, for example 10, 30, or 60 minutes; unused or unseen tabs would close automatically after that time. The Chrome browser uses RAM and CPU, so if the user forgets to close old tabs, their device may become slow or have other problems.
A tab nominated for auto-closing should meet these criteria:

  • It is not playing audio or video
  • It is not refreshing and its content is static
  • It has not been viewed for a while

You could add a shortcut to reopen auto-closed tabs to the right-click menu of the plus (new tab) button.
You could also show a countdown timer on the tab for the last 30 seconds; if the user opens that tab, the timer resets.

I hope you find this idea worthwhile and user friendly.
Thank you.
Good Luck.

Build a CrUX origin search tool

As of now, there are 18,352,960 distinct origins in the CrUX BigQuery dataset since October 2017. That's a lot of websites, but clearly not all of the websites out in the wild. One of the common problems I see from CrUX users is that they're not sure if their websites' origins exist in the dataset.

I'd like to design a tool to help CrUX users quickly and easily discover origins in the dataset and make it clear when a particular origin is not found. Here's how I envision the UX working:

  • lightweight web app
  • prominent text input field
  • autosuggest known origins as you type
  • selecting a known origin will give you options for viewing its CrUX data
    • deep-linked custom CrUX Dashboard for that origin
    • deep-linked PageSpeed Insights report
    • customized BigQuery SQL snippet
  • if an origin is not found, offer boilerplate suggestions
    • ensure it is the canonical origin users actually visit (eg https not http, or www)
    • ensure it is publicly accessible (suggest Lighthouse SEO indexability audits)
    • ensure the origin receives a healthy amount of traffic (exact thresholds can't be given)
    • use first party RUM to more closely observe UX in the field
  • explain what an "origin" is (protocol + subdomain + domain, no path)

I imagine the backend will work by using a fast, in-memory storage solution for the ~18M origins. In total the size of the data is ~500+MB. However, if we build more advanced/faster search functionality (eg n-grams), it might require more storage space. An autosuggest endpoint will take the user's input, scan the origin list, and return matches via JSON. The list of origins can be populated monthly by mirroring the chrome-ux-report:materialized.origin_summary table on BigQuery.

Finding matches is the magic part. If a user types google it should return origins whose domain name (eTLD+1) starts with google, like https://www.google.com or https://mail.google.com or https://www.google.co.jp. It should also return matches whose host names (eTLD+2) are prefixed by the query, for example mail should return https://mail.google.com or https://mail.yahoo.com. Searches starting with the protocol (http[s]://) should only match origins prefixed with that input, like https://example.com matching a search for https://ex. I think this can be simplified to a regular expression where the user input is preceded by a boundary character \b, but the backend might need to tokenize origins into host names and domain names instead for performance. My goal is for the median autosuggestion to complete buttery smooth in under 100ms from the user's keyup to suggestion rendered.

For demonstration purposes, a really naive implementation would be for the backend to query the BigQuery origin_summary table directly:

SELECT
  origin
FROM
  `chrome-ux-report.materialized.origin_summary`
WHERE
  REGEXP_CONTAINS(origin, CONCAT(r'\b', @input))
LIMIT
  20

This query processes 505 MB in 4.7 seconds, which is obviously not fast enough for a production-ready solution, but it shows a simple approach.
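One way to avoid the full-table regex scan could be to precompute the searchable tokens once a month, e.g. the host name and registrable domain (eTLD+1) of each origin, and have the autosuggest backend do indexed prefix lookups against those. NET.HOST and NET.REG_DOMAIN are built-in BigQuery functions; the rest is only a sketch:

SELECT
  origin,
  NET.HOST(origin) AS host,          -- e.g. mail.google.com
  NET.REG_DOMAIN(origin) AS domain   -- e.g. google.com (eTLD+1)
FROM
  `chrome-ux-report.materialized.origin_summary`

The result could be loaded into the in-memory store, where a query like "mail" becomes a prefix match on host, a query like "google" a prefix match on domain, and a protocol-prefixed query a plain prefix match on origin.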

Any technology recommendations for the backend of the app?

Relation between CrUX, Google Search Console and iOS devices

Hello! As the title suggests, I would like to know how (if at all) these three things relate when it comes to sites being ranked by Google Search based on CWV.

Let me start with my (probably and hopefully incorrect) assumption: when a page or origin has flagged CWV issues on Google Search Console, like in the following screenshot, it could be ranked lower in Google search compared to other results that are just as relevant to the user's query.

[screenshot: Google Search Console Core Web Vitals report]

But at the same time, CrUX does not take data from users on iOS devices, thus potentially hurting rank for websites that are primarily being visited by iOS users.

The reason I'm asking is that at the moment we are running an internal audit of our RUM CWV solution, and some preliminary results show that properties with an overwhelming majority of iOS traffic (more than 40-50% of all traffic from iOS) show very glaring differences in metrics (mostly CLS) between our solution and CrUX, while sites with a more even distribution tend to differ by only a couple of points (all of this with a 28-day rolling p75 average).

Regardless of the result of our internal audit, it would be great for us, and for everyone who stumbles upon this issue, to know how the fact that iOS users aren't counted in CrUX affects page ranking for properties used mostly by iOS users.

Thanks!

Ideas and innovation

What is the process for sharing an idea or concept with Google / Google Maps, and how can we earn benefits?

CLS & Argument type mismatch in function LESS

When we request data for February 2020, everything works fine, but when we request data for April 2020 we get the following error: Argument type mismatch in function LESS: 'layout_instability.cumulative_layout_shift.histogram' is type bytes, '0.1000' is type double.
(https://yadi.sk/i/SycCFAbJXHhHdQ)

Do we need to modify our query somehow?

Our query is:

SELECT
  origin,
  form_factor.name,
  
    -- Cumulative Layout Shift
  COUNT(layout_instability.cumulative_layout_shift.histogram.bin.start) as cls_count,
  SUM(
    CASE WHEN layout_instability.cumulative_layout_shift.histogram.bin.start < 0.1
    THEN layout_instability.cumulative_layout_shift.histogram.bin.density
    END
  ) / sum(layout_instability.cumulative_layout_shift.histogram.bin.density) as cls_good,
  SUM(
    CASE WHEN layout_instability.cumulative_layout_shift.histogram.bin.start >= 0.1 AND layout_instability.cumulative_layout_shift.histogram.bin.start < 0.25
    THEN layout_instability.cumulative_layout_shift.histogram.bin.density
    END
  ) / sum(layout_instability.cumulative_layout_shift.histogram.bin.density) as cls_needs_improvement,
  SUM(
    CASE WHEN layout_instability.cumulative_layout_shift.histogram.bin.start >= 0.25
    THEN layout_instability.cumulative_layout_shift.histogram.bin.density
    END
  ) / sum(layout_instability.cumulative_layout_shift.histogram.bin.density) as cls_poor

FROM 
  [chrome-ux-report.country_ru.{yearMonthInput default="202001" type="input"}]
WHERE
  origin in (
  'https://lenta.ru', 'https://m.lenta.ru', 
  'https://www.gazeta.ru', 'https://m.gazeta.ru', 
  'https://www.championat.com', 'https://m.championat.com', 
  'https://news.rambler.ru', 'https://www.afisha.ru', 'https://daily.afisha.ru', 
  'https://price.ru', 'https://rambler.ru', 'https://video.rambler.ru', 
  'https://doctor.rambler.ru', 'https://travel.rambler.ru', 'https://sport.rambler.ru', 
  'https://weekend.rambler.ru', 'https://horoscopes.rambler.ru', 'https://weather.rambler.ru', 
  'https://kassa.rambler.ru', 'https://afisha.rambler.ru', 'https://woman.rambler.ru', 
  'https://auto.rambler.ru', 'https://tv.rambler.ru', 'https://top100.rambler.ru', 
  'https://dating.rambler.ru', 'https://games.rambler.ru', 'https://mail.rambler.ru', 'https://finance.rambler.ru', 'https://eda.ru', 
  'https://www.ferra.ru', 'https://indicator.ru', 'https://secretmag.ru', 'https://motor.ru', 'https://www.passion.ru', 
  'https://quto.ru', 'https://rns.online', 'https://www.wmj.ru', 'https://autorambler.ru', 
  'https://www.livejournal.com', 'https://letidor.ru', 'https://moslenta.ru'
)
GROUP BY origin, form_factor.name
ORDER BY origin, form_factor.name
LIMIT 1000
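For reference, here is a hedged sketch of the same CLS breakdown in Standard SQL with an explicit UNNEST. The CLS bin boundaries appear to be stored as strings in the newer tables, which may be why the comparison against 0.1 fails, so the sketch casts them; the month and origin list are shortened placeholders:

SELECT
  origin,
  form_factor.name AS form_factor,
  SUM(IF(SAFE_CAST(bin.start AS FLOAT64) < 0.1, bin.density, 0)) /
    SUM(bin.density) AS cls_good,
  SUM(IF(SAFE_CAST(bin.start AS FLOAT64) >= 0.1
      AND SAFE_CAST(bin.start AS FLOAT64) < 0.25, bin.density, 0)) /
    SUM(bin.density) AS cls_needs_improvement,
  SUM(IF(SAFE_CAST(bin.start AS FLOAT64) >= 0.25, bin.density, 0)) /
    SUM(bin.density) AS cls_poor
FROM
  `chrome-ux-report.country_ru.202004`,
  UNNEST(layout_instability.cumulative_layout_shift.histogram.bin) AS bin
WHERE
  origin IN ('https://lenta.ru', 'https://m.lenta.ru')
GROUP BY
  origin, form_factor
ORDER BY
  origin, form_factor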

Improve repo documentation

Update the top-level README with more info about how to use this repo, including the utils in each subdir.

HUGE Discrepancies CrUX vs web-vitals.js+Google Analytics+BigQuery: LCP: 300% - FID: 15% - CLS: 3500%

I am a one-man website owner and only found out about Core Web Vitals a few months ago.
I was shocked when I saw that Google considers none of my pages to provide a good user experience, so I went to work improving all aspects of my site.
While I was able to make all my pages mobile friendly, I had very little success improving LCP and CLS.
After 2 months of studies, experiments, and improvements, my site is all "green" in lab tests and RUM tests (including my own web-vitals data sent to GA4 and analyzed with BigQuery), but still "red" as far as Google is concerned.
I am at the end of my wits now.

Please look at my detailed screenshots below:

Edit:
core-web-vitals-discrepancies-images-new.pdf

My most visited (and also worst performing) page is:
https://www.flixxy.com/trumpet-solo-melissa-venema.htm

Notes:
I understand the differences between lab tests, field tests, and RUM.
I also looked in detail into the differences between CrUX and RUM (per: https://web.dev/crux-and-rum-differences )

  1. CrUX is Chrome only: Even if I filter GA4 for Chrome users only, the differences persist.
  2. Opted-in users: This should only account for a small difference.
  3. Website must be publicly discoverable: My site is 100% public
  4. CrUX segments data by mobile, desktop, and tablet: The differences persist if I segment data by mobile, desktop, and tablet.
  5. Sampling size: I am using a sampling size of over 30,000 sessions
  6. Timespan: I am analyzing over 28 days of GA data.
  7. CrUX metrics are measured at the 75th percentile: My Core Web Vitals data from GA4 and BigQuery is also measured at the 75th percentile (see the sketch after this list).
  8. Metrics timing
    a) LCP: I am using the Google recommended Core Web Vitals javascript.
    b) "CLS is measured through the life of the page": Visual inspection as well as Chrome Dev Tools show no significant CLS (>0.01). Neither does GA4. How come CrUX sees 0.35 CLS?
  9. CrUX does measure metrics within iframes, the Core Web Vitals JavaScript does not:
    All my iframes are wrapped in divs with defined width and height and overflow: hidden to avoid any CLS.
  10. Cross-origin resources: “LCP media served from other domains do not give render time in the PerformanceObserver API—unless the Timing-Allow-Origin header (TAO) is provided.” I noticed this one only just now and have now added a TAO to my .htaccess file.
  11. Background tabs and prerender: I do not use Background tabs and prerender.
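For completeness, here is a sketch of how that GA4/BigQuery p75 could be computed, assuming the web-vitals values were sent to GA4 as events with a metric_value parameter (as in the web.dev GA4/BigQuery guide); the project, dataset, and date range are placeholders:

SELECT
  metric,
  -- Approximate 75th percentile of the reported values per metric.
  APPROX_QUANTILES(metric_value, 100)[OFFSET(75)] AS p75
FROM (
  SELECT
    event_name AS metric,
    (SELECT COALESCE(value.double_value, CAST(value.int_value AS FLOAT64))
     FROM UNNEST(event_params)
     WHERE key = 'metric_value') AS metric_value
  FROM
    `my-project.analytics_123456789.events_*`
  WHERE
    event_name IN ('LCP', 'FID', 'CLS')
    AND _TABLE_SUFFIX BETWEEN '20210301' AND '20210328'
)
GROUP BY
  metric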

Final questions:

  • If there really is a CLS of 0.35, wouldn’t I be able to see it with my own eyes?
  • Does Chrome Dev Tools closely approximate CrUX?
  • Can anyone help me pass Google’s Core Web Vitals? (I am happy to share access to GA4 and BigQuery)

Hugh

www.flixxy.com
