ibm / page-lab

PageLab enables web performance, accessibility, SEO, and other testing at scale.

License: Apache License 2.0

Shell 0.08% Python 35.18% HTML 34.92% JavaScript 17.37% CSS 12.45%
google-lighthouse automated-testing web-performance web-performance-testing django python nodejs javascript

page-lab's Introduction

Page Lab

Web Page Performance Laboratory: Scaling Lighthouse performance and web testing tools

  • This is alpha software and needs more testing and automation. PRs are accepted!

Page Lab is an attempt at understanding web performance at scale.

The goals here are 3-fold:

Understand page performance now (and provide a historical record)

  • Easy automated Lighthouse tests of any URL
  • A history of each test run, for as many runs as desired
  • Historical data will allow us to track performance of our pages over time, giving us insight into when changes cause performance regressions
  • Coupled with the Web Timing API, Page Lab will be able to drill into any included scripts (properly instrumented) and understand what EXACTLY is impacting page performance

Get started

Setup Django Lighthouse reporting app

Setup Node testing server

Roadmap & Ideas

Automate some fixes for pages

  • For instance, being able to tell developers that certain scripts are not even used on the page
  • Automation of image compression to the proper smaller size
  • Automatically give guidance on which assets can be preloaded, etc

Pre-flight check tool for newly published pages

  • Developers can run their page through Page Lab and see what can be fixed, some of which will be automated for them (e.g., compressing web assets).

page-lab's People

Contributors

cclauss, daviddahl, ecumike, imgbotapp, johnwalicki, kant, rcalfredson


page-lab's Issues

Monitor # of workers and restart 'failed' workers

Currently, if a worker fails for some reason (an uncaught exception, for example), it does not re-spawn.
Thus, over time, the # of concurrent workers can slowly decrease.

For instance you might start with 12 concurrent workers running tests, but after hours of continuous test running you might end up with only 3 running.

This requires these features:

  • Log when a worker fails (helps determine a pattern of cause).
  • Ability to monitor/view # of currently running workers (so you can verify the # running).
  • Ability to re-spawn a worker if one fails/shuts-down (this keeps the concurrent # of workers steady)
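The respawn decision can be sketched as a small pure function. This is only an illustration under assumed names (the actual workers are Node processes, and `respawn_plan` is hypothetical):

```python
def respawn_plan(worker_states, target_count):
    """Given each worker id's liveness, return the ids to log as failed
    and how many replacements to spawn to keep the pool at target_count."""
    failed = sorted(wid for wid, alive in worker_states.items() if not alive)
    alive = len(worker_states) - len(failed)
    return failed, max(target_count - alive, 0)

# Four workers configured, two have died: log workers 2 and 7 as failed
# and spawn two replacements to hold the concurrent count steady.
failed, to_spawn = respawn_plan({1: True, 2: False, 7: False, 4: True}, 4)
```

A supervisor loop would call something like this on a timer, log the failed ids, and spawn the replacements.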

Change report page chart and data table data set timeframe

  • Update both the chart and the data table to only show runs from the latest three-week period.
  • Move chart data and data table data into a separate WSR view with a date option.
  • Update the report UI to include links on each for "3 weeks" and "full history"
    • onclick makes a WSR and reloads the chart/table
  • Future enhancement: add start/end date options to the WSR view. UI TBD.

Adjust how average data is displayed for aggregate reports

As we discussed in issue 32, we should use a trailing two-week average, instead of an average of values collected for all time, when showing the "Heads-up display" view.

For usability, add some help text explaining that it's an average of scores over the past two weeks.

Setup APIs for master-to-worker cluster comms

Some APIs that will be needed for the queen server to manage the worker servers:

  • Get number of workers and which URLs are being processed
  • Pause all workers: Tell workers to not respawn, shutting them down after current work is finished
  • Re-engage workers
  • Adjust number of workers
  • Query for known remote non-master servers
  • Query & adjust workload on remote servers

Setup logger model

Set up a model for logging miscellaneous errors, warnings, debug messages, etc., with fields like:

  • timestamp
  • message
  • traceback
  • level, with choices:

LOG_LEVEL_CHOICES = (('0', 'info'),
                     ('1', 'log'),
                     ('2', 'warn'),
                     ('3', 'error'),
                     ('4', 'debug'))
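A plain-Python sketch of the proposed model's shape (restating the level choices so the snippet is self-contained; the field defaults are assumptions, not the final Django model):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

LOG_LEVEL_CHOICES = (
    ('0', 'info'),
    ('1', 'log'),
    ('2', 'warn'),
    ('3', 'error'),
    ('4', 'debug'),
)

@dataclass
class LogEntry:
    """Plain-Python mirror of the proposed Django model's fields."""
    message: str
    level: str = '0'    # key into LOG_LEVEL_CHOICES
    traceback: str = ''
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

entry = LogEntry(message="worker crashed", level='3')
label = dict(LOG_LEVEL_CHOICES)[entry.level]
```

In the Django app these would become DateTimeField, TextField, and a CharField with choices.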

Enable the 'queue' API to pass a config profile ID and settings to use with each URL

Set up the API/view to pass a Lighthouse config profile ID and settings to use with each URL it provides to the node test runners. This allows different config settings to be used in different runs, and, by providing the ID of the profile used, it ties the returned data to the profile so reports can display which settings were used for that particular run.

Check that LighthouseRun queries filter out 'invalid_run' ones

For methods that query LighthouseRun or calculate averages, verify/update them to ensure they filter out LighthouseRun records that have invalid_run set.

The best way is probably to create a queryset manager for LighthouseRun so it can be commonly reused wherever needed.

For obvious reasons, we want to ensure we don't calculate averages using 'invalid' runs, which would skew the real average performance for the URL.
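The filtering-then-averaging logic, sketched in plain Python (in the Django app this would live behind the LighthouseRun queryset manager; the dict keys here are assumptions):

```python
def average_performance(runs):
    """Average the performance score over valid runs only, so invalid
    runs can't skew the real average for a URL."""
    valid = [r["performance_score"] for r in runs if not r.get("invalid_run")]
    return sum(valid) / len(valid) if valid else None

runs = [
    {"performance_score": 90},
    {"performance_score": 10, "invalid_run": True},  # excluded from the average
    {"performance_score": 80},
]
avg = average_performance(runs)
```

Without the filter the average would be dragged down to 60 by the invalid run; with it, the real average of 85 survives.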

Omitting audits/test report content: Are there properties not needed?

There is a lot of content, descriptions, etc. in the Lighthouse reports. This translates into a lot of duplicated content. At scale, with tens of thousands of URL test reports stored per day, the database grows large quickly.

Take a look at the content stored in each report and see whether there are any data objects that can be omitted to help reduce the size stored in the DB.

Google Lighthouse config options for reference:

https://github.com/GoogleChrome/lighthouse/blob/master/docs/configuration.md

Research: On-the-fly averaging, by week only

Investigate and test "pick a week" report detail page scoping for the two averages tables; KPI and user-timing.

Allowing "week of ___" data scoping may be performant enough to do on-the-fly because it's only grabbing 7 runs and averaging them.

Test how bad the query would be to do on-the-fly date range scoping averaging of a URL's runs with a database of at least 7,000 URL records, each with 60 runs (two months of test runs data).

Add LighthouseRunConfig model

Instead of a hard-coded Lighthouse run config setting
(rttMs / requestLatencyMs / throughputKbps / ... etc.) used on all tests, add the ability for an admin to sign in and create a Lighthouse run config profile. This would allow different settings to be used for running a test, instead of having one set hard-coded in the node config.js file.

For example, a model where "mobile", "desktop", "slow 3g" profiles can be created with appropriate settings.

Lighthouse run config settings we need fields for:

  • rttMs
  • requestLatencyMs
  • downloadThroughputKbps
  • uploadThroughputKbps
  • throughputKbps
  • cpuSlowdownMultiplier
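The profiles could look something like the following sketch. The profile names and all the throttling numbers here are illustrative placeholders, not Lighthouse's actual preset values:

```python
# Illustrative throttling profiles keyed by name. In the Django app each
# of these would be a LighthouseRunConfig row; the values are made up.
PROFILES = {
    "mobile-slow-3g": {
        "rttMs": 400,
        "requestLatencyMs": 600,
        "downloadThroughputKbps": 400,
        "uploadThroughputKbps": 400,
        "cpuSlowdownMultiplier": 4,
    },
    "desktop": {
        "rttMs": 40,
        "requestLatencyMs": 0,
        "downloadThroughputKbps": 10240,
        "uploadThroughputKbps": 5120,
        "cpuSlowdownMultiplier": 1,
    },
}

def throttling_for(profile_name):
    """Look up the settings the node test runner would apply for a run."""
    return PROFILES[profile_name]
```

The queue API would hand the node runner a profile ID plus these settings, so each report can later be tied back to the profile used.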

URL job model

A job is a named set of URLs run together in a queue.

The job should have a description, an owner, and other 'meta' data.

Programming error in admin UI

I think this has something to do with my current database - maybe needs that migration script @ecumike wrote?

Seeing this when editing some URLs:

Environment:


Request Method: GET
Request URL: http://127.0.0.1:8000/admin/report/url/217/change/

Django Version: 2.0.8
Python Version: 3.5.2
Installed Applications:
['django.contrib.admin',
 'django.contrib.auth',
 'django.contrib.contenttypes',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.staticfiles',
 'django.contrib.flatpages',
 'django.contrib.sites',
 'report',
 'inline_static',
 'django_extensions',
 'dbbackup',
 'debug_toolbar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
 'django.contrib.sessions.middleware.SessionMiddleware',
 'django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.auth.middleware.AuthenticationMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'django.middleware.clickjacking.XFrameOptionsMiddleware',
 'debug_toolbar.middleware.DebugToolbarMiddleware']


Template error:
In template /home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/contrib/admin/templates/admin/includes/fieldset.html, error at line 17
   column report_lighthouserun.seo_score does not exist
LINE 1: ...ps", "report_lighthouserun"."redirect_wasted_ms", "report_li...
                                                             ^

   7 :         <div class="form-row{% if line.fields|length_is:'1' and line.errors %} errors{% endif %}{% if not line.has_visible_field %} hidden{% endif %}{% for field in line %}{% if field.field.name %} field-{{ field.field.name }}{% endif %}{% endfor %}">
   8 :             {% if line.fields|length_is:'1' %}{{ line.errors }}{% endif %}
   9 :             {% for field in line %}
   10 :                 <div{% if not line.fields|length_is:'1' %} class="field-box{% if field.field.name %} field-{{ field.field.name }}{% endif %}{% if not field.is_readonly and field.errors %} errors{% endif %}{% if field.field.is_hidden %} hidden{% endif %}"{% elif field.is_checkbox %} class="checkbox-row"{% endif %}>
   11 :                     {% if not line.fields|length_is:'1' and not field.is_readonly %}{{ field.errors }}{% endif %}
   12 :                     {% if field.is_checkbox %}
   13 :                         {{ field.field }}{{ field.label_tag }}
   14 :                     {% else %}
   15 :                         {{ field.label_tag }}
   16 :                         {% if field.is_readonly %}
   17 :                             <div class="readonly"> {{ field.contents }} </div>
   18 :                         {% else %}
   19 :                             {{ field.field }}
   20 :                         {% endif %}
   21 :                     {% endif %}
   22 :                     {% if field.field.help_text %}
   23 :                         <div class="help">{{ field.field.help_text|safe }}</div>
   24 :                     {% endif %}
   25 :                 </div>
   26 :             {% endfor %}
   27 :         </div>


Traceback:

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/models/fields/related_descriptors.py" in __get__
  158.             rel_obj = self.field.get_cached_value(instance)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/models/fields/mixins.py" in get_cached_value
  13.             return instance._state.fields_cache[cache_name]

During handling of the above exception ('lighthouse_run'), another exception occurred:

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/backends/utils.py" in _execute
  85.                 return self.cursor.execute(sql, params)

The above exception (column report_lighthouserun.seo_score does not exist
LINE 1: ...ps", "report_lighthouserun"."redirect_wasted_ms", "report_li...
                                                             ^
) was the direct cause of the following exception:

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/core/handlers/exception.py" in inner
  35.             response = get_response(request)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
  158.                 response = self.process_exception_by_middleware(e, request)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/core/handlers/base.py" in _get_response
  156.                 response = response.render()

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/response.py" in render
  106.             self.content = self.rendered_content

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/response.py" in rendered_content
  83.         content = template.render(context, self._request)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/backends/django.py" in render
  61.             return self.template.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  175.                     return self._render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/test/utils.py" in instrumented_test_render
  98.     return self.nodelist.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  943.                 bit = node.render_annotated(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/loader_tags.py" in render
  155.             return compiled_parent._render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/test/utils.py" in instrumented_test_render
  98.     return self.nodelist.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  943.                 bit = node.render_annotated(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/loader_tags.py" in render
  155.             return compiled_parent._render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/test/utils.py" in instrumented_test_render
  98.     return self.nodelist.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  943.                 bit = node.render_annotated(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/loader_tags.py" in render
  67.                 result = block.nodelist.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  943.                 bit = node.render_annotated(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/loader_tags.py" in render
  67.                 result = block.nodelist.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  943.                 bit = node.render_annotated(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/defaulttags.py" in render
  211.                     nodelist.append(node.render_annotated(context))

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/loader_tags.py" in render
  194.                 return template.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  177.                 return self._render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/test/utils.py" in instrumented_test_render
  98.     return self.nodelist.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  943.                 bit = node.render_annotated(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/defaulttags.py" in render
  211.                     nodelist.append(node.render_annotated(context))

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/defaulttags.py" in render
  211.                     nodelist.append(node.render_annotated(context))

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/defaulttags.py" in render
  314.                 return nodelist.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  943.                 bit = node.render_annotated(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/defaulttags.py" in render
  314.                 return nodelist.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  943.                 bit = node.render_annotated(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render_annotated
  910.             return self.render(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in render
  993.             output = self.filter_expression.resolve(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in resolve
  676.                 obj = self.var.resolve(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in resolve
  802.             value = self._resolve_lookup(context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/template/base.py" in _resolve_lookup
  864.                             current = current()

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/contrib/admin/helpers.py" in contents
  201.             f, attr, value = lookup_field(field, obj, model_admin)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/contrib/admin/utils.py" in lookup_field
  295.         value = getattr(obj, name)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/models/fields/related_descriptors.py" in __get__
  164.                 rel_obj = self.get_object(instance)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/models/fields/related_descriptors.py" in get_object
  139.         return qs.get(self.field.get_reverse_related_filter(instance))

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/models/query.py" in get
  397.         num = len(clone)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/models/query.py" in __len__
  254.         self._fetch_all()

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/models/query.py" in _fetch_all
  1179.             self._result_cache = list(self._iterable_class(self))

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/models/query.py" in __iter__
  53.         results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/models/sql/compiler.py" in execute_sql
  1068.             cursor.execute(sql, params)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/debug_toolbar/panels/sql/tracking.py" in execute
  164.         return self._record(self.cursor.execute, sql, params)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/debug_toolbar/panels/sql/tracking.py" in _record
  106.             return method(sql, params)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/backends/utils.py" in execute
  100.             return super().execute(sql, params)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/backends/utils.py" in execute
  68.         return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/backends/utils.py" in _execute_with_wrappers
  77.         return executor(sql, params, many, context)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/backends/utils.py" in _execute
  85.                 return self.cursor.execute(sql, params)

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/utils.py" in __exit__
  89.                 raise dj_exc_value.with_traceback(traceback) from exc_value

File "/home/ddahl/github.com/env-page-lab/lib/python3.5/site-packages/django/db/backends/utils.py" in _execute
  85.                 return self.cursor.execute(sql, params)

Exception Type: ProgrammingError at /admin/report/url/217/change/
Exception Value: column report_lighthouserun.seo_score does not exist
LINE 1: ...ps", "report_lighthouserun"."redirect_wasted_ms", "report_li...
                                                             ^

Additional pre-defined filter types

Description:
Our stakeholders need to be able to see how different sets of pages are performing over time. With our latest updates, PageLab will show you a report based on a query (a simple substring match with a comma-separated list). That's very good, but we also need to be able to take a report (either based on a substring query, or starting from the full set of URLs), and then filter it to only show the pages for a specific business unit or locale.

Obviously, how one sets up pages for various business units and locales, and how you can determine which pages belong to each set, will be different for each company.

I think what we'll need to do is add some general functionality to the open-source Page Lab where it can support predefined filter types (i.e. "select a business unit", "select a geography"). Then we'll need to provide a way for Page Lab developers or admins to define their specific filter types, and hook into the business logic and back-end queries they need to implement those filters.

Acceptance Criteria:

  1. Given a set of web page URLs (which may be "all URLs" or the result of a substring match), and one or more additional filters such as business unit or locale, return a report (a web page) containing:
  • The dashboard summary view with pie charts and averages, but only containing data from the set of URLs specified, and
  • The report card tiles for each URL in the set, to drill down to see more detail.
  2. Sanitize all data on input and escape the data on output, for security purposes.

  3. I need to be able to share a report I've crafted with someone else who's not a techie. Sharing a copy/pasted URL is easy. Saving named reports for everyone to re-use is another way to do this, in which case, I want to be able to share a link to a named report.

  4. If I send someone a link to a report I set up, they're probably going to want to continue playing with the filtering options from there.

Add a column to data table for test profile config used for each run

Title says it all. Allows you to see, within the run history table, which profile was used for each report.

This gets messy when you start talking about the "average" data shown on the report detail page. That will have to be split up into a tabbed display of averages per config profile.

View analytics reports for subset of all pages (ability to filter).

Currently, analytics reports aggregate results from all pages. In addition to this existing functionality, users may want a way to specify a subset they would like to analyze.

There could be a variety of ways to specify the desired subset of pages. For example, perhaps users would like to filter down to URLs containing a given substring, or perhaps they would like to input a comma-separated list of URLs they know in advance.

The user could then view the dashboard, the report-card tiles, and the detailed page summaries using that filtered list.

Create admin home

Create admin home page.
Use access decorator as usual on the view.

Add visual cue of what the actual final URL is for a given URL

Referencing dependency of #38 and related #21 .

Think of a way to gracefully show the "final URL" in the report historical data table view for each run.
Thoughts:

  • We don't need it on every row.
  • Maybe just add a "notes" column or an icon indicating that the run had a different URL than the previous run. This would effectively show at which point/run the URL changed.
  • Maybe add a 'notes' section that simply lists the unique final URLs and the run date on which each first appeared. This would effectively give you a legend of the run(s) where the final URL changed.

Add Lighthouse localStorage length 'collector'

Add a collector for localStorage length at the end of each run. This allows reporting on which pages store data in localStorage and how much (localStorage access is synchronous, so large data hurts performance). It also gives visibility into which pages might be better off using IndexedDB.

Teach users that not all pages are monitored, and how to request that a new page be monitored

We need to be able to see how different sets of pages are performing over time. PageLab can scan and keep data on thousands of web pages, but it can't scan the entire Internet. We need to have a simple way for end users to understand that some URLs are not monitored, and to learn how to request that new pages be monitored. This user feedback will help each Page Lab owner monitor the most important web pages.

Acceptance Criteria:

  1. Add documentation explaining how to request that a new page be added to the list of pages my Page Lab instance is monitoring.
  2. If I search for a specific web page URL and it's not currently being monitored, or if any of my searches come back with no results, show a message explaining (briefly) that the page(s) are not being monitored, and explain how I can request that they be added.
  3. Sanitize all data on input and HTML-escape the data on output, for security purposes.

Linkable Lighthouse viewer reports

I'd like to be able to send a link to a lighthouse viewer/report to someone to investigate and make improvements on the page based on the report.

Can you add a way to include the ID of the run in the lighthouse-viewer URL, so I can send it to someone and it will load that particular run's report?

Add final/destination URL field to LighthouseRun object

The scenario is that a URL might be submitted for tracking, but is actually not the final destination URL (i.e., it's a redirect).

Another case is that a page might get moved or sunset and redirected to a new URL.

Storing the "final" URL for each LighthouseRun makes it possible to see that a drastic change in performance or audit metrics may be because a different page is actually being tested for that URL than in previous runs.

This is minimal effort and is data already provided in Lighthouse report data.
Add a field to LighthouseRun obj.
In the method where we process posted LighthouseRawdata, like we do other bits, grab the URL from the data object and add it to the LHrun record.

A follow-on UI piece will be covered in a separate issue.

Docker images

Create two Docker images that make it very simple to test out Page Lab.

Django README edits

One can also use virtualenv to install all of the Python bits, in which case the virtualenv will know that python means python3 by default.

Adjust how average data is displayed for a single page

We need to be able to see how different sets of pages are performing over time. Currently, when Page Lab shows the average data for a specific web page, it's averaging all of the results it has collected since the time the page was registered with the system. The longer a page has been tracked, the harder it will be to see the effect of changes made to the page and get feedback on how they positively or negatively affected performance.

I recommend making these updates:
When viewing the details for one page's performance data, use a 30-day trailing average for the performance score as the basic display.
Also show these additional calculated metrics:

  • 10th percentile and 90th percentile performance scores over the last 30 days.
  • 1-week trailing average for the performance score (to get even faster feedback on changes).
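The recommended metrics can be computed with the standard library alone. A sketch under assumed names (`trailing_stats` and the `(timestamp, score)` input shape are hypothetical, not the app's actual API):

```python
from datetime import datetime, timedelta
from statistics import mean, quantiles

def trailing_stats(runs, days=30, now=None):
    """runs: iterable of (timestamp, performance_score) pairs.
    Returns the trailing average plus 10th/90th percentile scores,
    or None if fewer than two runs fall inside the window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    scores = sorted(score for ts, score in runs if ts >= cutoff)
    if len(scores) < 2:
        return None
    deciles = quantiles(scores, n=10, method="inclusive")  # 9 cut points
    return {"avg": mean(scores), "p10": deciles[0], "p90": deciles[-1]}

now = datetime(2024, 1, 31)
runs = [(now - timedelta(days=i), s)
        for i, s in enumerate([10, 20, 30, 40, 50, 60, 70, 80, 90, 100])]
stats = trailing_stats(runs, days=30, now=now)
```

Passing `days=7` to the same function gives the 1-week trailing average for faster feedback on changes.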

Show nerdy stats

At bottom of dashboard? nerd out and show some stats like:

  • Num of URLs
  • Num of LighthouseRuns
  • Num of invalid/valid runs
  • Num of user-timing measures
  • Avg # of runs per URL

OCD: Rainy day: Change inline-block to floats

Browsers render line-returns between inline elements (including inline-block) as a space. This kills centering of stacked elements and throws them off.
Instead of dib, use cf on the parent and db on the elements. It should be a straight swap like that, but visually verify that blocks fold as expected (they should).

Add 'invalid_run' column on lighthouse run history data table

On the URL report detail page, in the Run history table at the bottom, add a column for "Invalid run" and use a red icon-only "close" icon to denote that a run was "invalid".

This way users can easily understand whether a poor or abnormal run was simply invalid, and it enables them to spot trends in invalid runs.

Add URL admin

Create URL admin page, as typical, where you can:

  • Add a new URL
  • Edit a URL
  • De-activate a URL
  • Whatever else makes sense related to URL model.

Enable test runners to post the ID of the config settings profile used

Enable test runners to post the ID of the config settings profile used for each URL when posting the Lighthouse report data back to the Django app. This allows the relationship to be created between LighthouseRun -> LighthouseConfig, so you can see what settings were used for each run report.

Add datatable of run history showing user-timing measures

On the report detail page, create a run history datatable showing user-timing measures for each run.
This would be identical to the existing KPI run history table, but showing all user-timing measures for each run.

Column headings and row columns have to be dynamically created (using the unique set of measure names across all of the URL's runs) because timings differ per URL, and even per run.
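Deriving that dynamic column set is a one-liner; a sketch with hypothetical names and input shape:

```python
def timing_columns(runs):
    """Union of user-timing measure names across all of a URL's runs,
    sorted so the datatable's columns are stable between renders.
    Runs missing a measure would render an empty cell for that column."""
    return sorted({name for run in runs for name in run["measures"]})

runs = [
    {"measures": {"header-loaded": 120, "hero-visible": 300}},
    {"measures": {"hero-visible": 280, "search-ready": 450}},
]
columns = timing_columns(runs)
```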

pagelab.js node test suite

We need a test suite to test all worker operations and worker / cluster communications:

  1. Mock the calls to Django
  2. Mock Lighthouse or spin up a test server and very light pages to test with.

Add "URL is inactive" status/cue on URL detail report

URLs have a flag for "inactive".
Inactive URLs do not get included in the automated test run queue.

Use case: A user might want to test and track a URL for a certain period; once they identify whether it's 'good' (or, if it's not, know what to fix), there's no need to continue testing the page unless there's some major change to it. They do not want to delete the URL and its historical data from the app, because when a major change is made and then tested, they need baseline metrics to compare the change's performance and audit report against. That's half the value of the app and why people love it.

So we need a way in the report detail page for the user to easily see that the URL is inactive and currently not included in the automated daily tests. Maybe a toggle, or a "build passing" style switch that shows either red or green: green means it's active, red means inactive. Not a fan of just showing "inactive", because then when it's not there, there's some doubt as to "is it active, or is this thing broken and not showing me the status?".

Custom defined KPI targets

Currently, from the original implementation, there are hardcoded 'realistic' KPI targets in the dashboard view reports_dashboard:

        'fcp': {
            'fast': 1.6,
            'slow': 2.4
        },
        'fmp': {
            'fast': 2,
            'slow': 3
        },
        'tti': {
            'fast': 3,
            'slow': 4.5
        },

These drive the 3 pie charts on the dashboard as to what %s of URLs are in each of these ("average" is the # range between "fast" and "slow").

Ideally these should be a model, to allow each implementation to set its own targets; it would also allow the targets to be changed easily without having to redeploy your app.
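The bucketing those targets drive can be sketched as follows (the boundary handling — `<=` fast, `>=` slow — and the function name are assumptions, not the dashboard's exact code):

```python
# Hardcoded KPI targets, as in the reports_dashboard view (seconds).
KPI_TARGETS = {
    'fcp': {'fast': 1.6, 'slow': 2.4},
    'fmp': {'fast': 2,   'slow': 3},
    'tti': {'fast': 3,   'slow': 4.5},
}

def classify(metric, seconds):
    """Bucket a URL's metric value the way the dashboard pie charts do:
    'fast', 'slow', or 'average' for the range in between."""
    targets = KPI_TARGETS[metric]
    if seconds <= targets['fast']:
        return 'fast'
    if seconds >= targets['slow']:
        return 'slow'
    return 'average'
```

Moving KPI_TARGETS into a model row per metric would let each deployment tune these without a redeploy.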

Add statistics to admin home page

Add some metrics and data auditing to admin home page, ex:

  • Avg # of runs per URL?
  • Latest test run URL and timestamp.
  • # of URLs that have NO valid runs (this surfaces bad URLs).
  • # of URLs that have NO runs (shows URLs that are in the system but haven't been tested yet).
  • Latest URL and timestamp that was added to the app.
  • Throughput of queue and progress? Need to check with @daviddahl... I think this web service was set up already. It may not be committed yet, though. TBD.
  • Etc.

Enable a "default" field checkbox in test config profile model

Enable a "default" checkbox in test config profile model so only one is marked as the default config to use, and is what is used for each URL run unless otherwise specified by new feature allowing you to specific a config setting for a set of URLs run.
