catchpoint / webpagetest.visual-metrics
Calculate visual performance metrics from a video (Speed Index, Visual Complete, Incremental progress, etc.)
License: BSD 3-Clause "New" or "Revised" License
Right now, images are resized as part of processing. It would be useful to have an option that keeps full-resolution versions of the images.
Allow custom visual completeness thresholds to be defined for an important zone within the heat map.
(depends on #14)
It seems frame extraction and histogram generation are done for the entire video even though the start/end times specify a short interval. Is that necessary? Can we do the extraction/histogram just for the interval and save some processing time?
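One way the interval-only extraction could work (a sketch, not the project's current pipeline, and the helper name is illustrative): pass `-ss` before `-i` so ffmpeg seeks to the start point rather than decoding the whole file, and cap the decoded duration with `-t`.

```python
def build_extract_cmd(video, out_dir, start=None, end=None):
    """Build an ffmpeg command that extracts frames only for the
    interval of interest (times in seconds).

    Placing -ss before -i makes ffmpeg seek to the start point
    instead of decoding from the beginning; -t caps the decoded
    duration so nothing after `end` is processed.
    """
    cmd = ["ffmpeg", "-y"]
    if start is not None:
        cmd += ["-ss", str(start)]
    cmd += ["-i", video]
    if start is not None and end is not None:
        cmd += ["-t", str(end - start)]
    cmd += [out_dir + "/img-%06d.png"]
    return cmd
```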
In addition to the performance metrics, it would be helpful to have the visual progress times reported as well.
Hey Pat,
This happens rarely, maybe 1 or 2 times out of 30 tests: for some videos, the orange frame is picked up as the first visual change. It could be that I'm using the wrong input parameters, or it could be ffmpeg; I'm not sure.
I use a Docker container just to make it easy to run (with the absolute latest source of VisualMetrics).
I've uploaded the video here: https://www.dropbox.com/s/ktvc9ephtkm8lq3/18.mp4?dl=0
The video starts with orange, then goes white, and then loads the page. I run it like this:
docker run -v "$(pwd)":/tmp wikimedia/visualmetrics python visualmetrics.py --video /tmp/18.mp4 --orange -k --dir /tmp/images --force --white
And the First Visual Change is 17 because it picks up this frame as the first change.
As I said, this works almost all the time for me, but sometimes it happens.
Best
Peter
Hi Pat,
I have two questions about using --multiple to split a video. I can do a PR but wanted to check first what the correct behaviour is:
Using perceptual has a check that you really provide a video: https://github.com/WPO-Foundation/visualmetrics/blob/master/visualmetrics.py#L1734-L1738. But if you run --multiple, split the video into multiple runs, and then use Visual Metrics with the screenshots, that should still work for getting the perceptual Speed Index, right? So can I remove that check, or how should it be used?
Using --multiple always exits with 1 because ok is always set to false in https://github.com/WPO-Foundation/visualmetrics/blob/master/visualmetrics.py#L1795, and then we exit with that status in https://github.com/WPO-Foundation/visualmetrics/blob/master/visualmetrics.py#L1863-L1866. I can fix that, but I'm not sure how you want to handle it. Exiting with 0 on success would help me find when something is really wrong.
Best
Peter
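For the second point, one way the exit handling could look (a sketch, assuming the ok flag is meant to signal success; the function names are illustrative):

```python
import sys

def run(process_video):
    """Run the processing step and exit 0 only on success.

    process_video is any callable doing the real work; the ok flag
    flips to True only when it completes without raising, so callers
    can rely on a non-zero status meaning something really failed.
    """
    ok = False
    try:
        process_video()
        ok = True
    except Exception as err:
        sys.stderr.write("%s\n" % err)
    sys.exit(0 if ok else 1)
```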
Pat hi,
Thank you for this great project!
Disclaimer: I'm new to the project
When using the --start option, if the value after --start is not 0, the calculated Speed Index will always be 0.
Following is the command line executed:
python visualmetrics.py --video /video.mp4 --start 100 --full
Following is the output:
frame= 1 fps=0.0 q=-0.0 Lsize=N/A time=00:00:00.03 bitrate=N/A speed=0.435x
video:324kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
First Visual Change: 100
Last Visual Change: 100
Visually Complete: 100
Speed Index: 0
Visual Progress: 100=100%
I have tried it with multiple videos in different frame rates and sizes.
Is it a known issue?
Any other option to start the calculation from a different point in the video?
Thanks,
Guy
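For context, Speed Index is the area above the visual-progress curve, which is consistent with the symptom above: if the first recorded sample is already at 100% complete (as the `Visual Progress: 100=100%` output suggests), there is no area left to sum. A minimal sketch of the calculation (illustrative, not the project's exact code):

```python
def speed_index(progress):
    """progress: list of (time_ms, percent_complete), sorted by time.

    Speed Index is the area above the visual-progress curve: for each
    interval, add (1 - completeness) times the interval length. A
    single sample that is already 100% complete yields 0.
    """
    si = 0.0
    last_t, last_p = progress[0]
    for t, p in progress[1:]:
        si += (1.0 - last_p / 100.0) * (t - last_t)
        last_t, last_p = t, p
    return si
```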
Hi,
Recently I've been working on some performance measurement tasks and am leveraging your code for SI and PSI calculation in the Hasal project.
But we're a little confused about why the PSI initial value is the first frame's time instead of 0 (or its SSIM value, as SI does).
Is there any reason to use the frame's time as its initial value?
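For reference, a minimal sketch of how PSI could mirror SI, with the running SSIM initialized to 0 rather than the first frame's time (illustrative only, not the project's code):

```python
def perceptual_speed_index(frames):
    """frames: list of (time_ms, ssim_to_final), sorted by time.

    Like Speed Index, but visual completeness is measured as SSIM
    against the final frame. Initializing last_ssim to 0.0 keeps the
    two metrics analogous: a page that is blank at t=0 contributes
    full area until the first frame improves the SSIM.
    """
    psi = 0.0
    last_t, last_ssim = frames[0][0], 0.0
    for t, ssim in frames:
        psi += (1.0 - last_ssim) * (t - last_t)
        last_t, last_ssim = t, ssim
    return psi
```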
Would a pull request to port this script to python3 be accepted?
It may be possible to make this script work in both Python 2 and Python 3 by using the right __future__ imports.
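The usual pattern looks like this (a generic sketch; the `urllib` example is illustrative, not a claim about which modules the script actually uses):

```python
# These imports make most Python 2 code behave like Python 3:
# true division, print as a function, unicode string literals.
from __future__ import absolute_import, division, print_function, unicode_literals

import sys

# Remaining differences can be bridged with conditional imports,
# e.g. stdlib modules that were renamed between versions:
if sys.version_info[0] >= 3:
    from urllib.parse import quote
else:
    from urllib import quote

print(quote("a b"))  # same behaviour on both interpreters
```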
Merge support for Heat Map calculations based on my talk here:
https://speakerdeck.com/sergeychernyshev/using-heat-maps-to-improve-performance-metrics
and some prototype code in this project:
https://github.com/sergeychernyshev/speed-selector
We can have the heat map defined as a JSON structure, which can be seen as the "selector-boundaries" custom metric on this test:
http://www.webpagetest.org/result/160411_T1_16ZW/1/details/
(gathered using this custom metric script)
Support for custom thresholds within the heat map is in #15.
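As a strawman for discussion, the definition could look something like this: named zones with pixel boundaries and relative weights, rolled up into a weighted completeness score. The shape is a hypothetical sketch, not the actual speed-selector format.

```python
# Hypothetical heat-map definition (the structure is an assumption,
# for illustration only): zones weight parts of the viewport
# differently when computing visual completeness.
heat_map = {
    "viewport": {"width": 1366, "height": 768},
    "zones": [
        {"name": "hero",   "x": 0, "y": 0,   "width": 1366, "height": 400, "weight": 3},
        {"name": "footer", "x": 0, "y": 400, "width": 1366, "height": 368, "weight": 1},
    ],
}

def weighted_completeness(zone_progress, heat_map):
    """zone_progress maps zone name -> percent visually complete;
    returns the weight-averaged completeness across all zones."""
    total = sum(z["weight"] for z in heat_map["zones"])
    weighted = sum(zone_progress[z["name"]] * z["weight"]
                   for z in heat_map["zones"])
    return weighted / float(total)
```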
Hi Patrick,
I've got a problem with testing the Arabic version of Wikipedia. The hero elements can't handle the Arabic characters.
In the Visual Metrics log it looks like this:
14:57:17.282 - Calculating hero element times
14:57:17.282 - 'ascii' codec can't decode byte 0xd8 in position 96: ordinal not in range(128)
Traceback (most recent call last):
File "/usr/src/app/node_modules/browsertime/vendor/visualmetrics.py", line 1880, in main
options.herodata)
File "/usr/src/app/node_modules/browsertime/vendor/visualmetrics.py", line 1342, in calculate_visual_metrics
'value': calculate_hero_time(progress, dirs, hero, viewport)})
File "/usr/src/app/node_modules/browsertime/vendor/visualmetrics.py", line 1543, in calculate_hero_time
logging.debug('Target image for hero %s is %s' % (hero['name'], target_frame))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd8 in position 96: ordinal not in range(128)
I've tested directly on WebPageTest and get the same error there:
https://webpagetest.org/result/191211_J8_e69430ffc54e243dae4fa05d5de4b894/
Best
Peter
Hi Pat,
We have a problem with finding the correct last visual change when we test the mobile version of our site, running Chrome in emulated mode (or with the viewport set to mobile phone size). The last visual change is always a lot higher, but visually I can't see any change.
Here are two example videos:
https://www.dropbox.com/s/i3ch0ic9jnit28j/en.m.wikipedia.org-wiki-Facebook.mp4?dl=0
https://www.dropbox.com/s/vfya08i7t46d6gv/en.m.mediawiki.org-wiki-Download.mp4?dl=0
If I run latest master:
../visualmetrics.py -i en.m.mediawiki.org-wiki-Download.mp4 --orange --force --viewport --dir a
I get a Last Visual Change of 4817, but it looks like it should be something like 1100 when I look at the video and the screenshots.
If I increase the 10% per-pixel difference threshold a little (to 14%), it looks better, but is that the way to go? I have a feeling the problem is worse for us when there's more plain text in the screenshots.
Best
Peter
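To illustrate what moving the threshold from 10% to 14% changes, here is a simplified model of per-pixel comparison (a sketch for discussion, not the project's actual comparison code): subpixel-antialiased text can shift many pixels by just over 10% of the scale between frames, which a 14% threshold would ignore.

```python
def frames_differ(a, b, pixel_threshold=0.10):
    """Simplified frame comparison over equal-length lists of
    grayscale values in 0..255.

    A pixel counts as changed when it moves by more than
    pixel_threshold of the full scale; the frames differ when any
    pixel changed. Raising pixel_threshold makes small shifts
    (e.g. text re-rasterization) no longer register as a change.
    """
    limit = pixel_threshold * 255
    return any(abs(pa - pb) > limit for pa, pb in zip(a, b))
```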
python visualmetrics.py --check
Traceback (most recent call last):
File "visualmetrics.py", line 1907, in <module>
main()
File "visualmetrics.py", line 1794, in main
elif options.verbose >= 4:
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
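This looks like --verbose defaulting to None when the flag is omitted: argparse count actions default to None unless told otherwise, so the `>=` comparison fails. A sketch of the likely fix (the option name matches the traceback; the rest is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# With default=0, `options.verbose >= 4` is a valid int comparison
# even when -v is never passed (count actions default to None).
parser.add_argument('-v', '--verbose', action='count', default=0)

options = parser.parse_args([])   # e.g. running with just --check
print(options.verbose >= 4)       # False, instead of a TypeError
```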
Add support for taking an optional dev tools timeline and synchronizing the start time of the video relative to the first request
The initial SSIM should be zero, and the PSI calculation should use last_ssim like SI does. I filed a pull request to update the code.
Clark
Instart Logic
When I execute the test I get:
Traceback (most recent call last):
File "visualmetrics.py", line 971, in <module>
main()
File "visualmetrics.py", line 853, in main
import argparse
ImportError: No module named argparse
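argparse entered the standard library in Python 2.7; on 2.6 the same module is available as a backport on PyPI (`pip install argparse`). A guarded import that fails with a clearer message could look like this (a sketch, not the project's code):

```python
try:
    import argparse  # standard library since Python 2.7
except ImportError:
    # Python 2.6 ships without argparse; the PyPI backport provides
    # an identical module.
    raise SystemExit("Python 2.7+ is required "
                     "(or install the backport: pip install argparse)")

# From here on, argparse is guaranteed to be available.
parser = argparse.ArgumentParser(description="visual metrics")
```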
Hi.
WDYT about adding an option (via a command-line parameter) to get the results back as JSON?
It could be useful for building tools around visualmetrics.
Thanks.
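A minimal sketch of what a hypothetical --json flag could route through (the function and flag names are illustrative, not part of the current script):

```python
import json

def format_results(metrics, as_json=False):
    """metrics maps metric name -> value, e.g. {"Speed Index": 832}.

    With as_json=True the caller gets a machine-readable blob;
    otherwise the current line-per-metric text output is produced.
    """
    if as_json:
        return json.dumps(metrics, sort_keys=True)
    return "\n".join("{0}: {1}".format(k, v)
                     for k, v in sorted(metrics.items()))
```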