idpf / epubtestweb
EPUB testsuite website: http://epubtest.org
user feedback:
Wouldn't it make sense to have the "overall/required/optional" values displayed on the "view evaluation" page?
Request from Ric Wright to make all reading system fields required
Provide a list of book titles and download links. It should be auto-generated from the book data import. This will be useful in the following areas:
Table sorter may not be accessible, or support ARIA roles to provide feedback to screen readers. http://stackoverflow.com/questions/9704103/jquery-tablesorter-508-compliance
Request:
do automatic saves every so often and/or
have a simple “save and continue affordance” (e.g. Ctrl/Cmd-S)
Some type of indication of # of items selected, for example change label on compare button to say "compare 120 of 362 values".
“Publish Notes” option: should be possible to check for whole test category, especially when the note is “file could not be tested”.
It is not possible to send a link of test results once a comparison is created. This is helpful when someone asks "Which reading systems support feature X?" (The result would be like a query on caniuse, which we've been trying to mimic, as in http://caniuse.com/#search=border-radius)
This feature is experimental at the moment. Start putting your requirements in the comments!
Enable admins of epubtest.org to group multiple versions of the same reading system together, ideally highlighting the most recent version in some way (via a collapsable or descending priority list, or other solution)
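A minimal sketch of the grouping logic, assuming reading systems are represented as simple name/version records (the field names and `group_by_system` helper are hypothetical, and a real implementation would need proper version comparison rather than string sorting):

```python
from itertools import groupby

def group_by_system(systems):
    """Group reading-system records by name, newest version first
    within each group (string sort; fine for single-digit versions)."""
    systems = sorted(systems, key=lambda s: s["name"])
    grouped = {}
    for name, items in groupby(systems, key=lambda s: s["name"]):
        grouped[name] = sorted(items, key=lambda s: s["version"], reverse=True)
    return grouped

rows = [
    {"name": "Readium", "version": "2.1"},
    {"name": "Readium", "version": "2.9"},
    {"name": "iBooks", "version": "4.0"},
]
grouped = group_by_system(rows)
# grouped["Readium"][0] is the most recent Readium version
```

The grid could then render one collapsible row per group, with `grouped[name][0]` shown by default and older versions behind the expander.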
Link colors are not AAA. Luminosity Contrast Ratio for #2a6496 and #FFFFFF is 6.25:1 (Threshold: greater than 7:1 for AAA, 4.5:1 for AA).
user feedback
"In the main menu drop-down from "EPUB 3 Support Grid," might we want to change "View" to "Current results" to clarify? "
Ability to select all "required" or all "optional" boxes with a simple click on this page: http://epubtest.org/compare/
Ideally, sub-items should have 2 or more items to be worthy of a sub-bullet.
It is not very clear which documents to download for update. There are links to the tests, but not to the files to download. Indicate which files they come from. It would be helpful if the file name were included along with the test name when the list of tests is provided.
To avoid clutter, have a "featured" vs "archived" setting for each reading system, where "featured" systems would appear on the main grid and "archived" systems would appear on another less-prominent page.
When viewing comparison results, column heads (test names) and row titles (device names) should be constantly visible as the user scrolls. Otherwise, all that is visible are boxes that say “supported” and “not supported”.
Indicate when the site was last updated. Note that this should reflect changes to any of the following:
Right now there is no easy way to calculate this.
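One way to compute this would be to take the most recent modification timestamp across every tracked table; a hedged sketch using plain datetimes (the `site_last_updated` helper and the particular table list are assumptions, not existing code):

```python
from datetime import datetime

def site_last_updated(*timestamp_lists):
    """Return the most recent timestamp across all tracked tables
    (e.g. evaluations, reading systems, testsuite imports)."""
    all_ts = [ts for lst in timestamp_lists for ts in lst]
    return max(all_ts) if all_ts else None

evaluations = [datetime(2015, 3, 1), datetime(2015, 4, 2)]
systems = [datetime(2015, 2, 10)]
imports = [datetime(2015, 1, 5)]
latest = site_last_updated(evaluations, systems, imports)
```

In Django terms this would mean each of those models carrying an auto-updated modification field so the per-table lists are a single aggregate query each.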
The text in the third column of the compare results table ( http://epubtest.org/compare ) seems to be hiding under the second column on browsers running WebKit. It looks fine in Firefox.
When launching http://epubtest.org/export, the XML file only contains the last evaluation (not all of them).
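The fix presumably amounts to iterating over every evaluation row when building the export rather than serializing only the last one; a minimal standalone sketch (the `export_evaluations` function and record shape are hypothetical, not the site's actual export code):

```python
import xml.etree.ElementTree as ET

def export_evaluations(evaluations):
    """Serialize every evaluation into one XML document."""
    root = ET.Element("evaluations")
    for ev in evaluations:  # iterate all rows, not just the last one
        node = ET.SubElement(root, "evaluation", id=str(ev["id"]))
        node.text = ev["reading_system"]
    return ET.tostring(root, encoding="unicode")

xml = export_evaluations([
    {"id": 1, "reading_system": "Readium"},
    {"id": 2, "reading_system": "iBooks"},
])
```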
User feedback:
"Consider using a role attribute on the navigation bar text, so that screen readers are able to know that it is not simple text."
Add a table of contents for this page (http://epubtest.org/testsuite/#instructions-for-accessibility-evaluation) so it is easy to jump to the various sections of the page, similar to http://www.daisy.org/joining.
Perhaps each tool (row) on this page (http://epubtest.org/results/) could be numbered.
e.g. switch-010 and switch-020: we should never have the case where 020 passes and 010 fails.
This would require some machine-readable metadata on the tests themselves, as well as webform validation.
sorting arrows and category column lines are too light right now
the details of this potential feature are under discussion
Blues on sidebar are not high enough contrast. Luminosity Contrast Ratio for #0099cc and #caf2ff is 2.75:1 (Threshold: greater than 7:1 for AAA, 4.5:1 for AA).
Amend instructions, e.g. "Select one or more tests from the list and click 'Compare' to see a comparison of all reading systems' performances for a given set of EPUB 3.0 features."
Running list of testsuite/website issues for the upcoming update
Both 1. adding a new reading system and 2. saving an evaluation are time-consuming operations due to recursive DB calls. Investigate optimization, e.g. something like django-treebeard for improving database operations for tree structures.
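The reason django-treebeard helps here is its materialized-path strategy: each node stores its full path, so a whole subtree is fetched with one prefix query instead of one recursive query per level. A pure-Python sketch of the idea (the sample paths and `descendants` helper are illustrative, not the site's schema):

```python
# Materialized-path sketch: each key encodes the node's full position.
nodes = {
    "001":       "Content Documents",
    "001001":    "XHTML",
    "001001001": "switch-010",
    "002":       "Media Overlays",
}

def descendants(path):
    """One pass over the paths replaces N recursive child lookups
    (in SQL this becomes a single LIKE 'path%' query)."""
    return [p for p in nodes if p.startswith(path) and p != path]
```

With this layout, saving an evaluation can touch the whole category subtree in a constant number of queries regardless of depth.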
This should probably be blank
Pull in the latest changes from the epub-testsuite. Expected to be available at the end of March 2015.
or perhaps as well as.
"Would it be possible for you to add a comments field with each test? It’s important to have notes on fails; as testers we want to let you know if a test fails in a unique way, and provide repro steps."
Feedback from user:
"On the form, the checkboxes for 'supports Braille output' and 'supports screen reader output' may be placed before the labels (on the left side). It is slightly confusing when reading with JAWS."
It's not working with Django 1.9, but a fix is expected soon:
jazzband/django-analytical#72
we can re-enable it when it's working.
Needs further investigation. Here is the initial report and follow-up conversation (pasted from a shared google doc):
RKW:
"When I do see my projects there is an oddity. I see that I started my project and just tested one test, so it says 0.4% complete :-). But then there are additional, subsequent entries marked 'internal' that show progress as 0% even though my one test is still there."
MDM:
"Did you add more than one evaluation? That would cause there to be multiple entries in a list."
RKW:
"No, these were updates to the same evaluation"
MDM:
"Can you send me a recipe to repeat this behavior, starting with adding a new reading system?"
Feature request:
"On Reading Systems that support JS and spine-level scripting a significantly higher level of automation should be possible. On Reading Systems that further support remote data access, automatically recording results should be possible. We should take advantage of these capabilities to increase ease of performing tests as well as to indirectly encourage highly capable Reading Systems"
Note that this feature affects both the results collecting (applicable here) and the testsuite materials themselves (located in another repo).
Even after upgrading Python to 2.7.5, the site is still running on 2.6.9. Might need to rebuild mod_wsgi. Using a float-to-decimal workaround until then.
scenario:
In the database, flags are set on tests, not results. If we move them to results, then they should persist across testsuite updates.
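A sketch of why moving flags helps, assuming results can reference tests by a stable identifier such as the test name (the data shapes and `reimport_tests` function are hypothetical): test rows are dropped and recreated on each testsuite import, so anything keyed to a test row's id is lost, while result rows survive.

```python
# Test rows are rebuilt wholesale on every testsuite import.
tests = {1: "switch-010"}

# Flags stored on the result, keyed by (evaluation id, test name),
# never reference a test row id directly.
results = {(42, "switch-010"): {"flagged": True}}

def reimport_tests():
    """Simulate a testsuite update: same tests, brand-new row ids."""
    return {7: "switch-010"}

tests = reimport_tests()
# The flag is still reachable because it never used a test row id.
still_flagged = results[(42, "switch-010")]["flagged"]
```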
If an admin edits the info of a reading system they don't own, they are set as the owner. The owner should not change in this case.