axisgroup / evaluation-toolkit
WIP that captures Axis Group's framework for evaluating dashboards during quick design sprints
License: MIT License
Replace icons used for best practices
Not everybody shares the same passion for cakes as you @manasvil
Just in general, it'd be nice for some things to be linked, like apps, websites, methodologies, articles, and other authorities that can back up the claims and lend multiple perspectives to the issues covered here.
otherwise I won't know which file(s) to go to next
Update section: https://github.com/axisgroup/evaluation-toolkit/blob/master/3.Plan-the-test/README.md
Use: Writing the right tasks https://projects.invisionapp.com/d/main#/projects/boards/4517347/137764798
http://design.canonical.com/2013/08/usability-testing-how-do-we-design-effective-tasks/
and this summary image https://github.com/axisgroup/evaluation-toolkit/blob/master/Assets/images/task-types.png
Add links to folders such as Setting the stage and Determine research questions, as well as a description of what each folder encapsulates.
E.g., effectiveness can be measured as the % of people who completed assigned tasks across multiple usability tests, or as Likert-scale ratings from a survey
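A minimal sketch of how those two measures could be computed, using made-up results (the completion flags and ratings below are hypothetical, not from an actual test):

```python
# Hypothetical results from one usability test: did each
# participant complete the assigned task?
completions = [True, True, False, True, False]

# Effectiveness as the % of participants who completed the task
effectiveness = 100 * sum(completions) / len(completions)
print(f"Task completion rate: {effectiveness:.0f}%")  # 60%

# Likert-scale ratings (1-5) from a post-test survey question
ratings = [4, 5, 3, 4, 2]
mean_rating = sum(ratings) / len(ratings)
print(f"Mean Likert rating: {mean_rating:.1f}")  # 3.6
```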
Anyone have any recommendations on how to make it non-redundant?
Include cool icons, references to related articles, make sentences crisper.
Content should be the nav, but have nav as the exit strategy
The challenges document is currently in bullet form. It needs sewing up into prose, plus additional issues added.
Write down risks of evaluation
How UCD is based on three pillars
In planning the test, lead with an intro to the Research Landscape: how you can have attitudinal vs. behavioral measurements, plus qualitative and quantitative ones
Then cover what people typically do for data viz, and the pros and cons of those approaches
Then, the final section should be "Putting a stake in the ground"
Clean up based on -
http://www.webdesignerdepot.com/2015/02/how-to-test-the-usability-of-prototypes-like-a-pro/
and Charline Poirier's article https://design.canonical.com/2012/03/about-usability-testing-recruiting/
Too quote-y right now
Questionnaire can be found here: https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html
Add attribution at the end
This questionnaire is based on the System Usability Scale (SUS), which was developed by John Brooke while working at Digital Equipment Corporation. © Digital Equipment Corporation, 1986.
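For reference, SUS has a standard scoring procedure: each of the ten items is rated 1-5; odd-numbered (positively worded) items contribute (rating − 1), even-numbered (negatively worded) items contribute (5 − rating), and the sum is multiplied by 2.5 to give a 0-100 score. A sketch:

```python
def sus_score(responses):
    """Score one SUS questionnaire.

    `responses` is a list of ten ratings (1-5) in item order.
    Odd-numbered items (1, 3, 5, ...) contribute (rating - 1);
    even-numbered items contribute (5 - rating). The total is
    scaled by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Strongly agree on positive items, strongly disagree on
# negative ones -> best possible score
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
# Neutral (3) on every item -> midpoint
print(sus_score([3] * 10))  # 50.0
```

Note that SUS yields a single overall score; it is not diagnostic of individual usability issues.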
The method template should be as follows:
Executive Summary
What
When
Why
How
Method in Action
References
For each section, add a visual example of a best practice and a visual example of its opposite, for a this-vs-that effect
https://github.com/axisgroup/evaluation-toolkit/tree/master/1.%20Following%20best%20practices
We can use a custom dashboard we've designed and try the expert review/heuristic evaluation against it, to provide more specifics around how this evaluation would work in context of data viz.
Start with Axis Style Guide
Add this snippet to the know what to measure section
https://github.com/axisgroup/evaluation-toolkit/blob/master/Assets/images/style-of-data.png
Make it easier for folks to scan the list, then dive into the section/best practice that they need to reference
The Information Discrimination section looks very awkward-
https://github.com/axisgroup/evaluation-toolkit/tree/master/1.Following-best-practices
The word existent on the data set familiarity is misspelled. Does anyone have the .docx file?
Executive Summary (if applicable)
What
When
Why
How
Method in Action
Resources
References
Such as ambiguous, frustrating
Agreement about what counts as a heuristic- In most UX scenarios, Nielsen and Norman's heuristics are used as an accepted usability standard. In our case, though, we have a list of info vis best practices which, while generally agreed upon within the team, may not be as comprehensive as it could be. For instance, during the demo test there was a question, "I cannot find details about my target for this month, but how important is it really to know that number?", and the expert did not know how to categorize the issue. One approach is to capture such questions as miscellaneous. Another would be to increase the number of best practices and classify them under the useful, usable, desirable framework.
Setting expectations with the expert- The expert felt obliged to find an issue in violation of every best practice, even though the best-practices section is just a guide for categorizing issues and for priming experts to think about potential problems. This comes down to establishing a common understanding with the expert about what they are required to do.
Tasks make every test- There is a very important precursor to every testing method: writing tasks that allow the evaluator to explore the interface. Finding issues is incidental to performing these tasks, so that burden falls on the facilitator and is not something evaluators should worry about.
Writing task guidelines- Whom you interviewed and whom you are testing with can be important determinants when writing tasks and establishing usability criteria.
Heuristic evaluation - needs a scoring method and details on how to conduct the test
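One widely used option for the scoring method is Nielsen's 0-4 severity scale. A sketch of aggregating severity ratings across evaluators, using hypothetical issues and ratings (the issue names and numbers below are illustrative only):

```python
from statistics import mean

# Nielsen's 0-4 severity scale
SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor",
            3: "major", 4: "catastrophe"}

# Hypothetical data: issue -> one severity rating per evaluator
ratings = {
    "No monthly target shown on KPI tile": [3, 4, 3],
    "Legend colors hard to distinguish": [2, 1, 2],
}

# Average each issue across evaluators, most severe first
for issue, scores in sorted(ratings.items(),
                            key=lambda kv: -mean(kv[1])):
    avg = mean(scores)
    print(f"{avg:.1f} ({SEVERITY[round(avg)]}): {issue}")
```

Averaging across several independent evaluators is important because single-evaluator severity judgments are noisy.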
Add proper links and citations to Quoted Text
Right now they're in quote blocks
The readme is classically used (on GitHub and elsewhere) as a way to get around a project or repo. The current readme reads very well overall and describes the process, but it's written in a narrative style, which is good for a one-sitting read-through; there is no intro or quick section listing that would let someone quickly reference something. There also ought to be a succinct "this is what this repo is about" bit at the beginning, and a more robust one with links to each of the folders in the How To Use section.
Folder names have changed so some links may have broken