steveharoz / open-access-vis
A collection of open access material presented at the VIS conference
Home Page: http://oavis.steveharoz.com
I openly posted a preprint PDF of our paper "Perceptual Biases in Font Size as a Data Encoding":
http://vialab.science.uoit.ca/wp-content/papercite-data/pdf/shi2017fontsize.pdf
Will ask Eric about posting the data.
Paper: Orko: Facilitating Multimodal Interaction for Visual Network Exploration and Analysis
Arjun Srinivasan, John Stasko
5:15-5:35 PM Thursday 301-D
URL: http://arjun010.github.io/static/papers/orko-infovis-17.pdf
Teaser: http://arjun010.github.io/static/images/orko_teaser.jpg
It's pretty hard to edit the CSV online because it's difficult to match keys and values. A yaml file would be great! If you use jekyll, you could even read the data directly and the page could work without javascript. See https://jekyllrb.com/docs/datafiles/
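To make the suggestion above concrete, here is a minimal sketch of what a Jekyll data file could look like. The filename `_data/papers.yml` and the field names are hypothetical, chosen for illustration (the entry reuses the Orko paper listed elsewhere in this thread); the site's actual schema may differ.

```yaml
# _data/papers.yml -- illustrative schema, not the repo's actual one
- title: "Orko: Facilitating Multimodal Interaction for Visual Network Exploration and Analysis"
  authors: "Arjun Srinivasan, John Stasko"
  pdf: http://arjun010.github.io/static/papers/orko-infovis-17.pdf
  teaser: http://arjun010.github.io/static/images/orko_teaser.jpg
```

Jekyll would then expose this as `site.data.papers`, so the page could be rendered at build time with plain Liquid, no JavaScript required:

```html
<ul>
{% for paper in site.data.papers %}
  <li><a href="{{ paper.pdf }}">{{ paper.title }}</a> by {{ paper.authors }}</li>
{% endfor %}
</ul>
```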
Wait on TVCG
"2017 2018" in header
The header is bigger than I'd prefer as is. Might not have a choice but to add another row.
Dear Steve,
thank you so much for this service! I have finally added all information about this project on OSF.io. Would you kindly include the relevant URIs?
Hope that I am using this service correctly---this is my first project on OSF, inspired by your initiative. Thanks for your efforts!
Best,
Bastian
Thanks Steve, that's an amazing initiative. And I see the community is engaged and contributing, so well done! I'm trying to not be ranked first in your chart-of-shame anymore... So, here is the info for a first paper:
Title:
MyBrush: Brushing and Linking with Personal Agency
Abstract:
We extend the popular brushing and linking technique by incorporating personal agency in the interaction. We first map existing research related to brushing and linking into a design space that deconstructs the interaction technique into three components: source (what is being brushed), link (the expression of relationship between source and target), and target (what is revealed as related to the source). Based on the generative power of this design space, we present MyBrush, a unified interface that offers personal agency over brushing and linking by giving people the flexibility to configure the source, link, and target of multiple brushes. The results of three focus groups demonstrate that people with different backgrounds leveraged personal agency in different ways, including performing complex tasks and showing links explicitly. We reflect on these results, paving the way for future research on the role of personal agency in information visualization.
Other columns (note that everything is available on the project page: http://innovis.cpsc.ucalgary.ca/supplemental/MyBrush/):
AuthorPDF
http://innovis.cpsc.ucalgary.ca/supplemental/MyBrush/2018_VIS_mybrush.pdf
Abstract
We extend the popular brushing and linking technique by incorporating personal agency in the interaction. We map existing research related to brushing and linking into a design space that deconstructs the interaction technique into three components: source (what is being brushed), link (the expression of relationship between source and target), and target (what is revealed as related to the source). Using this design space, we created MyBrush, a unified interface that offers personal agency over brushing and linking by giving people the flexibility to configure the source, link, and target of multiple brushes. The results of three focus groups demonstrate that people with different backgrounds leveraged personal agency in different ways, including performing complex tasks and showing links explicitly. We reflect on these results, paving the way for future research on the role of personal agency in information visualization.
ExplanationPage
http://innovis.cpsc.ucalgary.ca/supplemental/MyBrush/
SourceMaterials
http://innovis.cpsc.ucalgary.ca/supplemental/MyBrush/
Video
https://www.youtube.com/watch?v=1iAtPhWEV9I
Cheers,
-Charles
That's two!
Title:
Assessing the Graphical Perception of Time and Speed on 2D+Time Trajectories
Abstract:
Visualizing time and speed information is a challenge when the x and y axes already encode other data dimensions, for example when plotting a trip on a map. This is particularly true in disciplines such as time-geography and movement analytics that often require visualizing spatio-temporal trajectories. A common approach is to use 2D+time trajectories, which are 2D paths for which time is an additional dimension. However, there are currently no guidelines regarding how to represent time and speed on such paths. We empirically evaluate the extent to which people perceive non-constant time and speed encoded on 2D paths. In our graphical perception study, we evaluate nine encodings from the literature for both straight and curved paths. Our study results provide InfoVis designers with clear guidance regarding which encodings to use and which ones to avoid; in particular, we suggest using color value to encode speed and segment length to encode time whenever possible.
Again, everything is available on the project page: http://innovis.cpsc.ucalgary.ca/supplemental/2DTimeTrajectories/
ExplanationPage
http://innovis.cpsc.ucalgary.ca/supplemental/2DTimeTrajectories/
Experiment Data
http://innovis.cpsc.ucalgary.ca/supplemental/2DTimeTrajectories/#study
Video
https://www.youtube.com/watch?v=OGccwtpg8JI
-Charles
Hi Steve, thank you.
I have a few changes to my paper's entry. Could you update it?
Paper Title: Taking Word Clouds Apart: An Empirical Investigation of the Design Space for Keyword Summaries
First Author: Cristian Felix
Fix author order to: Cristian Felix, Steven Franconeri, Enrico Bertini
Add Explanation Page: https://nyuvis.github.io/word-cloud/
Thank You
New URL for the materials, code, and data:
GitHub page: https://github.com/nyuvis/explanation_explorer
Thanks!
Session: InfoVis Graphs and Paths
THURSDAY, OCTOBER 5
4:15pm-5:55pm
Room: 301-D
Paper: What Would a Graph Look Like in This Layout? A Machine Learning Approach to Large Graph Visualization
Authors: Oh-Hyun Kwon, Tarik Crnovrsanin, Kwan-Liu Ma
The supplementary material website for our paper is available at http://graphvis.net/wgl
load late
They're posted on https://www.computer.org/csdl/trans/tg/preprint/index.html
TODO: look up papers and add DOIs to spreadsheet
Paper: "The Anchoring Effect in Decision-Making with Visual Analytics" VAST 2017, Theory and Analysis Process on Thursday
Pdf: https://github.com/wesslen/vast2017-anchoringeffect/raw/master/anchorbias.pdf
GitHub: https://github.com/wesslen/vast2017-anchoringeffect
Thanks for organizing the materials!
Let me know if you have any questions.
Title:
Using Gap Charts to Visualize the Temporal Evolution of Ranks and Scores
PDF:
http://charles.perin.free.fr/data/pub/gapchart.pdf
Abstract:
We present Gap Charts, a novel class of line charts designed for visualizing the evolution of rankings over time, with a particular focus on sports data. Gap Charts show entries, e.g., teams participating in a competition, that are ranked over time according to a performance metric like a growing number of points or a score. The main advantages of Gap Charts are that 1) tied entries never overlap—only changes in rank generate limited overlap between time-steps; and 2) gaps between entries show the magnitude of their score difference. We evaluate the effectiveness of Gap Charts for performing different types of tasks, and find that they outperform standard time-dependent ranking visualizations for tasks that involve identifying and understanding evolutions in both ranks and scores. Finally, we show that Gap Charts are a generic and scalable class of line charts by applying them to a variety of different datasets.
I need someone to help collect the data from workshops. Let me know if anyone's interested.
Completely hidden, maybe?
Clustervision: Visual Supervision of Unsupervised Clustering
Author PDF: http://perer.org/papers/adamPerer-Clustervision-VAST2017.pdf
Teaser/thumbnail image: http://perer.org/img/clustervision.png
Waiting on conference schedule
Our paper has been on bioRxiv since the submission deadline, including the revision:
DOI: 10.1101/123588
URL: http://www.biorxiv.org/content/early/2017/07/09/123588
Not sure what criteria are applied, but I think bioRxiv is a reliable open access repository.
Also, all the data we used is publicly available too:
https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE63525
(GEO is one of the two major data repositories for genomic data)
The config for loading this demo (and more) is open-source too:
https://gist.github.com/flekschas/8b0163f25fd4ffb067aaba2a595da447
PS: Thanks for pulling this list together!
Only show explanation/demo/video if pdf is on reliable source
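The rule above could be sketched as a small filter. This is a hypothetical illustration, not the site's actual code: the allowlist of hosts, the field names (`pdf`, `explanation`, `demo`, `video`), and the function names are all made up for the example.

```javascript
// Hypothetical allowlist of hosts considered reliable open-access sources.
const RELIABLE_HOSTS = ["osf.io", "arxiv.org", "biorxiv.org"];

// True if the PDF URL's host is (or is a subdomain of) an allowlisted host.
function isReliablePdf(url) {
  if (!url) return false;
  try {
    const host = new URL(url).hostname;
    return RELIABLE_HOSTS.some((h) => host === h || host.endsWith("." + h));
  } catch (e) {
    return false; // malformed URL: treat as unreliable
  }
}

// Only expose the explanation/demo/video links when the PDF passes the check.
function visibleLinks(paper) {
  if (!isReliablePdf(paper.pdf)) return {};
  const out = {};
  for (const key of ["explanation", "demo", "video"]) {
    if (paper[key]) out[key] = paper[key];
  }
  return out;
}
```

For example, an entry whose PDF sits on a personal server would have its demo and video links suppressed until the PDF moves to an allowlisted repository.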
Hi Steve,
This project is really great and thanks a lot for your contribution!
We have some additional information for our paper, and I'm wondering if you could help update its entry.
Paper Title: SkyLens: Visual Analysis of Skyline on Multi-dimensional Data
First Author: Xun Zhao
Information to be added:
Material link: zhaoxun.me/skylens/
Explanation link: http://vis.cse.ust.hk/skylens/
Thank you again!
Best,
Yanhong
Dear Steve,
Thank you for your efforts for the good of the community! I really like the "Open Access VIS" initiative. I have uploaded the related materials to OSF. The URLs are listed below:
ReviewVenue: VAST
PublicationVenue: VAST17
Title: Understanding Hidden Memories of Recurrent Neural Networks
Authors: Yao Ming, Shaozu Cao, Ruixiang Zhang, Zhen Li, Yuanzhe Chen, Yangqiu Song, Huamin Qu
AuthorPDF: https://mfr.osf.io/render?url=https://osf.io/f36tb/?action=download%26mode=render
ExplanationPage: http://www.myaooo.com/rnnvis
SourceMaterials: http://github.com/myaooo/rnnvis
Data: http://www.myaooo.com/rnnvis
Video: https://www.youtube.com/watch?v=0QFDNLdQ6_w
Not sure if the following information is useful:
Supplementary Material: https://mfr.osf.io/render?url=https://osf.io/7grp8/?action=download%26mode=render
Teaser Image (400x300): https://mfr.osf.io/render?url=https://osf.io/zu2nh/?action=download%26mode=render
Again, thank you for your efforts!
Best,
Yao
Pull more directly from the COS badges. Also, clarify that "data" means "raw measurements from the article's experiment", not other people's datasets that this article uses.