CS397 Information Visualization

Evaluation

Overall takeaway from the following readings: get a good understanding of the problem domain and the goal of the visualization. Heuristics are a good way to evaluate prototypes, but they are not sufficient; ideally, ethnographic research with the end users or audience gives the clearest picture of how well the visualization works.

Information Visualization Heuristics in Practical Expert Evaluation (Väätäjä et al. 2016) looks at the application of 10 information visualization heuristics from prior research and suggests three additional heuristics. They provide the heuristics as well as the questions participants were asked when evaluating the heuristics' usefulness.

Heuristics for information visualization evaluation (Zuk et al. 2006) performs a meta-analysis of the selection, organization, and use of heuristics. They apply three different sets of heuristics to a single visualization and compare both the issues identified and the direction of the suggested solutions. They identify characteristics to look for when comparing heuristic sets, which is useful when deciding which heuristics to use.
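Neither summary lists the heuristics themselves, but the mechanics of an expert heuristic evaluation are easy to picture as structured data. Below is a minimal sketch (Python, entirely illustrative: the Finding class and the severity scale are my own placeholders, with severity ratings in the style of Nielsen-style heuristic evaluation; the actual heuristic sets come from the papers above).

```python
from dataclasses import dataclass

# Hypothetical record format for one pass of an expert heuristic evaluation.
# The heuristic names are examples only; the papers above supply the actual
# sets (Väätäjä et al.'s 10 + 3, the three sets compared by Zuk et al.).
@dataclass
class Finding:
    heuristic: str   # which heuristic the issue violates
    issue: str       # what the evaluator observed
    severity: int    # e.g., 1 (cosmetic) to 4 (catastrophic)

findings = [
    Finding("spatial organization", "legend occludes the densest region", 3),
    Finding("information coding", "red/green encoding is not colorblind-safe", 4),
]

# Triage: surface the most severe issue first.
worst = max(findings, key=lambda f: f.severity)
print(f"Fix first: {worst.heuristic} -- {worst.issue}")
```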

Knowledge Precepts for Design and Evaluation of Information Visualizations (Amar & Stasko 2005) looks at how limitations in information visualization systems result in analytic gaps between the systems and higher-level analysis tasks such as learning and decision-making; more specifically, the Worldview Gap (what is shown vs. what should actually be shown) and the Rationale Gap (perceiving a relationship vs. understanding the usefulness of the relationship). The paper suggests three ways to narrow each gap.

A Nested Model for Visualization Design and Validation (Munzner 2009) proposes splitting visualization design into a four-layer model to analyze and guide design processes. Each layer's output feeds the next, and breaking the process down into these layers lets designers explicitly pinpoint threats and validations (what can go wrong at each level and how problems can be averted). Miscommunication can be avoided by explicitly stating threats to downstream levels and assumptions about upstream levels.
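As a concrete illustration, the four levels and their characteristic threats and validations can be laid out as data. This is a sketch only: the layer names, threats, and validations are paraphrased from the paper, but the Layer structure itself is my own.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    threat: str       # what can go wrong at this level
    validation: str   # how the threat can be checked

NESTED_MODEL = [
    Layer("domain problem characterization",
          "mischaracterized the target users' problem",
          "observe and interview target users; field studies"),
    Layer("data/operation abstraction design",
          "chose the wrong data types or operations",
          "test utility with target users on their own data"),
    Layer("encoding/interaction technique design",
          "ineffective visual encoding or interaction",
          "heuristic evaluation; controlled lab studies"),
    Layer("algorithm design",
          "algorithm too slow for interactive use",
          "analyze complexity; benchmark runtimes"),
]

# Each layer's output is the next layer's input, so an upstream error
# invalidates all downstream work no matter how well it is validated.
for upstream, downstream in zip(NESTED_MODEL, NESTED_MODEL[1:]):
    print(f"{upstream.name} -> {downstream.name}")
```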

This Storytelling with Data post walks through evaluating and redesigning a visualization of qualitative data, in this case customer concerns expressed in a survey. The post expands on three tips and gives a thorough explanation of the steps and choices made in the redesign. In a similar vein, this post tackles evaluating word clouds (when they’re useful and when they’re not).

Empirical Studies in Information Visualization: Seven Scenarios (Lam et al. 2012) considers how information visualization is evaluated across seven scenarios, covering contemporary practices and different design approaches. They conducted a literature review of the evaluation methods used in ~360 published papers and derived the scenarios from their findings. Each scenario includes questions to ask and different ways of interacting with the user.

Strategies for Evaluating Information Visualization Tools: Multi-dimensional In-depth Long-term Case Studies (Shneiderman & Plaisant 2006) discusses using multi-dimensional (observations, surveys, etc.), in-depth (engagement with users), long-term (from training to expert usage) case studies (MILCs) to better understand the user. They give guidelines for applied ethnographic methods (Section 6), including how to prepare, what to do during the study, and how to evaluate current designs.

Misc / Resources

VizItCards: A Card-Based Toolkit for Infovis Design Education (He & Adar, IEEE VIS 2016) discusses a card-based workshop for teaching infovis. They specify the learning goals they consider, as well as which aspects of design the cards can support.

Creative User-Centered Visualization Design for Energy Analysts and Modelers (Goodwin et al. 2013) discusses techniques that deliberately promote creativity in design problems where the data is relatively unknown and the needs are not well understood.

Evaluation of Semantic Fisheye Zooming to Provide Focus+Context (Afram et al. 2007) is a study of the effectiveness of semantic fisheye zooming on a concept map, a graph where nodes are individual concepts and edges are relationships. Participants answered several questions testing their understanding and recall of the concept map, and the results suggest that semantic fisheye zooming is useful.
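The summary doesn't describe the zooming mechanism itself, but a common formalization of fisheye views is Furnas's degree-of-interest (DOI) function: DOI(x) = API(x) - D(x, focus), a node's a priori importance minus its distance from the current focus. The sketch below is a generic Furnas-style DOI over a toy concept map; it assumes the networkx library and is not the specific system evaluated in the paper.

```python
import networkx as nx  # assumed dependency, used only for graph distances

def degree_of_interest(graph, focus, api=None):
    """Furnas-style DOI: a priori importance minus graph distance from the
    focus node. Low-DOI nodes can be shrunk or collapsed to give
    focus+context."""
    api = api or {n: 0 for n in graph}  # uniform a priori importance
    dist = dict(nx.single_source_shortest_path_length(graph, focus))
    # Unreachable nodes get a distance worse than any real path.
    return {n: api.get(n, 0) - dist.get(n, len(graph)) for n in graph}

# Toy concept map: nodes are concepts, edges are relationships.
cmap = nx.Graph([("evaluation", "heuristics"), ("evaluation", "MILCs"),
                 ("heuristics", "severity ratings")])
doi = degree_of_interest(cmap, focus="evaluation")
# Render only nodes near the focus at full detail.
print([n for n, score in doi.items() if score >= -1])
```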

A Multi-Level Typology of Abstract Visualization Tasks (Brehmer & Munzner 2013) provides a typology for describing complex tasks as sequences of interdependent smaller ones. This typology bridges the gap between low- and high-level tasks by asking three questions about a task: why, how, and what? Their consistent lexicon allows for multiple levels of specificity as well as a corresponding visualization of an abstract task.
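To make the multi-level idea concrete, here is a sketch of a two-step analysis task encoded as why/how/what triples. The vocabulary terms (discover, explore, locate, compare, summarize; navigate, filter, select, arrange) come from the typology, but the dictionary representation and the example task are hypothetical.

```python
# Vocabulary follows the why/how/what typology; the dict encoding and this
# two-step example task are my own illustration.
task_sequence = [
    {"why":  {"goal": "discover", "search": "explore", "query": "summarize"},
     "how":  ["navigate", "filter"],
     "what": {"input": "full network", "output": "suspicious cluster"}},
    {"why":  {"goal": "discover", "search": "locate", "query": "compare"},
     "how":  ["select", "arrange"],
     "what": {"input": "suspicious cluster",  # consumes the previous output
              "output": "ranked node pairs"}},
]

# Interdependence: each task's input is the previous task's output, which is
# how the typology chains low-level tasks into a complex high-level one.
for prev, nxt in zip(task_sequence, task_sequence[1:]):
    assert nxt["what"]["input"] == prev["what"]["output"]
```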
