[Image: A cafe table with beignets and a cup of coffee; the CHI 22 logo floats to the upper right.]

CHI Trip Report: Visualization, Fabrication and Accessibility

I regularly take notes when I attend a conference, especially when attending for the first time in several years and there is so much to absorb and enjoy! I have been much less consistent about sharing those notes publicly, but this year I am making a particular effort to do so for the sake of the many people who can't attend in person or at all, as well as all of us who can only go to one session when so many are worth attending!

It would be a very long blog post if I summarized all of the great accessibility work, as that is an increasingly large subset of CHI work. I’ll focus on things relevant to areas that are of growing importance to me — visualization and fabrication, along with a brief section on some accessibility highlights. To learn more about the experience of attending remotely versus in person, see my other trip report.

Visualization Sessions

I attempted to attend a mix of paper sessions and other conversations around data and accessibility. In the SIG on “Data as Human Centered Design Material” I had a number of interesting conversations. I spoke with Alex Bowyer, who looks at personal data use; one important use of data he mentioned is to “create evidence.” It is interesting to think about “data for activism” in this context. Another interesting conversation from that SIG centered on how to summarize complex data for retrospective interviews, and the accessibility concerns there. A further challenge is how to design apps that use data effectively in the field or in live use, where accessibility concerns again arise. Beyond that: how do you properly generalize machine learning and visualizations? How do you improve startup and scale well? How do you support customization of both visualizations and machine learning?

The session on accessible visualization was my next dip into visualization. The first talk presented audio data narratives for BLV users. Their study highlighted how audio narrative can surface features that might otherwise be hard to hear, through strategies like foreshadowing that break up the basic sonification. The second talk was on 1-DOF haptic techniques for graph understanding by BLV users. The authors explored the value of static 3D printed charts and moveable haptic cues for helping with graph understanding tasks such as comparison. An interesting technique, though sonification also held its own in their data. An interesting question to me is whether a dynamic chart with a piece of flexible material over it (to “smooth” the line) would be better than the slider in terms of the experience — similar to the 3D printed chart, but dynamic. Next, VoxLens presented a retroactive fix for the wide range of online charts that are inaccessible to screen reader users. The authors highlighted the state of online visualization today (which is truly terrible) and then provided a real solution to this problem. The solution combines sonification, high-level summaries, and NLP-based queries, all supported automatically given a simple JavaScript configuration describing the axes and a pointer to the chart object. It would be interesting to see them take advantage of ideas from a related paper on proactive visualization exploration support agents. The next presentation focused on improving color patterns to address color vision deficiency. The final paper in the session focused on infographic accessibility.
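To make the authoring cost concrete, here is a minimal sketch of what such a configuration could look like. This is my own illustration, not the actual VoxLens API: the `addAccessibleLayer` function and the config field names are hypothetical, and the sonification and query layers are left as stubs.

```javascript
// Hypothetical sketch (not the actual VoxLens API): the author-facing cost
// of retrofitting accessibility can be as small as naming the axes and
// pointing at the rendered chart element. Assumes a browser page containing
// an element with id "sales-chart".
const chartConfig = {
  element: document.querySelector("#sales-chart"), // the rendered chart
  title: "Monthly sales, 2021",
  x: { label: "Month", values: ["Jan", "Feb", "Mar"] },
  y: { label: "Sales (USD)", values: [12000, 9000, 15000] },
};

// A retrofit layer could derive all three modalities from that one object:
function addAccessibleLayer(config) {
  // 1. High-level summary, exposed to screen readers.
  const max = Math.max(...config.y.values);
  const maxLabel = config.x.values[config.y.values.indexOf(max)];
  config.element.setAttribute(
    "aria-label",
    `${config.title}. Highest ${config.y.label} is ${max} in ${maxLabel}.`
  );
  // 2. Sonification: map y-values to pitches (stub).
  // 3. NLP queries: answer "what is the maximum?" from the same data (stub).
}

addAccessibleLayer(chartConfig);
```

The key point is that summary, sonification, and queries can all be derived from one small declaration of the axes and data.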

Some additional papers also looked at accessibility. Joyner et al looked at visualization accessibility in the wild. They analyzed visualizations and surveyed and interviewed practitioners, finding that the vast majority were accessibility novices; 30-40% did not think accessibility was their job, and 71% could think of reasons to eliminate accessibility features even though they acknowledged accessibility was important. They also highlight difficulties in creating accessible visualizations, such as uncertainty about what to do and how to do it (for example, how to deal with filters), as well as lack of organizational support and lack of tool support. “ComputableViz” supports composition of visualizations through operations such as union, difference, and intersection. The authors discuss the potential for this approach to make a visualization more cognitively accessible. The intermediate representation used in this work is a relational database derived from the vega-lite specification — I think this has great potential for other accessibility applications, including better description, change monitoring, end-user authoring of sonifications, and more. Finally, MyMove is a new method for collecting activity data from older adults.
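To illustrate why a relational intermediate form is so useful, here is a toy sketch of the idea (my own illustration, not the paper's implementation): each chart is reduced to rows plus encoding metadata, and composition becomes an ordinary relational operation.

```javascript
// Toy sketch of the ComputableViz idea (not the paper's implementation):
// treat each chart's spec as (rows + encodings), then compose with
// relational operations such as union.
const chartA = {
  encoding: { x: "month", y: "sales" },
  rows: [
    { month: "Jan", sales: 12000 },
    { month: "Feb", sales: 9000 },
  ],
};
const chartB = {
  encoding: { x: "month", y: "sales" },
  rows: [
    { month: "Feb", sales: 9000 },
    { month: "Mar", sales: 15000 },
  ],
};

// Relational union over the two charts' rows, deduplicated by content.
function unionCharts(a, b) {
  const seen = new Map();
  for (const row of [...a.rows, ...b.rows]) {
    seen.set(JSON.stringify(row), row);
  }
  return { encoding: a.encoding, rows: [...seen.values()] };
}

const combined = unionCharts(chartA, chartB);
console.log(combined.rows.length); // 3: Jan, Feb, Mar appear once each
```

The same relational form could drive descriptions, change monitoring, or sonification, which is why I find it so promising beyond composition.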

Two studies made use of simulation. I call them out because of their thoughtfulness in doing so — simulation can have negative impacts and is usually not an appropriate substitute for working with people with disabilities. One study modified published visualizations to simulate color vision deficiency and then crowdsourced large-scale data about their accessibility. I think this is a reasonable application of the technique for two reasons: (1) the benefits of data at this scale are high, and (2) the specific disability and task structure are unlikely to create bias either in the study data or in the participants (i.e., negatively influence their opinion of disability). Another good example of a study that used hearing people instead of DHH people was ProtoSound. This allowed the authors to collect data about the accuracy of non-speech sound recognition by their system. However, they made use of DHH input throughout the design process.

I also want to highlight research that I thought had interesting applications here even though these were not accessibility papers. “Data Every Day” was interesting because of its emphasis on personalized data tracking, including customization, data entry, and authorship, all of which are under-explored in accessible visualization research. “Cicero” allows specification of transformations to visualizations declaratively — thus making visualization changes computationally available, which creates many interesting opportunities. “CrossData” is an NLP interface to a data table that provides a fascinating model for authorship and exploration of data. Although the authors didn't mention this, I think it could be a great way to author alt text for charts. “Diff In The Loop” highlights the ways that data changes during data science programming tasks as code changes. The authors explored a variety of ways to represent this (all visual), but the work highlights why change understanding is so important. It also raises issues such as what time scale to show changes over, which would be relevant to any responsive visualization interaction task as well. Fan et al's work (VADER lab) on addressing deception made me wonder whether accessible charts (and which chart accessibility techniques) can perform well on visual literacy tests. Cambo & Gergle's paper on model positionality and computational reflexivity has immediate implications for disability, and particularly highlights the importance of not only whether disability data is even collected but also things like who annotates such data. Finally, Metaphorical Visualizations translates graph data into arbitrary metaphorical spaces. Although the authors focus on visual metaphors, this could be valuable for audio as well.
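As a sketch of why declarative transformations matter for accessibility: once edits to a visualization are plain data, tools can generate, audit, and apply them programmatically. The rule shape below is only illustrative, not Cicero's actual grammar.

```javascript
// Illustrative rule shape (not Cicero's actual grammar): a transformation
// is just data, so an accessibility tool could emit or rewrite rules itself.
const rules = [
  { specifier: { mark: "text" }, action: "modify", option: { fontSize: 18 } },
  { specifier: { axis: "y" }, action: "modify", option: { title: "Sales (USD)" } },
];

// Apply each rule by merging its options into every matching component
// of a minimal chart spec.
function applyRules(spec, rules) {
  const out = structuredClone(spec);
  for (const { specifier, action, option } of rules) {
    for (const comp of out.components) {
      const matches = Object.entries(specifier).every(([k, v]) => comp[k] === v);
      if (matches && action === "modify") Object.assign(comp, option);
    }
  }
  return out;
}

const spec = {
  components: [
    { mark: "text", fontSize: 11 },
    { axis: "y", title: "sales" },
  ],
};
console.log(applyRules(spec, rules).components);
// -> fontSize enlarged to 18, y-axis title rewritten
```

An accessibility linter, for example, could emit rules like these to enlarge fonts or rewrite axis titles across a whole dashboard.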

There were several tangible visualizations, again relevant to but not targeting accessibility. These included shape-changing displays for sound zones, and STRAIDE, which used interactive objects mounted on strings that can be actuated to move vertically over a large (apparently 4 ft or so) range of heights to support rapid prototyping of moveable physical displays. Sauvé et al studied how people physicalize and label data, exploring the relation between the type and location of labels, which may have implications for intuitive design of nonvisual physical charts. Making Data Tangible is an important survey paper that explores cross-disciplinary considerations in data physicalization.

Finally, a look at applications, where we see a mix that includes some accessibility-focused papers. CollabAlly can represent collaboration activities within a document, and provided an interesting synergy with an earlier talk the same week on Co11ab, which provided realtime audio signals about collaborators. This dive into what makes tables inaccessible suggests concrete, viable solutions (and check out some other interesting work by this team). Graphiti demonstrated a thoughtfully designed approach to interacting with graph data extracted dynamically with computer vision, a complex and intricate exploratory visualization task. Xiaojun Ma and her students created a system for communicating about data analysis by automatically producing presentation slides from a Jupyter notebook. Working jointly and iteratively with the automation, from a source notebook, could make it easier to make those slides accessible. ImageExplorer vocalizes descriptions of an image to help BLV people identify errors in auto-generated captions. I wonder what one might need to change in this approach for auto-generated (or just badly written) chart captions. Two important learnings from ImageExplorer were the value of the hierarchical, spatial navigation it supported, and the need to support both text and spatial navigation. Cocomix focused on making comics, not visualizations, accessible to BLV people, but I think the many innovations in this paper have lots of applications for change description and dashboard description. Saha et al discuss visualizing urban accessibility, exploring how different types of secondary information and visualization needs vary for different stakeholders.
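Returning to ImageExplorer's navigation insight, here is a minimal sketch (my own illustration, not the paper's code) of a region tree that supports both kinds of movement: each node carries a label for text navigation and a bounding box for spatial navigation.

```javascript
// Sketch (not ImageExplorer's code): each region supports both hierarchical
// navigation (parent/children) and spatial navigation (bounding boxes).
const image = {
  label: "A kitchen",
  box: { x: 0, y: 0, w: 100, h: 100 },
  children: [
    { label: "A table", box: { x: 10, y: 60, w: 40, h: 30 }, children: [] },
    { label: "A window", box: { x: 60, y: 10, w: 30, h: 30 }, children: [] },
  ],
};

// Hierarchical step: descend into a region ("what is inside?").
const current = image.children[0]; // "A table"

// Spatial step: find a sibling to the right of the current region.
function rightOf(current, siblings) {
  return siblings.find(
    (s) => s !== current && s.box.x > current.box.x + current.box.w
  );
}
console.log(rightOf(current, image.children)?.label); // "A window"
```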

Fabrication Work

The vast majority of fabrication work this year did not directly engage with accessibility. That said, I saw a lot of potential for future work in this space.

I first attended the workshop Reimagining Systems for Learning Hands-on Creative and Maker Skills. We discussed a wide variety of topics around who makes, how people make, and what we might want to contribute to and learn about those experiences as a community, including who is included in making. Some provocative discussions happened around whether digital tools or physical tools are a better starting place for learning; how we engage with diverse communities without “colonizing” those spaces; whether we should “claim” diverse crafts as making or not; the ways in which new technologies can inhibit learning (everyone makes a keychain tag); and the need to educate not only students but educators. Another interesting discussion happened around how time plays out in making and adds challenges when things take time to print, grow, and so on. Other topics that came up were the digital divide in access to maker spaces (e.g., what may be available in rural communities) and tutorial accessibility and whether/how to improve it. Some of the interesting people I learned more about include: Jun Gong (formerly at MIT, now at Apple; does everything from interaction technique design to metamaterials); Erin Higgins (introducing youth in Baltimore City to making); and Fraser Anderson (whose work includes instrumenting maker spaces to better understand maker activities). I also want to highlight some students I already knew about here at UW 🙂 including my own student Aashaka Desai (working on machine embroidered tactile graphics) and Daniel Campos Zamora, who is interested in the intersection of fabrication and access (ask him about his mobile 3D printer :).

The first fabrication session I made it to was a panel that placed fabrication up against VR/AR/haptics. Hrvoje Benko and Michael Nebeling were in “camp AR/VR” and listed challenges such as interaction devices, adaptive UIs, and so on. Valkyrie Savage, in “camp fabrication,” talked of interaction and sensing (as well as the sensory systems); Huaishu Peng & Patrick Baudisch talked about fabrication + interaction, small artifacts, and AR/VR. The wide-ranging discussion touched on machine reliability; accessibility and 3D printing as a public utility; using AR/VR to bridge the gap between a physical object that almost fills a need and the object actually needed (repurposing); and business models (i.e. Amazon vs Fab). I will focus on what I see as accessibility challenges in the arguments made. Can we have anything delivered to our door in 2 hours? I would claim that today's process works for mass manufacturing but is not customizable to people with disabilities. Next, a potential flaw of AR/VR is its dependence on electricity — if we are to make meaningful, useful objects, physicality is essential for people who either cannot count on having power, or who depend upon a solution being available all the time. Another concern is bridging from accessible consumption to accessible, inclusive authoring tools. As Valkyrie Savage argued, physicality addresses multiple types of inclusion better than VR/AR. Lastly, materials were discussed briefly, though not from an accessibility perspective (though material properties are critical to successful access support in my mind).

Moving on to individual papers, I was excited to see multiple people talking about the importance of programming language support for parametric modeling and physical computing (an interest of my own). For example, Tom Veuskens' ideas about combining WYSIWYG and code for parametric modeling look highly intuitive and fun to use. Valkyrie Savage talked about the importance of this for novice learners and laid out the idea of a Jupyter-notebook-style approach to physical computing design. Veuskens et al. also provide a beautiful summary of the variety of code-based approaches to supporting re-use in the literature. Another interesting and, I think, related set of conversations happened around how you understand making activities. An example is Colin Yeung's work on tabletop time machines. The concept of being able to go back and forward in time, ideally with linkages to physical actions, project state, and digital file state, is really interesting from an accessibility perspective. Relatedly, Rasch, Kosch and Feger discuss how AR can support learning of physical computing. Finally, Clara Rigaud discussed the potential harms of automatic documentation.

In the domain of modeling tools, accessibility did not come up much. While many papers mentioned custom modeling tools, none of the talks mentioned accessibility for authors/modelers, nor did they report on studies with disabled people. The most obvious challenge here is BLV accessibility; but to the extent that generative design is simplifying the modeling experience and is computationally controllable, I think there are some very easy ways to improve on this situation. I was actually most intrigued by a non-fabrication paper's relevance to both fabrication and accessibility: HAExplorer, which the fabrication person in me sees as an interesting model for understanding the mechanics of motion in general, not just biomechanics. In addition, visualizations of mechanics raise the possibility of making those visualizations themselves accessible.

The haptics session had multiple potential accessibility applications. FlexHaptics provides a general method and design tool for designing haptic effects using fabrication, including in interactive controllers, while ShapeHaptics supports the design of passive haptics using a combination of springs and sliders that together control the force profile of a 1D slider. An interesting accessibility side effect of this approach is the ability to modify existing objects by adding both physical and audio effects at key moments, such as when pliers close. In some configurations, the components can also be easily swapped, allowing for manual selection of an experience based on context. ReCompFig supports dynamic changes to kinematics using a combination of rods and cables (which can be loosened or tightened by a motor). The authors create effects like a joystick, a pixelated display with varying stiffness, and a range of haptic effects such as holding a glass of liquid or a stiff bar. 3D printed soft sensors use a flexible lattice structure to support resistive soft sensor design.
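A minimal sketch of the underlying idea, assuming (as the talk suggested to me) that a passive 1D haptic profile can be modeled as spring segments engaged over parts of the slider's travel. This is my own toy model, not the ShapeHaptics design tool.

```javascript
// Toy model (not the ShapeHaptics tool): compose a 1D force profile from
// spring segments, each active over part of the slider's travel (0..1).
const segments = [
  { start: 0.0, end: 0.4, stiffness: 2.0 }, // gentle resistance early on
  { start: 0.4, end: 0.5, stiffness: -8.0 }, // negative stiffness: a "click"
  { start: 0.5, end: 1.0, stiffness: 4.0 }, // firmer resistance afterwards
];

// Force at slider position x: sum each engaged segment's contribution.
function forceAt(x, segs) {
  return segs
    .filter((s) => x >= s.start && x <= s.end)
    .reduce((f, s) => f + s.stiffness * (x - s.start), 0);
}

// Sample the profile, e.g. to plot it or pair positions with audio cues.
for (let x = 0; x <= 1.0001; x += 0.1) {
  console.log(x.toFixed(1), forceAt(x, segments).toFixed(2));
}
```

Pairing a position like the "click" with an audio cue is one way the physical and audio effects mentioned above could be aligned.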

Interesting materials included Embr, which combines hand embroidery with thermochromic ink; the authors characterize 12 of the most popular embroidery stitches for this purpose and support simulation and design. The results are beautiful to look at, and I wonder if there is potential for creating shared multi-ability experiences, such as combining capacitive touch-based audio output with simultaneous visual feedback. I loved multiple aspects of FabricINK, both its sustainable approach (recycling e-ink) and the way it opens the door to fine-grained displays on many surfaces. Luo et al present pneumatic actuators combined with machine knitting. ElectroPop uses static charge combined with cut mylar sheets to create 3D shapes (it would be interesting to know how well these hold up to exploratory touch). SlabForge supports design (for manual production) of slab-based ceramics. Finally, the Logic Bonbon provides a metamaterial edible interaction platform. It was nice to see the range of materials+X being presented, and there are lots of interesting potential applications of these low-level capabilities.

Another grouping of papers explored capabilities that could increase the accessibility of objects with additional information. M. Doga Dogan et al created InfraredTags, which allow embedding of infrared QR codes in 3D printed objects. Although not a focus of the paper, this has accessibility value (for example, an audio description could be embedded), though it does require an IR camera. Pourjafarian et al present Print-A-Sketch, which supports accurate printing on arbitrary surfaces, including reproduction of small icons as well as scanning and line drawing. Although not a focus of the work, it would be very interesting to think about how this could be used to create tactile and interactive graphics, or to annotate existing documents and objects for increased accessibility. Inks with properties other than conductivity (such as raised inks) would be interesting to explore as well.

Although my theme is accessibility, disability justice is an intersectional issue that must consider other aspects of design. In that vein, I want to highlight two lovely examples of sustainable design: ReClaym, which creates clay objects from personal composting, and Light In Light Out, which harvests and manipulates natural light computationally.

There were a few papers that focused on accessibility, all in the applications space. Roman combines object augmentation with a robotic manipulator to support a wide range of manipulation tasks not easily performed by robotic arms. The robot is augmented with a motorized magnetic gripper, while the target object is augmented with a device that can translate rotary motion into an appropriate motion and force profile. Mobiot supports the creation of custom print and assembly instructions for IoT mobile actuators by leveraging recognition of objects in a known model database combined with demonstration. My student Megan Hofmann presented Maptimizer, a system that creates optimized 3D printed maps; it was very synergistic with 3D printed street crossings for mobility training. TronicBoards supports accessible electronic design. FoolProofJoints improves ease of assembly, which, although not tested with disabled participants, seems to me to have direct accessibility benefits. One final application was a set of adaptations for supporting accessible drone piloting, involving software, physical control, and posture adaptations. The authors supported multiple disabilities (and multiply disabled people) and open sourced their toolkit.

Other Disability/Accessibility Highlights

Dreaming Disability Justice Workshop: This workshop discussed academic inclusion and the importance of research at the intersection of race and disability that is strongly influenced by both perspectives in conjunction (as opposed to a “this and that” model), as well as not erasing that history (e.g. see these examples of Black, disabled, powerful women). Some of the interesting people I learned more about include: Cella Sum (politics of care in disability spaces); Frank Elavsky (accessible visualization); Harsha Bala (an anthropologist); Erika Sam (design researcher at Microsoft); and Tamana Motahar (a PhD student studying personal informatics and social computing for empowering marginalized populations worldwide).

On Tuesday morning I attended Accessibility and Aging. The first talk explored the experience of older adults sharing the work of financial management. Joint accounts and power of attorney are both problematic mechanisms; banking assistance needs fall along a spectrum that these are too blunt for. These concerns seem common to many domains (e.g. health portals, social media accounts, etc.). Next, Kelly Mack presented a paper I collaborated on about access needs in research design. How can we empower researchers to anticipate the broad range of disabilities they might encounter? Anticipate (make things accessible from the beginning), Adjust as needed, and end by Reflecting on things like power dynamics and access conflicts. Finally, Christina Harrington's award-winning paper addressed communication breakdowns in voice-based health information seeking by Black older adults, including critical concerns such as which dialects are supported by voice systems.

In reviewing other accessibility-related talks, I want to highlight the importance of end-user and multi-stakeholder authoring of interactions and experiences. Dai et al discussed relational maintenance in AAC conversation between users and caregivers. Seita et al show how DHH and speaking people can effectively collaborate when using a visual collaboration system like Miro. Ang et al discuss how video conferencing systems can better support the combination of speaking, signing, and interpreting in mixed ASL/spoken-language video conferencing calls. One concern — the difficulty of dynamically adjusting communication — reminded me of challenges I've experienced with online conferencing as well. Another mixed-stakeholder case is Waycott et al's study of staff's role in supporting positive VR experiences for elders. A unique case of multi-stakeholder interaction is Choi's analysis of how disabled video bloggers interact with content curation algorithms and how those algorithms could better support identity negotiation and audience selection.

I also want to highlight some interaction design work. I loved the Wizard-of-Oz study of ASL control of voice assistants by Glasser et al. It got right to the heart of an important unsolved problem, and in the process shed light on challenges we must solve for ASL as an input language for all sorts of automated contexts. Zhao et al described user-defined above-the-shoulder gestures for people with motor impairments, highlighting the importance of end-user control over the interaction language for things like social acceptability and comfort. Nomon is an innovative approach to single-switch input, and one that might be amenable to smooth saccade tracking as well.

To summarize: an inspiring event. I was glad to be able to review it so thoroughly, thanks to all the online talks, which I watched asynchronously.
