Category Archives: Trip Reports

CSUN Trip Notes

Just got back from my second time attending CSUN. More and more it’s becoming a place I see as home. It has its foibles, but overall it’s a place where I am able to connect with the larger disability community, see great presentations, find other like-minded accessibility researchers, view up-and-coming technology, and learn about so many things. Some highlights this week:

The first day’s highlight was the accessibility researchers’ lunch. I tried to include everyone I knew, but only after it was over did I discover at least two more people who were in town — the number of accessibility researchers at CSUN seems to keep growing. I hope we can make that lunch an annual event (and maybe hold it in a less noisy venue: due to my tendency toward vocal injury, I was grateful to be sitting next to two friends who use ASL and were willing to put up with my imperfect knowledge of the language).

I got up early the next day to see the AI for WCAG presentation put on by TestParty. They have a great pitch, first of all, so I took avid notes on the presentation itself. But I also found the description of a 2023 Harvard study on how AI impacts knowledge workers to be an interesting take on AI. According to Michael, half of the users divided the work with the AI (“Centaurs”) while the other half integrated the AI into their work (“Cyborgs”). Cyborgs prompted 4-6x more on average, and benefited more from using AI. He then compared the improvement to the gains from the steam engine and power (18-22% improvements) and argued that AI users were seeing 17-40% gains. After talking about some uses of AI for WCAG, he discussed some examples of what AI had found in TestParty’s tests, such as cases hidden in 100k lines of code, and argued that human-in-the-loop AI use was incredibly powerful for accessibility, especially at the source code level.

One thing CSUN does not do well is provide spaces that are friendly to those of us with chronic illness, sensory overload, and other conditions that benefit from a quiet room. For that reason, I chose to stay in the CSUN hotel this year instead of with a friend. I spent most afternoons resting or sleeping in my room, so this story picks up the next morning when, after having breakfast with a close friend, I attended Lainey Feingold’s talk on the legal context in the US today (“Digital Accessibility Legal Update: U.S.”).

Again, I took notes on how the talk was given as much as on its content. I was particularly impressed by the care with which everyone who contributed to the talk in any capacity, from tech support to images, was credited. Feingold reassured us that even if disability is no longer a bipartisan issue, regulations can’t be changed with a memo, and private lawsuits can help to challenge actions. In fact, she argued that disability and civil rights lawyers are currently winning in the courts. Feingold also reminded us that global laws are much less at risk. Many of her points can also be found on her blog. I especially appreciated her comments on not complying in advance, a theme in her article on how states are pushing back on anti-DEIA executive orders, and in her article on joy as an approach to justice and the importance of celebrating victories.

The rest of the conference was the usual: presentations of my own, meeting with old friends and getting to know new ones, new ideas about technologies to use in my research, and continued efforts to rest as much as possible. One of the best parts of CSUN for an academic is the inherent interest in translation. For example, our group was approached by folks who work on WCAG, ARIA and similar standards about potential implications of our work. I returned home with new ideas, connections, and excitement for our field.

CHI Trip Report: Visualization, Fabrication and Accessibility

I regularly take notes when I attend a conference, and especially when attending for the first time after several years, I have the sense that there is so much to absorb and enjoy! I have been much less consistent about doing this in a public way, but this year I’m particularly trying to do so for the sake of the many people who can’t attend in person or at all, as well as all of us who can only go to one session when so many are worth attending!

It would be a very long blog post if I summarized all of the great accessibility work, as that is an increasingly large subset of CHI work. I’ll focus on things relevant to areas that are of growing importance to me — visualization and fabrication, along with a brief section on some accessibility highlights. To learn more about the experience of attending remotely versus in person, see my other trip report.

Visualization Sessions

I attempted to attend a mix of paper sessions and other conversations around data and accessibility. In the SIG on “Data as Human Centered Design Material” I had a number of interesting conversations. I spoke with Alex Bowyer, who looks at personal data use; one important use of data he mentioned is to “create evidence.” It is interesting to think about “data for activism” in this context. Another interesting conversation from that SIG centered on how to summarize complex data for retrospective interviews, and the accessibility concerns that arise there. Another challenge is how to design apps that use data effectively in the field/live use; again, accessibility concerns arise. Further, how do you properly generalize machine learning and visualizations? How do you improve startup, and scale well? How do you support customization of both visualizations and machine learning?

The session on accessible visualization was my next dip into visualization. The first talk was about audio data narratives for BLV users. Their study showed how audio narratives can draw out features that might be hard to hear otherwise, through strategies like foreshadowing and breaking up the basic sonification. The second talk was on 1-DOF haptic techniques for graph understanding for BLV users. The authors explored the value of static 3D printed charts and moveable haptic cues for helping with graph understanding tasks such as comparison. An interesting technique, though sonification also held its own in their data. An interesting question to me is whether a dynamic chart with a piece of flexible material over it (to “smooth” the line) would be better than the slider in terms of the experience — similar to the 3D printed chart, but dynamic. Next, VoxLens presented a retroactive fix for the wide range of charts that are inaccessible to screen reader users. The authors highlighted the state of online visualization today (which is truly terrible) and then provided a real solution to this problem. The solution combines sonification, high-level summaries, and NLP-based queries, all automatically supported given a simple configuration in JavaScript describing the axes and a pointer to the chart object. It would be interesting to see them take advantage of ideas from a related paper on proactive visualization exploration support agents in this work. The next presentation focused on improving color patterns to address color vision deficiency. The final paper in the session focused on infographic accessibility.
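
To make the configuration idea concrete, here is a minimal sketch in TypeScript of what such a setup might look like. This is not the actual VoxLens API (the helper and option names here are hypothetical), and the sketch only generates a high-level summary; a real library would also add sonification and natural-language querying.

// A hypothetical helper, NOT the actual VoxLens API: given a chart element,
// its data, and which fields sit on each axis, expose a high-level summary
// through the accessibility tree.
type AccessibleChartConfig = {
  x: string;     // field name for the independent variable
  y: string;     // field name for the dependent variable
  title: string; // used in the generated summary
};

function makeChartAccessible(
  element: HTMLElement,
  data: Record<string, number | string>[],
  config: AccessibleChartConfig
): void {
  const values = data.map((d) => Number(d[config.y]));
  const min = Math.min(...values);
  const max = Math.max(...values);
  const mean = values.reduce((a, b) => a + b, 0) / values.length;

  // Expose a summary via the accessibility tree so screen reader users
  // hear it when they reach the chart.
  element.setAttribute("role", "img");
  element.setAttribute(
    "aria-label",
    `${config.title}: ${config.y} by ${config.x}. ` +
      `Minimum ${min}, maximum ${max}, mean ${mean.toFixed(1)}.`
  );
}

// Usage: point the helper at an existing chart and describe its axes.
const chart = document.querySelector<HTMLElement>("#sales-chart");
if (chart) {
  makeChartAccessible(
    chart,
    [
      { month: "Jan", sales: 120 },
      { month: "Feb", sales: 150 },
      { month: "Mar", sales: 90 },
    ],
    { x: "month", y: "sales", title: "Monthly sales" }
  );
}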

Some additional papers also looked at accessibility. Joyner et al looked at visualization accessibility in the wild. They analyzed visualizations, surveyed and interviewed practitioners, and found that the vast majority were accessibility novices; 30-40% did not think it was their job, and 71% could think of reasons to eliminate accessibility features even though they acknowledged accessibility was important. They also highlight some difficulties in creating accessible visualizations, such as uncertainty about what to do and how to do it (for example, how to deal with filters), as well as a lack of organizational support and tool support. “ComputableViz” supports composition of visualizations through operations such as union, difference, and intersection. The authors discuss the potential for this approach to make a visualization more cognitively accessible. The intermediate representation used in this work is a relational database derived from the vega-lite specification — I think this has great potential for other accessibility applications including better description, change monitoring, end-user authoring of sonifications, and more. Finally, MyMove is a new method for collecting activity data from older adults.
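
As a small thought experiment on that potential, here is a toy sketch in TypeScript (not ComputableViz’s actual pipeline; the helper names are mine) of how a Vega-Lite specification already carries a tabular view of the data that could drive a generated text description. The same tabular view could just as easily feed change monitoring or sonification authoring.

// A simplified Vega-Lite-style spec: data values plus x/y encodings.
type VegaLiteSpec = {
  data: { values: Record<string, number | string>[] };
  mark: string;
  encoding: {
    x: { field: string; type: string };
    y: { field: string; type: string };
  };
};

// Derive a simple relational view (column names + rows) from the spec.
function toRelation(spec: VegaLiteSpec) {
  return {
    columns: [spec.encoding.x.field, spec.encoding.y.field],
    rows: spec.data.values,
  };
}

// One possible accessibility application: generate a short description.
function describeChart(spec: VegaLiteSpec): string {
  const { columns, rows } = toRelation(spec);
  const [xField, yField] = columns;
  const ys = rows.map((r) => Number(r[yField]));
  const maxRow = rows[ys.indexOf(Math.max(...ys))];
  return (
    `A ${spec.mark} chart of ${yField} by ${xField} with ${rows.length} values; ` +
    `the highest ${yField} is ${maxRow[yField]} at ${maxRow[xField]}.`
  );
}

const spec: VegaLiteSpec = {
  data: {
    values: [
      { month: "Jan", sales: 120 },
      { month: "Feb", sales: 150 },
      { month: "Mar", sales: 90 },
    ],
  },
  mark: "bar",
  encoding: {
    x: { field: "month", type: "ordinal" },
    y: { field: "sales", type: "quantitative" },
  },
};

console.log(describeChart(spec));
// "A bar chart of sales by month with 3 values; the highest sales is 150 at Feb."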

Two studies made use of simulation. I call them out because of their thoughtfulness in doing so — simulation can have negative impacts and is usually not an appropriate substitute for working with people with disabilities. One study modified published visualizations to simulate color vision deficiency and then crowdsourced large-scale data about their accessibility. I think this is a reasonable application of that technique for two reasons: (1) the benefits of data at this scale are high, and (2) the specific disability and task structure are unlikely to create bias either in the study data or in the participants (i.e. negatively influence their opinion of disability). Another good example of a study which used hearing people instead of DHH people was ProtoSound. This allowed them to collect data about the accuracy of non-speech sound recognition by their system. However, they made use of DHH input throughout the design process.

I also want to highlight research that I thought had interesting applications here even though it was not accessibility work: “Data Every Day” was interesting because of its emphasis on personalized data tracking, including customization, data entry, and authorship, all things that are under-explored in accessible visualization research. “Cicero” allows transformations to visualizations to be specified declaratively, thus making visualization changes computationally available, which creates many interesting opportunities. “CrossData” is an NLP interface to a data table which provides a fascinating model for authorship and exploration of data. Although the authors didn’t mention this, I think this could be a great way to author alt text for charts. “Diff In The Loop” highlights the ways that data changes during data science programming tasks as code changes. The authors explored a variety of ways to represent this (all visual), but the work highlights why change understanding is so important. It also raises issues such as what time scale to show changes over, which would be relevant to any responsive visualization interaction task as well. Fan et al’s work (VADER lab) on addressing deception made me wonder whether accessible charts (and which chart accessibility techniques) can perform well on visual literacy tests. Cambo & Gergle’s paper on model positionality and computational reflexivity has immediate implications for disability, and particularly highlights the importance of not only whether disability data is even collected but also things like who annotates such data. Finally, Metaphorical Visualizations translates graph data into arbitrary metaphorical spaces. Although the authors focus on visual metaphors, this could be valuable for audio as well.

There were several tangible visualizations, again relevant to but not targeting accessibility. These included shape-changing displays for sound zones, and STRAIDE, which used interactive objects mounted on strings that could be actuated to move vertically in space over a large (apparently 4 ft or so) range of heights to support rapid prototyping of moveable physical displays. Sauvé et al studied how people physicalize and label data. They explored the relation between type and location of labels, which may have implications for intuitive design of nonvisual physical charts. Making Data Tangible is an important survey paper that explores cross-disciplinary considerations in data physicalization.

Finally, a look at applications, where we see a mix that includes some accessibility-focused papers. CollabAlly can represent collaboration activities within a document, and provided an interesting synergy with Co11ab, an earlier talk the same week, which provided realtime audio signals about collaborators. This dive into what makes tables inaccessible suggests concrete viable solutions (and check out some other interesting work by this team). Graphiti demonstrated a thoughtfully designed approach to interacting with graph data extracted dynamically with computer vision, a complex and intricate exploratory visualization task. Xiaojun Ma and her students created a system for communicating about data analysis by automatically producing presentation slides from a Jupyter notebook. Working jointly and iteratively with the automation, starting from a source notebook, could make it easier to make those slides accessible. ImageExplorer vocalizes descriptions of an image alongside auto-generated captions to help BLV people identify errors. I wonder what one might need to change in this approach for auto-generated (or just badly written) chart captions. Two important learnings from ImageExplorer were the value of the hierarchical, spatial navigation it supported, and the need to support both text and spatial navigation. Cocomix focused on making comics, not visualizations, accessible to BLV people, but I think the many innovations in this paper have lots of applications for change description and dashboard description. Saha et al discuss visualizing urban accessibility, exploring how different types of secondary information and visualization needs vary for different stakeholders.

Fabrication Work

The vast majority of fabrication work this year did not directly engage with accessibility. That said, I saw a lot of potential for future work in this space.

I first attended the workshop Reimagining Systems for Learning Hands-on Creative and Maker Skills. We discussed a wide variety of topics around who makes, how people make, and what we might want to contribute to and learn about those experiences as a community. We talked about who is included in making as well. Some provocative discussions happened around whether digital tools or physical tools are a better starting place for learning; how we engage with diverse communities without “colonizing” those spaces; whether we should “claim” diverse crafts as making or not; the ways in which new technologies can inhibit learning (everyone makes a keychain tag); and the need to educate not only students but also educators. Another interesting discussion happened around how time plays out in making and adds challenges when things take time to print, grow, etc. Another topic that came up was the digital divide in access to maker spaces, i.e. what may be available in rural communities. Another was tutorial accessibility and whether/how to improve that. Some of the interesting people I learned more about include: Jun Gong (formerly at MIT, now at Apple; does everything from interaction technique design to metamaterials); Erin Higgins (introducing youth in Baltimore City to making); and Fraser Anderson (whose work includes instrumenting maker spaces to better understand maker activities). I also want to highlight some students I already knew about here at UW 🙂 including my own student Aashaka Desai (working on machine embroidered tactile graphics) and Daniel Campos Zamora, who is interested in the intersection of fabrication and access (ask him about his mobile 3D printer :).

The first fabrication session I made it to was a panel that placed fabrication up against VR/AR/haptics. Hrvoje Benko and Michael Nebeling were in “camp AR/VR” and listed challenges such as interaction devices, adaptive UIs, and so on. Valkyrie Savage, in “camp fabrication,” talked of interaction and sensing (as well as the sensory systems); Huaishu Peng & Patrick Baudisch talked about fabrication + interaction, small artifacts, and AR/VR. The wide-ranging discussion mentioned machine reliability; accessibility and 3D printing as a public utility; using AR/VR to bridge the gap between a physical object that almost fills a need and reality (repurposing); and business (i.e. Amazon vs Fab). I will focus on what I see as the accessibility challenges in the arguments made. Can we have anything delivered to our door in 2 hours? I would claim that our process today works for mass manufacturing but is not customizable to people with disabilities. Next, a potential flaw of AR/VR is its dependence on electricity — if we are to make meaningful, useful objects, physicality is essential for people who either cannot count on having power, or who depend upon a solution being available all the time. Another concern is bridging from accessible consumption to accessible, inclusive authoring tools. As Valkyrie Savage argued, physicality addresses multiple types of inclusion better than VR/AR. Lastly, materials were discussed briefly, though not from an accessibility perspective (though material properties are critical to successful access support in my mind).

Moving on to individual papers, I was excited to see multiple people talking about the importance of programming language support for parametric modeling and physical computing (an interest of my own). For example, Tom Veuskens’ ideas about combining WYSIWYG and code for parametric modeling look highly intuitive and fun to use. Valkyrie Savage talked about the importance of this for novice learners and laid out the idea of a Jupyter-notebook-style approach to physical computing design. Tom Veuskens et al. provide a beautiful summary of the variety of code-based approaches to supporting re-use in the literature. Another interesting, and I think related, set of conversations happened around how you understand making activities. An example is Colin Yeung’s work on tabletop time machines. The concept of being able to go back and forward in time, ideally with linkages to physical actions, project state, and digital file state, is really interesting from an accessibility perspective. Relatedly, Rasch, Kosch and Feger discuss how AR can support learning of physical computing. Finally, Clara Rigaud discussed the potential harms of automatic documentation.

In the domain of modeling tools, accessibility did not come up much. While many papers mentioned custom modeling tools, none of the talks mentioned accessibility for authors/modelers, nor did they report on studies with disabled people. The most obvious challenge here is BLV accessibility; but to the extent that generative design is simplifying the modeling experience and is computationally controllable, I think there are some very easy ways to improve on this situation. I was actually most intrigued by a non-fabrication paper’s relevance to both fabrication and accessibility: HAExplorer, which the fabrication person in me sees as an interesting model for understanding the mechanics of motion of any mechanism, not just biomechanics. In addition, its visualizations of mechanics raise the possibility of accessible visualizations of mechanics.

The haptics session had multiple potential accessibility applications. FlexHaptics provides a general method and design tool for designing haptic effects using fabrication, including in interactive controllers, while ShapeHaptics supports design of passive haptics using a combination of springs and sliders which together can control the force profile of a 1D slider. An interesting accessibility side effect of this approach is the ability to modify existing objects by adding both physical and audio effects at key moments, such as when pliers close. In some configurations, the components can also be easily swapped, thus allowing for manual selection of an experience based on context. ReCompFig supports dynamic changes to kinematics using a combination of rods and cables (which can be loosened or tightened by a motor). They create effects like a joystick, a pixelated display with varying stiffness, and a range of haptic effects such as holding a glass of liquid or a stiff bar. 3D printed soft sensors uses a flexible lattice structure to support resistive soft sensor design.

Interesting materials included Embr, which combines hand embroidery with thermochromic ink; the work characterizes 12 of the most popular embroidery stitches for this purpose and supports simulation and design. The results are beautiful to look at, and I wonder if there is potential for creating shared multi-ability experiences, such as combining capacitive touch-based audio output with simultaneous visual feedback. I loved multiple aspects of FabricINK because of its sustainable approach (recycling e-ink) as well as the way it opens the door to fine-grained displays on many surfaces. Luo et al present pneumatic actuators combined with machine knitting. ElectroPop uses static charge combined with cut mylar sheets to create 3D shapes (it would be interesting to know how well these would hold up to exploratory touch). SlabForge supports design (for manual production) of slab-based ceramics. Finally, the Logic Bonbon provides a metamaterial edible interaction platform. It was nice to see the range of materials+X being presented, and there are lots of interesting potential applications of these low-level capabilities.

Another grouping of papers explored capabilities that could increase the accessibility of objects with additional information. M. Doga Dogan et al created InfraredTags, which allow embedding of infrared QR codes in 3D printed objects. Although not a focus of the paper, this has accessibility value (for example, an audio description could be embedded), though it does require an IR camera. Pourjafarian et al present Print-A-Sketch, which supports accurate printing on arbitrary surfaces, including reproductions of small icons as well as scanning and line drawing. Although not a focus of the work, it would be very interesting to think about how this could be used to create tactile and interactive graphics, or to annotate existing documents and objects for increased accessibility. It would also be interesting to think about inks with properties other than conductivity (such as raised inks).

Although my theme is accessibility, disability justice is an intersectional issue that must consider other aspects of design. In that vein, I want to highlight two lovely examples of sustainable design: ReClaym, which creates clay objects from personal composting, and Light In Light Out, which harvests and manipulates natural light computationally.

There were a few papers that focused on accessibility, all in the applications space. Roman combines object augmentation with a robotic manipulator to support a wide range of manipulation tasks not easily handled by robotic arms. The robot is augmented with a motorized magnetic gripper, while the target object is augmented with a device that can translate rotary motion into an appropriate motion and force profile. Mobiot is a system that supports the creation of custom print and assembly instructions for IoT mobile actuators by leveraging recognition of objects in a known model database combined with demonstration. My student Megan Hofmann presented Maptimizer, a system which creates optimized 3D printed maps; it was very synergistic with 3D printed street crossings for mobility training. TronicBoards support electronic design. FoolProofJoints improves ease of assembly, which, although not tested with disabled participants, seems to me to have direct accessibility benefits. One final application was material adaptations for supporting accessible drone piloting; this involved software adaptations, physical control adaptations, and posture adaptations. The authors supported multiple disabilities (and multiply disabled people), and open-sourced their toolkit.

Other Disability/Accessibility Highlights

Dreaming Disability Justice Workshop: This workshop discussed academic inclusion and the importance of research at the intersection of race and disability that is strongly influenced by both perspectives in conjunction (as opposed to a “this and that” model), and also just not erasing that history (e.g. see these examples of Black, disabled, powerful women). Some of the interesting people I learned more about include: Cella Sum (politics of care in disability spaces); Frank Elavsky (accessible visualization); Harsha Bala (an anthropologist); Erika Sam (Design Researcher at Microsoft); and Tamana Motahar (a PhD student studying personal informatics and social computing for empowering marginalized populations worldwide).

On Tuesday morning I attended Accessibility and Aging. The first talk explored the experience of older adults sharing the work of financial management. Joint accounts and power of attorney are both problematic mechanisms; banking assistance needs fall along a spectrum that these are too blunt for. These concerns seem common to many domains (e.g. health portals, social media accounts, etc). The third talk, a paper I collaborated on about access needs in research design, was presented by Kelly Mack. How can we empower researchers to anticipate the broad range of disabilities they might encounter? Anticipate (make things accessible from the beginning), Adjust as needed, and end by Reflecting on things like power dynamics and access conflicts. Next, Christina Harrington‘s award-winning paper addressed communication breakdowns in voice-based health information seeking among Black older adults. Her paper addressed critical concerns such as what dialects are supported by voice systems.

In reviewing other accessibility-related talks, I want to highlight the importance of end-user and multi-stakeholder authoring of interaction and experiences. Dai et al discussed relational maintenance in AAC conversation between users and caregivers. Seita et al show how DHH and speaking people can effectively collaborate when using a visual collaboration system like Miro. Ang et al discuss how video conferencing systems can better support the combination of speaking, signing, and interpreting in mixed ASL/spoken language video conferencing calls. One concern — the difficulty of dynamically adjusting communication — reminded me of challenges I’ve experienced with online conferencing as well. Another mixed-stakeholder case is Waycott et al’s study of staff’s role in ensuring that VR experiences for elders are positive. A unique case of multi-stakeholder interaction is Choi’s analysis of disabled video bloggers’ interaction with content curation algorithms and how algorithms could better support identity negotiation and audience selection.

I also want to highlight some interaction design work. I loved the WoZ study with ASL control of voice assistants by Glasser et al. It got right to the heart of an important unsolved problem, and in the process shed light on challenges we must solve for ASL as an input language for all sorts of automated contexts. Zhao et al described user-defined above-shoulder gesturing for motor-impaired interaction design, highlighting the importance of end user control over the interaction language for things like social acceptability and comfort. Nomon is an innovative approach to single switch input. This might be amenable to smooth saccade tracking as well.

To summarize: an inspiring event. I was glad to be able to review it so thoroughly, thanks to all of the online talks, which I watched asynchronously.

Fabrication Work at ASSETS 2020

I attended my second virtual conference in a week, ASSETS 2020. Once again, kudos to the organizers for pulling off a wonderful experience. It was very similar to UIST (discord+zoom), with some different choices for format — a slightly slower pace with more opportunities to take breaks. I’m not sure I have a strong preference there, but I was definitely more tired after the longer days.

One other significant difference was the lack of a video option in Discord — this choice was made for accessibility reasons, because interpreters would only be possible on zoom, and it was (somewhat) made up for by the many zoom social events. Still, I did miss the more unplanned nature of the UIST social events. I wonder if there’s a way to have the best of both worlds — spontaneity and accessibility.

There was far too much outstanding work at ASSETS for me to summarize it all, including lots of award-winning work by UW accessibility researchers (CREATE). However, in this post I want to sample a particular subset of ASSETS work: Fabrication work. I was excited to see more and more of this at ASSETS, ranging from bespoke projects such as this one-handed braille keyboard (video) to this innovative exploration of low-cost materials for making lock screens tangible which explored everything from cardboard to quinoa (video)!

One theme was making the fabrication process itself accessible to people with disabilities. For example, Lieb et al used haptics to allow 3D model mesh inspection by blind 3D modelers (video), while Race et al presented a workshop curriculum for nonvisual soldering (video).

Of course there was a range of papers exploring tactile graphics. For example, PantoGuide is a hand-mounted haptic display that presents metadata as the user explores (video). I particularly loved Gong et al’s study, which takes a nuanced approach to image understanding that values a variety of ways of understanding graphics.

A final theme I want to call out is in the space of physical computing: support for learning was a theme in TACTOPI (Abreu et al; video) and TIP-Toy (Barbareschi et al; video). TIP-Toy particularly interested me because its support extended to not only consumption of content but also authorship. On a completely different note, Kane et al.’s fascinating work on the ableist assumptions of embedded sensing systems received a best paper nomination.

While this quick tour by no means covers every relevant project, it does highlight the wide range of ASSETS work at the intersection of accessibility and fabrication. I look forward to seeing this area expand in years to come!

UIST 2020 Trip Report

I have just finished attending UIST and loved the format this year — it’s been outstanding to attend UIST remotely, and the format of short talks plus Q&A has been very engaging. I think the use of both Discord and Zoom worked really well together.

A little background — I haven’t been able to attend UIST regularly for quite a while due to a variety of personal and family obligations, and disability concerns. So for me, this was an enormous improvement, going from 0 to 70% or so. I imagine that for those who feel they’re going from 100% to 70% it may have been less ideal, but the attendance at the conference demonstrates that I was definitely not the only person gaining 70% instead of losing 30%.

I want to speak to two things that surprised me about the online format. First, the value of immediate connection making when I think of something, and the space and time to make note of that and follow up on it right away, was striking. I was sending things to various students all morning, particularly on Thursday, when I went to so many demos and talks.

A second value was that of making connections even for people not attending. For example, I posted a question in a talk thread that came from a student who wasn’t attending UIST, and the speaker ended up emailing that student and making the connection deeper before the day ended. I don’t think this would have happened at an in-person event.

I also want to reflect on some of the content. What inspired me at UIST this year was the variety of work that had really interesting accessibility implications. Maybe that just happened to be my own lens, but the connections were very strong. In many cases, the technology facilitated accessibility but wasn’t directly used that way; in others, the application to accessibility was directly explored. Some examples, in the order I happened upon them:

This video demonstrates an interesting combination of programming and graphics. The work treats data queries as a shared representation between the code and interactive visualizations. A very interesting question is whether there could be the possibility of also generating nonvisual, accessible visualizations in addition to visual ones.

Bubble Visualization Overlay in Online Communication for Increased Speed Awareness and Better Turn Taking explores how to help second language speakers adjust their speed awareness and turn taking. However, this would also be very valuable when a sign language translator is present. On the topic of audio captioning, one of the papers/demos that received an honorable mention focused on live captioning of speech, and was running live every time I saw the authors, with a Google Glass-like interface. The major contributions of this work include a low-power modular architecture that enables all-day active streaming of transcribed speech in a lightweight, socially unobtrusive HWD; a technical evaluation and characterization of power, bandwidth, and latency; and usability evaluations in a pilot and two studies with 24 deaf and hard-of-hearing participants to understand the physical and social comfort of the prototype in a range of scenarios, which align with a large-scale survey of 501 respondents. This is exemplary work that featured a user among its authors and lots of experimentation. The project uses the Google speech API and Live Transcribe engine and can also do real-time translation and non-speech sound events.

Another system, Unmasked, used accelerometers on the lips to capture facial expressions and display them using a visualization of lips outside a mask, making speaking while wearing a mask more expressive (video). It would be interesting to know whether this improved lip reading at all. Very impressive. Finally, a system for authoring audio description of videos was shown, a very difficult problem without the right tools. An interesting question my student Venkatesh raised is whether this could be combined with crowdsourcing to partly automate descriptions.

Interface design was another theme, sometimes connected directly to accessibility (as in this poster on Tangible Web Layout Design for Blind and Visually Impaired People) and sometimes indirectly: this project allows multimodal web GUI production (video) and this project converts a GUI to a physical interface. It is an interesting twist on helping people build interfaces, as well as on supporting physical computing; in general, converting GUIs from one modality to another has interesting accessibility implications.

Next, “multiwheel” is a 3D printed mouse for nonvisual interaction (video), while swipe&switch is a novel gaze interaction interface that improves gaze input (traditionally very difficult to deal with) by speeding it up (video). Turning to interaction “with the world” instead of the computer, this system has the important advantage of giving people who are blind agency (a key tenet of a disability-justice-focused approach) in deciding what they want to hear about when navigating the world, by letting them use a joystick to explore their surroundings by “scrubbing” (video). The system is currently implemented in Unity; it will be interesting to see how it performs in real-world environments.

On the fabrication front, several projects explored accessibility applications. In the sports domain, this demo showed a custom prosthetic end effector for basketball (video). A second project simulated short arms and small hands. While this was not intended for accessibility uses, the use of simulation is something that accessibility folks often critique, and the project does not problematize that choice, focusing instead on the technical innovations necessary to create the experience (video). Another fabrication paper allowed embedding transformable parts to robotically augment default functionalities (video). And then there was the paper that won the best demo award:

This very cool demo is a tool for creating custom inflatable motorized vehicles. A bike-like vehicle and a wheelchair-like vehicle are demonstrated. Vehicles can easily be customized to a user’s skeletal characteristics.

We chatted briefly about the potential of partial inflation for multiple purposes, pressurizing on demand, and how to add texture either at manufacture time or using a “tape on” technique, e.g. for off-roading.

Some of the fabrication work I was most excited about wasn’t directly accessibility but had interesting implications for accessibility. For example, the robotic hermit crab project (video) tied one robot to many functions, making a really fun set of opportunities for actuation available. I could imagine making an active, tangible desktop a reality using such a system. Two papers provided extra support when assembling circuits and physical objects, with I think obvious potential accessibility applications, and one is a very cool general mechanism for addressing uncertainty in size during laser cutting, which can allow users with less experience to share and produce laser cut objects. Another beautiful piece of work supports making wooden joints. Finally, Defextiles supports printing of cloth using a consumer-grade printer, an advance in materials and flexibility. All of these innovations help to broaden the set of people who can repeatably make physical objects, including accessibility objects. And of course I have to call out our own paper, KnitGist: optimization-based design of knit objects, as falling into this category as well (video). Lastly, there was some very interesting work on laser-cut velcro that can be recognized, based on the shape of the velcro, when you tear one thing off another (video). Could you embed tactile signals in the velcro for blind people (we can, after all, 3D print velcro now)?

Another exciting session focused entirely on program synthesis. This paper looks at ambiguous user examples in the context of regexps; in small-step live programming, a program’s output is shown live as it is edited; in this article, users can edit the results (instead of the program) and the synthesizer suggests code that would generate those results; and the last one addresses loop understanding.

That concludes a very long list of inspiring work that I enjoyed at UIST this year. I sometimes think that an advantage of missing multiple years of a conference is how fresh and exciting it all seems when you get back to it. That said, I truly think UIST was also just fresh and exciting this year. Kudos to everyone involved in making it such a success!

2017 SIGCHI Accessibility report

The SIGCHI Accessibility group has put out its 2017 report, which anyone can comment on. I’ve also pasted the text in here.

Contributors to the report: Jennifer Mankoff, Head, SIGCHI Accessibility Community, Shari Trewin, President, SIGACCESS

Contact person: Jennifer Mankoff, jmankoff@cs.cmu.edu

Introduction

It has been two years since the SIGCHI Accessibility Community first published a report on the state of accessibility within SIGCHI, summarized in [Mankoff, 2016]. This report was the first of its kind for SIGCHI, and reflected a growing awareness in the SIGCHI community of a need to directly engage with and improve the accessibility of both conferences and online resources sponsored by SIGCHI. The report argued that “SIGCHI can attract new members, and make current members feel welcome by making its events and resources more inclusive. This in turn will enrich SIGCHI, and help it to live up to the ideal of inclusiveness central to the concept of user-centered design” and that ACM’s own guidelines clearly argue for a standard that SIGCHI was not meeting.

The report laid out a series of five recommendations for improving the accessibility of SIGCHI, including the accessibility of conferences (R1) and content (R2). Additional recommendations include a better process for handling accessibility requests (R3), increasing representation of people with disabilities within SIGCHI (R4) and assessing success at least once every two years (R5). This update is our attempt to fulfill R5.

The rest of this report starts with an executive summary of the biggest accomplishments of the last two years (along with the data sources used to draw those conclusions). The remainder is organized by recommendation, highlighting the goals set out and what has been accomplished.

Table of Contents

Introduction

Table of Contents

Executive Summary

Data Used in this Report

R1: Conference Accessibility

R2: Content Accessibility

R3: Handling Accessibility Requests

R4: Increasing Representation

R5: Assess Success

Executive Summary

The SIGCHI Accessibility community has made significant progress over the last two years, particularly on the short-term goals set out for R1 (conference accessibility), R4 (increasing representation) and R5 (assessing success). Less progress has been made on R2 (content accessibility) and R3 (handling accessibility requests). Most significant among our accomplishments are the release of new accessibility guidelines for conferences, developed collaboratively with SIGACCESS (http://www.sigaccess.org/welcome-to-sigaccess/resources/accessible-conference-guide/); publication of a CACM article on accessibility efforts within SIGCHI [Lazar, 2017]; addition of an accessibility representative to the CHI Steering committee; and establishment of a facebook group, sigchi-access.

Despite our progress, there are major steps that still need to be taken in every category. In addition, the last two years’ effort has made it clear that the five goals represent a more ambitious program than a small volunteer community can easily achieve all on its own. We are very grateful for the help of SIGACCESS and members of the SIGCHI leadership, which have made a huge difference in what the community was able to accomplish in the last two years. However, as long as we remain an all volunteer effort, the SIGCHI Accessibility community will have to decide what to prioritize over the next two years: Continued effort on R1, or other goals. For these reasons, we strongly recommend that both the SIGCHI leadership and the community consider what funds might be available to help move this effort forward, what tasks can be transferred to contract employees of SIGCHI such as Sheridan, and fundraising efforts to support these changes.  

Analysis of SIGCHI Accessibility Metrics

The SIGCHI Accessibility Community’s efforts focus on the digital and physical accessibility of SIGCHI resources. As such, important metrics relate to the experiences of people with disabilities who consume those resources as well as the overall percentage of the resources that are accessible. As will become evident in this report, our current focus is on online videos, pdfs, and conference accessibility. Thus, relevant metrics might include, for example, the percentage of new videos that are accessible, the percentage of PDFs that are accessible, the percentage of conferences that have accessibility chairs, and surveys of conference attendees.

For this report, the Accessibility Community combined several different sources of data. As with the previous report, much of our data is focused on conference and meeting participation, and we use the same categories: We refer the reader to the original report for details on these categories. To briefly summarize, we include direct observation, experiences reported by SIGCHI accessibility community members and survey data from conferences (CHI and CSCW 2014-16). Appendix A has the survey questions that were asked. We did not conduct a survey of our own of community members this year.

Results

Overall, there is a trend toward increased accessibility at CHI, based on the conference surveys. However, these numbers are difficult to interpret, since the direct question about access needs was not asked in 2016. In addition, it is possible that multiple people might report on the same person not being able to attend; some of these answers are about financial barriers rather than disability; the sample is self-selected; most people who cannot attend may not be remembered by attendees filling out the survey; and people with disabilities who attend may not want to disclose.

Figure: Accessibility questions and answer counts from the CHI 2014, 2015, and 2016 conference surveys.

For all of these reasons, it is in many ways more instructive to look at the categories of issues reported. However, in our analysis, the majority of the specific data reported was from CHI 2016. Thus, we cannot easily describe trends over time. Here are the major categories mentioned in the open-ended responses to the CHI 2016 survey: difficulty of networking; lack of ramps for presenters; need for speech-to-text or sign language interpretation; need for accessible digital materials (program, maps, pdfs); distance between rooms; lack of reserved seats in the front of rooms (e.g., for the hearing impaired); cost of attendance for people with disabilities; prioritization of robots over people with disabilities; accessibility of robots; food issues; need for a quiet room or space; and bathroom usability.

In addition, there were several issues that have to do with educating the community. These included the need to coach presenters to remember that some people cannot see their slides, the importance of an SV orientation training segment for special needs work, and the importance of publishing accessibility issues up front (before conference paper submission).

In terms of hard metrics, conference accessibility is increasing. In the chart below, a score of 1 means that the conference either had a page on its website that mentioned accessibility or had an accessibility chair of some sort on the organizing committee (a score of 2 means both). The Total (bottom bar) is the total number of conferences that year with a score of at least 1.

Figure: Chart showing conferences that mentioned accessibility on their conference website in 2015/16 and 2017/18. The trend is increasing, but the numbers are small.

Video accessibility is also increasing: captions are now included in all videos uploaded through the official SIGCHI Video service (see R2). PDF accessibility is low, and not improving, hovering around 15% [Brady, 2015; Bigham, 2016].

R1: Achieve Conference Accessibility

Ensure that 100% of conferences are accessible, have an accessibility policy and have a clear chain of command for addressing accessibility issues.

The first recommendation from the 2015 report dealt with conference accessibility. The community set out a series of short term goals:

  • [Not Met] Have an accessibility chair for every conference by the end of 2017. This goal has not been achieved. However, the SIGCHI accessibility community now has representation on the CHI steering committee, and was asked to present to the SIGCHI Executive Committee. Both groups voted to ask all conferences to have a web page documenting their efforts to be accessible (or acknowledging a lack of accessibility), specifically on a /access subpage of their conference URL.
  • [Met] Educate the community. While there is always more work to be done, the SIGCHI accessibility community has participated in several education efforts, including the publication of an interactions blog post and article [Mankoff, 2016]; a CACM article on making the field of computing more inclusive [Lazar, 2017]; a 2016 SIG on the state of Accessibility within SIGCHI held at CHI [Rode, 2016]; the creation of a facebook group called SIGCHI-access; and the aforementioned presentations to the SIGCHI leadership.
  • [Met] Create updated conference accessibility guidelines in collaboration with SIGACCESS. This goal was completed, and the guidelines are available at http://www.sigaccess.org/welcome-to-sigaccess/resources/accessible-conference-guide/
    In addition to providing general guidance for making a conference accessible, the guidelines provide instructions specifically for how to generate the content that can be placed at the /access URL.

To summarize, we have made some progress but have not achieved our most important goal of widespread adoption of accessibility practices across conferences. In addition, we have not yet started on the longer-term goals of establishing best practices and financial viability. That said, we hope that the /access URL will move us toward the ultimate goal of having all conferences plan for accessibility from the very beginning.

R2: Achieve Content Accessibility

Ensure that 100% of new content such as videos and papers meets established standards for accessibility and develop a process for achieving this.

The SIGCHI Accessibility community has not focused as much attention on this goal as on R1 in the last two years. Our short term goals for content accessibility include assessing current status, creating guidelines to use as a standard, and developing a process for addressing accessibility. These goals need to be addressed for multiple types of content — papers, videos, and websites.

The SIGCHI Accessibility community, and others within CHI, have mainly focused on papers and videos in the last few years. No work has been done on websites. With regard to paper accessibility:

  • [Partly Met] Assess current status. We do not have a comprehensive assessment of current status. However, some statistics on paper accessibility are available in the CACM article [Lazar, 2017]. Specifically, pdf accessibility of CHI papers from 2013 through 2016 is mentioned in that article, and the numbers hover around 15 percent in most years.
  • [Not Met] Create guidelines to use as a standard. This is an open problem, particularly because in the case of paper accessibility, the most recent ACM revision of the SIGCHI template is less accessible than the 2016 version. SIGACCESS is experimenting with an HTML5 format that SIGCHI may want to adopt if it is successful.
  • [Not Met] Develop a process for addressing accessibility. Multiple methods for paper accessibility have been tested so far (giving authors notifications of their accessibility flaws in CHI 2014, having a team of people make fixes on request in CHI 2015). It’s clear that the most effective approach used so far (based on the data in the CACM paper) is a contract-based approach that includes professional support, underwritten by conference or journal budgets. Such an approach will be more effective and consistent than one that relies entirely on author/conference volunteers.

With regard to video accessibility, progress has been very successful. A process for video captioning was piloted with the help of the SIGCHI CMC in 2014/2015, and has now been adopted as part of the SIGCHI Video service: “SIGCHI provides conferences with equipment to record talks… Conferences only pay for shipping the equipment, captioning and one additional student volunteer per track for recording” (emphasis ours). An external company (rev.com) is used to write up the captions, and SIGCHI’s video program chairs ensure they are uploaded with the videos.

Currently, more than 400 videos have been captioned (all keynotes and award talks, all talks from CHI 2015 and 2016). Additionally, this program has been expanded from just CHI to include all upcoming specialized conferences that use SIGCHI video equipment. Conferences added this year include UIST, ISS and SUI.

To summarize, progress has been very successful for videos, but is slow on pdfs, and nothing is known about websites. The SIGCHI Accessibility community believes that the primary barrier to meeting these goals is having enough volunteers to focus on them, and having a commitment to hire professional support to execute them. The success of the videos in comparison to papers is evidence for this.  In addition, it is likely that better participation in R1 will be needed before process goals in R2 can be met effectively.

R3: Handle Accessibility Requests

Create a process for handling accessibility requests within SIGCHI.

 

  • [Partly met] Create a single point of contact for accessibility questions and advertise it SIGCHI-wide. The community has created a single point of contact for support and discussion through the facebook sigchi-access group. However, there is still not a well-known, established mailing list for more formal requests, conference chair support, and so on.

 

  • [Not Met] Study the legal context. No effort has been put into this at the moment. However, work external to the SIGCHI Accessibility community has resulted in a first publication on this topic [Kirkham, 2015].

Progress on R3 is very slow. Visibility among the SIGCHI leadership, which has recently increased, should make it easier to establish a communication structure for supporting conferences, and we are optimistic that this will change before the next report two years from now. Studying the legal context is a costly proposition that may be harder to act on, and consideration should be put into what would best lay the groundwork to make this possible.

On the positive side, the SIGCHI Accessibility community has met its long term goal of establishing a position focused on Accessibility among the SIGCHI leadership, specifically with respect to representation on the CHI Steering committee mentioned above. Other long-term goals such as a SIGCHI-wide budget for accessibility have not been met yet.

R4: Increase Representation

Increase representation of people with disabilities within SIGCHI.

  • [Goal Met] Run a networking event. This goal was met in 2016 in the form of a SIG on accessibility at CHI. In 2017 this was formalized through the addition of accessibility as a component of the Diversity and Inclusion lunch at CHI. Finally, SIGACCESS and the SIGCHI Accessibility community held a joint pizza party at CHI 2017. We hope to see these types of events continue on a yearly basis.

Our short-term goal was met for R4; however, the larger issue of inclusion is not resolved. It has been gratifying, though the SIGCHI Accessibility community takes no credit, to see the increased use of Beams to address accessibility, as well as the addition of remote committees to the CHI program meeting and the announcement of funding to increase diversity and inclusion events. All of these help to increase access. However, what is not clear is whether there is an increase in participation by people with disabilities or how we would measure that. In addition, there are other long-term goals mentioned in the original report that we can consider working on in the next few years, such as fellowships, mentoring, and increased outreach to related stakeholders.

R5: Assess Success

Assess SIGCHI’s success in meeting accessibility guidelines at least once every 2 years.

  • [Goal Met] Produce regular reports. This report represents our success on the basic goal of reporting on progress, based on post-conference surveys and other data.

Our immediate goal of assessing and reporting on progress is met by this report. However, we have not established a sustainable approach to collecting the appropriate data on a regular basis; it is still very ad hoc. Some metrics are ill-defined or not tracked.

Assessment of program committees, awards, and so on, has not yet begun. However, we have been working with SIGCHI to ensure that conferences are assessed in a consistent way from year to year. The tentative survey questions we have designed for this purpose are:

Did you request any accessibility-related accommodations for the conference? Yes or No?

If yes, were your requests met? No, a little, some, a lot, Yes

What did you request and how could we improve your experience?

Possible additional questions if the chair wants:

Because we cannot know who did not attend the conference for accessibility reasons, it would be helpful to know about the existence of such issues. Please tell us if you are aware of a situation we should consider, while respecting the privacy of relevant parties.

Finally, the long term goal of taking the burden of reporting off the shoulders of volunteers has not been met. The SIGCHI Accessibility community should consider raising funds to support these efforts over the next few years.  

Appendix A: Conference Survey Questions

At conferences, the following questions were asked:

CHI 2014/2015 Did you request any accessibility-related accommodations for the conference? If yes, were your requests met in a timely manner?

CHI 2014-2016 Do you know of any researchers, practitioners, educators or students  with disabilities who wanted to attend the CHI conference, but could not because they required specific accommodations? What accommodations could we provide to help them attend?

CSCW 2015 Do you have any suggestions for things the CSCW 2016 planning team should particularly attend to in order to make sure the conference is accessible to people with disabilities?

All years: Sometimes generic questions about the conference received accessibility-specific responses, which we also analyzed.

Appendix B: Changes to ACM SIGCHI Conferences Policy

http://www.sigchi.org/conferences/organising-a-sigchi-sponsored-or-co-sponsored-conference under accessibility was changed to say:

ACM strives to make conferences as accessible as possible and provide reasonable accommodations so individuals with disabilities can fully participate. In many countries this is a legal requirement, and compliance should not be taken lightly. We recommend appointing an Accessibility Chair for your conference.  SIGACCESS and SIGCHI’s accessibility community have developed guidance for conference organizers that steps you through all stages of conference planning from site selection to the event itself.  The SIGCHI accessibility community (reachable at SIGCHIaccess on facebook and sigchi-accessibility@googlegroups.com),  SIGACCESS  (reachable at  chair_sigaccess@acm.org) and ACM support staff stand ready to help you make your event as inclusive as possible.

[Mankoff, 2016] Jennifer Mankoff. The wicked problem of making SIGCHI accessible. interactions 23(3): 6-7 (2016). DOI 10.1145/2903528

[Brady, 2015] Erin L. Brady, Yu Zhong, Jeffrey P. Bigham. Creating accessible PDFs for conference proceedings. W4A 2015: 34:1-34:4

[Bigham, 2016] Jeffrey P. Bigham, Erin L. Brady, Cole Gleason, Anhong Guo, David A. Shamma. An Uninteresting Tour Through Why Our Research Papers Aren’t Accessible. CHI Extended Abstracts 2016: 621-631

[Lazar, 2017] Jonathan Lazar, Elizabeth F. Churchill, Tovi Grossman, Gerrit C. van der Veer, Philippe A. Palanque, John Morris, Jennifer Mankoff. Making the field of computing more inclusive. Commun. ACM 60(3): 50-59 (2017)

[Rode, 2016] Jennifer Ann Rode, Erin Brady, Erin Buehler, Shaun K. Kane, Richard E. Ladner, Kathryn E. Ringland, Jennifer Mankoff. SIG on the State of Accessibility at CHI. CHI Extended Abstracts 2016: 1100-1103

[Kirkham, 2015] Reuben Kirkham, John Vines, Patrick Olivier. Being Reasonable: A Manifesto for Improving the Inclusion of Disabled People in SIGCHI Conferences. CHI Extended Abstracts 2015: 601-612

 

FABulous CHI 2016

At the CHI 2016 conference this week, there were a slew of presentations on the topic of fabrication. First of course I have to highlight our own Megan Hofmann, who presented our paper Helping Hands, a study of participatory design of assistive technology that highlights the role of rapid prototyping techniques and 3D printing. In addition, Andrew Spielberg (MIT & Disney intern) presented RapID (best), a joint project with Disney Research Pittsburgh which explores the use of RFID tags as a platform for rapidly prototyping interactive physical devices by leveraging probabilistic modeling to support rapid recognition of tag coverage and motion.

I was really excited by the breadth and depth of the interest at CHI in fabrication, which went far beyond these two papers. Although I only attended by robot (perhaps a topic for another blog post), attending got me to comb through the proceedings looking for things to go to — and there were far more than I could possibly find time for! Several papers looked qualitatively at the experiences and abilities of novices, from Hudson’s paper on newcomers to 3D printing to Booth’s paper on problems end users face constructing working circuits (video; couldn’t find a pdf) to Bennett’s study of e-NABLE hand users and identity.

There were a number of innovative technology papers, including Ramaker’s Rube Goldbergesque RetroFab, Groeger’s HotFlex (which supports post-processing at relatively low temperatures), and Peng’s incremental printing while modifying. These papers fill two full sessions (I have only listed about half).   Other interesting and relevant presentations (not from this group) included a slew of fabrication papers, a study of end user programming of physical interfaces, and studies of assistive technology use including the e-NABLE community.

Two final papers I have to call out because they are so beautiful: Kazi’s Chronofab (look at the video) and a study in patience, Efrat’s Hybrid Bricolage for designing hand-smocked artifacts (video, again can’t find the paper online).