[Image: a knit lampshade in blue and green surrounding a lit lamp on an orange table.]

UIST 2020 Trip Report

I have just finished attending UIST and loved the format this year. It's been outstanding to attend UIST remotely, and the short talks with Q&A have been very engaging. I think Discord and Zoom worked really well together.

A little background: I haven't been able to attend UIST regularly for quite a while due to a variety of personal and family obligations, and disability concerns. So for me, this was an enormous improvement, going from 0% to 70% or so. I imagine that for those who feel they're going from 100% to 70% it may have been less ideal, but the attendance at the conference demonstrates that I was definitely not the only person gaining 70% instead of losing 30%.

I want to speak to two things that surprised me about the online format. First, the value of making an immediate connection when I think of something, with the space and time to make a note of it and follow up right away, was striking. I was sending things to various students all morning, particularly on Thursday, when I went to so many demos and talks.

A second value was that of making connections even for people not attending. For example, I posted a question in a talk thread that came from a student who wasn’t attending UIST, and the speaker ended up emailing that student and making the connection deeper before the day ended. I don’t think this would have happened at an in-person event.

I also want to reflect on some of the content. What inspired me at UIST this year was the variety of work with really interesting accessibility implications. Maybe that just happened to be my own lens, but the connections were very strong. In many cases the technology facilitated accessibility but wasn't directly used that way; in others, the application to accessibility was directly explored. Some examples, in the order I happened upon them:

This video demonstrates an interesting combination of programming and graphics. The work treats data queries as a shared representation between the code and interactive visualizations. An interesting question is whether it could also generate nonvisual, accessible representations in addition to visual ones.

Bubble Visualization Overlay in Online Communication for Increased Speed Awareness and Better Turn Taking explores how to help second-language speakers adjust their speed awareness and turn taking. This would also be very valuable when a sign language interpreter is present. On the topic of audio captioning, one of the papers/demos that received an honorable mention focused on live captioning of speech, and was live every time I saw the authors, with a Google Glass-like interface. The major contributions of this work include a low-power modular architecture that enables all-day active streaming of transcribed speech in a lightweight, socially unobtrusive head-worn display (HWD); a technical evaluation and characterization of power, bandwidth, and latency; and usability evaluations in a pilot and two studies with 24 deaf and hard-of-hearing participants to understand the physical and social comfort of the prototype in a range of scenarios, which align with a large-scale survey of 501 respondents. This is exemplary work that featured a user among its authors and lots of experimentation. The project uses the Google Speech API and Live Transcribe engine, and can also do real-time translation and recognize non-speech sound events.

Another system, Unmasked, used accelerometers on the lips to capture facial expressions and display them on a visualization of lips outside a mask, making speaking while wearing a mask more expressive (video). It would be interesting to know whether this improved lip reading at all. Very impressive. Finally, the video below shows a system for authoring audio descriptions of videos, a very difficult problem without the right tools. An interesting question my student Venkatesh raised is whether crowdsourcing could be used to partly automate descriptions.

Interface design was another theme, sometimes connected directly to accessibility (as in this poster on Tangible Web Layout Design for Blind and Visually Impaired People) and sometimes indirectly: this project allows multimodal web GUI production (video), and this project converts a GUI to a physical interface. These are interesting twists on helping people build interfaces, as well as on supporting physical computing; in general, converting GUIs from one modality to another has interesting accessibility implications.

Next, “multiwheel” is a 3D-printed mouse for nonvisual interaction (video), while swipe&switch is a novel gaze interaction interface that improves gaze input (traditionally very difficult to deal with) by speeding it up (video). Turning to interaction “with the world” instead of the computer, this system has the important advantage of giving people who are blind agency (a key tenet of a disability-justice-focused approach) in deciding what they want to hear about when navigating the world, by letting them use a joystick to explore their surroundings by “scrubbing” (video). The system is currently implemented in Unity; it will be interesting to see how it performs in real-world environments.

On the fabrication front, several projects explored accessibility applications. In the sports domain, this demo showed a custom prosthetic end effector for basketball (video). A second project simulated short arms and small hands. While this was not intended for accessibility uses, the use of simulation is something that accessibility folks often critique, and the project does not problematize that choice, focusing instead on the technical innovations necessary to create the experience (video). Another fabrication paper allowed embedding transformable parts to robotically augment default functionalities (video). And then there was the paper that won the best demo award:

This very cool demo is a tool for creating custom inflatable motorized vehicles. A bike-like vehicle and a wheelchair-like vehicle are demonstrated. Vehicles can easily be customized to a user's skeletal characteristics.

We chatted briefly about the potential of partial inflation for multiple purposes, pressurizing on demand, and how to add texture either at manufacture time or using a “tape on” technique, e.g. for off-roading.

Some of the fabrication work I was most excited about wasn't directly about accessibility but had interesting implications for it. For example, the robotic hermit crab project (video) tied one robot to many functions, making a really fun set of opportunities for actuation available. I could imagine making an active, tangible desktop a reality using such a system. Two papers provided extra support when assembling circuits and physical objects, with (I think) obvious potential accessibility applications, and one is a very cool general mechanism for addressing uncertainty in size during laser cutting, which can allow users with less experience to share and produce laser-cut objects. Another beautiful piece of work supports making wooden joints. Finally, Defextiles supported printing cloth using a consumer-grade printer, an advance in materials and flexibility. All of these innovations help to broaden the set of people who can repeatably make physical objects, including accessibility objects. And of course I have to call out our own paper on KnitGist: optimization-based design of knit objects as falling into this category as well (video). Lastly, there was some very interesting laser-cut velcro that can be recognized, based on the shape of the velcro, when you tear one thing off another (video). Could you embed tactile signals in the velcro for blind people (we can, after all, 3D print velcro now)?

Another exciting session focused entirely on program synthesis. This paper looks at ambiguous user examples in the context of regular expressions; small-step live programming shows a program's output live as it is edited; in this article, users can edit the results (instead of the program) and the synthesizer suggests code that would generate those results; and the last one addresses loop understanding.
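To give a flavor of why ambiguity matters in example-driven regex synthesis, here is a toy sketch of my own (not any paper's actual algorithm): a single positive example is consistent with many candidate patterns, so a synthesizer has to disambiguate what the user actually meant.

```python
import re

# Toy illustration: one positive example, many consistent regexes.
# A synthesizer working only from this example cannot tell which
# pattern captures the user's intent.
example = "abc123"
candidates = [
    r"[a-z]+\d+",   # letters followed by digits
    r"\w+",          # any run of word characters
    r"abc\d{3}",    # literal prefix plus exactly three digits
    r".*",           # matches anything at all
]

consistent = [p for p in candidates if re.fullmatch(p, example)]
print(len(consistent))  # all four candidates match the one example
```

Adding a negative example (say, "abc" should not match) would prune `\w+` and `.*`, which is exactly the kind of interaction disambiguation approaches build on.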

That concludes a very long list of inspiring work that I enjoyed at UIST this year. I sometimes think that an advantage of missing multiple years of a conference is how fresh and exciting it all seems when you get back to it. That said, I truly think UIST was also just fresh and exciting this year. Kudos to everyone involved in making it such a success!