I attended my second virtual conference in a week, ASSETS 2020. Once again, kudos to the organizers for pulling off a wonderful experience. It was very similar to UIST (Discord + Zoom), with some different choices for format: a slightly slower pace with more opportunities to take breaks. I’m not sure I have a strong preference there, but I was definitely more tired after the longer days.
One other significant difference was the lack of a video option in Discord. This choice was made for accessibility reasons, because interpreters were only possible on Zoom, and it was (somewhat) made up for by the many Zoom social events. Still, I did miss the more unplanned nature of the UIST social events. I wonder if there’s a way to have the best of both worlds — spontaneity and accessibility.
Of course there was a range of papers exploring tactile graphics. For example, PantoGuide is a hand-mounted haptic display that presents metadata as the user explores (video). I particularly loved Gong et al.’s study, which takes a nuanced approach to image understanding that values a variety of ways of understanding graphics.
While this quick tour by no means covers every relevant project, it does highlight the wide range of ASSETS work at the intersection of accessibility and fabrication. I look forward to seeing this area expand in years to come!
I have just finished attending UIST and loved it this year — it was outstanding to attend UIST remotely, and the format of short talks followed by Q&A was very engaging. I think the use of both Discord and Zoom worked really well together.
A little background — I haven’t been able to attend UIST regularly for quite a while due to a variety of personal and family obligations, and disability concerns. So for me, this was an enormous improvement, going from 0 to 70% or so. I imagine that for those who feel they’re going from 100% to 70% it may have been less ideal, but the attendance at the conference demonstrates that I was definitely not the only person gaining 70% rather than losing 30%.
I want to speak to two things that surprised me about the online format. First, the value of making a connection the moment I thought of something, with the space and time to note it and follow up right away, was striking. I was sending things to various students all morning, particularly on Thursday, when I went to so many demos and talks.
A second value was that of making connections even for people not attending. For example, I posted a question in a talk thread that came from a student who wasn’t attending UIST, and the speaker ended up emailing that student and making the connection deeper before the day ended. I don’t think this would have happened at an in-person event.
I also want to reflect on some of the content. What inspired me at UIST this year was the variety of work with really interesting accessibility implications. Maybe that was just my own lens, but the connections were very strong. In many cases the technology facilitated accessibility but wasn’t directly used that way; in others the application to accessibility was directly explored. Some examples, in the order I happened upon them:
Bubble Visualization Overlay in Online Communication for Increased Speed Awareness and Better Turn Taking explores how to help second-language speakers adjust their speed awareness and turn taking. However, this would also be very valuable when a sign language interpreter is present.

On the topic of audio captioning, one of the papers/demos that received an honorable mention focused on live captioning of speech with a Google Glass-like interface, and it was running live every time I saw the authors. The major contributions of this work include a low-power, modular architecture that enables all-day active streaming of transcribed speech in a lightweight, socially unobtrusive HWD; a technical evaluation and characterization of power, bandwidth, and latency; and usability evaluations (a pilot plus two studies with 24 deaf and hard-of-hearing participants) examining the physical and social comfort of the prototype in a range of scenarios, with findings that align with a large-scale survey of 501 respondents. This is exemplary work that featured a user among its authors and lots of experimentation. The project uses the Google speech API and Live Transcribe engine, and can also do real-time translation and recognize non-speech sound events.
Another system, Unmasked, used accelerometers on the lips to capture facial expressions and display them, via a visualization of lips outside a mask, to make speaking while wearing a mask more expressive (video). It would be interesting to know whether this improved lip reading at all. Very impressive. Finally, the video below shows a system for authoring audio descriptions of videos, a very difficult problem without the right tools. An interesting question my student Venkatesh raised is whether crowdsourcing could be used to partly automate descriptions.
Next, “multiwheel” is a 3D-printed mouse for nonvisual interaction (video), while swipe&switch is a novel gaze interaction interface that improves gaze input (traditionally very difficult to deal with) by speeding it up (video). Turning to interaction “with the world” instead of the computer, another system lets people who are blind explore their surroundings by “scrubbing” with a joystick (video); it has the important advantage of giving them agency (a key tenet of a disability-justice-focused approach) in deciding what they want to hear about when navigating the world. The system is currently implemented in Unity; it will be interesting to see how it performs in real-world environments.
We chatted briefly about the potential of partial inflation for multiple purposes, pressurizing on demand, and how to add texture either at manufacture time or using a “tape on” technique, e.g. for off-roading.
Some of the fabrication work I was most excited about wasn’t directly about accessibility but had interesting implications for it. For example, the robotic hermit crab project (video) tied one robot to many functions, making a really fun set of actuation opportunities available. I could imagine making an active, tangible desktop a reality using such a system. Two papers provided extra support when assembling circuits and physical objects, with, I think, obvious potential accessibility applications, and one presents a very cool general mechanism for addressing uncertainty in size during laser cutting. This can allow users with less experience to share and produce laser-cut objects. Another beautiful piece of work supports making wooden joints. Finally, Defextiles supports printing cloth on a consumer-grade printer, an advance in materials and flexibility. All of these innovations help to broaden the set of people who can repeatably make physical objects, including accessibility objects. And of course I have to call out our own paper on KnitGist, optimization-based design of knit objects, as falling into this category as well (video). Lastly, there was some very interesting work on laser-cut velcro that can be recognized, based on its shape, when you tear one thing off another (video). Could you embed tactile signals in the velcro for blind people (we can, after all, 3D print velcro now)?
That concludes a very long list of inspiring work that I enjoyed at UIST this year. I sometimes think that an advantage of missing multiple years of a conference is how fresh and exciting it all seems when you get back to it. That said, I truly think UIST was also just fresh and exciting this year. Kudos to everyone involved in making it such a success!
At the CHI 2016 conference this week, there was a slew of presentations on the topic of fabrication. First, of course, I have to highlight our own Megan Hofmann, who presented our paper, Helping Hands, a study of participatory design of assistive technology that highlights the role of rapid prototyping techniques and 3D printing. In addition, Andrew Spielberg (MIT & Disney intern) presented RapID, a joint project with Disney Research Pittsburgh that explores the use of RFID tags as a platform for rapidly prototyping interactive physical devices, leveraging probabilistic modeling to support rapid recognition of tag coverage and motion.
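To give a flavor of what probabilistic read-rate sensing can look like, here is a minimal sketch of the general idea: covering an RFID tag with a finger attenuates its signal, so fewer reads arrive per polling window, and a Bayesian update over read counts can classify the tag as covered or uncovered. This is my own illustrative sketch, not RapID’s actual model; the read rates and the Poisson assumption are invented for the example.

```python
import math

# Assumed read rates (reads/sec); real values depend on reader and antenna.
UNCOVERED_RATE = 30.0  # exposed tag
COVERED_RATE = 5.0     # tag covered by a finger

def poisson_logpmf(k, lam):
    """Log probability of observing k events under a Poisson(lam) model."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def prob_covered(reads, window_s, prior_covered=0.5):
    """Posterior probability the tag is covered, given `reads` observed
    in a polling window of `window_s` seconds."""
    ll_cov = poisson_logpmf(reads, COVERED_RATE * window_s)
    ll_unc = poisson_logpmf(reads, UNCOVERED_RATE * window_s)
    num = prior_covered * math.exp(ll_cov)
    den = num + (1 - prior_covered) * math.exp(ll_unc)
    return num / den

# A tag that suddenly yields few reads is very likely being touched:
print(prob_covered(reads=2, window_s=0.5))   # close to 1.0 (covered)
print(prob_covered(reads=14, window_s=0.5))  # close to 0.0 (uncovered)
```

The appeal of a probabilistic formulation like this is latency: rather than waiting for a long window to average read rates, each short window updates a posterior, so touch events can be recognized quickly even from noisy read counts.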
I was really excited by the breadth and depth of the interest at CHI in fabrication, which went far beyond these two papers. Although I only attended by robot (perhaps a topic for another blog post), doing so got me to comb through the proceedings looking for things to go to — and there were far more than I could possibly find time for! Several papers looked qualitatively at the experiences and abilities of novices, from Hudson’s paper on newcomers to 3D printing to Booth’s paper on problems end users face constructing working circuits (video; couldn’t find a pdf) to Bennett’s study of e-NABLE hand users and identity.
There were a number of innovative technology papers, including Ramaker’s Rube Goldbergesque RetroFab, Groeger’s HotFlex (which supports post-processing at relatively low temperatures), and Peng’s incremental printing while modifying. These papers fill two full sessions (I have only listed about half). Other interesting and relevant presentations (not from this group) included several more fabrication papers, a study of end-user programming of physical interfaces, and studies of assistive technology use, including the e-NABLE community.
Two final papers I have to call out because they are so beautiful: Kazi’s Chronofab (look at the video) and a study in patience, Efrat’s Hybrid Bricolage for designing hand-smocked artifacts (video; again, I couldn’t find the paper online).