COVID-19 Risk Negotiation

During the COVID-19 pandemic, risk negotiation became an important precursor to in-person contact. For young adults, social planning generally occurs through computer-mediated communication. Given the importance of social connectedness for mental health and academic engagement, we sought to understand how young adults plan in-person meetups over computer-mediated communication in the context of the pandemic. We present a qualitative study that explores young adults’ risk negotiation during the COVID-19 pandemic, a period of conflicting public health guidance. Inspired by cultural probe studies, we invited participants to express their preferred precautions for one week as they planned in-person meetups. We interviewed and surveyed participants about their experiences. Through qualitative analysis, we identify strategies for risk negotiation, social complexities that impede risk negotiation, and emotional consequences of risk negotiation. Our findings have implications for AI-mediated support for risk negotiation and assertive communication more generally. We explore tensions between risks and potential benefits of such systems.

Margaret E. Morris, Jennifer Brown, Paula S. Nurius, Savanna Yee, Jennifer Mankoff, Sunny Consolvo: “I Just Wanted to Triple Check… They were all Vaccinated”: Supporting Risk Negotiation in the Context of COVID-19. ACM Trans. Comput. Hum. Interact. 30(4): 60:1-60:31 (2023)

Generative Artificial Intelligence’s Utility for Accessibility

With the recent rapid rise in Generative Artificial Intelligence (GAI) tools, it is imperative that we understand their impact, both positive and negative, on people with disabilities. However, although we know that AI in general poses both risks and opportunities for people with disabilities, little is known about GAI in particular.

To address this, we conducted a three-month autoethnography of our use of GAI to meet personal and professional needs as a team of researchers with and without disabilities. Our findings demonstrate a wide variety of potential accessibility-related uses for GAI while also highlighting concerns around verifiability, training data, ableism, and false promises.

Glazko, K. S., Yamagami, M., Desai, A., Mack, K. A., Potluri, V., Xu, X., & Mankoff, J. An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility. ASSETS 2023. https://dl.acm.org/doi/abs/10.1145/3597638.3614548

News: Can AI help boost accessibility? These researchers tested it for themselves

Presentation (starts at about 20 minutes)

https://youtube.com/watch?v=S40-jPBH820&si=Cm17oTaMaDnoQGvK#t=20m26s

How Do People with Limited Movement Personalize Upper-Body Gestures?

Personalized upper-body gestures that enable input from diverse body parts (e.g., head, neck, shoulders, arms, hands, and fingers) and match the abilities of each user might make gesture systems more accessible to people with upper-body motor disabilities. Static gesture sets that make ability assumptions about the user (e.g., touch thumb and index finger together in midair) may not be accessible. In our work, we characterize the personalized gesture sets designed by 25 participants with upper-body motor disabilities. We found that the personalized gesture sets that participants designed were specific to their abilities and needs. Six participants mentioned that their inspiration for designing the gestures was based on “how I would do [the gesture] with the abilities that I have”. We suggest three considerations when designing accessible upper-body gesture interfaces:

1) Track the whole upper body. Our participants used their whole upper body to perform the gestures, and some switched back and forth between the left and right hand to combat fatigue.

2) Use sensing mechanisms that are agnostic to the location and orientation of the body. About half of our participants kept their hand on the armrest (or barely lifted it off) to decrease arm movement and fatigue.

3) Use sensors that can sense muscle activations without movement. Our participants activated their muscles but did not visibly move in 10% of the personalized gestures.   

Our work highlights the need for personalized upper-body gesture interfaces supported by multimodal biosignal sensors (e.g., accelerometers and muscle-activity sensors such as EMG); the sketch below illustrates how such signals might be combined.
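To make the third consideration concrete, here is a minimal sketch of how a multimodal pipeline might distinguish a muscle activation with no visible movement from a movement-based gesture. This is our illustration, not code from the paper: the function name and the thresholds are assumptions, and a real system would calibrate them per user, since participants’ gestures were specific to their abilities.

```python
import numpy as np

# Hypothetical thresholds; a real system would calibrate these per user.
EMG_ACTIVATION_RMS = 0.2   # assumed units: normalized EMG amplitude
MOTION_THRESHOLD = 0.05    # assumed units: g, after gravity removal

def detect_gesture_event(emg_window: np.ndarray, accel_window: np.ndarray) -> str:
    """Classify a short window of biosignal data into a coarse event type.

    emg_window:   1-D array of EMG samples from one muscle site.
    accel_window: (n, 3) array of accelerometer samples, gravity removed.
    """
    emg_rms = np.sqrt(np.mean(emg_window ** 2))
    # Use only the magnitude of acceleration, not its direction, so the
    # same logic works wherever the sensor is worn (consideration 2).
    motion = np.mean(np.linalg.norm(accel_window, axis=1))

    if emg_rms > EMG_ACTIVATION_RMS and motion < MOTION_THRESHOLD:
        # Muscle activated but no visible movement (consideration 3):
        # about 10% of participants' personalized gestures looked like this.
        return "isometric_activation"
    if motion >= MOTION_THRESHOLD:
        return "movement_gesture"
    return "rest"
```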

Julie Zhang

Julie Zhang is a freshman at the University of Washington intending to major in Computer Science. She has prior coding experience with data analysis and front-end web development. She hopes to learn more about qualitative coding, human-computer interaction, and fabrication technology to improve accessibility. In her free time, she enjoys running, crocheting, and gardening. She’s excited to work on mobility devices with Make4All!

Brianna Lynn Wimer

Brianna is a Ph.D. student in Computer Science and Engineering at the University of Notre Dame and a visiting researcher at the University of Washington. She’s advised by Dr. Ronald Metoyer (Notre Dame) and Dr. Jennifer Mankoff (Washington). Brianna earned her Bachelor’s in Computer Science from the University of Alabama in 2021, advised by Prof. Chris Crawford. She is also a Google Ph.D. Fellow.

Her research centers on improving data visualizations for accessibility, particularly for those with visual impairments. She works on identifying accessibility challenges and crafting more user-friendly interactive visualization experiences.

Visit Brianna’s homepage at: https://www.briannawimer.com/

Kate Glazko

Kate is a PhD student in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. She is advised by Professor Jennifer Mankoff. She completed her undergraduate studies at USC, where she double-majored in Computer Science and Business Administration and also received her master’s degree in Computer Science. She is an NSF CSGrad4US fellow.

She is interested in studying the intersection of digital and physical technologies that empower those with disabilities or illnesses. Her recent research focuses on generative AI and accessibility, seeking to gain a deeper understanding of the opportunities for improving access as well as identifying areas for improvement.

Her website is here: https://kateglazko.com

Andrew Jeon

Hello! I am a Master’s student in Electrical & Computer Engineering.

I am broadly interested in technology, the world, and philosophy. Although my specific research interests are still maturing, HCI and AI are the fields that currently captivate me.

Race, Disability and Accessibility Technology

Working at the Intersection of Race, Disability, and Accessibility

This paper asks how accessibility research can do a better job of including all disabled people, rather than separating disability from a person’s race and ethnicity. Most previously published accessibility research does not mention race, or treats it as a simple label rather than asking how it shapes disability experiences. This omission eliminates whole areas of need and vital perspectives from the work we do.

We present a series of case studies exploring positive examples of work that looks more deeply at this intersection and reflect on teaching at the intersection of race, disability, and technology. This paper highlights the value of considering how constructs of race and disability work alongside each other within accessibility research studies, designs of socio-technical systems, and education. Our analysis provides recommendations towards establishing this research direction.

Christina N. Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, Jennifer Mankoff: Working at the Intersection of Race, Disability and Accessibility. ASSETS 2023: 26:1-26:18 (pdf)

https://youtube.com/watch?v=qRMYjdSTnZs&si=0yhLkUyGKu-WO4Na

Azimuth: Designing Accessible Dashboards for Screen Reader Users

Dashboards are frequently used to monitor and share data across a breadth of domains, including business, finance, sports, public policy, and healthcare. The combination of different components (e.g., key performance indicators, charts, filtering widgets) and the interactivity between components makes dashboards powerful interfaces for data monitoring and analysis. However, these very characteristics also often make dashboards inaccessible to blind and low vision (BLV) users. Through a co-design study with two screen reader users, we investigate challenges faced by BLV users and identify design goals to support effective screen reader-based interactions with dashboards. Operationalizing the findings from the co-design process, we present a prototype system, Azimuth, that generates dashboards optimized for screen reader-based navigation along with complementary descriptions to support dashboard comprehension and interaction. Based on a follow-up study with five BLV participants, we showcase how our generated dashboards support BLV users and enable them to perform both targeted and open-ended analysis. Reflecting on our design process and study feedback, we discuss opportunities for future work on supporting interactive data analysis, understanding dashboard accessibility at scale, and investigating alternative devices and modalities for designing accessible visualization dashboards.

Arjun Srinivasan, Tim Harshbarger, Darrell Hilliker, and Jennifer Mankoff: Azimuth: Designing Accessible Dashboards for Screen Reader Users. ASSETS 2023.
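The abstract describes Azimuth’s goals rather than its implementation, so the sketch below is only a rough illustration of the core idea: the Component record and linearize function are our assumptions, not Azimuth’s actual API. It shows how a dashboard specification might be flattened into an ordered, screen-reader-friendly walkthrough with explicit component counts and filter cross-references.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str          # e.g., "kpi", "chart", "filter"
    title: str
    summary: str       # complementary text description of the data
    targets: list = field(default_factory=list)  # components a filter updates

def linearize(components: list[Component]) -> str:
    """Produce an ordered, screen-reader-friendly walkthrough of a dashboard.

    A flat reading order with explicit counts and cross-references stands in
    for the spatial layout that sighted users scan at a glance.
    """
    lines = [f"Dashboard with {len(components)} components."]
    for i, c in enumerate(components, start=1):
        line = f"{i} of {len(components)}: {c.kind}, {c.title}. {c.summary}"
        if c.targets:
            line += f" Filtering here updates: {', '.join(c.targets)}."
        lines.append(line)
    return "\n".join(lines)

# Hypothetical three-component dashboard for demonstration.
demo = [
    Component("kpi", "Total sales", "Total sales are 1.2 million dollars."),
    Component("filter", "Region", "Currently showing all regions.",
              targets=["Total sales", "Sales over time"]),
    Component("chart", "Sales over time",
              "Line chart, Jan through Dec; sales peak in November."),
]
print(linearize(demo))
```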

Benjamin Epstein

Ben is an incoming second-year undergraduate at the University of Washington, majoring in computer science. He has prior programming experience in mobile development, machine learning, and data visualization. He is excited to learn more about data science and how it can be used to inform decisions for everyday life. In the near future, he also hopes to dive into computer vision and databases. His outside interests include playing and watching basketball, listening to music, and running. He will be working on analyzing the associations between various student groups’ behavior, academics, and well-being.