Cripping Data Visualizations

Stacy Hsueh, Beatrice Vincenzi, Akshata Murdeshwar, and Marianela Ciolfi Felice. 2023. Cripping Data Visualizations: Crip Technoscience as a Critical Lens for Designing Digital Access. In The 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’23), October 22–25, 2023, New York, NY, USA. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3597638.3608427

Data visualizations have become the primary mechanism for engaging with quantitative information. However, many of these visualizations are inaccessible to blind and low vision people. This paper investigates the challenge of designing accessible data visualizations through the lens of crip technoscience. We present four speculative design case studies that conceptually explore four qualities of access built on crip wisdom: access as an ongoing process, a frictional practice, an aesthetic experience, and transformation. Each speculative study embodies inquiry and futuring, making visible common assumptions about access and exploring how an alternative crip-informed framework can shape designs that foreground the creativity of disabled people. We end by presenting tactics for designing digital access that de-centers the innovation discourse.

Shaping Lace

Glazko, K., Portnova-Fahreeva, A., Mankoff-Dey, A., Psarra, A., & Mankoff, J. (2024, July). Shaping Lace: Machine embroidered metamaterials. In Proceedings of the 9th ACM Symposium on Computational Fabrication (pp. 1-12).

The ability to easily create embroidered lace textile objects that can be manipulated in structured ways, i.e., metamaterials, could enable a variety of applications from interactive tactile graphics to physical therapy devices. However, while machine embroidery has been used to create sensors and digitally enhanced fabrics, its use for creating metamaterials is an understudied area. This article reviews recent advances in metamaterial textiles and conducts a design space exploration of metamaterial freestanding lace embroidery. We demonstrate that freestanding lace embroidery can be used to create out-of-plane kirigami and auxetic effects. We provide examples of applications of these effects to create a variety of prototypes and demonstrations.

Identifying and improving disability bias in GPT-based resume screening

Glazko, K., Mohammed, Y., Kosa, B., Potluri, V., & Mankoff, J. (2024, June). Identifying and improving disability bias in GPT-based resume screening. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 687-700).

As Generative AI rises in adoption, its use has expanded to include domains such as hiring and recruiting. However, without examining the potential for bias, this may negatively impact marginalized populations, including people with disabilities. To address this important concern, we present a resume audit study in which we ask ChatGPT (specifically, GPT-4) to rank a resume against the same resume enhanced with an additional leadership award, scholarship, panel presentation, and membership that are disability-related. We find that GPT-4 exhibits prejudice towards these enhanced CVs. Further, we show that this prejudice can be quantifiably reduced by training custom GPTs on principles of DEI and disability justice. Our study also includes a unique qualitative analysis of the types of direct and indirect ableism GPT-4 uses to justify its biased decisions, and we suggest directions for additional bias-mitigation work. Additionally, since these justifications are presumably drawn from training data containing real-world biased statements made by humans, our analysis suggests additional avenues for understanding and addressing human bias.
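The audit's core measurement, how often the model ranks one resume above the other across repeated trials, can be sketched in a few lines. This is illustrative only; the trial outcomes below are hypothetical, not the paper's data:

```python
from collections import Counter

def preference_rate(winners, target="enhanced"):
    """Fraction of audit trials in which the model ranked `target` first."""
    counts = Counter(winners)
    return counts[target] / len(winners)

# Hypothetical trial outcomes (NOT results from the paper): each entry
# records which of the two resumes the model ranked first in one trial.
trials = ["control", "control", "enhanced", "control", "enhanced",
          "control", "control", "control", "enhanced", "control"]

rate = preference_rate(trials)  # 0.3: the enhanced CV "wins" 3 of 10 trials
```

Comparing this rate between the off-the-shelf model and a custom GPT trained on DEI principles is one way to make the reduction in bias quantifiable.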

Towards AI-driven Sign Language Generation with Non-manual Markers

Han Zhang, Rotem Shalev-Arkushin, Vasileios Baltatzis, Connor Gillis, Gierad Laput, Raja Kushalnagar, Lorna Quandt, Leah Findlater, Abdelkareem Bedri, and Colin Lea. 2025. Towards AI-driven Sign Language Generation with Non-manual Markers. In Proceedings of the CHI Conference on Human Factors in Computing Systems.

Sign languages are essential for the Deaf and Hard-of-Hearing (DHH) community. Sign language generation systems have the potential to support communication by translating from written languages, such as English, into signed videos. However, current systems often fail to meet user needs due to poor translation of grammatical structures, the absence of facial cues and body language, and insufficient visual and motion fidelity. We address these challenges by building on recent advances in LLMs and video generation models to translate English sentences into natural-looking AI ASL signers. The text component of our model extracts information for manual and non-manual components of ASL, which are used to synthesize skeletal pose sequences and corresponding video frames. Our findings from a user study with 30 DHH participants and thorough technical evaluations demonstrate significant progress and identify critical areas necessary to meet user needs.

Notably Inaccessible

Venkatesh Potluri, Sudheesh Singanamalla, Nussara Tieanklin, Jennifer Mankoff: Notably Inaccessible – Data Driven Understanding of Data Science Notebook (In)Accessibility. ASSETS 2023: 13:1-13:19

Computational notebooks are tools that help people explore and analyze data and create stories about that data. They are the most popular choice for data scientists. People use software like Jupyter, Datalore, and Google Colab to work with these notebooks in universities and companies.

There is a lot of research on how data scientists use these notebooks and how to help them work together better. But there is not much information about the problems faced by blind and visually impaired (BVI) users. BVI users have difficulty using these notebooks because:

  • The interfaces are not accessible.
  • The way data is shown is not user-friendly for them.
  • Popular libraries do not provide outputs they can use.

We analyzed 100,000 Jupyter notebooks to find accessibility problems. We looked for issues that affect how these notebooks are created and read. From our study, we give advice on how to make notebooks more accessible. We suggest ways for people to write better notebooks and changes to make the notebook software work better for everyone.
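One issue of this kind, image outputs without alternative text, can be illustrated with a rough sketch. This is not our actual analysis pipeline: notebooks are JSON, image outputs appear under a cell output's "data" with MIME keys like "image/png", and the sketch assumes alt text, when present, lives under a hypothetical "alt" key in the output metadata:

```python
import json

def missing_alt_cells(notebook_json):
    """Return indices of cells with an image output that carries no alt text."""
    nb = json.loads(notebook_json)
    flagged = []
    for i, cell in enumerate(nb.get("cells", [])):
        for out in cell.get("outputs", []):
            data = out.get("data", {})
            has_image = any(k.startswith("image/") for k in data)
            alt = out.get("metadata", {}).get("alt", "")  # assumed location
            if has_image and not alt:
                flagged.append(i)
                break
    return flagged

# A two-cell toy notebook: the first cell emits a PNG with no alt text.
nb = json.dumps({"cells": [
    {"cell_type": "code", "outputs": [
        {"data": {"image/png": "..."}, "metadata": {}}]},
    {"cell_type": "markdown", "outputs": []},
]})
flagged = missing_alt_cells(nb)  # [0]: only the first cell is flagged
```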

Touchpad Mapper

Ather Sharif, Venkatesh Potluri, Jazz Rui Xia Ang, Jacob O. Wobbrock, Jennifer Mankoff: Touchpad Mapper: Examining Information Consumption From 2D Digital Content Using Touchpads by Screen-Reader Users. ASSETS ’24 (best poster!) and W4A ’24 (open access)

Touchpads are common, but they are not very useful for people who use screen readers. We created and tested a tool called Touchpad Mapper to let Blind and visually impaired people make better use of touchpads. Touchpad Mapper lets screen-reader users use touchpads to interact with digital content like images and videos.

Touchpad mapping could be used in many apps. We built two examples:

  1. Users can use the touchpad to identify where things are in an image.
  2. Users can control a video’s progress with the touchpad, including rewinding and fast-forwarding.
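The video example can be sketched as a simple coordinate mapping. The function below is a hypothetical illustration, not Touchpad Mapper's actual implementation: it treats the touchpad's horizontal axis as the video timeline, so sliding a finger left or right scrubs backward or forward.

```python
def touchpad_to_timestamp(x, pad_width, duration_s):
    """Map a horizontal touchpad position to a video timestamp (seconds)."""
    x = max(0.0, min(x, pad_width))      # clamp to the pad surface
    return (x / pad_width) * duration_s

# Touching the middle of a 120mm-wide pad seeks halfway into a 5-minute video.
touchpad_to_timestamp(60, pad_width=120, duration_s=300)  # -> 150.0
```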

We tested Touchpad Mapper with three people who use screen readers. They said they got information more quickly with our tool than with a regular keyboard.

Bespoke Slides for Fluctuating Access Needs

Kelly Avery Mack, Kate S. Glazko, Jamil Islam, Megan Hofmann, Jennifer Mankoff: “It’s like Goldilocks:” Bespoke Slides for Fluctuating Audience Access Needs. ASSETS 2024: 71:1-71:15

Slide deck accessibility is usually thought to mainly impact people who are blind or visually impaired. However, many other people might need modifications to access a slide deck.

We talked with 17 people who have disabilities and use slide decks and learned their needs did not always overlap. For some people, their own needs even changed at times. For example, some needed lower contrast colors at night.

Next, we explored how a tool could help change a presentation to fit their needs. We tested this tool with 14 of the people we talked to earlier. Then, we interviewed four people who make and present slide decks to get their thoughts on this tool.

Finally, we tried to make a working version of this tool. It has some of the features that the people we talked to wanted, but we learned that when apps are neither designed for access nor open source, full access is hard to add.
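A tool like this needs a way to reason about contrast when generating variants such as the lower-contrast colors some participants wanted at night. The sketch below uses the standard WCAG 2.x contrast-ratio formula; the surrounding tool is hypothetical, not our prototype:

```python
def rel_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 ints."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, from 1.0 up to 21.0."""
    hi, lo = sorted((rel_luminance(fg), rel_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Black on white is maximally contrasty (21.0); a bespoke night variant
# could instead pick a softer grey-on-dark pairing with a lower ratio.
contrast_ratio((0, 0, 0), (255, 255, 255))   # -> 21.0
contrast_ratio((170, 170, 170), (40, 40, 40))
```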

A Multi-Stakeholder Analysis of Accessibility in Higher Education

People with disabilities face extra hardship in institutions of higher education because of accessibility barriers built into the educational system. While prior work investigates the needs of individual stakeholders, this work offers insights into the communication and collaboration between key stakeholders in creating access in institutions of higher education. The authors present reflections from their experiences working with disability service offices to meet their access needs and the results from interviewing 6 professors and 6 other disabled students about their experience in achieving access. Our results indicate that there are rich opportunities for technological solutions to support these stakeholders in communicating about and creating access.

Kelly Avery Mack, Natasha A Sidik, Aashaka Desai, Emma J. McDonnell, Kunal Mehta, Christina Zhang, Jennifer Mankoff: Maintaining the Accessibility Ecosystem: a Multi-Stakeholder Analysis of Accessibility in Higher Education. ASSETS 2023: 100:1-100:6

Generative Artificial Intelligence’s Utility for Accessibility

With the recent rapid rise in Generative Artificial Intelligence (GAI) tools, it is imperative that we understand their impact on people with disabilities, both positive and negative. However, although we know that AI in general poses both risks and opportunities for people with disabilities, little is known about GAI in particular.

To address this, we conducted a three-month autoethnography of our use of GAI to meet personal and professional needs as a team of researchers with and without disabilities. Our findings demonstrate a wide variety of potential accessibility-related uses for GAI while also highlighting concerns around verifiability, training data, ableism, and false promises.

Glazko, K. S., Yamagami, M., Desai, A., Mack, K. A., Potluri, V., Xu, X., & Mankoff, J. An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility. ASSETS 2023. https://dl.acm.org/doi/abs/10.1145/3597638.3614548

News: Can AI help boost accessibility? These researchers tested it for themselves

Presentation (starts at about 20 minutes in)

https://youtube.com/watch?v=S40-jPBH820#t=20m26s

How Do People with Limited Movement Personalize Upper-Body Gestures?

Personalized upper-body gestures that can enable input from diverse body parts (e.g., head, neck, shoulders, arms, hands, and fingers), and match the abilities of each user, might make gesture systems more accessible for people with upper-body motor disabilities. Static gesture sets that make ability assumptions about the user (e.g., touch thumb and index finger together in midair) may not be accessible. In our work, we characterize the personalized gesture sets designed by 25 participants with upper-body motor disabilities. We found that the personalized gesture sets that participants designed were specific to their abilities and needs. Six participants mentioned that their inspiration for designing the gestures was based on “how I would do [the gesture] with the abilities that I have”. We suggest three considerations when designing accessible upper-body gesture interfaces: 

1) Track the whole upper body. Our participants used their whole upper body to perform the gestures, and some switched back and forth between the left and the right hand to combat fatigue.

2) Use sensing mechanisms that are agnostic to the location and orientation of the body. About half of our participants kept their hand on or barely took their hand off of the armrest to decrease arm movement and fatigue.

3) Use sensors that can sense muscle activations without movement. Our participants activated their muscles but did not visibly move in 10% of the personalized gestures.   

Our work highlights the need for personalized upper-body gesture interfaces supported by multimodal biosignal sensors (e.g., accelerometers, sensors that can sense muscle activity like EMG).
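The third consideration, sensing activation without visible movement, can be sketched as a simple rule combining the two signal streams. The thresholds and windowing below are illustrative assumptions, not values from our study:

```python
def rms(window):
    """Root-mean-square energy of one window of samples."""
    return (sum(v * v for v in window) / len(window)) ** 0.5

def activation_without_movement(emg, accel, emg_thresh=0.2, accel_thresh=0.05):
    """True when muscle activity is high while motion energy stays low.

    Hypothetical thresholds: emg/accel are windows of normalized samples
    from an EMG sensor and an accelerometer over the same time span.
    """
    return rms(emg) > emg_thresh and rms(accel) < accel_thresh

# An isometric squeeze: strong EMG signal, near-zero accelerometer energy.
activation_without_movement([0.5, -0.6, 0.55, -0.5],
                            [0.01, -0.02, 0.0, 0.01])  # -> True
```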