Autoethnographic Insights from Neurodivergent GAI “Power Users”

Kate Glazko, JunHyeok Cha, Aaleyah Lewis, Ben Kosa, Brianna Wimer, Andrew Zheng, Yiwei Zheng, and Jennifer Mankoff. 2025. Autoethnographic Insights from Neurodivergent GAI “Power Users”. In CHI Conference on Human Factors in Computing Systems (CHI ’25), April 26–May 1, 2025, Yokohama, Japan. ACM, New York, NY, USA, 20 pages. https://doi.org/10.1145/3706598.3713670

Generative AI (GAI) has become ubiquitous in both daily and professional life, with emerging research demonstrating its potential as a tool for accessibility. Neurodivergent people, often left out by existing accessibility technologies, develop their own ways of navigating normative expectations. GAI offers new opportunities for access, but it is important to understand how neurodivergent “power users”—successful early adopters—engage with it and the challenges they face. Further, we must understand how marginalization and intersectional identities influence their interactions with GAI. Our autoethnography, enhanced by privacy-preserving GAI-based diaries and interviews, reveals the intricacies of using GAI to navigate normative environments and expectations. Our findings demonstrate how GAI can both support and complicate tasks like code-switching, emotional regulation, and accessing information. We show that GAI can help neurodivergent users reclaim their agency in systems that diminish their autonomy and self-determination. However, challenges such as balancing authentic self-expression with societal conformity, alongside other risks, create barriers to realizing GAI’s full potential for accessibility.

Toward Language Justice

Aashaka Desai, Rahaf Alharbi, Stacy Hsueh, Richard E. Ladner, and Jennifer Mankoff. 2025. Toward Language Justice: Exploring Multilingual Captioning for Accessibility. In CHI Conference on Human Factors in Computing Systems (CHI ’25), April 26–May 01, 2025, Yokohama, Japan. ACM, New York, NY, USA, 18 pages. https://doi.org/10.1145/3706598.3713622

A growing body of research investigates how to make captioning experiences more accessible and enjoyable to disabled people. However, prior work has focused largely on English captioning, neglecting the majority of people who are multilingual (i.e., understand or express themselves in more than one language). To address this gap, we conducted semi-structured interviews and diary logs with 13 participants who used multilingual captions for accessibility. Our findings highlight the linguistic and cultural dimensions of captioning, detailing how language features (scripts and orthography) and the inclusion/negation of cultural context shape the accessibility of captions. Despite the limited quality and availability of such captions, participants emphasized the importance of multilingual captioning to learn a new language, build community, and preserve cultural heritage. Moving toward a future where all ways of communicating are celebrated, we present ways to orient captioning research to a language justice agenda that decenters English and engages with varied levels of fluency.

Cripping Data Visualizations

Stacy Hsueh, Beatrice Vincenzi, Akshata Murdeshwar, and Marianela Ciolfi Felice. 2023. Cripping Data Visualizations: Crip Technoscience as a Critical Lens for Designing Digital Access. In The 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’23), October 22–25, 2023, New York, NY, USA. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3597638.3608427

Data visualizations have become the primary mechanism for engaging with quantitative information. However, many of these visualizations are inaccessible to blind and low vision people. This paper investigates the challenge of designing accessible data visualizations through the lens of crip technoscience. We present four speculative design case studies that conceptually explore four qualities of access built on crip wisdom: access as an ongoing process, a frictional practice, an aesthetic experience, and transformation. Each speculative study embodies inquiry and futuring, making visible common assumptions about access and exploring how an alternative crip-informed framework can shape designs that foreground the creativity of disabled people. We end by presenting tactics for designing digital access that de-centers the innovation discourse.

Shaping Lace

Glazko, K., Portnova-Fahreeva, A., Mankoff-Dey, A., Psarra, A., & Mankoff, J. (2024, July). Shaping Lace: Machine embroidered metamaterials. In Proceedings of the 9th ACM Symposium on Computational Fabrication (pp. 1-12).

The ability to easily create embroidered lace textile objects that can be manipulated in structured ways, i.e., metamaterials, could enable a variety of applications from interactive tactile graphics to physical therapy devices. However, while machine embroidery has been used to create sensors and digitally enhanced fabrics, its use for creating metamaterials is an understudied area. This article reviews recent advances in metamaterial textiles and conducts a design space exploration of metamaterial freestanding lace embroidery. We demonstrate that freestanding lace embroidery can be used to create out-of-plane kirigami and auxetic effects. We provide examples of applications of these effects to create a variety of prototypes and demonstrations.

Identifying and improving disability bias in GPT-based resume screening

Glazko, K., Mohammed, Y., Kosa, B., Potluri, V., & Mankoff, J. (2024, June). Identifying and improving disability bias in GPT-based resume screening. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 687-700).

As Generative AI rises in adoption, its use has expanded to include domains such as hiring and recruiting. However, without examining the potential for bias, this expansion may negatively impact marginalized populations, including people with disabilities. To address this important concern, we present a resume audit study in which we ask ChatGPT (specifically, GPT-4) to rank a resume against the same resume enhanced with an additional leadership award, scholarship, panel presentation, and membership that are disability-related. We find that GPT-4 exhibits prejudice against these enhanced CVs. Further, we show that this prejudice can be quantifiably reduced by training custom GPTs on principles of DEI and disability justice. Our study also includes a unique qualitative analysis of the types of direct and indirect ableism GPT-4 uses to justify its biased decisions, and we suggest directions for additional bias mitigation work. Additionally, since these justifications are presumably drawn from training data containing real-world biased statements made by humans, our analysis suggests additional avenues for understanding and addressing human bias.
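The paired comparison at the heart of this audit can be illustrated with a short script. The sketch below assumes the OpenAI Python SDK; the file names, prompt wording, and single-shot structure are hypothetical simplifications for illustration, not the paper's actual protocol.

```python
# Minimal sketch of a paired resume-audit query (a simplification, not the
# paper's actual protocol). Assumes the OpenAI Python SDK and an OPENAI_API_KEY
# in the environment; file names and prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()

def read(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

baseline = read("resume_baseline.txt")             # original resume
enhanced = read("resume_disability_enhanced.txt")  # same resume + disability-related award, scholarship, etc.

prompt = (
    "You are screening candidates for a position. Rank the two resumes below "
    "from strongest to weakest and briefly justify your ranking.\n\n"
    f"Resume A:\n{baseline}\n\nResume B:\n{enhanced}"
)

# In an audit, this comparison would be repeated many times, with the A/B order
# swapped, to estimate how often the disability-enhanced resume is ranked lower.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Comparing the two rankings across repeated, order-swapped trials is what makes the bias quantifiable rather than anecdotal.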

Towards AI-driven Sign Language Generation with Non-manual Markers

Han Zhang, Rotem Shalev-Arkushin, Vasileios Baltatzis, Connor Gillis, Gierad Laput, Raja Kushalnagar, Lorna Quandt, Leah Findlater, Abdelkareem Bedri, and Colin Lea. 2025. Towards AI-driven Sign Language Generation with Non-manual Markers. In Proceedings of the CHI Conference on Human Factors in Computing Systems.

Sign languages are essential for the Deaf and Hard-of-Hearing (DHH) community. Sign language generation systems have the potential to support communication by translating from written languages, such as English, into signed videos. However, current systems often fail to meet user needs due to poor translation of grammatical structures, the absence of facial cues and body language, and insufficient visual and motion fidelity. We address these challenges by building on recent advances in LLMs and video generation models to translate English sentences into natural-looking videos of an AI ASL signer. The text component of our model extracts information for the manual and non-manual components of ASL, which are used to synthesize skeletal pose sequences and corresponding video frames. Our findings from a user study with 30 DHH participants and thorough technical evaluations demonstrate significant progress and identify critical areas necessary to meet user needs.
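Read as a pipeline, the description above suggests roughly three stages: English text to a sign specification (manual glosses plus non-manual markers), specification to skeletal poses, and poses to rendered video. The sketch below is schematic only; every class and function name is a hypothetical placeholder standing in for the authors' models, not their implementation.

```python
# Schematic sketch of the text-to-sign pipeline described above. All names are
# hypothetical placeholders; the stubs raise rather than run real models.
from dataclasses import dataclass

@dataclass
class SignSpec:
    glosses: list[str]      # manual component: ASL gloss sequence
    non_manual: list[str]   # non-manual markers, e.g., brow raise, head shake

def text_to_sign_spec(english: str) -> SignSpec:
    """LLM stage: extract manual and non-manual components from an English sentence."""
    raise NotImplementedError("placeholder for an LLM-based translation step")

def spec_to_poses(spec: SignSpec) -> list:
    """Pose stage: synthesize a skeletal pose sequence (hands, body, face) from the spec."""
    raise NotImplementedError("placeholder for a pose-generation model")

def poses_to_video(poses: list) -> bytes:
    """Rendering stage: generate video frames of the AI signer from the pose sequence."""
    raise NotImplementedError("placeholder for a video-generation model")

def generate_asl_video(english: str) -> bytes:
    # The stages compose: English text -> sign spec -> poses -> video.
    return poses_to_video(spec_to_poses(text_to_sign_spec(english)))
```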

Jazette Johnson

Jazette Johnson is a Postdoctoral Scholar at the University of Washington’s CREATE (Center for Research and Education on Accessible Technology and Experiences) working with Jen Mankoff. Her research sits at the intersection of Human-Computer Interaction (HCI), accessibility, and health equity. She partners with disabled and historically marginalized communities to explore how technology, particularly generative AI and online platforms, can support inclusive health communication, build trust, and amplify community voice. Jazette’s work is deeply community-engaged, centering co-design, lived experience, and culturally responsive methods to inform the development of accessible, real-world solutions.

Visit Jazette’s homepage at: www.Jazettejohnson.com

Grace Zhou

Grace is a third-year computer engineering and applied math student at the University of Washington. They are interested in applications of CS towards accessibility, fabrication, and artistic practices. In their free time, they enjoy reading about art history, going to the gym, and painting.

Currently, in the Make4All lab, they are excited to be working on the multi-axis 3D printing project!

Xiaoyi Wang

Xiaoyi Wang is a third-year undergraduate studying Computer Science and Mathematics. She is passionate about robotics, 3D printing, full-stack development, and mathematical modeling.

Sanjana Satagopan

Sanjana Satagopan is a first-year computer science student at the University of Washington. She has programming experience from various startups, spanning game development, AI, and full-stack development. She is excited to learn more about sensors, robotics, and more, and to find ways to use technology for accessibility research.

On the side she’s interested in environmentalism and loves tennis and music!

Visit Sanjana’s website here: https://www.linkedin.com/in/sanjanasatagopan/