Identifying and improving disability bias in GPT-based resume screening

Glazko, K., Mohammed, Y., Kosa, B., Potluri, V., & Mankoff, J. (2024, June). Identifying and improving disability bias in GPT-based resume screening. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 687-700).

As Generative AI rises in adoption, its use has expanded to domains such as hiring and recruiting. Without examining the potential for bias, however, this expansion may negatively impact marginalized populations, including people with disabilities. To address this concern, we present a resume audit study in which we ask ChatGPT (specifically, GPT-4) to rank a resume against the same resume enhanced with an additional disability-related leadership award, scholarship, panel presentation, and membership. We find that GPT-4 exhibits prejudice against these enhanced CVs. Further, we show that this prejudice can be quantifiably reduced by training a custom GPT on principles of DEI and disability justice. Our study also includes a unique qualitative analysis of the types of direct and indirect ableism GPT-4 uses to justify its biased decisions, and we suggest directions for additional bias-mitigation work. Finally, since these justifications are presumably drawn from training data containing real-world biased statements made by humans, our analysis suggests additional avenues for understanding and addressing human bias.
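The paired-resume audit protocol described above can be sketched in code. This is a hypothetical illustration, not the paper's actual pipeline: `rank` stands in for a model call (e.g. to GPT-4) and is an assumption here, taking two resume strings and returning the preferred one.

```python
import random

def audit_pairs(rank, pairs, trials=10):
    """Paired resume audit: for each (control, enhanced) resume pair, ask the
    ranker repeatedly which resume it prefers and tally how often the
    disability-enhanced resume wins. `rank` is a stand-in for a model call."""
    preferred_enhanced = 0
    total = 0
    for control, enhanced in pairs:
        for _ in range(trials):
            # Randomize presentation order to control for position bias.
            a, b = (control, enhanced) if random.random() < 0.5 else (enhanced, control)
            if rank(a, b) == enhanced:
                preferred_enhanced += 1
            total += 1
    # A rate well below 0.5 suggests bias against the enhanced resumes.
    return preferred_enhanced / total
```

Repeating each comparison and randomizing order matters because LLM rankings can vary run to run and can favor whichever resume is presented first.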

Notably Inaccessible

Venkatesh Potluri, Sudheesh Singanamalla, Nussara Tieanklin, Jennifer Mankoff: Notably Inaccessible – Data Driven Understanding of Data Science Notebook (In)Accessibility. ASSETS 2023: 13:1-13:19

Computational notebooks are tools that help people explore and analyze data and create stories about that data. They are the most popular choice for data scientists, who use software like Jupyter, Datalore, and Google Colab to work with notebooks in universities and companies.

There is a lot of research on how data scientists use these notebooks and how to help them collaborate better, but little is known about the problems faced by blind and visually impaired (BVI) users. BVI users have difficulty using these notebooks because:

  • The interfaces are not accessible.
  • The way data is shown is not user-friendly for them.
  • Popular libraries do not provide outputs they can use.

We analyzed 100,000 Jupyter notebooks to find accessibility problems. We looked for issues that affect how these notebooks are created and read. From our study, we give advice on how to make notebooks more accessible. We suggest ways for people to write better notebooks and changes to make the notebook software work better for everyone.
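One kind of automated accessibility check this sort of large-scale analysis might run is flagging markdown images that lack alt text. A minimal sketch, assuming notebooks in the standard Jupyter JSON format; this is not the study's actual analysis code.

```python
import json
import re

def missing_alt_text(notebook_json):
    """Scan a Jupyter notebook (as JSON text) for markdown images with no
    alt text. Returns (cell_index, image_url) pairs for each problem found."""
    nb = json.loads(notebook_json)
    problems = []
    for i, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "markdown":
            continue
        source = "".join(cell.get("source", []))
        # Markdown image syntax is ![alt](url); empty brackets mean no alt text.
        for match in re.finditer(r"!\[(.*?)\]\((.*?)\)", source):
            if not match.group(1).strip():
                problems.append((i, match.group(2)))
    return problems
```

A screen reader has nothing to announce for an image without alt text, so checks like this target problems that affect how notebooks are read, not just how they are written.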

Touchpad Mapper

Ather Sharif, Venkatesh Potluri, Jazz Rui Xia Ang, Jacob O. Wobbrock, Jennifer Mankoff: Touchpad Mapper: Examining Information Consumption From 2D Digital Content Using Touchpads by Screen-Reader Users. ASSETS ’24 (best poster!) and W4A ’24 (open access)

Touchpads are common, but they are not very useful for people who use screen readers. We created and tested a tool called Touchpad Mapper to let blind and visually impaired people make better use of touchpads. Touchpad Mapper lets screen-reader users use touchpads to interact with digital content like images and videos.

Touchpad mapping could be used in many apps. We built two examples:

  1. Users can use the touchpad to identify where things are in an image.
  2. Users can control a video’s progress with the touchpad, including rewinding and fast-forwarding.

We tested Touchpad Mapper with three people who use screen readers. They said they got information more quickly with our tool than with a regular keyboard.

Generative Artificial Intelligence’s Utility for Accessibility

With the recent rapid rise in Generative Artificial Intelligence (GAI) tools, it is imperative that we understand their impact on people with disabilities, both positive and negative. However, although we know that AI in general poses both risks and opportunities for people with disabilities, little is known about GAI in particular.

To address this, we conducted a three-month autoethnography of our use of GAI to meet personal and professional needs as a team of researchers with and without disabilities. Our findings demonstrate a wide variety of potential accessibility-related uses for GAI while also highlighting concerns around verifiability, training data, ableism, and false promises.

Glazko, K. S., Yamagami, M., Desai, A., Mack, K. A., Potluri, V., Xu, X., & Mankoff, J. An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility. ASSETS 2023. https://dl.acm.org/doi/abs/10.1145/3597638.3614548

News: Can AI help boost accessibility? These researchers tested it for themselves

Presentation (starts at about 20mins)

https://youtube.com/watch?v=S40-jPBH820#t=20m26s

PSST: Enabling Blind or Visually Impaired Developers to Author Sonifications of Streaming Sensor Data

Venkatesh Potluri, John Thompson, James Devine, Bongshin Lee, Nora Morsi, Peli De Halleux, Steve Hodges, and Jennifer Mankoff. 2022. PSST: Enabling Blind or Visually Impaired Developers to Author Sonifications of Streaming Sensor Data. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST ’22). Association for Computing Machinery, New York, NY, USA, Article 46, 1–13. https://doi.org/10.1145/3526113.3545700

We present the first toolkit that equips blind and visually impaired (BVI) developers with the tools to create accessible data displays. Called PSST (Physical Computing Streaming Sensor data Toolkit), it enables BVI developers to understand the data generated by sensors, from a mouse to a micro:bit physical computing platform. Earlier efforts to make physical computing accessible assume visual abilities and thus fail to address the need for BVI developers to access sensor data. PSST enables BVI developers to understand real-time, real-world sensor data by providing control over what to display, as well as when and how to display it. PSST supports filtering based on raw or calculated values, highlighting, and transformation of data. Output formats include tonal sonification, non-speech audio files, speech, and SVGs for laser cutting. We validate PSST through a series of demonstrations and a user study with BVI developers.

The demo video can be found here: https://youtu.be/UDIl9krawxg.
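The combination of value-based filtering and tonal sonification described above could be sketched as follows. This is a minimal illustration assuming a linear value-to-pitch mapping, not PSST's actual algorithm; the function name and parameters are assumptions.

```python
def sonify(samples, lo=200.0, hi=2000.0, keep=None):
    """Map a stream of sensor samples to tone frequencies in Hz.
    `keep` is an optional predicate on raw values; samples it rejects are
    rendered as None (silence), mimicking value-based filtering."""
    smin, smax = min(samples), max(samples)
    span = (smax - smin) or 1.0  # avoid division by zero for a flat stream
    tones = []
    for v in samples:
        if keep is not None and not keep(v):
            tones.append(None)  # silence for filtered-out values
        else:
            # Linearly map the sample's position in its range onto the pitch range.
            tones.append(lo + (v - smin) / span * (hi - lo))
    return tones
```

Filtering matters for streaming data: rendering every raw sample as sound quickly becomes overwhelming, so letting the developer choose what sounds (and when) keeps the audio display interpretable.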

Mixed Abilities and Varied Experiences in a Virtual Summer Internship


The COVID-19 pandemic forced many people to convert their daily work lives to a “virtual” format, with everyone connecting remotely from home, which affected the accessibility of work environments. We, the authors, full-time and intern members of an accessibility-focused team at Microsoft Research, reflect on our virtual work experiences during the summer intern season as a team with a variety of abilities, positions, and seniority, noting the successful strategies we used to promote access and the areas in which we could have improved access further.

Mixed Abilities and Varied Experiences: a group autoethnography of a virtual summer internship. Kelly Mack, Maitraye Das, Dhruv Jain, Danielle Bragg, John Tang, Andrew Begel, Erin Beneteau, Josh Urban Davis, Abraham Glasser, Joon Sung Park, and Venkatesh Potluri. In The 23rd International ACM SIGACCESS Conference on Computers and Accessibility, pp. 1-13. 2021.

Anticipate and Adjust: Cultivating Access in Human-Centered Methods

In order for “human-centered research” to include all humans, we need to make sure that research practices are accessible for both participants and researchers with disabilities. Yet, people rarely discuss how to make common methods accessible. We interviewed 17 accessibility experts who were researchers or community organizers about their practices. Our findings emphasize the importance of considering accessibility at all stages of the research process and across different dimensions of studies like communication, materials, time, and space. We explore how technology or processes could reflect a norm of accessibility and offer a practical structure for planning accessible research.

Anticipate and Adjust: Cultivating Access in Human-Centered Methods. Kelly Mack, Emma J. McDonnell, Venkatesh Potluri, Maggie Xu, Jailyn Zabala, Jeffrey P. Bigham, Jennifer Mankoff, and Cynthia L. Bennett. CHI 2022

BLV Understanding of Visual Semantics


Venkatesh Potluri, Tadashi E. Grindeland, Jon E. Froehlich, Jennifer Mankoff: Examining Visual Semantic Understanding in Blind and Low-Vision Technology Users. CHI 2021: 35:1-35:14

Visual semantics provide spatial information like size, shape, and position, which are necessary to understand and efficiently use interfaces and documents. Yet little is known about whether blind and low-vision (BLV) technology users want to interact with visual affordances, and, if so, for which task scenarios. In this work, through semi-structured and task-based interviews, we explore preferences, interest levels, and use of visual semantics among BLV technology users across two device platforms (smartphones and laptops), and information seeking and interactions common in apps and web browsing. Findings show that participants could benefit from access to visual semantics for collaboration, navigation, and design. To learn this information, our participants used trial and error, sighted assistance, and features in existing screen reading technology like touch exploration. Finally, we found that missing information and inconsistent screen reader representations of user interfaces hinder learning. We discuss potential applications and future work to equip BLV users with necessary information to engage with visual semantics.