With the recent rapid rise of Generative Artificial Intelligence (GAI) tools, it is imperative that we understand their impact, both positive and negative, on people with disabilities. However, although we know that AI in general poses both risks and opportunities for people with disabilities, little is known about GAI specifically.
To address this, we conducted a three-month autoethnography of our use of GAI to meet personal and professional needs as a team of researchers with and without disabilities. Our findings demonstrate a wide variety of potential accessibility-related uses for GAI while also highlighting concerns around verifiability, training data, ableism, and false promises.
Glazko, K. S., Yamagami, M., Desai, A., Mack, K. A., Potluri, V., Xu, X., & Mankoff, J. An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility. ASSETS 2023. https://dl.acm.org/doi/abs/10.1145/3597638.3614548
Personalized upper-body gestures that can enable input from diverse body parts (e.g., head, neck, shoulders, arms, hands, and fingers), and match the abilities of each user, might make gesture systems more accessible for people with upper-body motor disabilities. Static gesture sets that make ability assumptions about the user (e.g., touch thumb and index finger together in midair) may not be accessible. In our work, we characterize the personalized gesture sets designed by 25 participants with upper-body motor disabilities. We found that the personalized gesture sets that participants designed were specific to their abilities and needs. Six participants mentioned that their inspiration for designing the gestures was based on “how I would do [the gesture] with the abilities that I have”. We suggest three considerations when designing accessible upper-body gesture interfaces:
1) Track the whole upper body. Our participants used their whole upper body to perform the gestures, and some switched back and forth between the left and right hand to combat fatigue.
2) Use sensing mechanisms that are agnostic to the location and orientation of the body. About half of our participants kept their hand on the armrest, or barely lifted it off, to decrease arm movement and fatigue.
3) Use sensors that can sense muscle activations without movement. Our participants activated their muscles but did not visibly move in 10% of the personalized gestures.
Our work highlights the need for personalized upper-body gesture interfaces supported by multimodal biosignal sensors (e.g., accelerometers and muscle-activity sensors such as EMG).
Working at the Intersection of Race, Disability, and Accessibility
This paper asks how accessibility research can do a better job of including all disabled people, rather than separating disability from a person’s race and ethnicity. Most accessibility research published to date does not mention race, or treats it as a simple demographic label rather than asking how race shapes disability experiences. This eliminates whole areas of need and vital perspectives from the work we do.
We present a series of case studies exploring positive examples of work that looks more deeply at this intersection and reflect on teaching at the intersection of race, disability, and technology. This paper highlights the value of considering how constructs of race and disability work alongside each other within accessibility research studies, designs of socio-technical systems, and education. Our analysis provides recommendations towards establishing this research direction.
Dashboards are frequently used to monitor and share data across a breadth of domains including business, finance, sports, public policy, and healthcare, just to name a few. The combination of different components (e.g., key performance indicators, charts, filtering widgets) and the interactivity between components makes dashboards powerful interfaces for data monitoring and analysis. However, these very characteristics also often make dashboards inaccessible to blind and low vision (BLV) users. Through a co-design study with two screen reader users, we investigate challenges faced by BLV users and identify design goals to support effective screen reader-based interactions with dashboards. Operationalizing the findings from the co-design process, we present a prototype system, Azimuth, that generates dashboards optimized for screen reader-based navigation along with complementary descriptions to support dashboard comprehension and interaction. Based on a follow-up study with five BLV participants, we showcase how our generated dashboards support BLV users and enable them to perform both targeted and open-ended analysis. Reflecting on our design process and study feedback, we discuss opportunities for future work on supporting interactive data analysis, understanding dashboard accessibility at scale, and investigating alternative devices and modalities for designing accessible visualization dashboards.
Arjun Srinivasan, Tim Harshbarger, Darrell Hilliker, and Jennifer Mankoff. “Azimuth: Designing Accessible Dashboards for Screen Reader Users.” ASSETS 2023.
Many individuals with disabilities and/or chronic conditions experience symptoms that may require intermittent or on-going medical care. However, healthcare is often overlooked as an area where accessibility needs to be addressed to improve physical and digital interactions between patients and healthcare providers. We discuss the challenges faced by individuals with disabilities and chronic conditions in accessing physical therapy and how technology can help improve access. We interviewed 15 people and found both social (e.g., financial constraints, lack of accessible transportation) and physiological (e.g., chronic pain) barriers to accessing physical therapy. Our study suggests that technology interventions that are adaptable, support movement tracking, and foster community building may improve access to physical therapy. Rethinking access to physical therapy for people with disabilities or chronic conditions from a lens that includes social and physiological barriers presents opportunities to integrate accessibility and adaptability into physical therapy technology.
Speechreading is the art of using visual and contextual cues in the environment to support listening. Often used by d/Deaf and Hard-of-Hearing (d/DHH) individuals, it highlights nuances of rich communication. However, the lived experiences of speechreaders are underdocumented in the literature, and the impact of online environments and the interaction of captioning with speechreading have not been explored. To bridge these gaps, we conducted a three-part study consisting of formative interviews, design probes, and design sessions with 12 d/DHH individuals who speechread.
There is a growing body of research revealing that longitudinal passive sensing data from smartphones and wearable devices can capture daily behavior signals for human behavior modeling, such as depression detection. Most prior studies build and evaluate machine learning models using data collected from a single population. However, to ensure that a behavior model can work for a larger group of users, its generalizability needs to be verified on multiple datasets from different populations. We present the first work evaluating cross-dataset generalizability of longitudinal behavior models, using depression detection as an application. We collect multiple longitudinal passive mobile sensing datasets with over 500 users from two institutes over a two-year span, leading to four institute-year datasets. Using the datasets, we closely re-implement and evaluate nine prior depression detection algorithms. Our experiments reveal the lack of model generalizability of these methods. We also implement eight recently popular domain generalization algorithms from the machine learning community. Our results indicate that these methods also do not generalize well on our datasets, with barely any advantage over the naive baseline of guessing the majority class. We then present two new algorithms with better generalizability. Our new algorithm, Reorder, significantly and consistently outperforms existing methods on most cross-dataset generalization setups. However, the overall advantage is incremental and still has great room for improvement. Our analysis reveals that individual differences (both within and between populations) may play the most important role in the cross-dataset generalization challenge. Finally, we provide an open-source benchmark platform, GLOBEM – short for Generalization of LOngitudinal BEhavior Modeling – to consolidate all 19 algorithms. GLOBEM can support researchers in using, developing, and evaluating different longitudinal behavior modeling methods.
We call for researchers’ attention to model generalizability evaluation for future longitudinal human behavior modeling studies.
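The cross-dataset evaluation described above can be pictured as a leave-one-dataset-out loop: train on all but one institute-year dataset, test on the held-out one, and compare against the majority-class baseline. This is an illustrative sketch only, not the GLOBEM platform's actual API; the `train_fn`/`predict_fn` callables and dataset names are hypothetical.

```python
from collections import Counter

def cross_dataset_eval(datasets, train_fn, predict_fn):
    """Leave-one-dataset-out evaluation: for each institute-year
    dataset, train on all the others and test on the held-out one.
    `datasets` maps a name to a (features, labels) pair."""
    results = {}
    for held_out, (X_test, y_test) in datasets.items():
        X_train, y_train = [], []
        for name, (X, y) in datasets.items():
            if name != held_out:
                X_train.extend(X)
                y_train.extend(y)
        model = train_fn(X_train, y_train)
        preds = predict_fn(model, X_test)
        acc = sum(p == t for p, t in zip(preds, y_test)) / len(y_test)
        # Naive baseline the abstract mentions: always guess the majority class
        majority = Counter(y_train).most_common(1)[0][0]
        base = sum(t == majority for t in y_test) / len(y_test)
        results[held_out] = {"accuracy": acc, "majority_baseline": base}
    return results
```

A generalizable model should beat `majority_baseline` on every held-out dataset, which the abstract reports most prior methods fail to do.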
Venkatesh Potluri, John Thompson, James Devine, Bongshin Lee, Nora Morsi, Peli De Halleux, Steve Hodges, and Jennifer Mankoff. 2022. PSST: Enabling Blind or Visually Impaired Developers to Author Sonifications of Streaming Sensor Data. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST ’22). Association for Computing Machinery, New York, NY, USA, Article 46, 1–13. https://doi.org/10.1145/3526113.3545700
We present the first toolkit that equips blind and visually impaired (BVI) developers with the tools to create accessible data displays. Called PSST (Physical Computing Streaming Sensor data Toolkit), it enables BVI developers to understand the data generated by sensors ranging from a mouse to the micro:bit physical computing platform. By assuming visual abilities, earlier efforts to make physical computing accessible fail to address the need for BVI developers to access sensor data. PSST enables BVI developers to understand real-time, real-world sensor data by providing control over what to display, as well as when and how to display it. PSST supports filtering based on raw or calculated values, highlighting, and transformation of data. Output formats include tonal sonification, nonspeech audio files, speech, and SVGs for laser cutting. We validate PSST through a series of demonstrations and a user study with BVI developers.
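The core idea of filtering and tonal sonification described above can be sketched in a few lines: keep only the readings that pass a raw-value filter, then map each one linearly onto a tone-frequency range. This is a minimal illustrative sketch, not PSST's actual API; the function name, frequency range, and threshold parameter are all assumptions for illustration.

```python
def sonify_stream(samples, lo_hz=200.0, hi_hz=1000.0, threshold=None):
    """Map raw sensor readings to tone frequencies (Hz).
    Readings below `threshold` are filtered out first, analogous
    to PSST's filtering on raw values before display."""
    kept = [s for s in samples if threshold is None or s >= threshold]
    if not kept:
        return []
    lo_v, hi_v = min(kept), max(kept)
    span = (hi_v - lo_v) or 1.0  # avoid division by zero on flat streams
    return [lo_hz + (s - lo_v) / span * (hi_hz - lo_hz) for s in kept]
```

In a real toolkit these frequencies would drive an audio synthesizer; here they simply show how a stream of values becomes a stream of pitches.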
The U.S. National Institute of Health (NIH) 3D Print Exchange is a public, open-source repository for 3D printable medical device designs with contributions from clinicians, expert-amateur makers, and people from industry and academia. In response to the COVID-19 pandemic, the NIH formed a collection to foster submissions of low-cost, locally-manufacturable personal protective equipment (PPE). We evaluated the 623 submissions in this collection to understand: what makers contributed, how they were made, who made them, and key characteristics of their designs. We found an immediate design convergence to manufacturing-focused remixes of a few initial designs affiliated with NIH partners and major for-profit groups. The NIH worked to review safe, effective designs but was overloaded by manufacturing-focused design adaptations. Our work contributes insights into: the outcomes of distributed, community-based medical making; the features that the community accepted as “safe” making; and how platforms can support regulated maker activities in high-risk domains.
Accessible design and technology could support the large and growing group of people with chronic illnesses. However, human-computer interaction (HCI) has largely approached people with chronic illnesses through a lens of medical tracking or treatment rather than accessibility. We describe and demonstrate a framework for designing technology in ways that center the chronically ill experience. First, we identify guiding tenets: 1) treating chronically ill people not as patients but as people with access needs and expertise, 2) recognizing the way that variable ability shapes accessibility considerations, and 3) adopting a theoretical understanding of chronic illness that attends to the body. We then illustrate these tenets through autoethnographic case studies of two chronically ill authors using technology. Finally, we discuss implications for technology design, including designing for consequence-based accessibility, considering how to engage care communities, and how HCI research can engage chronically ill participants in research.
Kelly Mack*, Emma J. McDonnell*, Leah Findlater, and Heather D. Evans. In The 24th International ACM SIGACCESS Conference on Computers and Accessibility.