Working at the Intersection of Race, Disability, and Accessibility
This paper asks how accessibility research can do a better job of including all disabled people, rather than separating disability from a person’s race and ethnicity. Most accessibility research published to date does not mention race, or treats it as a simple demographic label rather than asking how it shapes disability experiences. This omits whole areas of need and vital perspectives from the work we do.
We present a series of case studies exploring positive examples of work that looks more deeply at this intersection and reflect on teaching at the intersection of race, disability, and technology. This paper highlights the value of considering how constructs of race and disability work alongside each other within accessibility research studies, designs of socio-technical systems, and education. Our analysis provides recommendations towards establishing this research direction.
Dashboards are frequently used to monitor and share data across a breadth of domains including business, finance, sports, public policy, and healthcare, just to name a few. The combination of different components (e.g., key performance indicators, charts, filtering widgets) and the interactivity between components makes dashboards powerful interfaces for data monitoring and analysis. However, these very characteristics also often make dashboards inaccessible to blind and low vision (BLV) users. Through a co-design study with two screen reader users, we investigate challenges faced by BLV users and identify design goals to support effective screen reader-based interactions with dashboards. Operationalizing the findings from the co-design process, we present a prototype system, Azimuth, that generates dashboards optimized for screen reader-based navigation along with complementary descriptions to support dashboard comprehension and interaction. Based on a follow-up study with five BLV participants, we showcase how our generated dashboards support BLV users and enable them to perform both targeted and open-ended analysis. Reflecting on our design process and study feedback, we discuss opportunities for future work on supporting interactive data analysis, understanding dashboard accessibility at scale, and investigating alternative devices and modalities for designing accessible visualization dashboards.
Arjun Srinivasan, Tim Harshbarger, Darrell Hilliker, and Jennifer Mankoff (University of Washington). 2023. TypeOut-style citation note aside, “Azimuth: Designing Accessible Dashboards for Screen Reader Users.” ASSETS 2023.
There is a growing body of research revealing that longitudinal passive sensing data from smartphones and wearable devices can capture daily behavior signals for human behavior modeling, such as depression detection. Most prior studies build and evaluate machine learning models using data collected from a single population. However, to ensure that a behavior model can work for a larger group of users, its generalizability needs to be verified on multiple datasets from different populations. We present the first work evaluating cross-dataset generalizability of longitudinal behavior models, using depression detection as an application. We collect multiple longitudinal passive mobile sensing datasets with over 500 users from two institutes over a two-year span, leading to four institute-year datasets. Using the datasets, we closely re-implement and evaluate nine prior depression detection algorithms. Our experiment reveals the lack of model generalizability of these methods. We also implement eight recently popular domain generalization algorithms from the machine learning community. Our results indicate that these methods also do not generalize well on our datasets, with barely any advantage over the naive baseline of guessing the majority. We then present two new algorithms with better generalizability. Our new algorithm, Reorder, significantly and consistently outperforms existing methods on most cross-dataset generalization setups. However, the overall advantage is incremental and still has great room for improvement. Our analysis reveals that individual differences (both within and between populations) may play the most important role in the cross-dataset generalization challenge. Finally, we provide an open-source benchmark platform, GLOBEM – short for Generalization of LOngitudinal BEhavior Modeling – to consolidate all 19 algorithms. GLOBEM can support researchers in using, developing, and evaluating different longitudinal behavior modeling methods.
We call for researchers’ attention to model generalizability evaluation for future longitudinal human behavior modeling studies.
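The cross-dataset evaluation protocol and the majority-class baseline described above can be sketched as a leave-one-dataset-out loop. This is a hypothetical illustration, not the GLOBEM API; the function name and the dataset format (a mapping from an institute-year name to its list of labels) are assumptions.

```python
from collections import Counter

def majority_baseline_cross_dataset(datasets):
    """Score the naive majority-class baseline with leave-one-dataset-out.

    For each held-out institute-year dataset, "train" on the remaining
    datasets by taking their most common label, then report accuracy of
    always predicting that label on the held-out dataset.

    `datasets` maps a name (e.g. "inst1-year1") to a list of labels.
    Returns a dict mapping each dataset name to its held-out accuracy.
    """
    scores = {}
    for held_out, y_test in datasets.items():
        # Pool labels from every dataset except the held-out one.
        train_labels = [y for name, ys in datasets.items()
                        if name != held_out for y in ys]
        majority = Counter(train_labels).most_common(1)[0][0]
        scores[held_out] = sum(y == majority for y in y_test) / len(y_test)
    return scores
```

A learned model would replace the majority vote with a classifier fit on the pooled training datasets, but the held-out evaluation loop stays the same.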
Tactile maps can help people who are blind or have low vision navigate and familiarize themselves with unfamiliar locations. Because tactile maps offer limited space for representation, they are ideally created with an individual’s unique needs and abilities in mind. However, existing tools for generating tactile maps do not support significant customization. We present Maptimizer, a system that generates tactile maps customized to a user’s preferences and requirements while keeping the maps simplified and easy to read. Maptimizer uses a two-stage optimization process to pair representations with geographic information and to tune those representations to present that information more clearly. In a user study with six blind and low-vision participants, Maptimizer helped participants more successfully and efficiently identify locations of interest in unknown areas. These results demonstrate the utility of optimization techniques and generative design in complex accessibility domains that require significant customization by the end user.
Smartphone overuse is related to a variety of issues such as lack of sleep and anxiety. We explore the application of Self-Affirmation Theory on smartphone overuse intervention in a just-in-time manner. We present TypeOut, a just-in-time intervention technique that integrates two components: an in-situ typing-based unlock process to improve user engagement, and self-affirmation-based typing content to enhance effectiveness. We hypothesize that the integration of typing and self-affirmation content can better reduce smartphone overuse. We conducted a 10-week within-subject field experiment (N=54) and compared TypeOut against two baselines: one only showing the self-affirmation content (a common notification-based intervention), and one only requiring typing non-semantic content (a state-of-the-art method). TypeOut reduces app usage by over 50%, and both app opening frequency and usage duration by over 25%, all significantly outperforming baselines. TypeOut can potentially be used in other domains where an intervention may benefit from integrating self-affirmation exercises with an engaging just-in-time mechanism.
TypeOut: Leveraging Just-in-Time Self-Affirmation for Smartphone Overuse Reduction. Xuhai Xu, Tianyuan Zou, Xiao Han, Yanzhang Li, Ruolin Wang, Tianyi Yuan, Yuntao Wang, Yuanchun Shi, Jennifer Mankoff, and Anind K. Dey. 2022. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). ACM, New York, NY, USA.
In order for “human-centered research” to include all humans, we need to make sure that research practices are accessible for both participants and researchers with disabilities. Yet, people rarely discuss how to make common methods accessible. We interviewed 17 accessibility experts who were researchers or community organizers about their practices. Our findings emphasize the importance of considering accessibility at all stages of the research process and across different dimensions of studies like communication, materials, time, and space. We explore how technology or processes could reflect a norm of accessibility and offer a practical structure for planning accessible research.
The onset of COVID-19 led many makers to dive deeply into the potential applications of their work to help with the pandemic. Our group’s efforts on this front, all of which were collaborations with a variety of people from multiple universities, led me to this reflective talk about the additional work that is needed for us to take the next step towards democratizing fabrication.
Visual semantics provide spatial information like size, shape, and position, which are necessary to understand and efficiently use interfaces and documents. Yet little is known about whether blind and low-vision (BLV) technology users want to interact with visual affordances, and, if so, for which task scenarios. In this work, through semi-structured and task-based interviews, we explore preferences, interest levels, and use of visual semantics among BLV technology users across two device platforms (smartphones and laptops), and information seeking and interactions common in apps and web browsing. Findings show that participants could benefit from access to visual semantics for collaboration, navigation, and design. To learn this information, our participants used trial and error, sighted assistance, and features in existing screen reading technology like touch exploration. Finally, we found that missing information and inconsistent screen reader representations of user interfaces hinder learning. We discuss potential applications and future work to equip BLV users with necessary information to engage with visual semantics.
Knitting is a popular craft that can be used to create customized fabric objects such as household items, clothing, and toys. Additionally, many knitters find knitting to be a relaxing and calming exercise. Little is known about how disabled knitters use and benefit from knitting, and what accessibility solutions and challenges they create and encounter. We conducted interviews with 16 experienced, disabled knitters and analyzed 20 threads from six forums that discussed accessible knitting to identify how and why disabled knitters knit, and what accessibility concerns remain. We additionally conducted an iterative design case study developing knitting tools for a knitter who found existing solutions insufficient. Our innovations improved the range of stitches she could produce. We conclude by arguing for the importance of improving tools for both pattern generation and modification, as well as adaptations or modifications to existing tools, such as looms, to make it easier to track progress.
Automatic knitting machines are robust, digital fabrication devices that enable rapid and reliable production of attractive, functional objects by combining stitches to produce unique physical properties. However, no existing design tools support optimization for desirable physical and aesthetic knitted properties. We present KnitGIST (Generative Instantiation Synthesis Toolkit for knitting), a program synthesis pipeline and library for generating hand- and machine-knitting patterns by intuitively mapping objectives to tactics for texture design. KnitGIST generates a machine-knittable program in a domain-specific programming language.