Generative Artificial Intelligence’s Utility for Accessibility

With the recent rapid rise in Generative Artificial Intelligence (GAI) tools, it is imperative that we understand their impact on people with disabilities, both positive and negative. However, although we know that AI in general poses both risks and opportunities for people with disabilities, little is known about GAI in particular.

To address this, we conducted a three-month autoethnography of our use of GAI to meet personal and professional needs as a team of researchers with and without disabilities. Our findings demonstrate a wide variety of potential accessibility-related uses for GAI while also highlighting concerns around verifiability, training data, ableism, and false promises.

Glazko, K. S., Yamagami, M., Desai, A., Mack, K. A., Potluri, V., Xu, X., & Mankoff, J. An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility. ASSETS 2023. https://dl.acm.org/doi/abs/10.1145/3597638.3614548

News: Can AI help boost accessibility? These researchers tested it for themselves

Presentation (starts at about 20 minutes in)

https://youtube.com/watch?v=S40-jPBH820#t=20m26s

Cross-Dataset Generalization for Human Behavior Modeling

Overview; Data; Code

Overview of the contributions of this work. We systematically evaluate the cross-dataset generalizability of 19 algorithms: nine prior behavior modeling algorithms for depression detection, eight recent domain generalization algorithms, and two new algorithms proposed in this paper. Our open-source platform GLOBEM consolidates these 19 algorithms and supports using, developing, and evaluating various algorithms.

There is a growing body of research revealing that longitudinal passive sensing data from smartphones and wearable devices can capture daily behavior signals for human behavior modeling, such as depression detection. Most prior studies build and evaluate machine learning models using data collected from a single population. However, to ensure that a behavior model can work for a larger group of users, its generalizability needs to be verified on multiple datasets from different populations. We present the first work evaluating the cross-dataset generalizability of longitudinal behavior models, using depression detection as an application. We collect multiple longitudinal passive mobile sensing datasets with over 500 users from two institutes over a two-year span, leading to four institute-year datasets. Using these datasets, we closely re-implement and evaluate nine prior depression detection algorithms. Our experiments reveal the lack of generalizability of these methods. We also implement eight recently popular domain generalization algorithms from the machine learning community. Our results indicate that these methods also do not generalize well on our datasets, with barely any advantage over the naive baseline of guessing the majority class. We then present two new algorithms with better generalizability. Our new algorithm, Reorder, significantly and consistently outperforms existing methods on most cross-dataset generalization setups. However, the overall advantage is incremental and still has great room for improvement. Our analysis reveals that individual differences (both within and between populations) may play the most important role in the cross-dataset generalization challenge. Finally, we provide an open-source benchmark platform, GLOBEM – short for Generalization of LOngitudinal BEhavior Modeling – to consolidate all 19 algorithms. GLOBEM can support researchers in using, developing, and evaluating different longitudinal behavior modeling methods. We call for researchers' attention to model generalizability evaluation in future longitudinal human behavior modeling studies.
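The cross-dataset evaluation described above can be illustrated with a minimal leave-one-dataset-out loop: train on all but one institute-year dataset and test on the held-out one. This is a generic sketch, not the GLOBEM API; the dataset names, labels, and the majority-class baseline (the naive reference the paper compares against) are hypothetical placeholders.

```python
# Sketch of leave-one-dataset-out evaluation with a majority-class baseline.
# NOT the GLOBEM API -- dataset names and values below are hypothetical.
from collections import Counter

def majority_baseline(train_labels):
    """Return a classifier that always predicts the majority training label."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return lambda _features: majority

def leave_one_dataset_out(datasets):
    """datasets: dict mapping dataset name -> (features, labels).
    For each dataset, train on the labels of all other datasets and
    report accuracy on the held-out one."""
    results = {}
    for held_out, (test_X, test_y) in datasets.items():
        train_y = [y for name, (_, ys) in datasets.items()
                   if name != held_out for y in ys]
        predict = majority_baseline(train_y)
        correct = sum(predict(x) == y for x, y in zip(test_X, test_y))
        results[held_out] = correct / len(test_y)
    return results

# Toy example with four hypothetical institute-year datasets
# (binary depression labels: 0 = not depressed, 1 = depressed):
datasets = {
    "inst1-year1": ([[0.1], [0.2], [0.3]], [0, 0, 1]),
    "inst1-year2": ([[0.2], [0.4]], [0, 1]),
    "inst2-year1": ([[0.5], [0.6], [0.7]], [1, 0, 0]),
    "inst2-year2": ([[0.8], [0.9]], [0, 0]),
}
accuracies = leave_one_dataset_out(datasets)
```

A real evaluation would plug learned models in place of `majority_baseline`; the point of the sketch is that the held-out population never contributes to training, which is what exposes the generalizability gap the paper measures.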

Xuhai Xu, Xin Liu, Han Zhang, Weichen Wang, Subigya Nepal, Yasaman S. Sefidgar, Woosuk Seo, Kevin S. Kuehn, Jeremy F. Huckins, Margaret E. Morris, Paula S. Nurius, Eve A. Riskin, Shwetak N. Patel, Tim Althoff, Andrew Campbell, Anind K. Dey, and Jennifer Mankoff. GLOBEM: Cross-Dataset Generalization of Longitudinal Human Behavior Modeling. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6(4): 190:1-190:34 (2022).

Xuhai Xu, Han Zhang, Yasaman S. Sefidgar, Yiyi Ren, Xin Liu, Woosuk Seo, Jennifer Brown, Kevin S. Kuehn, Mike A. Merrill, Paula S. Nurius, Shwetak N. Patel, Tim Althoff, Margaret Morris, Eve A. Riskin, Jennifer Mankoff, and Anind K. Dey. GLOBEM Dataset: Multi-Year Datasets for Longitudinal Human Behavior Modeling Generalization. NeurIPS 2022.

Practices and Needs of Mobile Sensing Researchers

Passive mobile sensing for the purpose of human state modeling is a fast-growing area. It has been applied to solve a wide range of behavior-related problems, including physical and mental health monitoring, affective computing, activity recognition, routine modeling, etc. However, despite the emerging literature that has investigated a wide range of application scenarios, there is little work focusing on the lessons learned by researchers, or on guidance for researchers new to this approach. How do researchers conduct these types of research studies? Is there any established common practice when applying mobile sensing across different application areas? What are the pain points and needs that they frequently encounter? Answering these questions is an important step in the maturing of this growing sub-field of ubiquitous computing, and can benefit a wide range of audiences. It can serve to educate researchers who have a growing interest in this area but little to no previous experience. Intermediate researchers may also find the results a helpful reference for improving their skills. Moreover, it can shed light on design guidelines for a future toolkit that could facilitate these research processes. In this paper, we fill this gap and answer these questions by conducting semi-structured interviews with ten experienced researchers from four countries to understand their practices and pain points when conducting their research. Our results reveal a common pipeline that researchers have adopted, and identify major challenges that do not appear in published work but that researchers often encounter. Based on the results of our interviews, we discuss practical suggestions for novice researchers and high-level design principles for a toolkit that can accelerate passive mobile sensing research.

Xuhai Xu, Jennifer Mankoff, and Anind K. Dey. Understanding Practices and Needs of Researchers in Human State Modeling by Passive Mobile Sensing. CCF Transactions on Pervasive Computing and Interaction (2021): 1-23.