Knitting is a popular craft that can be used to create customized fabric objects such as household items, clothing, and toys. Additionally, many knitters find knitting to be a relaxing and calming exercise. Little is known about how disabled knitters use and benefit from knitting, and what accessibility solutions and challenges they create and encounter. We conducted interviews with 16 experienced, disabled knitters and analyzed 20 threads from six forums that discussed accessible knitting to identify how and why disabled knitters knit, and what accessibility concerns remain. We additionally conducted an iterative design case study, developing knitting tools for a knitter who found existing solutions insufficient. Our innovations improved the range of stitches she could produce. We conclude by arguing for the importance of improving tools for both pattern generation and modification, as well as adaptations or modifications to existing tools, such as looms, to make it easier to track progress.
Aashaka is a PhD candidate in the UW Paul G. Allen School of Computer Science and Engineering. She is advised by Dr. Jennifer Mankoff and Dr. Richard Ladner. Her research focuses on d/Deaf and hard-of-hearing communication accessibility and explores how we can support all ways of communicating. She explores a range of modalities (speechreading, signing, captioning) as well as languages (multilingualism) in her work. She aims to both document the fluidity of language/communication and build technologies that support minoritized communication practices.
Automatic knitting machines are robust, digital fabrication devices that enable rapid and reliable production of attractive, functional objects by combining stitches to produce unique physical properties. However, no existing design tools support optimization for desirable physical and aesthetic knitted properties. We present KnitGIST (Generative Instantiation Synthesis Toolkit for knitting), a program synthesis pipeline and library for generating hand- and machine-knitting patterns by intuitively mapping objectives to tactics for texture design. KnitGIST generates a machine-knittable program in a domain-specific programming language.
The prevalence of mobile phones and wearable devices enables the passive capturing and modeling of human behavior at an unprecedented resolution and scale. Past research has demonstrated the capability of mobile sensing to model aspects of physical health, mental health, education, and work performance, etc. However, most of the algorithms and models proposed in previous work follow a one-size-fits-all (i.e., population modeling) approach that looks for common behaviors amongst all users, disregarding the fact that individuals can behave very differently, resulting in reduced model performance. Further, black-box models are often used that do not allow for interpretability and human behavior understanding. We present a new method to address the problems of personalized behavior classification and interpretability, and apply it to depression detection among college students. Inspired by the idea of collaborative-filtering, our method is a type of memory-based learning algorithm. It leverages the relevance of mobile-sensed behavior features among individuals to calculate personalized relevance weights, which are used to impute missing data and select features according to a specific modeling goal (e.g., whether the student has depressive symptoms) in different time epochs, i.e., times of the day and days of the week. It then compiles features from epochs using majority voting to obtain the final prediction. We apply our algorithm on a depression detection dataset collected from first-year college students with low data-missing rates and show that our method outperforms the state-of-the-art machine learning model by 5.1% in accuracy and 5.5% in F1 score. We further verify the pipeline-level generalizability of our approach by achieving similar results on a second dataset, with an average improvement of 3.4% across performance metrics. 
Beyond achieving better classification performance, our novel approach is further able to generate personalized interpretations of the models for each individual. These interpretations are supported by existing depression-related literature and can potentially inspire automated and personalized depression intervention design in the future.
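The core mechanics described above — similarity-based relevance weights across users, weighted imputation of missing behavior features, and majority voting over time epochs — can be sketched in pure Python. This is an illustrative toy under assumed design choices (cosine similarity, non-negative weights; all function names are hypothetical), not the paper's implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors, ignoring missing (None) entries."""
    pairs = [(a, b) for a, b in zip(u, v) if a is not None and b is not None]
    if not pairs:
        return 0.0
    num = sum(a * b for a, b in pairs)
    den = math.sqrt(sum(a * a for a, _ in pairs)) * math.sqrt(sum(b * b for _, b in pairs))
    return num / den if den else 0.0

def impute(target, others):
    """Fill missing entries of `target` with a relevance-weighted average over other users."""
    weights = [max(cosine(target, o), 0.0) for o in others]
    filled = list(target)
    for j, val in enumerate(target):
        if val is None:
            num = sum(w * o[j] for w, o in zip(weights, others) if o[j] is not None)
            den = sum(w for w, o in zip(weights, others) if o[j] is not None)
            filled[j] = num / den if den else 0.0
    return filled

def majority_vote(epoch_predictions):
    """Combine per-epoch binary predictions (e.g., morning/evening, weekday/weekend) into one label."""
    return int(sum(epoch_predictions) >= len(epoch_predictions) / 2)
```

In this sketch, a user whose observed features resemble another user's receives a higher relevance weight, so that user contributes more to filling the gaps — the memory-based, collaborative-filtering-style intuition the abstract describes.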
The rate of depression in college students is rising, which is known to increase suicide risk, lower academic performance, and double the likelihood of dropping out. Researchers have used passive mobile sensing technology to assess mental health. Existing work on finding relationships between mobile sensing and depression, as well as on identifying depression via sensing features, mainly utilizes single data channels or simply concatenates multiple channels. There is an opportunity to identify better features by reasoning about co-occurrence across multiple sensing channels. We present a new method to extract contextually filtered features from passively collected time-series data from mobile devices via rule mining algorithms. We first employ association rule mining algorithms on two different user groups (e.g., depression vs. non-depression). We then introduce a new metric to select a subset of rules that identifies distinguishing behavior patterns between the two groups. Finally, we consider co-occurrence across the features that comprise the rules in a feature extraction stage to obtain contextually filtered features with which to train classifiers. Our results reveal that the best model with these features significantly outperforms a standard model that uses unimodal features by an average of 9.7% across a variety of metrics. We further verified the generalizability of our approach on a second dataset and achieved very similar results.
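The rule-mining stage can be illustrated with the standard support and confidence measures from association rule mining, plus a toy selection step that keeps rules whose confidence differs most between the two groups. This is a hedged sketch: the paper's actual selection metric is not reproduced here, and the single-antecedent rules and behavior-item names are invented for illustration:

```python
from itertools import permutations

def support(transactions, itemset):
    """Fraction of transactions (sets of behavior items) that contain the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent."""
    sa = support(transactions, antecedent)
    return support(transactions, set(antecedent) | set(consequent)) / sa if sa else 0.0

def distinguishing_rules(group_a, group_b, items, min_support=0.2, min_gap=0.3):
    """Keep single-item rules whose confidence differs between groups by >= min_gap.
    (Toy stand-in for the paper's rule-selection metric.)"""
    rules = []
    for ante, cons in permutations(items, 2):
        if support(group_a, {ante}) < min_support:
            continue
        gap = confidence(group_a, {ante}, {cons}) - confidence(group_b, {ante}, {cons})
        if abs(gap) >= min_gap:
            rules.append((ante, cons, round(gap, 3)))
    return rules
```

A rule such as "low mobility -> late sleep" that holds with high confidence in one group but not the other would survive this filter, and its constituent features could then be combined in the feature extraction stage.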
We present a machine learning approach that uses data from smartphones and fitness trackers of 138 college students to identify students that experienced depressive symptoms at the end of the semester and students whose depressive symptoms worsened over the semester. Our novel approach is a feature extraction technique that allows us to select meaningful features indicative of depressive symptoms from longitudinal data. It allows us to detect the presence of post-semester depressive symptoms with an accuracy of 85.7% and change in symptom severity with an accuracy of 85.4%. It also predicts these outcomes with an accuracy of >80%, 11-15 weeks before the end of the semester, allowing ample time for preemptive interventions. Our work has significant implications for the detection of health outcomes using longitudinal behavioral data and limited ground truth. By detecting change and predicting symptoms several weeks before their onset, our work also has implications for preventing depression.
Bar chart showing the value of baseline, Bluetooth, calls, campus map, location, phone usage, sleep, and step features for detecting change in depression. The best set leads to 85.4% accuracy; all features except Bluetooth and calls improve on the baseline accuracy of 65.9%.
This study examines 154,305 Google reviews of physicians from across the United States, covering all medical specialties. Many patients use online physician reviews, but we need to understand the effects of gender on review content. Reviewer gender was inferred from names.
Reviews were coded for overall patient experience (negative or positive) by collapsing a 5-star scale and for general categories (process, positive/negative soft skills). We estimated binary regression models to examine relationships between physician rating, patient experience themes, physician gender, and reviewer gender.
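For readers unfamiliar with the odds ratios (ORs) reported below: an OR is the ratio of the odds of an outcome between two groups, computable from a 2×2 table of counts. A minimal sketch with invented counts (not data from this study):

```python
def odds_ratio(neg_a, pos_a, neg_b, pos_b):
    """Odds ratio of a negative review in group A vs. group B, from 2x2 counts."""
    return (neg_a / pos_a) / (neg_b / pos_b)

# Invented counts: group A has 20 negative / 10 positive reviews and
# group B has 10 negative / 10 positive, so the odds of a negative
# review are twice as high in group A (OR = 2.0).
```

An OR near 2, as reported for female physicians below, means the odds of a negative review are roughly doubled; regression models additionally adjust such estimates for covariates.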
We found considerable bias against female physicians: reviews of female physicians were considerably more negative than those of male physicians (OR 1.99; P<.001). Critiques of female physicians more often focused on soft skills such as amicability, disrespect, and candor. Negative reviews typically contained words such as “rude,” “arrogant,” and “condescending.”
Reviews written by female patients were also more likely to mention disrespect (OR 1.27, P<.001), but female patients were less likely to report disrespect from female doctors than expected.
Finally, patient experiences with the bureaucratic process, including issues such as the cost of care, also impacted reviews. Overall, lower patient satisfaction is correlated with high physician dominance (e.g., poor information sharing or using medical jargon).
Limitations of our work include the lack of definitive (or non-binary) information about gender; and the fact that we do not know about the actual outcomes of treatment for reviewers.
Even so, it seems critical that readers attend to who the reviewers are when reading online reviews. Review sites may also want to provide information about gender differences, control for gender when presenting composite ratings for physicians, and help users write less biased reviews. Reviewers should be aware of their own gender biases and assess their reviews for bias (http://slowe.github.io/genderbias/).
It was my honor this year to participate in an auto-ethnographic effort to explore accessibility research from a combination of personal and theoretical perspectives. In the process, and thanks to my amazing co-authors, I learned so much about myself, disability studies, ableism and accessibility.
Abstract: Accessibility research and disability studies are intertwined fields focused on, respectively, building a world more inclusive of people with disability and understanding and elevating the lived experiences of disabled people. Accessibility research tends to focus on creating technology related to impairment, while disability studies focuses on understanding disability and advocating against ableist systems. Our paper presents a reflexive analysis of the experiences of three accessibility researchers and one disability studies scholar. We focus on moments when our disability was misunderstood, and on causes such as the expectation of clearly defined impairments. We derive three themes: ableism in research, oversimplification of disability, and human relationships around disability. From these themes, we suggest paths toward more strongly integrating disability studies perspectives and disabled people into accessibility research.
Han is a PhD student in the Paul G. Allen School of Computer Science & Engineering. She is advised by Prof Jennifer Mankoff (Computer Science) and Prof Anind K. Dey (Information School).
Her research is human-centered, focusing on understanding human behaviors and designing AI systems that promote well-being, accessibility, and learning. For more details, please visit her personal website.
Taylor is a second-year PhD student in the Paul G. Allen School of Computer Science and Engineering. She is advised by Professor Jennifer Mankoff. In 2017, she graduated from the University of California, Santa Cruz with bachelor’s degrees in Computer Engineering and Cognitive Science. She then earned her master’s degree in Human-Computer Interaction from the Rochester Institute of Technology in 2019.
Her research focuses on making fabrication more accessible for people with disabilities. Her prior research explored how to make the e-textile circuit development process more accessible for adults with intellectual disabilities. Her recent projects focus on understanding the kinds of difficulties that people with disabilities face while knitting, and on developing technologies to help users overcome some of these difficulties.
Avery is a PhD student in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. They are advised by Prof. Jennifer Mankoff. They completed their bachelor’s degree in Computer Science at the University of Illinois at Urbana-Champaign in 2019, where they were advised by Prof. Aditya Parameswaran and Prof. Karrie Karahalios. They are an NSF Fellow and an ARCS Scholar.
Their research focuses on applying computer science to create or improve technologies that serve people with disabilities. Their current work focuses on 1) representation of people with disabilities in digital technologies like avatars and generative AI tools, and 2) how to support people with fluctuating access needs like neurodiverse people and people with chronic or mental health conditions.
Feelings of loneliness are associated with poor physical and mental health. Detection of loneliness through passive sensing on personal devices can lead to the development of interventions aimed at decreasing rates of loneliness.
Objective: The aim of this study was to explore the potential of using passive sensing to infer levels of loneliness and to identify the corresponding behavioral patterns.
Methods: Data were collected from smartphones and Fitbits (Flex 2) of 160 college students over a semester. The participants completed the University of California, Los Angeles (UCLA) loneliness questionnaire at the beginning and end of the semester. For a classification purpose, the scores were categorized into high (questionnaire score>40) and low (≤40) levels of loneliness. Daily features were extracted from both devices to capture activity and mobility, communication and phone usage, and sleep behaviors. The features were then averaged to generate semester-level features. We used 3 analytic methods: (1) statistical analysis to provide an overview of loneliness in college students, (2) data mining using the Apriori algorithm to extract behavior patterns associated with loneliness, and (3) machine learning classification to infer the level of loneliness and the change in levels of loneliness using an ensemble of gradient boosting and logistic regression algorithms with feature selection in a leave-one-student-out cross-validation manner.
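Two mechanical details of the Methods — the binary loneliness cutoff on the UCLA score and the leave-one-student-out splitting — can be sketched as follows. This is a minimal illustration; the record structure and function names are hypothetical, not from the study's code:

```python
def loneliness_level(ucla_score):
    """Binarize a UCLA loneliness score: >40 is 'high', otherwise 'low'."""
    return "high" if ucla_score > 40 else "low"

def leave_one_student_out(records):
    """Yield (held_out, train, test) splits, holding out each student's records once."""
    students = sorted({r["student"] for r in records})
    for held_out in students:
        train = [r for r in records if r["student"] != held_out]
        test = [r for r in records if r["student"] == held_out]
        yield held_out, train, test
```

Holding out all of one student's data at a time ensures the classifier is always evaluated on a person it has never seen, which is the appropriate test for a model meant to generalize to new students.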
Results: The average loneliness score from the presurveys and postsurveys was above 43 (presurvey SD 9.4 and postsurvey SD 10.4), and the majority of participants fell into the high loneliness category (scores above 40) with 63.8% (102/160) in the presurvey and 58.8% (94/160) in the postsurvey. Scores greater than 1 standard deviation above the mean were observed in 12.5% (20/160) of the participants in both pre- and postsurvey scores. The majority of scores, however, fell between 1 standard deviation below and above the mean (pre=66.9% [107/160] and post=73.1% [117/160]).
Our machine learning pipeline achieved an accuracy of 80.2% in detecting the binary level of loneliness and an 88.4% accuracy in detecting change in the loneliness level. The mining of associations between classifier-selected behavioral features and loneliness indicated that compared with students with low loneliness, students with high levels of loneliness were spending less time outside of campus during evening hours on weekends and spending less time in places for social events in the evening on weekdays (support=17% and confidence=92%). The analysis also indicated that more activity and less sedentary behavior, especially in the evening, were associated with a decrease in levels of loneliness from the beginning of the semester to the end of it (support=31% and confidence=92%).
Conclusions: Passive sensing has the potential for detecting loneliness in college students and identifying the associated behavioral patterns. These findings highlight intervention opportunities through mobile technology to reduce the impact of loneliness on individuals’ health and well-being.