HulaMove: Waist Interaction

Xuhai Xu, Jiahao Li, Tianyi Yuan, Liang He, Xin Liu, Yukang Yan, Yuntao Wang, Yuanchun Shi, Jennifer Mankoff, Anind K. Dey:
HulaMove: Using Commodity IMU for Waist Interaction. CHI 2021: 503:1-503:16

We present HulaMove, a novel interaction technique that leverages the movement of the waist as a new eyes-free and hands-free input method for both the physical world and the virtual world. We first conducted a user study (N=12) to understand users’ ability to control their waist. We found that users could easily discriminate eight shifting directions and two rotating orientations, and quickly confirm actions by returning to the original position (quick return). We developed a design space with eight gestures for waist interaction based on the results and implemented an IMU-based real-time system. Using a hierarchical machine learning model, our system could recognize waist gestures at an accuracy of 97.5%. Finally, we conducted a second user study (N=12) for usability testing in both real-world scenarios and virtual reality settings. Our usability study indicated that HulaMove significantly reduced interaction time by 41.8% compared to a touch screen method, and greatly improved users’ sense of presence in the virtual world. This novel technique provides an additional input method when users’ eyes or hands are busy, accelerates users’ daily operations, and augments their immersive experience in the virtual world.
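The recognition pipeline is hierarchical: a first stage decides whether a waist gesture is happening at all, and a second stage classifies which gesture it is. Below is a minimal sketch of that two-stage structure, assuming windowed IMU features and off-the-shelf sklearn classifiers; the authors' exact features and model choices are not reproduced here.

```python
# Two-stage ("hierarchical") IMU gesture classifier sketch. The window
# features, model types, and label scheme are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class HierarchicalGestureClassifier:
    def __init__(self):
        self.detector = RandomForestClassifier(n_estimators=100)    # gesture vs. idle
        self.recognizer = RandomForestClassifier(n_estimators=100)  # which of 8 gestures

    def fit(self, X, is_gesture, gesture_label):
        """X: (n_windows, n_features) IMU window features."""
        self.detector.fit(X, is_gesture)
        gesture_windows = is_gesture == 1
        self.recognizer.fit(X[gesture_windows], gesture_label[gesture_windows])
        return self

    def predict(self, X):
        labels = np.full(len(X), -1)              # -1 means "no gesture"
        detected = self.detector.predict(X) == 1
        if detected.any():
            labels[detected] = self.recognizer.predict(X[detected])
        return labels
```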

Understanding Disabled Knitters


Taylor Gotfrid, Kelly Mack, Kathryn J. Lum, Evelyn Yang, Jessica K. Hodgins, Scott E. Hudson, Jennifer Mankoff: Stitching Together the Experiences of Disabled Knitters. CHI 2021: 488:1-488:14

Knitting is a popular craft that can be used to create customized fabric objects such as household items, clothing, and toys. Additionally, many knitters find knitting to be a relaxing and calming exercise. Little is known about how disabled knitters use and benefit from knitting, and what accessibility solutions and challenges they create and encounter. We conducted interviews with 16 experienced, disabled knitters and analyzed 20 threads from six forums that discussed accessible knitting to identify how and why disabled knitters knit, and what accessibility concerns remain. We additionally conducted an iterative design case study developing knitting tools for a knitter who found existing solutions insufficient. Our innovations improved the range of stitches she could produce. We conclude by arguing for the importance of improving tools for both pattern generation and modification, as well as adaptations or modifications to existing tools, such as looms, to make it easier to track progress.

KnitGIST: Generative Texture Design

Hofmann, M., Mankoff, J., & Hudson, S. E. (2020, October). KnitGIST: A Programming Synthesis Toolkit for Generating Functional Machine-Knitting Textures. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (pp. 1234-1247).

Automatic knitting machines are robust, digital fabrication devices that enable rapid and reliable production of attractive, functional objects by combining stitches to produce unique physical properties. However, no existing design tools support optimization for desirable physical and aesthetic knitted properties. We present KnitGIST (Generative Instantiation Synthesis Toolkit for knitting), a program synthesis pipeline and library for generating hand- and machine-knitting patterns by intuitively mapping objectives to tactics for texture design. KnitGIST generates a machine-knittable program in a domain-specific programming language.
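As a rough illustration of the abstract's central idea, mapping a design objective to a texture "tactic" that instantiates a stitch pattern, here is a toy sketch. The objective names, tactics, and "K"/"P" stitch-grid encoding are hypothetical; the real KnitGIST synthesizes programs in a knitting domain-specific language.

```python
# Toy objective-to-tactic synthesis sketch (illustrative only).
def ribbing(width, height, k=2, p=2):
    """k-by-p ribbing: alternating knit/purl columns, which adds stretch."""
    row = (["K"] * k + ["P"] * p) * (width // (k + p) + 1)
    return [row[:width] for _ in range(height)]

def seed_stitch(width, height):
    """Checkerboard of knits and purls, which lies flat and resists curling."""
    return [["K" if (r + c) % 2 == 0 else "P" for c in range(width)]
            for r in range(height)]

TACTICS = {"stretchy": ribbing, "lies_flat": seed_stitch}

def synthesize(objective, width=12, height=4):
    """Pick a tactic matching the objective and emit a stitch grid."""
    grid = TACTICS[objective](width, height)
    return "\n".join("".join(row) for row in grid)

print(synthesize("stretchy"))  # KKPPKKPPKKPP repeated over 4 rows
```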

Detecting Depression

A series of research projects based on the UWEXP study have focused on detecting depression in various ways. Three such papers are listed below.

Xuhai Xu, Prerna Chikersal, Janine M. Dutcher, Yasaman S. Sefidgar, Woosuk Seo, Michael J. Tumminia, Daniella K. Villalba, Sheldon Cohen, Kasey G. Creswell, J. David Creswell, Afsaneh Doryab, Paula S. Nurius, Eve A. Riskin, Anind K. Dey, Jennifer Mankoff:
Leveraging Collaborative-Filtering for Personalized Behavior Modeling: A Case Study of Depression Detection among College Students. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 5(1): 41:1-41:27 (2021)

The prevalence of mobile phones and wearable devices enables the passive capturing and modeling of human behavior at an unprecedented resolution and scale. Past research has demonstrated the capability of mobile sensing to model aspects of physical health, mental health, education, and work performance, etc. However, most of the algorithms and models proposed in previous work follow a one-size-fits-all (i.e., population modeling) approach that looks for common behaviors amongst all users, disregarding the fact that individuals can behave very differently, resulting in reduced model performance. Further, black-box models are often used that do not allow for interpretability and human behavior understanding. We present a new method to address the problems of personalized behavior classification and interpretability, and apply it to depression detection among college students. Inspired by the idea of collaborative-filtering, our method is a type of memory-based learning algorithm. It leverages the relevance of mobile-sensed behavior features among individuals to calculate personalized relevance weights, which are used to impute missing data and select features according to a specific modeling goal (e.g., whether the student has depressive symptoms) in different time epochs, i.e., times of the day and days of the week. It then compiles features from epochs using majority voting to obtain the final prediction. We apply our algorithm on a depression detection dataset collected from first-year college students with low data-missing rates and show that our method outperforms the state-of-the-art machine learning model by 5.1% in accuracy and 5.5% in F1 score. We further verify the pipeline-level generalizability of our approach by achieving similar results on a second dataset, with an average improvement of 3.4% across performance metrics. Beyond achieving better classification performance, our novel approach is further able to generate personalized interpretations of the models for each individual. These interpretations are supported by existing depression-related literature and can potentially inspire automated and personalized depression intervention design in the future.
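A condensed sketch of the memory-based, collaborative-filtering-style algorithm described above: other students are weighted by behavioral similarity to the target student, their labels are combined by a relevance-weighted vote within each epoch, and epoch-level predictions are merged by majority vote. The cosine similarity and voting rules below are simplified stand-ins for the paper's method.

```python
# Collaborative-filtering-style depression classification sketch
# (simplified; the paper's imputation and feature selection are omitted).
import numpy as np

def relevance_weights(target, others):
    """Cosine similarity between the target's features and each other student's."""
    t = target / (np.linalg.norm(target) + 1e-9)
    o = others / (np.linalg.norm(others, axis=1, keepdims=True) + 1e-9)
    return np.clip(o @ t, 1e-9, None)  # keep weights positive for voting

def predict_epoch(target_features, train_features, train_labels, k=10):
    """Relevance-weighted vote among the k most similar students."""
    w = relevance_weights(target_features, train_features)
    top = np.argsort(w)[-k:]
    return int(np.average(train_labels[top], weights=w[top]) >= 0.5)

def predict(target_by_epoch, train_by_epoch, train_labels):
    """Majority vote over epochs (e.g., times of day, days of week)."""
    votes = [predict_epoch(t, X, train_labels)
             for t, X in zip(target_by_epoch, train_by_epoch)]
    return int(np.mean(votes) >= 0.5)
```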

Leveraging Routine Behavior and Contextually-Filtered Features for Depression Detection among College Students. Xuhai Xu, Prerna Chikersal, Afsaneh Doryab, Daniella Villalba, Janine M. Dutcher, Michael J. Tumminia, Tim Althoff, Sheldon Cohen, Kasey Creswell, David Creswell, Jennifer Mankoff and Anind K. Dey. IMWUT, Article No. 116. 10.1145/3351274

The rate of depression in college students is rising, which is known to increase suicide risk, lower academic performance and double the likelihood of dropping out. Researchers have used passive mobile sensing technology to assess mental health. Existing work on finding relationships between mobile sensing and depression, as well as identifying depression via sensing features, mainly utilize single data channels or simply concatenate multiple channels. There is an opportunity to identify better features by reasoning about co-occurrence across multiple sensing channels. We present a new method to extract contextually filtered features on passively collected, time-series data from mobile devices via rule mining algorithms. We first employ association rule mining algorithms on two different user groups (e.g., depression vs. non-depression). We then introduce a new metric to select a subset of rules that identifies distinguishing behavior patterns between the two groups. Finally, we consider co-occurrence across the features that comprise the rules in a feature extraction stage to obtain contextually filtered features with which to train classifiers. Our results reveal that the best model with these features significantly outperforms a standard model that uses unimodal features by an average of 9.7% across a variety of metrics. We further verified the generalizability of our approach on a second dataset, and achieved very similar results.
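A hedged sketch of the rule-mining step, using mlxtend's Apriori implementation on one-hot (binarized) daily behavior features. The two-group support contrast below is a simplified stand-in for the paper's new rule-selection metric.

```python
# Frequent-itemset contrast between groups (simplified illustration).
import pandas as pd
from mlxtend.frequent_patterns import apriori

def contrast_itemsets(depressed_df, control_df, min_support=0.2, top=10):
    """Inputs are one-hot boolean DataFrames of daily behavior features.
    Returns frequent itemsets ranked by how much their support differs
    between the depression and non-depression groups."""
    dep = apriori(depressed_df, min_support=min_support, use_colnames=True)
    ctl = apriori(control_df, min_support=min_support, use_colnames=True)
    merged = dep.merge(ctl, on="itemsets", how="outer",
                       suffixes=("_dep", "_ctl")).fillna(0.0)
    merged["contrast"] = (merged["support_dep"] - merged["support_ctl"]).abs()
    return merged.sort_values("contrast", ascending=False).head(top)
```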

Chikersal, P., Doryab, A., Tumminia, M., Villalba, D., Dutcher, J., Liu, X., Cohen, S., Creswell, K., Mankoff, J., Creswell, D., Goel, M., & Dey, A. “Detecting Depression and Predicting its Onset Using Longitudinal Symptoms Captured by Passive Sensing: A Machine Learning Approach With Robust Feature Selection.” ACM Transactions on Computer-Human Interaction (TOCHI), 2020.

We present a machine learning approach that uses data from smartphones and fitness trackers of 138 college students to identify students who experienced depressive symptoms at the end of the semester and students whose depressive symptoms worsened over the semester. Our novel approach is a feature extraction technique that allows us to select meaningful features indicative of depressive symptoms from longitudinal data. It allows us to detect the presence of post-semester depressive symptoms with an accuracy of 85.7% and change in symptom severity with an accuracy of 85.4%. It also predicts these outcomes with an accuracy of >80%, 11-15 weeks before the end of the semester, allowing ample time for preemptive interventions. Our work has significant implications for the detection of health outcomes using longitudinal behavioral data and limited ground truth. By detecting change and predicting symptoms several weeks before their onset, our work also has implications for preventing depression.
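One common way to make feature selection "robust" on noisy longitudinal data is stability selection: keep only the features that a sparse model selects consistently across many resamples. The sketch below illustrates that general idea; it is not the paper's exact procedure.

```python
# Stability-selection-style feature selection sketch (assumed, simplified).
import numpy as np
from sklearn.linear_model import LogisticRegression

def stable_features(X, y, n_rounds=50, sample_frac=0.7, threshold=0.6, seed=0):
    """Return indices of features with nonzero L1-logistic coefficients in
    at least `threshold` of the subsample rounds."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        idx = rng.choice(len(X), size=int(sample_frac * len(X)), replace=False)
        model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        model.fit(X[idx], y[idx])
        counts += (model.coef_.ravel() != 0)
    return np.where(counts / n_rounds >= threshold)[0]
```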

Bar chart showing the importance of different features for detecting change in depression.
The chart shows the value of baseline, Bluetooth, calls, campus map, location, phone usage, sleep, and step features for detecting change in depression. The best feature set leads to 85.4% accuracy; all features except Bluetooth and calls improve on the baseline accuracy of 65.9%.

Gender in Online Doctor Reviews

Dunivin Z, Zadunayski L, Baskota U, Siek K, Mankoff J. Gender, Soft Skills, and Patient Experience in Online Physician Reviews: A Large-Scale Text Analysis. Journal of Medical Internet Research. 2020;22(7):e14455.

This study examines 154,305 Google reviews from across the United States, covering all medical specialties. Many patients consult online physician reviews, so we need to understand the effects of gender on review content. Reviewer gender was inferred from names.

Reviews were coded for overall patient experience (negative or positive) by collapsing a 5-star scale, and for general categories (process, positive/negative soft skills). We estimated binary regression models to examine relationships between physician rating, patient experience themes, physician gender, and reviewer gender.
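For readers unfamiliar with the statistics: odds ratios like the OR 1.99 reported below come from exponentiating the coefficients of a binary (logistic) regression. A small statsmodels sketch with placeholder data and hypothetical variable names:

```python
# Odds ratios from a logistic regression (illustrative data and columns).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "negative_review": rng.binomial(1, 0.3, 500),    # placeholder outcomes
    "physician_female": rng.binomial(1, 0.5, 500),
    "reviewer_female": rng.binomial(1, 0.5, 500),
})
model = smf.logit("negative_review ~ physician_female + reviewer_female", df).fit()
print(np.exp(model.params))  # exponentiated coefficients are odds ratios
print(model.pvalues)
```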

We found considerable bias against female physicians: reviews of female physicians were considerably more negative than those of male physicians (OR 1.99; P<.001). Critiques of female physicians more often focused on soft skills such as amicability, disrespect, and candor. Negative reviews typically contained words such as "rude," "arrogant," and "condescending."

Reviews written by female patients were also more likely to mention disrespect (OR 1.27, P<.001), but female patients were less likely to report disrespect from female doctors than expected.

Finally, patient experiences with the bureaucratic process, including issues such as cost of care, also impacted reviews. Overall, lower patient satisfaction correlated with high physician dominance (e.g., poor information sharing or use of medical jargon).

Limitations of our work include the lack of definitive (or non-binary) information about gender, and the fact that we do not know the actual outcomes of treatment for reviewers.

Even so, it seems critical that readers attend to who the reviewers are when reading online reviews. Review sites may also want to provide information about gender differences, control for gender when presenting composite ratings for physicians, and help users write less biased reviews. Reviewers should be aware of their own gender biases and assess their reviews accordingly (http://slowe.github.io/genderbias/).

Living Disability Theory

A picture of a carved wooden cane in greens and blues

It was my honor this year to participate in an auto-ethnographic effort to explore accessibility research from a combination of personal and theoretical perspectives. In the process, and thanks to my amazing co-authors, I learned so much about myself, disability studies, ableism and accessibility.

Best Paper Award. Hofmann, M., Kasnitz, D., Mankoff, J. and Bennett, C. L. (2020) Living Disability Theory: Reflections on Access, Research, and Design. In Proceedings of ASSETS 2020, 4:1-4:13.

Abstract: Accessibility research and disability studies are intertwined fields focused on, respectively, building a world more inclusive of people with disability and understanding and elevating the lived experiences of disabled people. Accessibility research tends to focus on creating technology related to impairment, while disability studies focuses on understanding disability and advocating against ableist systems. Our paper presents a reflexive analysis of the experiences of three accessibility researchers and one disability studies scholar. We focus on moments when our disability was misunderstood and causes such as expecting clearly defined impairments. We derive three themes: ableism in research, oversimplification of disability, and human relationships around disability. From these themes, we suggest paths toward more strongly integrating disability studies perspectives and disabled people into accessibility research.

Detecting Loneliness

Feelings of loneliness are associated with poor physical and mental health. Detection of loneliness through passive sensing on personal devices can lead to the development of interventions aimed at decreasing rates of loneliness.

Doryab, Afsaneh, et al. “Identifying Behavioral Phenotypes of Loneliness and Social Isolation with Passive Sensing: Statistical Analysis, Data Mining and Machine Learning of Smartphone and Fitbit Data.” JMIR mHealth and uHealth 7.7 (2019): e13209.

Objective: The aim of this study was to explore the potential of using passive sensing to infer levels of loneliness and to identify the corresponding behavioral patterns.

Methods: Data were collected from smartphones and Fitbits (Flex 2) of 160 college students over a semester. The participants completed the University of California, Los Angeles (UCLA) loneliness questionnaire at the beginning and end of the semester. For classification purposes, the scores were categorized into high (questionnaire score>40) and low (≤40) levels of loneliness. Daily features were extracted from both devices to capture activity and mobility, communication and phone usage, and sleep behaviors. The features were then averaged to generate semester-level features. We used 3 analytic methods: (1) statistical analysis to provide an overview of loneliness in college students, (2) data mining using the Apriori algorithm to extract behavior patterns associated with loneliness, and (3) machine learning classification to infer the level of loneliness and the change in levels of loneliness using an ensemble of gradient boosting and logistic regression algorithms with feature selection in a leave-one-student-out cross-validation manner.
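A condensed sketch of the classification setup in (3), assuming one semester-level feature row per student: an ensemble of gradient boosting and logistic regression evaluated with leave-one-student-out cross-validation. The feature selection step and hyperparameter choices from the paper are omitted.

```python
# Leave-one-student-out evaluation of a soft-voting ensemble (sketch).
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def loneliness_cv_accuracy(X, y, student_ids):
    """X: semester-level features; y: high/low loneliness; one row per student."""
    clf = VotingClassifier([
        ("gb", GradientBoostingClassifier()),
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ], voting="soft")
    scores = cross_val_score(clf, X, y, groups=student_ids, cv=LeaveOneGroupOut())
    return scores.mean()
```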

Results: The average loneliness score from the presurveys and postsurveys was above 43 (presurvey SD 9.4 and postsurvey SD 10.4), and the majority of participants fell into the high loneliness category (scores above 40) with 63.8% (102/160) in the presurvey and 58.8% (94/160) in the postsurvey. Scores greater than 1 standard deviation above the mean were observed in 12.5% (20/160) of the participants in both pre- and postsurvey scores. The majority of scores, however, fell between 1 standard deviation below and above the mean (pre=66.9% [107/160] and post=73.1% [117/160]).

Our machine learning pipeline achieved an accuracy of 80.2% in detecting the binary level of loneliness and an 88.4% accuracy in detecting change in the loneliness level. The mining of associations between classifier-selected behavioral features and loneliness indicated that compared with students with low loneliness, students with high levels of loneliness were spending less time outside of campus during evening hours on weekends and spending less time in places for social events in the evening on weekdays (support=17% and confidence=92%). The analysis also indicated that more activity and less sedentary behavior, especially in the evening, was associated with a decrease in levels of loneliness from the beginning of the semester to the end of it (support=31% and confidence=92%).

Conclusions: Passive sensing has the potential for detecting loneliness in college students and identifying the associated behavioral patterns. These findings highlight intervention opportunities through mobile technology to reduce the impact of loneliness on individuals’ health and well-being.

News: Smartphones and Fitbits can spot loneliness in its tracks, Science 101

The Limits of Expert Text Entry Speed

Improving mobile keyboard typing speed increases in value as more tasks move to a mobile setting. Autocorrect is a powerful way to reduce the time it takes to manually fix typing errors, which increases typing speed. However, recent user studies of autocorrect uncovered an unexplored side effect: participants' aversion to typing errors despite autocorrect. We present the first computational model of typing on keyboards with autocorrect, which enables precise study of expert typists' aversion to typing errors on such keyboards. Unlike empirical typing studies that last days, our model evaluates the effects of typists' aversion to typing errors for any autocorrect accuracy in seconds. We show that typists' aversion to typing errors adds a self-imposed limit on upper-bound typing speeds, which decreases the value of highly accurate autocorrect. Our findings motivate future designs of keyboards with autocorrect that reduce typists' aversion to typing errors to increase typing speeds.
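To make the argument concrete, here is a toy model, not the paper's actual computational model, of why error aversion self-limits speed: a typist who stops to hand-fix residual errors pays a time cost that higher autocorrect accuracy only partially removes.

```python
# Toy expected-typing-speed model under error aversion (illustrative only).
def expected_wpm(base_wpm, err_rate, autocorrect_acc, aversion,
                 manual_fix_cost_s=2.0):
    """base_wpm: unimpeded typing speed; err_rate: P(word has an error);
    autocorrect_acc: P(autocorrect fixes an erroneous word);
    aversion: fraction of residual errors the typist stops to fix by hand."""
    t_word = 60.0 / base_wpm                       # seconds per word
    residual = err_rate * (1 - autocorrect_acc)    # errors autocorrect misses
    t_fix = residual * aversion * manual_fix_cost_s
    return 60.0 / (t_word + t_fix)

for acc in (0.0, 0.5, 0.9, 1.0):  # speed recovers as autocorrect improves
    print(acc, round(expected_wpm(40, 0.1, acc, aversion=1.0), 1))
```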

The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect. Nikola Banovic, Ticha Sethapakdi, Yasasvi Hari, Anind K. Dey, Jennifer Mankoff. MobileHCI 2019.

A picture of a Samsung phone. The screen says: "Block 2. Trial 6 of 10. This camera takes nice photographs." The user has begun typing, with errors: "this camera tankes l". Error correction offers "tankes," "tankers," and "takes," and a soft keyboard is shown below.

An example mobile device with a soft keyboard: A) text entry area, which in our study contained study progress, the current phrase to transcribe, and an area for transcribed characters, B) automatically suggested words, and C) a miniQWERTY soft keyboard with autocorrect.

A bar plot showing typing speed (WPM, y-axis) against accuracy (0 to 1). The bars stay near 32 WPM across the range, from 32 WPM at an accuracy of 0 to approximately 32 WPM at an accuracy of 1.
Our model estimated expected mean typing speeds (lines) for different levels of typing error rate aversion (e) compared to mean empirical typing speed with automatic correction and suggestion (bar plot) in WPM across Accuracy. Error bars represent 95% confidence intervals.
Four bar plots showing error rates in the uncorrected, corrected, autocorrected, and manually corrected conditions. As accuracy goes from 0 to 1, error rates for the uncorrected condition range from approximately 0 to 0.05; error rates for the corrected condition fall from .10 to .005; error rates for the autocorrected condition range from 0 to about .1; and error rates for the manual condition are variable but all below 0.05.
Median empirical error rates across Accuracy in session 3 with automated correction and suggestion. Error bars represent minimum and maximum error rate values, and dots represent outliers.

KnitPick: Manipulating Texture

Knitting creates complex, soft objects with unique and controllable texture properties that can be used to create interactive objects. However, little work addresses the challenges of using knitted textures. We present KnitPick: a pipeline for interpreting pre-existing hand-knitting texture patterns into a directed-graph representation of knittable structures (KnitGraphs), which can be output to machine- and hand-knitting instructions. Using KnitPick, we contribute a measured and photographed data set of 300 knitted textures. Based on findings from this data set, we contribute two algorithms for manipulating KnitGraphs. KnitCarving shapes a graph while respecting a texture, and KnitPatching combines graphs with disparate textures while maintaining a consistent shape. Using these algorithms and textures in our data set, we are able to create three knitting-based interactions: roll, tug, and slide. KnitPick is the first system to bridge the gap between hand- and machine-knitting when creating complex knitted textures.
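A bare-bones sketch of what a stitch-level directed graph ("KnitGraph") might look like, using networkx; the node and edge attributes are illustrative. The naive column removal below ignores texture, which is precisely the problem the real KnitCarving algorithm solves.

```python
# Minimal KnitGraph-style structure and naive column "carving" (sketch).
import networkx as nx

def stockinette_graph(rows, cols):
    """Grid of knit stitches with course (row) and wale (column) edges."""
    g = nx.DiGraph()
    for r in range(rows):
        for c in range(cols):
            g.add_node((r, c), stitch="knit")
            if c > 0:
                g.add_edge((r, c - 1), (r, c), kind="course")  # along a row
            if r > 0:
                g.add_edge((r - 1, c), (r, c), kind="wale")    # between rows
    return g

def carve_column(g, col):
    """Remove one column of stitches (real KnitCarving also preserves texture)."""
    g.remove_nodes_from([n for n in list(g) if n[1] == col])
    return g

g = carve_column(stockinette_graph(4, 6), col=3)
print(g.number_of_nodes())  # 4 rows x 5 remaining columns = 20
```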

KnitPick: Programming and Modifying Complex Knitted Textures for Machine and Hand Knitting. Megan Hofmann, Lea Albaugh, Ticha Sethapakdi, Jessica Hodgins, Scott E. Hudson, James McCann, Jennifer Mankoff. UIST 2019. The KnitPick data set can be found here.

A picture of a KnitSpeak file, which is compiled into a KnitGraph (which can be modified using carving and patching) and then compiled to knitout, which can be printed on a knitting machine. Below the graph is a picture of different sorts of lace textures supported by KnitPick.
KnitPick converts KnitSpeak into KnitGraphs which can be carved, patched and output to knitted results
A photograph of the table with our data measurement setup, along with piles of patches that are about to be measured and have recently been measured. One patch is attached to the rods and clips used for stretching.
Data set measurement setup, including camera, scale, and stretching rig
A series of five images, each progressively narrower than the previous. Each image is a knitted texture with four stars on it. They are labeled (a) original swatch, (b) 6 columns removed, (c) 9 columns removed, (d) 12 columns removed, (e) 15 columns removed.
The above images show a progression from the original Star texture to the same texture with 15 columns removed by texture carving. These photographs were shown to crowd-workers who rated their similarity. Even with a whole repetition width removed from the Stars, the pattern remains a recognizable star pattern.

Passively-sensing Discrimination

See the UW News article featuring this study!

A deeper understanding of how discrimination impacts psychological health and well-being of students would allow us to better protect individuals at risk and support those who encounter discrimination. While the link between discrimination and diminished psychological and physical well-being is well established, existing research largely focuses on chronic discrimination and long-term outcomes. A better understanding of the short-term behavioral correlates of discrimination events could help us to concretely quantify the experience, which in turn could support policy and intervention design. In this paper we specifically examine, for the first time, what behaviors change and in what ways in relation to discrimination. We use actively-reported and passively-measured markers of health and well-being in a sample of 209 first-year college students over the course of two academic quarters. We examine changes in indicators of psychological state in relation to reports of unfair treatment in terms of five categories of behaviors: physical activity, phone usage, social interaction, mobility, and sleep. We find that students who encounter unfair treatment become more physically active, interact more with their phone in the morning, make more calls in the evening, and spend less time in bed on the day of the event. Some of these patterns continue the next day.
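One plausible way to test day-level behavior change around a reported event is a mixed model with a random intercept per student, sketched below with statsmodels. The column names are hypothetical and the paper's exact models may differ.

```python
# Day-level event effect on a behavior, random intercept per student (sketch).
import pandas as pd
import statsmodels.formula.api as smf

def event_day_effect(df: pd.DataFrame, behavior: str):
    """df: one row per student-day, with columns `pid`, `unfair_day` (0/1),
    and the behavior of interest (e.g., `steps`, `morning_screen_min`)."""
    result = smf.mixedlm(f"{behavior} ~ unfair_day", df, groups=df["pid"]).fit()
    return result.params["unfair_day"], result.pvalues["unfair_day"]
```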

Passively-sensed Behavioral Correlates of Discrimination Events in College Students. Yasaman S. Sefidgar, Woosuk Seo, Kevin S. Kuehn, Tim Althoff, Anne Browning, Eve Ann Riskin, Paula S. Nurius, Anind K Dey, Jennifer Mankoff. CSCW 2019.

A bar plot sorted by number of reports, with about 100 reports of unfair treatment based on national origin, 90 based on intelligence, 70 based on gender, 60 based on appearance, 50 on age, 45 on sexual orientation, 35 on major, 30 on weight, 30 on height, 20 on income, 10 on disability, 10 on religion, and 10 on learning.
Breakdown of 448 reports of unfair treatment by type. National, Orientation, and Learning refer to ancestry or national origin, sexual orientation, and learning disability respectively. See Table 3 for details of all categories. Participants were able to report multiple incidents of unfair treatment, possibly of different types, in each report. As described in the paper, we do not have data on unfair treatment based on race.
A heat map showing sensor data collected by day in 5 categories: activity, screen, locations, Fitbit, and calls.
A heat map showing compliance with sensor data collection. Sensor data availability for each day of the study is shown in terms of the number of participants whose data is available on a given day. Weeks of the study are marked on the horizontal axis while different sensors appear on the vertical axis. Important calendar dates (e.g., start/end of the quarter and exam periods) are highlighted, as are the weeks of daily surveys. The brighter the cells for a sensor, the larger the number of people contributing data for that sensor. Event-based sensors (e.g., calls) are not as bright as continuously sampled sensors (e.g., location), as expected. There was a technical issue in the data collection application in the middle of the study, visible as a dark vertical line around the beginning of April.
A diagram showing compliance with surveys, organized by week of study. One line shows compliance with the large surveys given at pre, mid, and post, which drops from 99% to 94% to 84%. The other line shows average weekly compliance with EMAs, which rises to 93% in the second week but then drops slowly (with some variability) to 89%.
Timeline and completion rate of pre, mid, and post questionnaires as well as EMA surveys. The y-axis shows the completion rates and is narrowed to the range 50-100%. The completion rates of the pre, mid, and post questionnaires are percentages of the original pool of 209 participants, whereas EMA completion rates are based on the 176 participants who completed the study. EMA completion rates are computed as the average completion rate of the surveys administered in a given week of the study. School-related events (i.e., start and end of quarters as well as exam periods) are marked. Dark blue bars (Daily Survey) show the weeks when participants answered surveys every day, four times a day.
Bar plot showing the significance of morning screen use, calls, minutes asleep, time in bed, range of activities, number of steps, anxiety, depression, and frustration on the day before, the day of, and the day after unfair treatment. All but minutes asleep are significant at p=.05 or below on the day of discrimination, but this drops off afterward.
Patterns of feature significance from the day before to two days after the discrimination event. The shortest bars represent the highest significance values (e.g., depressed and frustrated on day 0; depressed on day 1; morning screen use on day 2). There are no significant differences the day before. Most short-term relationships exist on the day of the event; a few appear on the next day (day 1). On the third day, one significant difference, repeated from the first day, is observed.