Practices and Needs of Mobile Sensing Researchers

Passive mobile sensing for the purpose of human state modeling is a fast-growing area. It has been applied to solve a wide range of behavior-related problems, including physical and mental health monitoring, affective computing, activity recognition, routine modeling, etc. However, in spite of the emerging literature that has investigated a wide range of application scenarios, there is little work focusing on the lessons learned by researchers, or on guidance for researchers who are new to this approach. How do researchers conduct these types of research studies? Is there any established common practice when applying mobile sensing across different application areas? What are the pain points and needs that they frequently encounter? Answering these questions is an important step in the maturing of this growing sub-field of ubiquitous computing, and can benefit a wide range of audiences. It can serve to educate researchers who have growing interest in this area but little to no previous experience. Intermediate researchers may also find the results a helpful reference for improving their skills. Moreover, it can shed light on design guidelines for a future toolkit that could facilitate the research process. In this paper, we fill this gap and answer these questions by conducting semi-structured interviews with ten experienced researchers from four countries to understand their practices and pain points when conducting their research. Our results reveal a common pipeline that researchers have adopted, and identify major challenges that do not appear in published work but that researchers often encounter. Based on the results of our interviews, we discuss practical suggestions for novice researchers and high-level design principles for a toolkit that can accelerate passive mobile sensing research.

Understanding practices and needs of researchers in human state modeling by passive mobile sensing. Xu, Xuhai, Jennifer Mankoff, and Anind K. Dey. CCF Transactions on Pervasive Computing and Interaction (2021): 1-23.

College during COVID

Mental health of UW students during Spring 2020 varied tremendously: the challenges of online learning during the pandemic were entwined with social isolation, family demands and socioeconomic pressures. In this context, individual differences in coping mechanisms had a big impact. The findings of this paper underline the need for interventions oriented towards problem-focused coping and suggest opportunities for peer role modeling.

College from home during COVID-19: A mixed-methods study of heterogeneous experiences. Morris ME, Kuehn KS, Brown J, Nurius PS, Zhang H, Sefidgar YS, Xuhai X, Riskin EA, Dey A, Consolvo S, Mankoff JC. (2021) PLoS ONE 16(6): e0251580. (reported in UW News and the Hechinger Report)

A line plot showing anxiety (Y axis, varying from 0 to 4) over time (X axis). Each student in the study is plotted as a different line over each day of the quarter. The plot overall looks very messy, but two things are clear: every student's trajectory is very different from every other's, with all of them going up and down multiple times; and the average, shown as a fit line, is fairly low and slightly increasing (from about 0.75 to just under 1).
Heterogeneity in individuals’ levels of anxiety (reported in ESM). Individual trajectories of anxiety are shown in different line types and colors (dotted versus solid lines represent different participants). Although the mean level of anxiety is 1 on a scale of 0–4, the significant variation in responses invites examination of individuals and subgroups.

This mixed-methods study examined the experiences of college students during the COVID-19 pandemic through surveys, experience sampling data collected over two academic quarters (Spring 2019 n1 = 253; Spring 2020 n2 = 147), and semi-structured interviews with 27 undergraduate students.

There were no marked changes in mean levels of depressive symptoms, anxiety, stress, or loneliness between 2019 and 2020, or over the course of the Spring 2020 term. Students in both the 2019 and 2020 cohorts who indicated psychosocial vulnerability at the initial assessment showed worse psychosocial functioning throughout the entire Spring term relative to other students. However, rates of distress increased faster in 2020 than in 2019 for these individuals. Across individuals, homogeneity of variance tests and multi-level models revealed significant heterogeneity, suggesting the need to examine not just means but the variation in individuals' experiences.
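As a rough illustration of the multi-level modeling mentioned above, the sketch below fits a mixed-effects model with a random intercept and slope per participant to synthetic weekly anxiety data. The column names, data generation, and model specification are assumptions for illustration, not the study's actual analysis code.

```python
# Minimal sketch, assuming weekly ESM data with one row per participant-week.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: each participant gets their own baseline and trend,
# mimicking the heterogeneity the paper emphasizes.
rng = np.random.default_rng(0)
n_participants, n_weeks = 100, 10
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(n_participants), n_weeks),
    "week": np.tile(np.arange(n_weeks), n_participants),
})
baseline = rng.normal(1.0, 0.5, n_participants)
slope = rng.normal(0.02, 0.05, n_participants)
df["anxiety"] = (baseline[df.participant_id] + slope[df.participant_id] * df.week
                 + rng.normal(0, 0.3, len(df))).clip(0, 4)

# Mixed-effects model: the fixed effect of week is the average trend over the
# quarter; the random-effect variances quantify how much individual
# trajectories diverge from that average.
model = smf.mixedlm("anxiety ~ week", data=df,
                    groups=df["participant_id"], re_formula="~week")
print(model.fit().summary())
```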

Thematic analysis of interviews characterizes these varied experiences, describing the contexts for students’ challenges and strategies. This analysis highlights the interweaving of psychosocial and academic distress: Challenges such as isolation from peers, lack of interactivity with instructors, and difficulty adjusting to family needs had both an emotional and academic toll. Strategies for adjusting to this new context included initiating remote study and hangout sessions with peers, as well as self-learning. In these and other strategies, students used technologies in different ways and for different purposes than they had previously. Supporting qualitative insight about adaptive responses were quantitative findings that students who used more problem-focused forms of coping reported fewer mental health symptoms over the course of the pandemic, even though they perceived their stress as more severe. 

Example quotes:

"I like to build things and stuff like that. I like to see it in person and feel it. So the fact that everything was online…. I'm just basically reading all the time. I just couldn't learn that way."

"Insomnia has been pretty hard for me . . . I would spend a lot of time lying in bed not doing anything when I had a lot of homework to do the next day. So then I would become stressed about whether I'll be able to finish that homework or not."

“It was challenging … being independent and then being pushed back home. It’s a huge change because now you have more rules again”

"For a few of my classes I feel like actually [I] was self-learning because sometimes it's hard to sit through hours of lectures and watch it."

"I would initiate… we have a study group chat and every day I would be like 'Hey I'm going to be on at this time starting at this time.' So then I gave them time to all have the room open for Zoom and stuff. Okay and then any time after that they can join and then said I [would] wait like maybe 30 minutes or even an hour…. And then people join and then we work maybe … till midnight, a little bit past midnight."

Detecting Loneliness

Feelings of loneliness are associated with poor physical and mental health. Detection of loneliness through passive sensing on personal devices can lead to the development of interventions aimed at decreasing rates of loneliness.

Doryab, Afsaneh, et al. “Identifying Behavioral Phenotypes of Loneliness and Social Isolation with Passive Sensing: Statistical Analysis, Data Mining and Machine Learning of Smartphone and Fitbit Data.” JMIR mHealth and uHealth 7.7 (2019): e13209.

Objective: The aim of this study was to explore the potential of using passive sensing to infer levels of loneliness and to identify the corresponding behavioral patterns.

Methods: Data were collected from smartphones and Fitbits (Flex 2) of 160 college students over a semester. The participants completed the University of California, Los Angeles (UCLA) loneliness questionnaire at the beginning and end of the semester. For classification purposes, the scores were categorized into high (questionnaire score>40) and low (≤40) levels of loneliness. Daily features were extracted from both devices to capture activity and mobility, communication and phone usage, and sleep behaviors. The features were then averaged to generate semester-level features. We used 3 analytic methods: (1) statistical analysis to provide an overview of loneliness in college students, (2) data mining using the Apriori algorithm to extract behavior patterns associated with loneliness, and (3) machine learning classification to infer the level of loneliness and the change in levels of loneliness using an ensemble of gradient boosting and logistic regression algorithms with feature selection in a leave-one-student-out cross-validation manner.
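A rough sketch of the classification step described above (not the authors' implementation): feature selection plus an ensemble of gradient boosting and logistic regression, evaluated with leave-one-student-out cross-validation. The synthetic features and parameter settings are assumptions.

```python
# Minimal sketch of the loneliness classification pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: semester-level behavioral features (one row per student),
# y: binary loneliness label (UCLA score > 40), groups: student IDs.
X = np.random.rand(160, 50)
y = np.random.randint(0, 2, 160)
groups = np.arange(160)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),          # simple univariate feature selection
    VotingClassifier(
        estimators=[
            ("gb", GradientBoostingClassifier()),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="soft",                      # ensemble by averaging probabilities
    ),
)

# Leave-one-student-out cross-validation: each student's data is held out once.
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print(f"Mean accuracy: {scores.mean():.3f}")
```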

Results: The average loneliness score from the presurveys and postsurveys was above 43 (presurvey SD 9.4 and postsurvey SD 10.4), and the majority of participants fell into the high loneliness category (scores above 40) with 63.8% (102/160) in the presurvey and 58.8% (94/160) in the postsurvey. Scores greater than 1 standard deviation above the mean were observed in 12.5% (20/160) of the participants in both pre- and postsurvey scores. The majority of scores, however, fell between 1 standard deviation below and above the mean (pre=66.9% [107/160] and post=73.1% [117/160]).

Our machine learning pipeline achieved an accuracy of 80.2% in detecting the binary level of loneliness and an 88.4% accuracy in detecting change in the loneliness level. The mining of associations between classifier-selected behavioral features and loneliness indicated that compared with students with low loneliness, students with high levels of loneliness were spending less time outside of campus during evening hours on weekends and spending less time in places for social events in the evening on weekdays (support=17% and confidence=92%). The analysis also indicated that more activity and less sedentary behavior, especially in the evening, were associated with a decrease in levels of loneliness from the beginning of the semester to the end (support=31% and confidence=92%).
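A minimal sketch of the association-rule mining step, using the mlxtend implementation of Apriori over hypothetical, discretized behavioral items. The transaction encoding below is illustrative, not the paper's preprocessing; only the support and confidence thresholds echo the values reported above.

```python
# Minimal sketch of mining behavior patterns associated with loneliness.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is a student; each column a discretized behavioral item or the label.
transactions = pd.DataFrame(
    {
        "low_evening_off_campus_time": [True, True, False, True],
        "low_social_place_time":       [True, True, False, False],
        "high_sedentary_evening":      [True, False, True, False],
        "high_loneliness":             [True, True, False, False],
    }
)

itemsets = apriori(transactions, min_support=0.17, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.9)

# Keep rules whose consequent is the loneliness label.
lonely_rules = rules[rules["consequents"].apply(lambda c: "high_loneliness" in c)]
print(lonely_rules[["antecedents", "consequents", "support", "confidence"]])
```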

Conclusions: Passive sensing has the potential for detecting loneliness in college students and identifying the associated behavioral patterns. These findings highlight intervention opportunities through mobile technology to reduce the impact of loneliness on individuals’ health and well-being.

News: Smartphones and Fitbits can spot loneliness in its tracks, Science 101

The Limits of Expert Text Entry Speed

Improving mobile keyboard typing speed increases in value as more tasks move to a mobile setting. Autocorrect is a powerful way to reduce the time it takes to manually fix typing errors, which increases typing speed. However, recent user studies of autocorrect uncovered an unexplored side-effect: participants' aversion to typing errors despite autocorrect. We present the first computational model of typing on keyboards with autocorrect, which enables precise study of expert typists' aversion to typing errors on such keyboards. Unlike empirical typing studies that last days, our model evaluates the effects of typists' aversion to typing errors for any autocorrect accuracy in seconds. We show that typists' aversion to typing errors adds a self-imposed limit on upper-bound typing speeds, which decreases the value of highly accurate autocorrect. Our findings motivate future designs of keyboards with autocorrect that reduce typists' aversion to typing errors to increase typing speeds.

The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect. Nikola Banovic, Ticha Sethapakdi, Yasasvi Hari, Anind K. Dey, Jennifer Mankoff. MobileHCI 2019.
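The sketch below is not the paper's model; it only illustrates the qualitative relationship studied above: expected typing speed as a function of autocorrect accuracy when a typist slows down to avoid some share of the errors that autocorrect would not fix. The functional form and all parameter values are assumptions.

```python
# Illustrative sketch of a self-imposed speed limit due to error aversion.
def expected_wpm(autocorrect_accuracy, error_aversion,
                 unconstrained_wpm=45.0, chars_per_word=5.0,
                 base_char_error_rate=0.05, slowdown_per_avoided_error=1.5):
    """Expected words per minute for a typist who slows down to avoid the
    share `error_aversion` of residual (post-autocorrect) errors."""
    residual = base_char_error_rate * (1.0 - autocorrect_accuracy)
    avoided_per_char = error_aversion * residual
    # Time per word grows with the number of errors the typist chooses to
    # prevent by typing more carefully.
    base_time_per_word = 1.0 / unconstrained_wpm
    time_per_word = base_time_per_word * (
        1.0 + slowdown_per_avoided_error * avoided_per_char * chars_per_word)
    return 1.0 / time_per_word

for acc in (0.0, 0.5, 0.9, 1.0):
    print(acc, round(expected_wpm(acc, error_aversion=0.8), 1))
```

With a high aversion parameter, typing speed stays well below the unconstrained rate until autocorrect accuracy is very high, which is the "self-imposed limit" effect described in the abstract.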

A picture of a Samsung phone. The screen says: Block 2. Trial 6 of 10. this camera takes nice photographs. The user has begun typing with errors: "this camera tankes l". Error correction offers 'tankes', 'tankers', and 'takes', and a soft keyboard is shown below that.

An example mobile device with a soft keyboard: A) text entry area, which in our study contained study progress, the current phrase to transcribe, and an area for transcribed characters, B) automatically suggested words, and C) a miniQWERTY soft keyboard with autocorrect.

A bar plot showing typing speed (WPM, Y axis) against accuracy (0 to 1). The bars start at 32 WPM (for an accuracy of 0) and go up to approximately 32 WPM (for an accuracy of 1).
Our model estimated expected mean typing speeds (lines) for different levels of typing error rate aversion (e) compared to mean empirical typing speed with automatic correction and suggestion (bar plot) in WPM across Accuracy. Error bars represent 95% confidence intervals.
Four bar plots showing error rates in the uncorrected, corrected, autocorrected, and manually corrected conditions. Error rates for the uncorrected condition range from approximately 0 to 0.05 as accuracy increases; error rates for the corrected condition range from 0.10 to 0.005 as accuracy goes from 0 to 1; error rates for the autocorrected condition range from 0 to about 0.1 as accuracy goes from 0 to 1; and error rates for manual correction are variable but all below 0.05 as accuracy goes from 0 to 1.
Median empirical error rates across Accuracy in session 3 with automated correction and suggestion. Error bars represent minimum and maximum error rate values, and dots represent outliers

Automatically Tracking and Executing Green Actions

We believe that self-reporting is a limiting factor in the original vision of StepGreen.org, and this component of our research has begun to explore alternatives. For example, we showed that financial data can be used to extract footprint information [1], and in collaboration with researchers at Intel and the University of Washington, we used a mobile device to track and visualize green transportation behavior in the UbiGreen project (published at CHI 2009 [2]). We have also worked on algorithms to predict the indoor location and home arrival times of residential building occupants so as to automatically minimize thermostat use [3, 4]. Finally, we moved from individual behavioral remedies toward structural remedies by exploring tools that could help tenants pick greener apartments [5].

[1] J. Schwartz, J. Mankoff, H. Scott Matthews. Reflections of everyday activity in spending data. In Proceedings of CHI 2009.  (Note). (pdf)

[2] J. Froehlich, T. Dillahunt, P. Klasnja, J. Mankoff, S. Consolvo, B. Harrison, J. A. Landay, UbiGreen: Investigating a Mobile Tool for Tracking and Supporting Green Transportation Habits. In Proceedings of CHI 2009. (Full paper) (pdf)

[3] C. Koehler, N. Banovic, I. Oakley, J. Mankoff, A. K. Dey. Indoor-ALPS: An Adaptive Indoor Location Prediction System. In Proceedings of UbiComp 2014.

[4] C. Koehler, B. D. Ziebart, J. Mankoff, A. K. Dey. TherML: Occupancy Prediction for Thermostat Control. In Proceedings of UbiComp 2013.

[5] J. Mankoff, D. Onafuwa, K. Early, N. Vyas, V. Kamath Cannanure. Understanding the Needs of Prospective Tenants. In Proceedings of COMPASS 2018: 36:1-36:10.

Understanding gender equity in author order assignment

Academic success and promotion are heavily influenced by publication record. In many fields, including computer science, multi-author papers are the norm. Evidence from other fields shows that norms for ordering author names can influence the assignment of credit. We interviewed 38 students and faculty in human-computer interaction (HCI) and machine learning (ML) at two institutions to determine factors related to assignment of author order in collaborative publication in the field of computer science. We found that women were concerned with author order earlier in the process:

Our female interviewees reported raising author order in discussion earlier in the process than our male interviewees did.

Interview outcomes informed metrics for our bibliometric analysis of gender and collaboration in papers published between 1996 and 2016 in three top HCI and ML conferences. We found expected results overall — being the most junior author increased the likelihood of first authorship, while being the most senior author increased the likelihood of last authorship. However, these effects disappeared or even reversed for women authors:

Comparison of regression weights for author rank (blue) with author rank crossed with gender (orange). The regression predicted author position (first, middle, last).
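As an illustration of the kind of regression described above (not the paper's exact analysis), the sketch below fits a multinomial logit of author position on author rank, gender, and their interaction. The data, column names, and model choice are assumptions.

```python
# Minimal sketch of a regression with a rank-by-gender interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per author-paper pair.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "rank": rng.integers(1, 6, n),                 # seniority rank in the author list
    "gender": rng.choice(["woman", "man"], n),
    "position": rng.choice(["first", "middle", "last"], n),
})
df["position_code"] = df["position"].map({"first": 0, "middle": 1, "last": 2})

# Multinomial logit of author position; the rank:gender interaction terms
# capture whether the effect of seniority on authorship position differs for
# women, as in the figure above.
model = smf.mnlogit("position_code ~ rank * gender", data=df)
print(model.fit(disp=False).summary())
```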

Based on our findings, we make recommendations for assignment of credit in multi-author papers and interpretation of author order, particularly with respect to how these factors affect women.

EDigs

Jennifer Mankoff, Dimeji Onafuwa, Kirstin Early, Nidhi Vyas, Vikram Kamath Cannanure: Understanding the Needs of Prospective Tenants. COMPASS 2018: 36:1-36:10

EDigs is a research group at Carnegie Mellon University working on sustainability. Our research focuses on helping people find the right rental through machine learning and user research.

We sometimes study how our members use EDigs in order to learn how to build software support for successful social communities.

Screenshot of edigs.org showing a mobile app, Facebook and Twitter feeds, and information about the project.

Aversion to Typing Errors

Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry

Text entry is an increasingly important activity for mobile device users. As a result, increasing text entry speed of expert typists is an important design goal for physical and soft keyboards. Mathematical models that predict text entry speed can help with keyboard design and optimization. Making typing errors when entering text is inevitable. However, current models do not consider how typists themselves reduce the risk of making typing errors (and lower error frequency) by typing more slowly. We demonstrate that users respond to costly typing errors by reducing their typing speed to minimize typing errors. We present a model that estimates the effects of risk aversion to errors on typing speed. We estimate the magnitude of this speed change, and show that disregarding the adjustments to typing speed that expert typists use to reduce typing errors leads to overly optimistic estimates of maximum errorless expert typing speeds.

Nikola Banovic, Varun Rao, Abinaya Saravanan, Anind K. Dey, and Jennifer Mankoff. 2017. Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA.
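The sketch below is not the paper's model; it only illustrates the underlying idea: a typist chooses a typing speed that trades off time per character against the expected cost of correcting errors, weighted by an aversion factor. The functional form and all numeric parameters are assumptions.

```python
# Illustrative sketch of a speed-accuracy trade-off under error aversion.
import numpy as np

def expected_time_per_char(speed_cps, correction_cost_s=1.2, aversion=1.0, k=0.02):
    """Expected seconds per character: typing time plus the expected,
    aversion-weighted cost of correcting errors. Error probability is
    assumed to grow with typing speed (a simple speed-accuracy trade-off)."""
    p_error = 1.0 - np.exp(-k * speed_cps)
    return 1.0 / speed_cps + aversion * p_error * correction_cost_s

speeds = np.linspace(1.0, 15.0, 200)   # candidate speeds in characters per second
for aversion in (0.5, 1.0, 2.0):
    costs = expected_time_per_char(speeds, aversion=aversion)
    best = speeds[np.argmin(costs)]
    print(f"aversion={aversion}: optimal speed is about {best:.1f} chars/s")
```

Under this toy formulation, stronger aversion to errors pushes the cost-minimizing typing speed down, which is the qualitative effect the paper quantifies.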

Modeling & Generating Routines

Leveraging Human Routine Models to Detect and Generate Human Behaviors

An ability to detect behaviors that negatively impact people’s wellbeing and show people how they can correct those behaviors could enable technology that improves people’s lives. Existing supervised machine learning approaches to detect and generate such behaviors require lengthy and expensive data labeling by domain experts. In this work, we focus on the domain of routine behaviors, where we model routines as a series of frequent actions that people perform in specific situations. We present an approach that bypasses labeling each behavior instance that a person exhibits. Instead, we weakly label instances using people’s demonstrated routine. We classify and generate new instances based on the probability that they belong to the routine model. We illustrate our approach on an example system that helps drivers become aware of and understand their aggressive driving behaviors. Our work enables technology that can trigger interventions and help people reflect on their behaviors when those behaviors are likely to negatively impact them.

Nikola Banovic, Anqi Wang, Yanfeng Jin, Christie Chang, Julian Ramos, Anind K. Dey, and Jennifer Mankoff. 2017. Leveraging Human Routine Models to Detect and Generate Human Behaviors. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA.
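A minimal sketch of the weak-labeling idea described above, assuming a simple first-order Markov model over routine actions: sequences are scored by their probability under the routine model and classified by thresholding that score (new sequences could be sampled from the same transition model). The action names, data, and threshold are hypothetical, not the paper's actual model.

```python
# Minimal sketch: score action sequences against a demonstrated routine.
import numpy as np
from collections import defaultdict

def fit_routine_model(sequences):
    """Estimate transition probabilities from demonstrated routine sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def log_likelihood(model, seq, floor=1e-6):
    """Log-probability of a sequence under the routine model."""
    return sum(np.log(model.get(a, {}).get(b, floor))
               for a, b in zip(seq, seq[1:]))

# Demonstrated (weakly labeled) routine: calm acceleration then cruising.
routine = [["accelerate", "cruise", "brake", "stop"],
           ["accelerate", "cruise", "cruise", "brake", "stop"]]
model = fit_routine_model(routine)

candidate = ["accelerate", "hard_brake", "accelerate", "hard_brake"]
# Classify by thresholding the likelihood of belonging to the routine.
print("routine-like" if log_likelihood(model, candidate) > -5 else "deviation")
```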

Watch-ya-doin

Watch-ya-doin is an experience-sampling framework for longitudinal data collection and analysis. Our system consists of a smartwatch and an Android device working unobtrusively to track data. Our goal is to train on and recognize a specific activity over time. We use a simple wrist-worn accelerometer to predict eating behavior and other activities. These devices are inexpensive to deploy and easy to maintain, since battery life is a full week while running our application.
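A minimal sketch of the sensing pipeline described above: windowed statistics from a wrist-worn accelerometer fed to a classifier that distinguishes eating from other activities. The window length, feature set, and classifier choice are assumptions for illustration, not the project's actual implementation.

```python
# Minimal sketch of accelerometer-based activity recognition.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, window=50):
    """Split an (N, 3) accelerometer stream into fixed windows of simple statistics."""
    feats = []
    for start in range(0, len(acc) - window + 1, window):
        w = acc[start:start + window]
        mag = np.linalg.norm(w, axis=1)
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     [mag.mean(), mag.std()]]))
    return np.array(feats)

# Hypothetical labeled recordings: 1 = eating gesture, 0 = other activity.
acc_stream = np.random.randn(5000, 3)
X = window_features(acc_stream)
y = np.random.randint(0, 2, len(X))

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(window_features(np.random.randn(500, 3))))
```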
Our primary application area is AT abandonment. About 700,000 people in the United States have an upper limb amputation, and about 6.8 million face fine motor and/or arm dexterity limitations [1]. Assistive technology (AT), ranging from myo-electric prosthetics to passive prosthetics to a variety of orthotics, can help with rehabilitation and improve independence and the ability to perform everyday tasks. Yet AT is not used to its full potential, with abandonment rates ranging from 23% to 90% for prosthetics users, and high abandonment of orthotics as well. Given the cost of these devices, this represents an enormous waste of the significant financial investment in developing, fabricating, and providing the device, and can lead to frustration, insufficient rehabilitation, increased risk of limb-loss-associated co-morbidities, and an overall reduced quality of life for the recipient.
To address this, we need objective and accurate information about AT use. Current data is limited primarily to questionnaires or skill testing during office visits. Apart from being limited by subjectivity and evaluator bias, survey tools are also not appropriate for estimating quality of use. A patient may – more or less accurately – report his or her AT use for a certain number of hours a day, but this does not indicate which tasks it was used for, which makes it difficult to evaluate how appropriate or helpful the device was. In addition, neither reported use time nor skill testing is sufficient to predict abandonment once AT is deployed.

Our next steps include generalizing our approach to AT (such as upper limb prosthetics), and expanding it to include a wider variety of tracked activities. In addition, we will develop a longitudinal data set that includes examples of abandonment. This will allow the creation of algorithms that can characterize the type and quality of use over the lifecycle of AT and predict abandonment.

[1] U.S. Census 2001