Detecting Loneliness

Feelings of loneliness are associated with poor physical and mental health. Detection of loneliness through passive sensing on personal devices can lead to the development of interventions aimed at decreasing rates of loneliness.

Doryab, Afsaneh, et al. “Identifying Behavioral Phenotypes of Loneliness and Social Isolation with Passive Sensing: Statistical Analysis, Data Mining and Machine Learning of Smartphone and Fitbit Data.” JMIR mHealth and uHealth 7.7 (2019): e13209.

Objective: The aim of this study was to explore the potential of using passive sensing to infer levels of loneliness and to identify the corresponding behavioral patterns.

Methods: Data were collected from smartphones and Fitbits (Flex 2) of 160 college students over a semester. The participants completed the University of California, Los Angeles (UCLA) loneliness questionnaire at the beginning and end of the semester. For classification, the scores were categorized into high (questionnaire score >40) and low (≤40) levels of loneliness. Daily features were extracted from both devices to capture activity and mobility, communication and phone usage, and sleep behaviors. The features were then averaged to generate semester-level features. We used 3 analytic methods: (1) statistical analysis to provide an overview of loneliness in college students, (2) data mining using the Apriori algorithm to extract behavior patterns associated with loneliness, and (3) machine learning classification to infer the level of loneliness and the change in levels of loneliness, using an ensemble of gradient boosting and logistic regression algorithms with feature selection in a leave-one-student-out cross-validation manner.

Results: The average loneliness score from the presurveys and postsurveys was above 43 (presurvey SD 9.4 and postsurvey SD 10.4), and the majority of participants fell into the high loneliness category (scores above 40) with 63.8% (102/160) in the presurvey and 58.8% (94/160) in the postsurvey. Scores greater than 1 standard deviation above the mean were observed in 12.5% (20/160) of the participants in both pre- and postsurvey scores. The majority of scores, however, fell between 1 standard deviation below and above the mean (pre=66.9% [107/160] and post=73.1% [117/160]).

Our machine learning pipeline achieved an accuracy of 80.2% in detecting the binary level of loneliness and an 88.4% accuracy in detecting change in the loneliness level. The mining of associations between classifier-selected behavioral features and loneliness indicated that, compared with students with low loneliness, students with high levels of loneliness spent less time off campus during evening hours on weekends and less time in places for social events in the evening on weekdays (support=17% and confidence=92%). The analysis also indicated that more activity and less sedentary behavior, especially in the evening, was associated with a decrease in loneliness from the beginning to the end of the semester (support=31% and confidence=92%).

Conclusions: Passive sensing has the potential for detecting loneliness in college students and identifying the associated behavioral patterns. These findings highlight intervention opportunities through mobile technology to reduce the impact of loneliness on individuals’ health and well-being.

News: Smartphones and Fitbits can spot loneliness in its tracks, Science 101
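A minimal sketch of the leave-one-student-out evaluation described in the Methods above, assuming one row of semester-level features per student. The feature-selection step (SelectKBest) and the probability-averaging ensemble are illustrative assumptions, not the paper's exact pipeline:

```python
# Hedged sketch: leave-one-student-out evaluation of an ensemble of gradient
# boosting and logistic regression with per-fold feature selection. SelectKBest
# and probability averaging are illustrative choices, not the paper's exact setup.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut


def loso_accuracy(X, y, k_features=20):
    """X: (n_students, n_features) array of semester-level features;
    y: binary loneliness labels (1 = high, 0 = low)."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        # Fit feature selection on the training fold only to avoid leakage.
        selector = SelectKBest(f_classif, k=min(k_features, X.shape[1]))
        X_tr = selector.fit_transform(X[train_idx], y[train_idx])
        X_te = selector.transform(X[test_idx])

        gb = GradientBoostingClassifier().fit(X_tr, y[train_idx])
        lr = LogisticRegression(max_iter=1000).fit(X_tr, y[train_idx])

        # Ensemble by averaging the two classifiers' predicted probabilities.
        p_high = (gb.predict_proba(X_te)[:, 1] + lr.predict_proba(X_te)[:, 1]) / 2
        correct += int((p_high[0] >= 0.5) == y[test_idx][0])
    return correct / len(y)
```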

Leveraging Routine Behavior and Contextually-Filtered Features for Depression Detection among College Students

The rate of depression in college students is rising; depression is known to increase suicide risk, lower academic performance, and double the likelihood of dropping out. Researchers have used passive mobile sensing technology to assess mental health. Existing work on finding relationships between mobile sensing and depression, as well as on identifying depression via sensing features, mainly uses single data channels or simply concatenates multiple channels. There is an opportunity to identify better features by reasoning about co-occurrence across multiple sensing channels. We present a new method to extract contextually filtered features from passively collected, time-series data from mobile devices via rule mining algorithms. We first employ association rule mining algorithms on two different user groups (e.g., depression vs. non-depression). We then introduce a new metric to select a subset of rules that identifies distinguishing behavior patterns between the two groups. Finally, we consider co-occurrence across the features that comprise the rules in a feature extraction stage to obtain contextually filtered features with which to train classifiers. Our results reveal that the best model with these features significantly outperforms a standard model that uses unimodal features by an average of 9.7% across a variety of metrics. We further verified the generalizability of our approach on a second dataset and achieved very similar results.

Leveraging Routine Behavior and Contextually-Filtered Features for Depression Detection among College Students. Xuhai Xu, Prerna Chikersal, Afsaneh Doryab, Daniella Villalba, Janine M. Dutcher, Michael J. Tumminia, Tim Althoff, Sheldon Cohen, Kasey Creswell, David Creswell, Jennifer Mankoff and Anind K. Dey. IMWUT, Article No. 116. DOI: 10.1145/3351274

The high-level pipeline integrating rule mining algorithms and machine learning models: data collection (mobile phone sensors, a campus map, and Fitbit) feeds into feature extraction; the extracted features go through association rule mining; features and rules are combined to create contextually filtered features, which are then passed to a machine learning classifier; ground truth comes from the BDI-II questionnaire. The dashed frame highlights the paper's novel contributions: a new metric to select the top rules from the rule set generated by ARM, a new approach to extract contextually filtered features based on those top rules, and classifiers trained on these features.
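To make the rule-mining step concrete, below is a hedged, self-contained sketch (a simplified stand-in for an Apriori-style miner, not the paper's implementation): it mines short antecedent-to-consequent rules from a boolean person-day table of discretized behaviors, separately per group, then ranks the rules by how much their support differs between the two groups. The behavior column names and the support-difference score are illustrative assumptions.

```python
# Hedged sketch: mine simple association rules per group and keep the ones whose
# support most distinguishes the two groups. Simplified stand-in, not the paper's code.
from itertools import combinations

import pandas as pd


def mine_rules(onehot: pd.DataFrame, min_support=0.2, min_confidence=0.8, max_antecedent=2):
    """onehot: boolean DataFrame with one row per person-day and one column per
    discretized behavior (e.g., 'low_evening_activity', 'short_sleep').
    Returns {(antecedent_items, consequent): (support, confidence)}."""
    rules = {}
    # Itemsets of size 2..max_antecedent+1: antecedent items plus one consequent.
    for size in range(2, max_antecedent + 2):
        for items in combinations(onehot.columns, size):
            support = onehot[list(items)].all(axis=1).mean()
            if support < min_support:
                continue
            for consequent in items:
                antecedent = tuple(i for i in items if i != consequent)
                antecedent_support = onehot[list(antecedent)].all(axis=1).mean()
                confidence = support / antecedent_support if antecedent_support else 0.0
                if confidence >= min_confidence:
                    rules[(antecedent, consequent)] = (support, confidence)
    return rules


def distinguishing_rules(group_a: pd.DataFrame, group_b: pd.DataFrame, top_k=20):
    """Rank rules mined in group_a by how much their support exceeds their
    support in group_b (a simple stand-in for the paper's selection metric)."""
    rules_a, rules_b = mine_rules(group_a), mine_rules(group_b)
    scored = sorted(
        rules_a.items(),
        key=lambda kv: kv[1][0] - rules_b.get(kv[0], (0.0, 0.0))[0],
        reverse=True,
    )
    return scored[:top_k]
```

In the paper's pipeline, the features that appear in the selected rules then drive a second feature-extraction pass that accounts for their co-occurrence, producing the contextually filtered features used to train the classifiers.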

Orson (Xuhai) Xu

Orson is a first-year Ph.D. student working with Jennifer Mankoff and Anind K. Dey in the Information School at the University of Washington – Seattle. Prior to joining UW, he obtained his Bachelor’s degrees in Industrial Engineering (major) and Computer Science (minor) from Tsinghua University in 2018. While at Tsinghua, he received a Best Paper Honorable Mention Award (CHI 2018), the Person of the Year Award, and Outstanding Undergraduate Awards. His research focuses on two aspects at the intersection of human-computer interaction, ubiquitous computing, and machine learning: 1) the modeling of human behavior, such as routine behavior, and 2) novel interaction techniques.

Visit Orson’s homepage at orsonxu.com.

Some recent projects

EDigs

Jennifer Mankoff, Dimeji Onafuwa, Kirstin Early, Nidhi Vyas, Vikram Kamath:
Understanding the Needs of Prospective Tenants. COMPASS 2018: 36:1-36:10

EDigs is a research project group at Carnegie Mellon University working on sustainability. Our research is focused on helping people find a perfect rental through machine learning and user research.

We sometimes study how our members use EDigs in order to learn how to build software support for successful social communities.

eDigs website: edigs.org. Screenshot of edigs.org showing a mobile app, Facebook and Twitter feeds, and information about the project.

Modeling Human Routines

Modeling and Understanding Human Routine Behavior

Human routines are blueprints of behavior, which allow people to accomplish their purposeful repetitive tasks and activities. People express their routines through actions that they perform in the particular situations that triggered those actions. An ability to model routines and understand the situations in which they are likely to occur could allow technology to help people improve their bad habits, inexpert behavior, and other suboptimal routines. In this project we explore generalizable routine modeling approaches that encode patterns of routine behavior in ways that allow systems, such as smart agents, to classify, predict, and reason about human actions under the inherent uncertainty present in human behavior. Such technologies can have a positive effect on society by making people healthier, safer, and more efficient in their routine tasks.
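As a toy illustration of this idea (not the project's actual modeling approach), the sketch below encodes routines as counts of which actions follow which situations and predicts the most likely next action, along with a probability that reflects the uncertainty in the observed behavior; the situation and action labels are hypothetical:

```python
# Toy sketch of routine modeling: count situation -> action co-occurrences and
# predict the most likely action for a situation. Illustrative only; the project's
# published models are richer than this.
from collections import Counter, defaultdict


class SimpleRoutineModel:
    def __init__(self):
        self.counts = defaultdict(Counter)  # situation -> Counter of actions

    def fit(self, observations):
        """observations: iterable of (situation, action) pairs, e.g.
        (('weekday_morning', 'at_home'), 'commute_by_bus')."""
        for situation, action in observations:
            self.counts[situation][action] += 1
        return self

    def predict(self, situation):
        """Return (most_likely_action, probability), or (None, 0.0) if the
        situation was never observed."""
        actions = self.counts.get(situation)
        if not actions:
            return None, 0.0
        action, count = actions.most_common(1)[0]
        return action, count / sum(actions.values())
```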

Screenshot of the routines visualization tool.

Modeling and Understanding Human Routine Behavior
Nikola Banovic, Tofi Buzali, Fanny Chevalier, Jennifer Mankoff, and Anind K. Dey
In Proceedings of the 2016 ACM Annual Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA.
Honorable Mention Award

Dynamic question ordering

In recent years, surveys have been shifting online, offering the possibility of adaptive questions, where later questions depend on responses to earlier ones. We present a general framework for dynamically ordering questions, based on previous responses, to engage respondents, improving survey completion and imputation of unknown items. Our work considers two scenarios for data collection from survey-takers. In the first, we want to maximize survey completion (and the quality of necessary imputations), so we focus on ordering questions to engage the respondent and collect, ideally, all the information we seek, or at least the information that best characterizes the respondent so that imputed values will be accurate. In the second scenario, our goal is to give the respondent a personalized prediction based on the information they provide. Since it is possible to give a reasonable prediction with only a subset of questions, we are not concerned with motivating the user to answer all questions. Instead, we want to order questions so that the user provides the information that most reduces the uncertainty of our prediction, while not being too burdensome to answer.
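A minimal sketch of the second scenario, assuming a regressor (any scikit-learn-style model with a predict method) already fitted on complete responses: the next question is chosen greedily as the one whose plausible answers change the prediction the most, with not-yet-asked questions imputed by their training means. This is an illustrative simplification, not the FOCUS algorithm itself:

```python
# Hedged sketch: greedy dynamic question ordering by prediction sensitivity.
# Feature indices are assumed to run 0..n_features-1 across answered + unanswered.
import numpy as np


def next_question(model, answered, unanswered, training_values):
    """model: fitted regressor with .predict over the full feature vector;
    answered: {feature_index: value}; unanswered: list of feature indices;
    training_values: {feature_index: 1-D array of answers seen in training}."""
    n_features = len(answered) + len(unanswered)

    def predict_with(extra):
        x = np.zeros(n_features)
        for i, v in {**answered, **extra}.items():
            x[i] = v
        # Impute the remaining unanswered questions with their training means.
        for j in unanswered:
            if j not in extra:
                x[j] = training_values[j].mean()
        return float(model.predict(x.reshape(1, -1))[0])

    best_q, best_spread = None, -1.0
    for q in unanswered:
        # Spread of predictions over the answers to q observed in training data:
        # a proxy for how much asking q would reduce prediction uncertainty.
        preds = [predict_with({q: v}) for v in np.unique(training_values[q])]
        spread = float(np.std(preds))
        if spread > best_spread:
            best_q, best_spread = q, spread
    return best_q
```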

Publications
Kirstin Early, Stephen E. Fienberg, Jennifer Mankoff. (2016). Test time feature ordering with FOCUS: Interactive predictions with minimal user burden. In Proceedings of the 2016 ACM Conference on Pervasive and Ubiquitous Computing. Honorable Mention: Top 5% of submissions. Talk slides.

Infant Oxygen Monitoring

Hospitalized children on continuous oxygen monitors generate >40,000 data points per patient each day. These data are not presented with context and do not reveal trends over time, techniques proven to improve comprehension and use. Management of oxygen in hospitalized patients is suboptimal: premature infants spend >40% of each day outside of evidence-based oxygen saturation ranges, and weaning of oxygen is delayed in infants with bronchiolitis who are physiologically ready. Data visualizations may improve user knowledge of data trends and inform better decisions in managing supplemental oxygen delivery.

First, we studied the workflows and breakdowns for nurses and respiratory therapists (RTs) in the supplemental oxygen delivery of infants with respiratory disease. Second, using end-user design, we developed a data display that informed decision-making in this context. Our ultimate goal is to improve the overall work process using a combination of visualization and machine learning.
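As a rough illustration of the kind of summary such a display could surface (not the deployed system), the sketch below computes, for each day, how many SpO2 samples were recorded and what fraction fell outside a target saturation range; the 90-95% range and the sampling assumptions are illustrative:

```python
# Hedged sketch: per-day SpO2 summary from a continuously sampled saturation series.
# The 90-95% target range and ~1-2 s sampling rate are assumptions for illustration.
import pandas as pd


def daily_spo2_summary(spo2: pd.Series, low=90, high=95):
    """spo2: oxygen-saturation values indexed by a DatetimeIndex.
    Returns samples per day and the fraction of samples outside [low, high]."""
    outside = (spo2 < low) | (spo2 > high)
    by_day = outside.groupby(outside.index.date)
    return pd.DataFrame({
        "samples": by_day.size(),             # roughly 40,000+ points per day
        "frac_outside_range": by_day.mean(),  # share of the day outside the target range
    })
```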

Visualization mockup for displaying O2 saturation over time to nurses.