TypeOut: Just-in-Time Self-Affirmation for Reducing Phone Use

Smartphone overuse is related to a variety of issues such as lack of sleep and anxiety. We explore the application of Self-Affirmation Theory to smartphone overuse intervention in a just-in-time manner. We present TypeOut, a just-in-time intervention technique that integrates two components: an in-situ typing-based unlock process to improve user engagement, and self-affirmation-based typing content to enhance effectiveness. We hypothesize that the integration of typing and self-affirmation content can better reduce smartphone overuse. We conducted a 10-week within-subject field experiment (N=54) and compared TypeOut against two baselines: one only showing the self-affirmation content (a common notification-based intervention), and one only requiring typing non-semantic content (a state-of-the-art method). TypeOut reduces app usage by over 50%, and both app opening frequency and usage duration by over 25%, all significantly outperforming baselines. TypeOut can potentially be used in other domains where an intervention may benefit from integrating self-affirmation exercises with an engaging just-in-time mechanism.

TypeOut: Leveraging just-in-time self-affirmation for smartphone overuse reduction. Xuhai Xu, Tianyuan Zou, Xiao Han, Yanzhang Li, Ruolin Wang, Tianyi Yuan, Yuntao Wang, Yuanchun Shi, Jennifer Mankoff, and Anind K. Dey. 2022. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). ACM, New York, NY, USA.

Practices and Needs of Mobile Sensing Researchers

Passive mobile sensing for the purpose of human state modeling is a fast-growing area. It has been applied to solve a wide range of behavior-related problems, including physical and mental health monitoring, affective computing, activity recognition, routine modeling, etc. However, in spite of the emerging literature that has investigated a wide range of application scenarios, there is little work focusing on the lessons learned by researchers, or on guidance for researchers new to this approach. How do researchers conduct these types of research studies? Is there any established common practice when applying mobile sensing across different application areas? What are the pain points and needs that they frequently encounter? Answering these questions is an important step in the maturing of this growing sub-field of ubiquitous computing, and can benefit a wide range of audiences. It can serve to educate researchers who have growing interest in this area but little to no previous experience. Intermediate researchers may also find the results a helpful reference for improving their skills. Moreover, it can shed light on design guidelines for a future toolkit that could facilitate these research processes. In this paper, we fill this gap and answer these questions by conducting semi-structured interviews with ten experienced researchers from four countries to understand their practices and pain points when conducting their research. Our results reveal a common pipeline that researchers have adopted, and identify major challenges that do not appear in published work but that researchers often encounter. Based on the results of our interviews, we discuss practical suggestions for novice researchers and high-level design principles for a toolkit that can accelerate passive mobile sensing research.

Understanding practices and needs of researchers in human state modeling by passive mobile sensing. Xu, Xuhai, Jennifer Mankoff, and Anind K. Dey. CCF Transactions on Pervasive Computing and Interaction (2021): 1-23.

College during COVID

Mental health of UW students during Spring 2020 varied tremendously: the challenges of online learning during the pandemic were entwined with social isolation, family demands and socioeconomic pressures. In this context, individual differences in coping mechanisms had a big impact. The findings of this paper underline the need for interventions oriented towards problem-focused coping and suggest opportunities for peer role modeling.

College from home during COVID-19: A mixed-methods study of heterogeneous experiences. Morris ME, Kuehn KS, Brown J, Nurius PS, Zhang H, Sefidgar YS, Xu X, Riskin EA, Dey A, Consolvo S, Mankoff JC. (2021) PLoS ONE 16(6): e0251580. (reported in UW News and the Hechinger Report)

A lineplot showing anxiousness (Y axis, varying from 0 to 4) over time (X axis). Each student in the study is plotted as a different line over each day of the quarter. The plot overall looks very messy, but two things are clear: every student has a very different trajectory from every other, with all of them going up and down multiple times; and the average overall, shown as a fit line, is fairly low and slightly increasing (from about .75 to just under 1).
Heterogeneity in individuals’ levels of anxiety (reported in ESM). Individual trajectories of anxiety are shown in different line types and colors (dotted versus solid lines represent different participants). Although the mean level of anxiety is 1 on a scale of 0–4, the significant variation in responses invites examination of individuals and subgroups.

This mixed-method study examined the experiences of college students during the COVID-19 pandemic through surveys, experience sampling data collected over two academic quarters (Spring 2019 n1 = 253; Spring 2020 n2 = 147), and semi-structured interviews with 27 undergraduate students. 

There were no marked changes in mean levels of depressive symptoms, anxiety, stress, or loneliness between 2019 and 2020, or over the course of the Spring 2020 term. Students in both the 2019 and 2020 cohorts who indicated psychosocial vulnerability at the initial assessment showed worse psychosocial functioning throughout the entire Spring term relative to other students. However, rates of distress increased faster in 2020 than in 2019 for these individuals. Across individuals, homogeneity of variance tests and multi-level models revealed significant heterogeneity, suggesting the need to examine not just means but the variations in individuals’ experiences.
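
For readers curious how this kind of heterogeneity can be examined statistically, the sketch below shows a minimal multi-level (mixed-effects) model with person-specific random intercepts and slopes, written in Python with statsmodels. It is an illustration of the general approach, not the paper's exact specification; the file and column names (`daily_esm.csv`, `anxiety`, `day`, `cohort`, `participant_id`) are assumptions.

```python
# Minimal sketch of a multi-level model for daily anxiety reports.
# Column names and data layout are assumed for illustration.
import pandas as pd
import statsmodels.formula.api as smf

esm = pd.read_csv("daily_esm.csv")  # assumed: one row per participant per day

# Random intercept and random slope over study day, grouped by participant,
# lets each student have their own trajectory instead of a single mean trend.
model = smf.mixedlm(
    "anxiety ~ day + cohort",        # fixed effects: time trend and 2019 vs. 2020 cohort
    data=esm,
    groups=esm["participant_id"],
    re_formula="~day",               # person-specific slopes over time
)
result = model.fit()
print(result.summary())

# The estimated random-effects covariance indicates how much individual
# trajectories diverge from the average trend (i.e., the heterogeneity).
print(result.cov_re)
```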

Thematic analysis of interviews characterizes these varied experiences, describing the contexts for students’ challenges and strategies. This analysis highlights the interweaving of psychosocial and academic distress: Challenges such as isolation from peers, lack of interactivity with instructors, and difficulty adjusting to family needs had both an emotional and academic toll. Strategies for adjusting to this new context included initiating remote study and hangout sessions with peers, as well as self-learning. In these and other strategies, students used technologies in different ways and for different purposes than they had previously. Supporting qualitative insight about adaptive responses were quantitative findings that students who used more problem-focused forms of coping reported fewer mental health symptoms over the course of the pandemic, even though they perceived their stress as more severe. 

Example quotes:

“I like to build things and stuff like that. I like to see it in person and feel it. So the fact that everything was online…. I’m just basically reading all the time. I just couldn’t learn that way.”

“Insomnia has been pretty hard for me . . . I would spend a lot of time lying in bed not doing anything when I had a lot of homework to do the next day. So then I would become stressed about whether I’ll be able to finish that homework or not.”

“It was challenging … being independent and then being pushed back home. It’s a huge change because now you have more rules again”

“For a few of my classes I feel like actually [I] was self-learning because sometimes it’s hard to sit through hours of lectures and watch it.”

“I would initiate… we have a study group chat and every day I would be like ‘Hey I’m going to be on at this time starting at this time.’ So then I gave them time to all have the room open for Zoom and stuff. Okay and then any time after that they can join and then said I [would] wait like maybe 30 minutes or even an hour…. And then people join and then we work maybe … till midnight, a little bit past midnight.”

Medical Making During COVID

The onset of COVID-19 led many makers to dive deeply into the potential applications of their work to help with the pandemic. Our group’s efforts on this front, all of which were collaborations with a variety of people from multiple universities, led me to this reflective talk about the additional work that is needed for us to take the next step towards democratizing fabrication.

This talk is based on a series of papers studying and working with people who make, including the following recent COVID-related papers:

Navigating Illness, Finding Place

Sylvia Janicki, Matt Ziegler, Jennifer Mankoff:
Navigating Illness, Finding Place: Enhancing the Experience of Place for People Living with Chronic Illness. COMPASS 2021: 173-187

When chronic illness, such as Lyme disease, is viewed through a disability lens, equitable access to public spaces becomes an important area for consideration. Yet chronic illness is often viewed solely through an individualistic, medical model lens. We contribute to this field of study in four consecutive steps using Lyme disease as a case study: (1) we highlight urban design and planning literature to make the case for its relevance to chronic illness; (2) we explore the place-related impacts of living with chronic illness through an analysis of interviews with fourteen individuals living with Lyme disease; (3) we derive a set of design guidelines from our literature review and interviews that serve to support populations living with chronic illness; and (4) we present an interactive mapping prototype that applies our design guidelines to support individuals living with chronic illness in experiencing and navigating public and outdoor spaces.

(Left) Users can select accessibility qualities to filter the places shown on the map. (Right) Clicking on a place opens a popup window to see all of that place's coded accessibility qualities, pictures, reviews, and additional textual descriptions.

Detecting Depression

A series of research projects based on the UWEXP study have focused on detecting depression in various ways. Three such papers are listed below.

Xuhai Xu, Prerna Chikersal, Janine M. Dutcher, Yasaman S. Sefidgar, Woosuk Seo, Michael J. Tumminia, Daniella K. Villalba, Sheldon Cohen, Kasey G. Creswell, J. David Creswell, Afsaneh Doryab, Paula S. Nurius, Eve A. Riskin, Anind K. Dey, Jennifer Mankoff:
Leveraging Collaborative-Filtering for Personalized Behavior Modeling: A Case Study of Depression Detection among College Students. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 5(1): 41:1-41:27 (2021)

The prevalence of mobile phones and wearable devices enables the passive capturing and modeling of human behavior at an unprecedented resolution and scale. Past research has demonstrated the capability of mobile sensing to model aspects of physical health, mental health, education, and work performance, etc. However, most of the algorithms and models proposed in previous work follow a one-size-fits-all (i.e., population modeling) approach that looks for common behaviors amongst all users, disregarding the fact that individuals can behave very differently, resulting in reduced model performance. Further, black-box models are often used that do not allow for interpretability and human behavior understanding. We present a new method to address the problems of personalized behavior classification and interpretability, and apply it to depression detection among college students. Inspired by the idea of collaborative-filtering, our method is a type of memory-based learning algorithm. It leverages the relevance of mobile-sensed behavior features among individuals to calculate personalized relevance weights, which are used to impute missing data and select features according to a specific modeling goal (e.g., whether the student has depressive symptoms) in different time epochs, i.e., times of the day and days of the week. It then compiles features from epochs using majority voting to obtain the final prediction. We apply our algorithm on a depression detection dataset collected from first-year college students with low data-missing rates and show that our method outperforms the state-of-the-art machine learning model by 5.1% in accuracy and 5.5% in F1 score. We further verify the pipeline-level generalizability of our approach by achieving similar results on a second dataset, with an average improvement of 3.4% across performance metrics. Beyond achieving better classification performance, our novel approach is further able to generate personalized interpretations of the models for each individual. These interpretations are supported by existing depression-related literature and can potentially inspire automated and personalized depression intervention design in the future.
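
To give a flavor of how such a memory-based, collaborative-filtering-style pipeline can be structured, here is a much-simplified Python sketch: it weights other users by behavior-feature similarity, uses those weights to impute the target user's missing features, and majority-votes predictions across time epochs. It is an illustration under assumed data shapes (one user-by-feature matrix per epoch), not the published algorithm; all names are hypothetical.

```python
# Simplified sketch of a collaborative-filtering-style personalized pipeline.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import LogisticRegression

def personalized_weights(features, target_idx):
    """Relevance of every other user to the target user, via feature similarity."""
    filled = np.nan_to_num(features)                  # crude handling of missing values
    sims = cosine_similarity(filled)[target_idx]
    sims[target_idx] = 0.0
    return sims / (sims.sum() + 1e-9)

def impute_for_user(features, target_idx, weights):
    """Fill the target user's missing features with a relevance-weighted average."""
    row = features[target_idx].copy()
    missing = np.isnan(row)
    weighted_means = np.nansum(features * weights[:, None], axis=0)
    row[missing] = weighted_means[missing]
    return row

def predict_user(epoch_features, labels, target_idx):
    """Train one classifier per epoch (e.g., mornings, weekends) and majority-vote."""
    votes = []
    for features in epoch_features:                   # one user-by-feature matrix per epoch
        w = personalized_weights(features, target_idx)
        x_target = impute_for_user(features, target_idx, w)
        train_idx = [i for i in range(len(labels)) if i != target_idx]
        X_train = np.array([
            impute_for_user(features, i, personalized_weights(features, i))
            for i in train_idx
        ])
        clf = LogisticRegression(max_iter=1000).fit(X_train, labels[train_idx])
        votes.append(clf.predict(x_target.reshape(1, -1))[0])
    return int(np.mean(votes) >= 0.5)                 # majority vote across epochs

# Usage (hypothetical): epoch_features is a list of (n_students x n_features) arrays,
# labels is a binary array of depressive-symptom ground truth.
# prediction = predict_user(epoch_features, labels, target_idx=0)
```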

Leveraging Routine Behavior and Contextually-Filtered Features for Depression Detection among College Students. Xuhai Xu, Prerna Chikersal, Afsaneh Doryab, Daniella Villalba, Janine M. Dutcher, Michael J. Tumminia, Tim Althoff, Sheldon Cohen, Kasey Creswell, David Creswell, Jennifer Mankoff and Anind K. Dey. IMWUT, Article No 116. 10.1145/3351274

The rate of depression in college students is rising, which is known to increase suicide risk, lower academic performance and double the likelihood of dropping out. Researchers have used passive mobile sensing technology to assess mental health. Existing work on finding relationships between mobile sensing and depression, as well as identifying depression via sensing features, mainly utilize single data channels or simply concatenate multiple channels. There is an opportunity to identify better features by reasoning about co-occurrence across multiple sensing channels. We present a new method to extract contextually filtered features on passively collected, time-series data from mobile devices via rule mining algorithms. We first employ association rule mining algorithms on two different user groups (e.g., depression vs. non-depression). We then introduce a new metric to select a subset of rules that identifies distinguishing behavior patterns between the two groups. Finally, we consider co-occurrence across the features that comprise the rules in a feature extraction stage to obtain contextually filtered features with which to train classifiers. Our results reveal that the best model with these features significantly outperforms a standard model that uses unimodal features by an average of 9.7% across a variety of metrics. We further verified the generalizability of our approach on a second dataset, and achieved very similar results.
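
The rule-mining step lends itself to a compact sketch: mine association rules per group, score how well each rule distinguishes the groups, and turn the surviving rules into features. The Python sketch below uses mlxtend's Apriori implementation to illustrate this flow; the group data frames (`onehot_depressed`, `onehot_not_depressed`, `onehot_all`) and the simple support-gap score are assumptions for illustration, not the paper's exact metric.

```python
# Hedged sketch: association rules over discretized, one-hot behavior features,
# filtered to those that best distinguish depression vs. non-depression groups.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

def mine_rules(onehot_df, min_support=0.2, min_confidence=0.7):
    """Mine association rules from binary (one-hot) behavior features."""
    frequent = apriori(onehot_df, min_support=min_support, use_colnames=True)
    return association_rules(frequent, metric="confidence", min_threshold=min_confidence)

def rule_support(rule, onehot_df):
    """Fraction of rows where all items of a rule (antecedents + consequents) co-occur."""
    items = list(rule["antecedents"] | rule["consequents"])
    return onehot_df[items].all(axis=1).mean()

# onehot_depressed / onehot_not_depressed: one row per person-day, binary behavior bins.
rules = mine_rules(onehot_depressed)
rules["gap"] = [
    rule_support(r, onehot_depressed) - rule_support(r, onehot_not_depressed)
    for _, r in rules.iterrows()
]
distinguishing = rules.sort_values("gap", ascending=False).head(20)

# Each selected rule becomes one contextually filtered feature: for every
# person-day, does this combination of behaviors co-occur?
features = pd.DataFrame({
    f"rule_{i}": onehot_all[list(r["antecedents"] | r["consequents"])].all(axis=1).astype(int)
    for i, (_, r) in enumerate(distinguishing.iterrows())
})
```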

Chikersal, P., Doryab, A., Tumminia, M., Villalba, D., Dutcher, J., Liu, X., Cohen, S., Creswell, K., Mankoff, J., Creswell, D., Goel, M., & Dey, A. “Detecting Depression and Predicting its Onset Using Longitudinal Symptoms Captured by Passive Sensing: A Machine Learning Approach With Robust Feature Selection.” ACM Transactions on Computer-Human Interaction (TOCHI), 2020.

We present a machine learning approach that uses data from smartphones and fitness trackers of 138 college students to identify students that experienced depressive symptoms at the end of the semester and students whose depressive symptoms worsened over the semester. Our novel approach is a feature extraction technique that allows us to select meaningful features indicative of depressive symptoms from longitudinal data. It allows us to detect the presence of post-semester depressive symptoms with an accuracy of 85.7% and change in symptom severity with an accuracy of 85.4%. It also predicts these outcomes with an accuracy of >80%, 11-15 weeks before the end of the semester, allowing ample time for preemptive interventions. Our work has significant implications for the detection of health outcomes using longitudinal behavioral data and limited ground truth. By detecting change and predicting symptoms several weeks before their onset, our work also has implications for preventing depression.
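
As a generic illustration of "robust feature selection" on small-n, wide longitudinal feature sets, the sketch below keeps only features that an L1-regularized logistic regression selects consistently across many resamples. This is a common recipe rather than the paper's exact procedure; all names are hypothetical.

```python
# Stability-style feature selection: keep features chosen in most bootstrap rounds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

def stable_features(X, y, n_rounds=100, keep_fraction=0.7, C=0.1, seed=0):
    rng = np.random.RandomState(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        Xb, yb = resample(X, y, random_state=rng)      # bootstrap resample of students
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Xb, yb)
        counts += (clf.coef_.ravel() != 0).astype(int)  # which features survived the L1 penalty
    return np.where(counts / n_rounds >= keep_fraction)[0]

# Usage (hypothetical): selected = stable_features(semester_features, depression_labels)
```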

Bar chart showing the importance of different feature sets for detecting change in depression: baseline, Bluetooth, calls, campus map, location, phone usage, sleep, and steps. The best feature set leads to 85.4% accuracy; all feature sets except Bluetooth and calls improve on the baseline accuracy of 65.9%.

Gender in Online Doctor Reviews

Dunivin Z, Zadunayski L, Baskota U, Siek K, Mankoff J. Gender, Soft Skills, and Patient Experience in Online Physician Reviews: A Large-Scale Text Analysis. Journal of Medical Internet Research. 2020;22(7):e14455.

This study examines 154,305 Google reviews from across the United States, covering all medical specialties. Many patients consult online physician reviews, but the effects of gender on review content are not well understood. Reviewer gender was inferred from names.

Reviews were coded for overall patient experience (negative or positive) by collapsing a 5-star scale and for general categories (process, positive/negative soft skills). We estimated binary regression models to examine relationships between physician rating, patient experience themes, physician gender, and reviewer gender.
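
For readers who want a concrete picture of the modeling step, here is a hedged sketch of the kind of binary regression described above: the odds of a negative review as a function of physician gender, reviewer gender, and coded themes. The data file and column names are illustrative assumptions, not the study's actual variables.

```python
# Minimal sketch of a binary (logistic) regression over coded reviews.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("coded_reviews.csv")  # assumed: one row per coded review, 0/1 columns

model = smf.logit(
    "negative_review ~ physician_female + reviewer_female "
    "+ mentions_disrespect + mentions_process",
    data=reviews,
).fit()

# Exponentiated coefficients are odds ratios (e.g., OR ~2 for female physicians
# receiving negative reviews, as reported above).
print(np.exp(model.params).round(2))
```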

We found considerable bias against female physicians: reviews of female physicians were considerably more negative than those of male physicians (OR 1.99; P<.001). Critiques of female physicians more often focused on soft skills such as amicability, disrespect, and candor. Negative reviews typically included words such as “rude,” “arrogant,” and “condescending.”

Reviews written by female patients were also more likely to mention disrespect (OR 1.27, P<.001), but female patients were less likely to report disrespect from female doctors than expected.

Finally, patient experiences with the bureaucratic process, including issues like cost of care, also impacted reviews. Overall, lower patient satisfaction was correlated with high physician dominance (e.g., poor information sharing or use of medical jargon).

Limitations of our work include the lack of definitive (or non-binary) information about gender; and the fact that we do not know about the actual outcomes of treatment for reviewers.

Even so, it seems critical that readers attend to who the reviewers are when reading online reviews. Review sites may also want to provide information about gender differences, control for gender when presenting composite ratings for physicians, and help users write less biased reviews. Reviewers should be aware of their own gender biases and assess reviews for this (http://slowe.github.io/genderbias/).

Detecting Loneliness

Feelings of loneliness are associated with poor physical and mental health. Detection of loneliness through passive sensing on personal devices can lead to the development of interventions aimed at decreasing rates of loneliness.

Doryab, Afsaneh, et al. “Identifying Behavioral Phenotypes of Loneliness and Social Isolation with Passive Sensing: Statistical Analysis, Data Mining and Machine Learning of Smartphone and Fitbit Data.” JMIR mHealth and uHealth 7.7 (2019): e13209.

Objective: The aim of this study was to explore the potential of using passive sensing to infer levels of loneliness and to identify the corresponding behavioral patterns.

Methods: Data were collected from smartphones and Fitbits (Flex 2) of 160 college students over a semester. The participants completed the University of California, Los Angeles (UCLA) loneliness questionnaire at the beginning and end of the semester. For a classification purpose, the scores were categorized into high (questionnaire score>40) and low (≤40) levels of loneliness. Daily features were extracted from both devices to capture activity and mobility, communication and phone usage, and sleep behaviors. The features were then averaged to generate semester-level features. We used 3 analytic methods: (1) statistical analysis to provide an overview of loneliness in college students, (2) data mining using the Apriori algorithm to extract behavior patterns associated with loneliness, and (3) machine learning classification to infer the level of loneliness and the change in levels of loneliness using an ensemble of gradient boosting and logistic regression algorithms with feature selection in a leave-one-student-out cross-validation manner.
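
The classification setup above can be sketched compactly: an ensemble of gradient boosting and logistic regression with per-fold feature selection, evaluated with leave-one-student-out cross-validation on semester-level features. The Python sketch below is a simplification of that setup (the paper's feature selection and ensembling are richer); variable names are assumptions.

```python
# Simplified leave-one-student-out evaluation of a gradient boosting + logistic
# regression ensemble on semester-level features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

def leave_one_student_out(X, y, k_features=20):
    predictions = np.zeros_like(y)
    for train_idx, test_idx in LeaveOneOut().split(X):
        # Feature selection inside each fold to avoid leaking the held-out student.
        selector = SelectKBest(f_classif, k=min(k_features, X.shape[1]))
        Xtr = selector.fit_transform(X[train_idx], y[train_idx])
        Xte = selector.transform(X[test_idx])
        gb = GradientBoostingClassifier().fit(Xtr, y[train_idx])
        lr = LogisticRegression(max_iter=1000).fit(Xtr, y[train_idx])
        # Average the two classifiers' probabilities of "high loneliness".
        p = (gb.predict_proba(Xte)[:, 1] + lr.predict_proba(Xte)[:, 1]) / 2
        predictions[test_idx] = (p >= 0.5).astype(int)
    return (predictions == y).mean()   # leave-one-student-out accuracy

# Usage (hypothetical): accuracy = leave_one_student_out(
#     semester_features, (ucla_scores > 40).astype(int))
```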

Results: The average loneliness score from the presurveys and postsurveys was above 43 (presurvey SD 9.4 and postsurvey SD 10.4), and the majority of participants fell into the high loneliness category (scores above 40) with 63.8% (102/160) in the presurvey and 58.8% (94/160) in the postsurvey. Scores greater than 1 standard deviation above the mean were observed in 12.5% (20/160) of the participants in both pre- and postsurvey scores. The majority of scores, however, fell between 1 standard deviation below and above the mean (pre=66.9% [107/160] and post=73.1% [117/160]).

Our machine learning pipeline achieved an accuracy of 80.2% in detecting the binary level of loneliness and an 88.4% accuracy in detecting change in the loneliness level. The mining of associations between classifier-selected behavioral features and loneliness indicated that compared with students with low loneliness, students with high levels of loneliness were spending less time outside of campus during evening hours on weekends and spending less time in places for social events in the evening on weekdays (support=17% and confidence=92%). The analysis also indicated that more activity and less sedentary behavior, especially in the evening, was associated with a decrease in levels of loneliness from the beginning of the semester to the end of it (support=31% and confidence=92%).

Conclusions: Passive sensing has the potential for detecting loneliness in college students and identifying the associated behavioral patterns. These findings highlight intervention opportunities through mobile technology to reduce the impact of loneliness on individuals’ health and well-being.

News: Smartphones and Fitbits can spot loneliness in its tracks, Science 101

Passively-sensing Discrimination

See the UW News article featuring this study!

A deeper understanding of how discrimination impacts psychological health and well-being of students would allow us to better protect individuals at risk and support those who encounter discrimination. While the link between discrimination and diminished psychological and physical well-being is well established, existing research largely focuses on chronic discrimination and long-term outcomes. A better understanding of the short-term behavioral correlates of discrimination events could help us to concretely quantify the experience, which in turn could support policy and intervention design. In this paper we specifically examine, for the first time, what behaviors change and in what ways in relation to discrimination. We use actively-reported and passively-measured markers of health and well-being in a sample of 209 first-year college students over the course of two academic quarters. We examine changes in indicators of psychological state in relation to reports of unfair treatment in terms of five categories of behaviors: physical activity, phone usage, social interaction, mobility, and sleep. We find that students who encounter unfair treatment become more physically active, interact more with their phone in the morning, make more calls in the evening, and spend less time in bed on the day of the event. Some of these patterns continue the next day.
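
One simple way to express the day-level comparison described above is a within-person contrast: for each student, compare a behavioral feature on days when unfair treatment was reported against that same student's other days, then test the paired differences across students. The sketch below illustrates this with a daily step count; it is a hedged simplification of the analysis (the paper's models control for more factors), and the file and column names are assumptions.

```python
# Within-person comparison of behavior on event days vs. other days.
import pandas as pd
from scipy.stats import wilcoxon

# assumed columns: participant_id, date, steps, reported_unfair_treatment (bool)
daily = pd.read_csv("daily_behavior.csv")

per_student = (
    daily.groupby(["participant_id", "reported_unfair_treatment"])["steps"]
    .mean()
    .unstack("reported_unfair_treatment")   # columns: False (other days), True (event days)
    .dropna()                                # keep students with at least one event day
)

# Paired test across students: event days vs. that student's own other days.
stat, p = wilcoxon(per_student[True], per_student[False])
diff = (per_student[True] - per_student[False]).median()
print(f"Median within-person change on event days: {diff:.1f} steps (p={p:.3f})")
```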

Passively-sensed Behavioral Correlates of Discrimination Events in College Students. Yasaman S. Sefidgar, Woosuk Seo, Kevin S. Kuehn, Tim Althoff, Anne Browning, Eve Ann Riskin, Paula S. Nurius, Anind K Dey, Jennifer Mankoff. CSCW 2019.

A bar plot sorted by number of reports, with about 100 reports of unfair treatment based on national origin, 90 based on intelligence, 70 based on gender, 60 based on appearance, 50 on age, 45 on sexual orientation, 35 on major, 30 on weight, 30 on height, 20 on income, 10 on disability, 10 on religion, and 10 on learning
Breakdown of 448 reports of unfair treatment by type. National, Orientation, and Learning refer to ancestry or national origin, sexual orientation, and learning disability respectively. See Table 3 for details of all categories. Participants were able to report multiple incidents of unfair treatment, possibly of different types, in each report. As described in the paper, we do not have data on unfair treatment based on race.
A heatplot showing sensor data collected by day in 5 categories: Activity, screen, locations, fitbit, and calls.
A heatplot showing compliance with sensor data collection. Sensor data availability for each day of the study is shown in terms of the number of participants whose data is available on a given day. Weeks of the study are marked on the horizontal axis while different sensors appear on the vertical axis. Important calendar dates (e.g., start / end of the quarter and exam periods) are highlighted as are the weeks of daily surveys. The brighter the cells for a sensor the larger the number of people contributing data for that sensor. Event-based sensors (e.g., calls) are not as bright as sensors continuously sampled (e.g., location) as expected. There was a technical issue in the data collection application in the middle of study, visible as a dark vertical line around the beginning of April.
A diagram showing compliance in surveys, organized by week of study. One line shows compliance in the large surveys given at pre, mid and post, which drops from 99% to 94% to 84%. The other line shows average weekly compliance in EMAs, which goes up in the second week to 93% but then drops slowly (with some variability) to 89%
Timeline and completion rate of pre, mid, and post questionnaires as well as EMA surveys. Y axis shows the completion rates and is narrowed to the range 50-100%. The completion rates of pre, mid, and post questionnaires are percentages of the original pool of 209 participants, whereas EMA completion rates are based on the 176 participants who completed the study. EMA completion rates are computed as the average completion rate of the surveys administered in a certain week of the study. School-related events (i.e., start and end of quarters as well as exam periods) are marked. Dark blue bars (Daily Survey) show the weeks when participants answered surveys every day, four times a day.
Barplot showing significance of morning screen use, calls, minutes asleep, time in bed, range of activities, number of steps, anxiety, depression, and frustration on the day before, of, and after unfair treatment. All but minutes asleep are significant at p=.05 or below on the day of discrimination, but this drops off after.
Patterns of feature significance from the day before to two days after the discrimination event. The shortest bars represent the highest significance values (e.g., depressed and frustrated on day 0; depressed on day 1; morning screen use on day 2). There are no significant differences the day before. Most short-term relationships exist on the day of the event, and a few appear on the next day (day 1). On the third day, one significant difference from the day of the event is repeated.

Lyme Disease’s Heterogeneous Impact

An ongoing, and very personal, thread of research that our group engages in (due to my own journey with Lyme Disease, which I occasionally blog about here) is research into the impacts of Lyme Disease and opportunities for helping to support patients with Lyme Disease. From a patient perspective, Lyme disease is as tough to deal with as many other better-known conditions [1].

Lyme disease can be difficult to navigate because of the disagreements about its diagnosis and the disease process. In addition, it is woefully underfunded and understudied, given that the CDC estimates around 300,000 new cases occur per year (similar to the rate of breast cancer) [2].

Bar chart showing that Lyme disease is woefully understudied.

As an HCI researcher, I started out trying to understand the relationship that Lyme Disease patients have with digital technologies. For example, we studied the impact of conflicting information online on patients [3] and how patients self-mediate the accessibility of online content [4]. It is my hope to eventually begin exploring technologies that can improve quality of life as well.

However, one thing patients need right away is peer-reviewed evidence about the impact that Lyme disease has on patients (e.g. [1]) and the value of treatment for patients (e.g. [2]). Here, as a technologist, the opportunity is to work with big data (thousands of patient reports) to unpack trends and model outcomes in new ways. That research is still in the formative stages, but in our most recent publication [2] we use straightforward subgroup analysis to demonstrate that treatment effectiveness is not adequately captured simply by looking at averages.
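
To make concrete why an average can hide a responder subgroup, here is a tiny, entirely synthetic Python example (the numbers are made up for illustration and are not our survey data):

```python
# Toy example: a sizeable subgroup reports clear improvement even though the
# mean change across all respondents looks unimpressive.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 3000
response = pd.Series(np.concatenate([
    rng.normal(+2.0, 1.0, n // 3),       # ~one third report substantial improvement
    rng.normal(-0.8, 1.0, n - n // 3),   # the rest report little change or worsening
]))

print("Mean change across everyone:", round(response.mean(), 2))          # near zero
print("Share reporting improvement of 1+ points:", round((response >= 1).mean(), 2))
```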

This chart shows that there is a large subgroup (about a third) of respondents to our survey who reported positive response to treatment, even though the average response was not positive.

There are many opportunities and much need for further data analysis here, including documenting the impact of differences such as gender on treatment (and access to treatment), developing interventions that can help patients to track symptoms, manage interactions with and between doctors, and navigate accessibility and access issues.

[1] Johnson, L., Wilcox, S., Mankoff, J., & Stricker, R. B. (2014). Severity of chronic Lyme disease compared to other chronic conditions: a quality of life survey. PeerJ, 2, e322.

[2] Johnson, L., Shapiro, M. & Mankoff, J. Removing the mask of average treatment effects in chronic Lyme Disease research using big data and subgroup analysis.

[3] Mankoff, J., Kuksenok, K., Kiesler, S., Rode, J. A., & Waldman, K. (2011, May). Competing online viewpoints and models of chronic illness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 589-598). ACM.

[4] Kuksenok, K., Brooks, M., & Mankoff, J. (2013, April). Accessible online content creation by end users. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 59-68). ACM.