Gender in Online Doctor Reviews

Dunivin Z, Zadunayski L, Baskota U, Siek K, Mankoff J. Gender, Soft Skills, and Patient Experience in Online Physician Reviews: A Large-Scale Text Analysis. Journal of Medical Internet Research. 2020;22(7):e14455.

This study examines 154,305 Google reviews of physicians, covering all medical specialties, from across the United States. Many patients use online physician reviews, but we need to understand the effects of gender on review content. Reviewer gender was inferred from names.

Reviews were coded for overall patient experience (negative or positive) by collapsing the 5-star rating scale, and for general categories (process, positive/negative soft skills). We estimated binary regression models to examine relationships among physician rating, patient experience themes, physician gender, and reviewer gender.
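To make the modeling step concrete, here is a minimal sketch, not the authors' code, of a binary logistic regression over coded review features; the file name and the columns (negative_experience, physician_gender, reviewer_gender, mentions_disrespect) are hypothetical stand-ins for the variables described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per coded review.
reviews = pd.read_csv("reviews.csv")

# Outcome: 1 = negative experience (1-2 stars), 0 = positive (4-5 stars).
model = smf.logit(
    "negative_experience ~ physician_gender + reviewer_gender"
    " + mentions_disrespect",
    data=reviews,
).fit()

print(model.summary())
# Exponentiated coefficients are odds ratios; an OR near 2 for female
# physicians would correspond to the disparity reported below.
print(np.exp(model.params))
```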

We found considerable bias against female physicians: reviews of female physicians were considerably more negative than those of male physicians (OR 1.99; P<.001). Critiques of female physicians more often focused on soft skills such as amicability, disrespect, and candor. Negative reviews typically used words such as “rude,” “arrogant,” and “condescending.”

Reviews written by female patients were also more likely to mention disrespect (OR 1.27, P<.001), but female patients were less likely to report disrespect from female doctors than expected.

Finally, patient experiences with the bureaucratic process, including issues such as the cost of care, also affected reviews. Overall, lower patient satisfaction was correlated with high physician dominance (e.g., poor information sharing or use of medical jargon).

Limitations of our work include the lack of definitive (or non-binary) information about gender, and the fact that we do not know the actual treatment outcomes for reviewers.

Even so, it seems critical that readers attend to who the reviewers are when reading online reviews. Review sites may also want to provide information about gender differences, control for gender when presenting composite ratings for physicians, and help users write less biased reviews. Reviewers should be aware of their own gender biases and check their reviews for them (http://slowe.github.io/genderbias/).

Living Disability Theory

A picture of a carved wooden cane in greens and blues

It was my honor this year to participate in an auto-ethnographic effort to explore accessibility research from a combination of personal and theoretical perspectives. In the process, and thanks to my amazing co-authors, I learned so much about myself, disability studies, ableism and accessibility.

Best Paper Award: Hofmann, M., Kasnitz, D., Mankoff, J., and Bennett, C. L. (2020) Living Disability Theory: Reflections on Access, Research, and Design. In Proceedings of ASSETS 2020, 4:1-4:13.

Abstract: Accessibility research and disability studies are intertwined fields focused on, respectively, building a world more inclusive of people with disability and understanding and elevating the lived experiences of disabled people. Accessibility research tends to focus on creating technology related to impairment, while disability studies focuses on understanding disability and advocating against ableist systems. Our paper presents a reflexive analysis of the experiences of three accessibility researchers and one disability studies scholar. We focus on moments when our disability was misunderstood, and on causes such as the expectation of clearly defined impairments. We derive three themes: ableism in research, oversimplification of disability, and human relationships around disability. From these themes, we suggest paths toward more strongly integrating disability studies perspectives and disabled people into accessibility research.

Han Zhang

Han is a PhD student in the Paul G. Allen School of Computer Science & Engineering. She is advised by Prof. Jennifer Mankoff (Computer Science) and Prof. Anind K. Dey (Information School).

Han’s research interests span the interdisciplinary areas of human-computer interaction, human-centered machine learning, and fairness, accountability, transparency, and ethics in AI (FATE). She is passionate about designing responsible technologies that improve human performance and wellbeing. Her research focuses on uncovering nuanced patterns in human performance and behavior through explainable machine learning and data science. She also studies human needs and perceptions of AI-learned patterns, informing the design and development of interactive tools that support people in proactively shaping their behaviors.

If you share similar research interests with her or simply want to have a chat, please feel free to reach out via email: micohan [at] cs [dot] washington [dot] edu.

Taylor Gotfrid

Taylor is a second-year PhD student in the Paul G. Allen School of Computer Science and Engineering. She is advised by Professor Jennifer Mankoff. In 2017, she graduated from the University of California, Santa Cruz with bachelor’s degrees in Computer Engineering and Cognitive Science. She then earned her master’s in Human-Computer Interaction from the Rochester Institute of Technology in 2019.

Her research focuses on making fabrication more accessible to people with disabilities. Her prior research explored how to make the e-textile circuit development process more accessible for adults with intellectual disabilities. Her recent projects focus on understanding the kinds of difficulties that people with disabilities face while knitting, and on developing technologies to help users overcome some of these difficulties.

Kelly Avery Mack

Avery is a PhD student in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. They are advised by Prof. Jennifer Mankoff. They completed their bachelor’s degree in Computer Science at the University of Illinois at Urbana-Champaign in 2019, where they were advised by Prof. Aditya Parameswaran and Prof. Karrie Karahalios. They are an NSF Fellow and an ARCS Scholar.

Their research focuses on applying computer science to create or improve technologies that serve people with disabilities. Their current work focuses on 1) representation of people with disabilities in digital technologies like avatars and generative AI tools, and 2) how to support people with fluctuating access needs like neurodiverse people and people with chronic or mental health conditions. 

Visit Avery’s homepage at https://kmack3.github.io.

Detecting Loneliness

Feelings of loneliness are associated with poor physical and mental health. Detection of loneliness through passive sensing on personal devices can lead to the development of interventions aimed at decreasing rates of loneliness.

Doryab, Afsaneh, et al. “Identifying Behavioral Phenotypes of Loneliness and Social Isolation with Passive Sensing: Statistical Analysis, Data Mining and Machine Learning of Smartphone and Fitbit Data.” JMIR mHealth and uHealth 7.7 (2019): e13209.

Objective: The aim of this study was to explore the potential of using passive sensing to infer levels of loneliness and to identify the corresponding behavioral patterns.

Methods: Data were collected from the smartphones and Fitbits (Flex 2) of 160 college students over a semester. Participants completed the University of California, Los Angeles (UCLA) loneliness questionnaire at the beginning and end of the semester. For classification purposes, the scores were categorized into high (questionnaire score > 40) and low (score ≤ 40) levels of loneliness. Daily features were extracted from both devices to capture activity and mobility, communication and phone usage, and sleep behaviors. The features were then averaged to generate semester-level features. We used 3 analytic methods: (1) statistical analysis to provide an overview of loneliness in college students, (2) data mining using the Apriori algorithm to extract behavior patterns associated with loneliness, and (3) machine learning classification to infer the level of loneliness and the change in levels of loneliness, using an ensemble of gradient boosting and logistic regression algorithms with feature selection in a leave-one-student-out cross-validation manner.
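As an illustration of that pipeline, here is a minimal sketch, not the study's actual code, of leave-one-student-out classification with feature selection and a soft-voting ensemble of gradient boosting and logistic regression; the file name, column names, and the k=20 feature count are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical data: one row of semester-level features per student.
df = pd.read_csv("semester_features.csv")
X = df.drop(columns=["student_id", "lonely"])
y = df["lonely"]            # 1 if UCLA score > 40, else 0
groups = df["student_id"]   # one group per student

clf = make_pipeline(
    SelectKBest(f_classif, k=20),  # feature selection step
    VotingClassifier(
        estimators=[
            ("gb", GradientBoostingClassifier()),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="soft",  # ensemble by averaging predicted probabilities
    ),
)

scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"leave-one-student-out accuracy: {scores.mean():.3f}")
```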

Results: The average loneliness score from the presurveys and postsurveys was above 43 (presurvey SD 9.4 and postsurvey SD 10.4), and the majority of participants fell into the high loneliness category (scores above 40) with 63.8% (102/160) in the presurvey and 58.8% (94/160) in the postsurvey. Scores greater than 1 standard deviation above the mean were observed in 12.5% (20/160) of the participants in both pre- and postsurvey scores. The majority of scores, however, fell between 1 standard deviation below and above the mean (pre=66.9% [107/160] and post=73.1% [117/160]).

Our machine learning pipeline achieved an accuracy of 80.2% in detecting the binary level of loneliness and an 88.4% accuracy in detecting change in the loneliness level. The mining of associations between classifier-selected behavioral features and loneliness indicated that compared with students with low loneliness, students with high levels of loneliness were spending less time outside of campus during evening hours on weekends and spending less time in places for social events in the evening on weekdays (support=17% and confidence=92%). The analysis also indicated that more activity and less sedentary behavior, especially in the evening, was associated with a decrease in levels of loneliness from the beginning of the semester to the end of it (support=31% and confidence=92%).
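For readers unfamiliar with the support and confidence figures quoted above, the following hedged sketch shows how such association patterns can be mined with the Apriori algorithm (here via mlxtend, which the study may not have used); the file name and binarized feature names are hypothetical.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori

# Hypothetical data: one row per student, each column a binarized
# behavior or label (True/False).
items = pd.read_csv("binarized_features.csv").astype(bool)

# Frequent itemsets at the support level reported above (17%).
frequent = apriori(items, min_support=0.17, use_colnames=True)
print(frequent.sort_values("support", ascending=False).head())

# Confidence of a rule A -> B is support(A and B) / support(A), e.g.:
a = items["low_evening_off_campus_time"]  # hypothetical feature
b = items["high_loneliness"]              # hypothetical label
print(f"confidence = {(a & b).mean() / a.mean():.2f}")
```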

Conclusions: Passive sensing has the potential for detecting loneliness in college students and identifying the associated behavioral patterns. These findings highlight intervention opportunities through mobile technology to reduce the impact of loneliness on individuals’ health and well-being.

News: Smartphones and Fitbits can spot loneliness in its tracks, Science 101

The Limits of Expert Text Entry Speed

Improving mobile keyboard typing speed becomes more valuable as more tasks move to mobile settings. Autocorrect is a powerful way to reduce the time it takes to manually fix typing errors, which increases typing speed. However, recent user studies of autocorrect uncovered an unexplored side effect: participants’ aversion to typing errors despite autocorrect. We present the first computational model of typing on keyboards with autocorrect, which enables precise study of expert typists’ aversion to typing errors on such keyboards. Unlike empirical typing studies that last days, our model evaluates the effects of typists’ aversion to typing errors for any autocorrect accuracy in seconds. We show that typists’ aversion to typing errors adds a self-imposed limit on upper-bound typing speeds, which decreases the value of highly accurate autocorrect. Our findings motivate future designs of keyboards with autocorrect that reduce typists’ aversion to typing errors to increase typing speeds.
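The paper's model is not reproduced here, but a back-of-the-envelope sketch (our own simplification, with purely illustrative parameters) conveys the core trade-off: a typist who slows down to avoid errors caps their own speed, which shrinks the benefit of highly accurate autocorrect.

```python
def expected_wpm(accuracy: float, aversion: float,
                 base_wpm: float = 60.0,
                 error_rate: float = 0.05,
                 fix_penalty_sec: float = 2.0) -> float:
    """Illustrative expected words per minute for a typist who slows
    down (aversion in [0, 1]) to avoid errors that autocorrect
    (accuracy in [0, 1]) may miss."""
    wpm = base_wpm * (1.0 - 0.5 * aversion)      # slowing down to be careful
    errors_per_word = error_rate * (1.0 - aversion)
    missed = errors_per_word * (1.0 - accuracy)  # errors autocorrect misses
    sec_per_word = 60.0 / wpm + missed * fix_penalty_sec
    return 60.0 / sec_per_word

# Even perfect autocorrect cannot recover speed lost to error aversion:
for aversion in (0.0, 0.5, 1.0):
    print(aversion, round(expected_wpm(accuracy=1.0, aversion=aversion), 1))
```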

The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect Nikola Banovic, Ticha Sethapakdi, Yasasvi Hari, Anind K. Dey, Jennifer Mankoff. Mobile HCI 2019.

A picture of a Samsung phone. The screen says: Block 2, Trial 6 of 10, “this camera takes nice photographs.” The user has begun typing, with errors: “this camera tankes l”. Error correction offers “tankes,” “tankers,” and “takes,” and a soft keyboard is shown below.

An example mobile device with a soft keyboard: A) text entry area, which in our study contained study progress, the current phrase to transcribe, and an area for transcribed characters, B) automatically suggested words, and C) a miniQWERTY soft keyboard with autocorrect.

A bar plot showing typing speed (WPM, y axis) against accuracy (0 to 1). The bars start at 32 WPM (for 0 accuracy) and go up to approximately 32 WPM (for accuracy of 1).
Our model estimated expected mean typing speeds (lines) for different levels of typing error rate aversion (e) compared to mean empirical typing speed with automatic correction and suggestion (bar plot) in WPM across Accuracy. Error bars represent 95% confidence intervals.
Four bar plots showing error rates in the uncorrected, corrected, autocorrected, and manually corrected conditions. Error rates for the uncorrected condition range from approximately 0 to 0.05 as accuracy increases; error rates for the corrected condition go from .10 to .005 as accuracy goes from 0 to 1; error rates for the autocorrected condition go from 0 to about .1 as accuracy goes from 0 to 1; and error rates for the manual condition are variable but all below 0.05 as accuracy goes from 0 to 1.
Median empirical error rates across Accuracy in session 3 with automated correction and suggestion. Error bars represent minimum and maximum error rate values, and dots represent outliers.

KnitPick: Manipulating Texture

Knitting creates complex, soft objects with unique and controllable texture properties that can be used to create interactive objects. However, little work addresses the challenges of using knitted textures. We present KnitPick: a pipeline for interpreting pre-existing hand-knitting texture patterns into a directed-graph representation of knittable structures (KnitGraphs), which can be output to machine- and hand-knitting instructions. Using KnitPick, we contribute a measured and photographed data set of 300 knitted textures. Based on findings from this data set, we contribute two algorithms for manipulating KnitGraphs. KnitCarving shapes a graph while respecting a texture, and KnitPatching combines graphs with disparate textures while maintaining a consistent shape. Using these algorithms and the textures in our data set, we created three knitting-based interactions: roll, tug, and slide. KnitPick is the first system to bridge the gap between hand- and machine-knitting when creating complex knitted textures.
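As a rough illustration of what a directed-graph representation of knittable structures might look like (this is a sketch, not the KnitPick implementation), the following builds a plain stockinette graph with networkx; the node and edge attributes are hypothetical.

```python
import networkx as nx

def make_knitgraph(rows: int, cols: int) -> nx.DiGraph:
    """Build a plain stockinette graph: one node per loop, with a
    'wale' edge from each loop to the loop pulled through it on the
    next row."""
    g = nx.DiGraph()
    for r in range(rows):
        for c in range(cols):
            g.add_node((r, c), stitch="knit")
            if r > 0:
                g.add_edge((r - 1, c), (r, c), kind="wale")
    return g

g = make_knitgraph(rows=4, cols=6)
print(g.number_of_nodes(), g.number_of_edges())  # 24 loops, 18 wale edges
```

A carving operation in the spirit of KnitCarving could then be framed as removing a low-cost column of nodes and reconnecting the wale edges around it.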

KnitPick: Programming and Modifying Complex Knitted Textures for Machine and Hand Knitting. Megan Hofmann, Lea Albaugh, Ticha Sethapakdi, Jessica Hodgins, Scott E. Hudson, James McCann, Jennifer Mankoff. UIST 2019. The KnitPick data set can be found here.

A picture of a KnitSpeak file, which is compiled into a KnitGraph (which can be modified using carving and patching) and then compiled to knitout, which can be run on a knitting machine. Below the graph is a picture of different sorts of lace textures supported by KnitPick.
KnitPick converts KnitSpeak into KnitGraphs, which can be carved, patched, and output to knitted results.
A photograph of the table with our data measurement setup, along with piles of patches that are about to be measured and have recently been measured. One patch is attached to the rods and clips used for stretching.
Data set measurement setup, including camera, scale, and stretching rig
A series of five images, each progressively skinnier than the previous. Each image is a knitted texture with 4 stars on it. They are labeled (a) original swatch (b) 6 columns removed (c) 9 columns removed (d) 12 columns removed (e) 15 columns removed
The above images show a progression from the original Star texture to the same texture with 15 columns removed by texture carving. These photographs were shown to crowd-workers who rated their similarity. Even with a whole repetition width removed from the Stars, the pattern remains a recognizable star pattern.

Passively-sensing Discrimination

See the UW News article featuring this study!

A deeper understanding of how discrimination impacts psychological health and well-being of students would allow us to better protect individuals at risk and support those who encounter discrimination. While the link between discrimination and diminished psychological and physical well-being is well established, existing research largely focuses on chronic discrimination and long-term outcomes. A better understanding of the short-term behavioral correlates of discrimination events could help us to concretely quantify the experience, which in turn could support policy and intervention design. In this paper we specifically examine, for the first time, what behaviors change and in what ways in relation to discrimination. We use actively-reported and passively-measured markers of health and well-being in a sample of 209 first-year college students over the course of two academic quarters. We examine changes in indicators of psychological state in relation to reports of unfair treatment in terms of five categories of behaviors: physical activity, phone usage, social interaction, mobility, and sleep. We find that students who encounter unfair treatment become more physically active, interact more with their phone in the morning, make more calls in the evening, and spend less time in bed on the day of the event. Some of these patterns continue the next day.
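To make this kind of analysis concrete, here is a minimal sketch, not the paper's statistical method, that pairs each student's mean behavior on days they reported unfair treatment against their other days and applies a Wilcoxon signed-rank test; the file name and columns (student_id, reported_unfair, steps) are hypothetical.

```python
import pandas as pd
from scipy.stats import wilcoxon

# Hypothetical data: one row per student-day.
daily = pd.read_csv("daily_features.csv")

# Mean steps per student on event days (True) vs. all other days (False).
means = daily.groupby(["student_id", "reported_unfair"])["steps"].mean()
means = means.unstack().dropna()  # keep students with both kinds of days

stat, p = wilcoxon(means[True], means[False])
print(f"steps on event days vs. other days: p = {p:.4f}")
```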

Passively-sensed Behavioral Correlates of Discrimination Events in College Students. Yasaman S. Sefidgar, Woosuk Seo, Kevin S. Kuehn, Tim Althoff, Anne Browning, Eve Ann Riskin, Paula S. Nurius, Anind K. Dey, Jennifer Mankoff. CSCW 2019.

A bar plot sorted by number of reports, with about 100 reports of unfair treatment based on national origin, 90 based on intelligence, 70 based on gender, 60 based on appearance, 50 on age, 45 on sexual orientation, 35 on major, 30 on weight, 30 on height, 20 on income, 10 on disability, 10 on religion, and 10 on learning.
Breakdown of 448 reports of unfair treatment by type. National, Orientation, and Learning refer to ancestry or national origin, sexual orientation, and learning disability respectively. See Table 3 for details of all categories. Participants were able to report multiple incidents of unfair treatment, possibly of different types, in each report. As described in the paper, we do not have data on unfair treatment based on race.
A heatplot showing sensor data collected by day in 5 categories: activity, screen, locations, Fitbit, and calls.
A heatplot showing compliance with sensor data collection. Sensor data availability for each day of the study is shown in terms of the number of participants whose data is available on a given day. Weeks of the study are marked on the horizontal axis while different sensors appear on the vertical axis. Important calendar dates (e.g., start / end of the quarter and exam periods) are highlighted as are the weeks of daily surveys. The brighter the cells for a sensor the larger the number of people contributing data for that sensor. Event-based sensors (e.g., calls) are not as bright as sensors continuously sampled (e.g., location) as expected. There was a technical issue in the data collection application in the middle of study, visible as a dark vertical line around the beginning of April.
A diagram showing compliance with surveys, organized by week of study. One line shows compliance with the large surveys given at pre, mid, and post, which drops from 99% to 94% to 84%. The other line shows average weekly compliance with EMAs, which rises to 93% in the second week and then drops slowly (with some variability) to 89%.
Timeline and completion rate of pre, mid, and post questionnaires as well as EMA surveys. The Y axis shows completion rates and is narrowed to the range 50-100%. The completion rates of the pre, mid, and post questionnaires are percentages of the original pool of 209 participants, whereas EMA completion rates are based on the 176 participants who completed the study. EMA completion rates are computed as the average completion rate of the surveys administered in a given week of the study. School-related events (i.e., start and end of quarters as well as exam periods) are marked. Dark blue bars (Daily Survey) show the weeks when participants answered surveys every day, four times a day.
Bar plot showing significance of morning screen use, calls, minutes asleep, time in bed, range of activities, number of steps, anxiety, depression, and frustration on the day before, the day of, and the day after unfair treatment. All but minutes asleep are significant at p=.05 or below on the day of discrimination, but this drops off afterward.
Patterns of feature significance from the day before to two days after the discrimination event. The shortest bars represent the highest significance values (e.g., depressed and frustrated on day 0; depressed on day 1; morning screen use on day 2). There are no significant differences the day before. Most short-term relationships exist on the day of the event, and a few appear on the next day (day 1). On the third day, one significant difference, repeated from the first day, is observed.

Digital Fabrication in Medical Practice

Maker culture in health care is on the rise with the rapid adoption of consumer-grade fabrication technologies. However, little is known about the activity and resources involved in prototyping medical devices to improve patient care. In this paper, we characterize medical making based on a qualitative study of medical stakeholder engagement in physical prototyping (making) experiences. We examine perspectives from diverse stakeholders, including clinicians, engineers, administrators, and medical researchers. Through 18 semi-structured interviews with medical-makers in the US and Canada, we analyze making activity in medical settings. We find that medical-makers share strategies to address risks, define labor roles, and acquire resources by adapting traditional structures or creating new infrastructures. Our findings outline how medical-makers mitigate risks for patient safety, collaborate with local and global stakeholder networks, and overcome constraints of co-location and material practices. We recommend a clinician-aided software system, partially-open repositories, and a collaborative skill-share social network to extend their strategies in support of medical making.

“Point-of-Care Manufacturing”: Maker Perspectives on Digital Fabrication in Medical Practice. Udaya Lakshmi, Megan Hofmann, Stephanie Valencia, Lauren Wilcox, Jennifer Mankoff and Rosa Arriaga. CSCW 2019. To appear.

A Venn diagram showing the domains of expertise of those we interviewed, including people from hospitals, universities, non-profits, VA networks, private practices, and government. We interviewed clinicians and facilitators in each of these domains, and there was a great deal of overlap, with participants falling into multiple categories. For example, one participant was in a VA network and in private practice, while another was at a university and also a non-profit.