Lyme Disease’s Heterogeneous Impact

An ongoing and very personal thread of research in our group (due to my own journey with Lyme disease, which I occasionally blog about here) concerns the impacts of Lyme disease and opportunities to better support patients. From a patient perspective, Lyme disease is as tough to deal with as many better-known conditions [1].

Lyme disease can be difficult to navigate because of the disagreements about its diagnosis and the disease process. In addition, it is woefully underfunded and understudied, given that the CDC estimates around 300,000 new cases occur per year (similar to the rate of breast cancer) [2].

Bar chart showing that Lyme disease is woefully understudied.

As an HCI researcher, I started out trying to understand the relationship that Lyme Disease patients have with digital technologies. For example, we studied the impact of conflicting information online on patients [3] and how patients self-mediate the accessibility of online content [4]. It is my hope to eventually begin exploring technologies that can improve quality of life as well.

However, one thing patients need right away is peer-reviewed evidence about the impact that Lyme disease has on patients (e.g., [1]) and the value of treatment for patients (e.g., [2]). Here, as a technologist, the opportunity is to work with big data (thousands of patient reports) to unpack trends and model outcomes in new ways. That research is still in its formative stages, but in our most recent publication [2] we use straightforward subgroup analysis to demonstrate that treatment effectiveness is not adequately captured by looking at averages alone.

This chart shows that a large subgroup (about a third) of respondents to our survey reported a positive response to treatment, even though the average response was not positive.
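To make the idea concrete, here is a minimal sketch of this kind of subgroup analysis using synthetic numbers (not our survey data): the overall mean response looks unfavorable even though a sizable subgroup improves.

```python
import numpy as np
import pandas as pd

# Synthetic illustration only (not our survey data): change > 0 means the
# respondent improved after treatment, change < 0 means they worsened.
rng = np.random.default_rng(0)
responses = pd.DataFrame({
    "change": np.concatenate([
        rng.normal(+2.0, 0.5, 350),   # a large subgroup that improves
        rng.normal(-1.5, 0.5, 650),   # a larger subgroup that does not
    ])
})

# Looking only at the average suggests treatment does not help...
print("mean change:", responses["change"].mean())

# ...but a simple subgroup split tells a different story.
improved = responses[responses["change"] > 1.0]
print("fraction with substantial improvement:", len(improved) / len(responses))
print("mean change within that subgroup:", improved["change"].mean())
```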

There are many opportunities and much need for further data analysis here, including documenting the impact of differences such as gender on treatment (and access to treatment) and developing interventions that can help patients track symptoms, manage interactions with and between doctors, and navigate accessibility and access issues.

[1] Johnson, L., Wilcox, S., Mankoff, J., & Stricker, R. B. (2014). Severity of chronic Lyme disease compared to other chronic conditions: a quality of life survey. PeerJ, 2, e322.

[2] Johnson, L., Shapiro, M. & Mankoff, J. Removing the mask of average treatment effects in chronic Lyme Disease research using big data and subgroup analysis.

[3] Mankoff, J., Kuksenok, K., Kiesler, S., Rode, J. A., & Waldman, K. (2011, May). Competing online viewpoints and models of chronic illness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 589-598). ACM.

[4] Kuksenok, K., Brooks, M., & Mankoff, J. (2013, April). Accessible online content creation by end users. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 59-68). ACM.

 

Understanding gender equity in author order assignment

Academic success and promotion are heavily influenced by publication record. In many fields, including computer science, multi-author papers are the norm. Evidence from other fields shows that norms for ordering author names can influence the assignment of credit. We interviewed 38 students and faculty in human-computer interaction (HCI) and machine learning (ML) at two institutions to determine factors related to the assignment of author order in collaborative publication in computer science. We found that women were concerned with author order earlier in the process:

Our female interviewees reported raising author order in discussion earlier in the process than men did.

Interview outcomes informed metrics for our bibliometric analysis of gender and collaboration in papers published between 1996 and 2016 in three top HCI and ML conferences. We found expected results overall — being the most junior author increased the likelihood of first authorship, while being the most senior author increased the likelihood of last authorship. However, these effects disappeared or even reversed for women authors:

Comparison of regression weights for author rank (blue) and author rank crossed with gender (orange). The regression predicted author position (first, middle, or last).
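For readers curious what such a model can look like, below is a hedged sketch (illustrative variable names, simplified to a binary outcome for first authorship, and synthetic placeholder data rather than our corpus) of a regression with an author-rank by gender interaction like the one summarized in the figure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative schema: one row per (author, paper). is_first marks first
# authorship, rank is seniority within the author list (0 = most junior),
# and is_woman is inferred gender. Values here are random placeholders.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "is_first": rng.integers(0, 2, 500),
    "rank": rng.integers(0, 5, 500),
    "is_woman": rng.integers(0, 2, 500),
})

# Main effect of rank plus a rank-by-gender interaction. In our findings,
# the interaction term is what weakens or reverses the rank effect for women.
model = smf.logit("is_first ~ rank + is_woman + rank:is_woman", data=df).fit()
print(model.summary())
```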

Based on our findings, we make recommendations for assignment of credit in multi-author papers and interpretation of author order, particularly with respect to how these factors affect women.

3D Printed Wireless Analytics

Wireless Analytics for 3D Printed Objects: Vikram Iyer, Justin Chan, Ian Culhane, Jennifer Mankoff, Shyam Gollakota. UIST, Oct. 2018. [PDF]

We created a wireless physical analytics system that works with commonly available conductive plastic filaments. Our design enables a variety of data capture and wireless physical analytics capabilities for 3D printed objects, without the need for electronics.

We make three key contributions:

(1) We demonstrate room scale backscatter communication and sensing using conductive plastic filaments.

(2) We introduce the first backscatter designs that detect a variety of bi-directional motions and support linear and rotational movements. An example is shown below.

(3) As shown in the image below, we enable data capture and storage for later retrieval when outside the range of the wireless coverage, using a ratchet and gear system.

We validate our approach by wirelessly detecting the opening and closing of a pill bottle, capturing the joint angles of a 3D printed e-NABLE prosthetic hand, and building an insulin pen that stores information to track its use outside the range of a wireless receiver.
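The sketch below shows one way bi-directional motion can be decoded from two alternating contact signals, in the spirit of quadrature encoding; it is an illustration of the sensing idea, not the exact decoder used in this system (which infers contact events from backscattered wireless signals).

```python
# Hedged sketch: infer net rotation from two binary contact streams (A, B)
# produced as gear teeth touch two offset contacts. Quadrature-style decoding
# recovers both the amount and the direction of motion.

def decode_steps(a_samples, b_samples):
    """Return net quarter-steps: +1 per step one way, -1 the other way."""
    cycle = [(0, 0), (1, 0), (1, 1), (0, 1)]  # one full forward cycle
    position = 0
    prev = (a_samples[0], b_samples[0])
    for cur in zip(a_samples[1:], b_samples[1:]):
        if cur == prev:
            continue
        i = cycle.index(prev)
        if cur == cycle[(i + 1) % 4]:
            position += 1   # moved forward in the cycle
        elif cur == cycle[(i - 1) % 4]:
            position -= 1   # moved backward
        prev = cur
    return position

# Example: eight quarter-steps forward, then four backward -> prints 4.
a = [0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0]
b = [0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0]
print(decode_steps(a, b))
```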

Selected Media

6 of the most amazing things that were 3D-printed in 2018 (Erin Winick, MIT Technology Review, 12/24/2018)

Researchers develop 3D printed objects that can track and store how they are used (Sarah McQuate, UW press release, 10/9/2018)

Assistive Objects Can Track Their Own Use (Elizabeth Montalbano, Design News, 11/14/2018)

People

Students

Vikram Iyer
Justin Chan
Ian Culhane

Faculty

Jennifer Mankoff
Shyam Gollakota

Contact: printedanalytics@cs.washington.edu

Interactiles

The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. In addition, the investigation of tactile solutions for large touchscreens may not address the challenges on mobile devices. We therefore present Interactiles, a low-cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate its improvement in task performance and the user experience it enables with people who are blind or have low vision.

Xiaoyi Zhang, Tracy Tran, Yuqian Sun, Ian Culhane, Shobhit Jain, James Fogarty, Jennifer Mankoff: Interactiles: 3D Printed Tactile Interfaces to Enhance Mobile Touchscreen Accessibility. ASSETS 2018, To Appear. [PDF]

Figure 2. Floating windows created for number pad (left), scrollbar (right) and control button (right bottom). The windows can be transparent; we use colors for demonstration.
Figure 4. Average task completion times of all tasks in the study.
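As a purely conceptual sketch (not the Interactiles implementation, which works through Android floating windows and the platform accessibility service), the core software job is to map a touch under a physical overlay to the control it covers:

```python
# Conceptual sketch only: map a touch under a 3x4 physical number-pad
# overlay to the key it covers. The real system performs this kind of
# mapping via floating windows and the built-in screen reader.

PAD_LAYOUT = [["1", "2", "3"],
              ["4", "5", "6"],
              ["7", "8", "9"],
              ["*", "0", "#"]]

def key_at(x, y, pad_left, pad_top, key_w, key_h):
    """Return the pad label under screen point (x, y), or None if outside."""
    col = int((x - pad_left) // key_w)
    row = int((y - pad_top) // key_h)
    if 0 <= row < len(PAD_LAYOUT) and 0 <= col < len(PAD_LAYOUT[0]):
        return PAD_LAYOUT[row][col]
    return None

# Example: a touch at (250, 640) on a pad anchored at (100, 500)
# with 100x100 pixel keys lands on "5".
print(key_at(250, 640, pad_left=100, pad_top=500, key_w=100, key_h=100))
```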

EDigs

Jennifer Mankoff, Dimeji Onafuwa, Kirstin Early, Nidhi Vyas, Vikram Kamath: Understanding the Needs of Prospective Tenants. COMPASS 2018: 36:1-36:10

EDigs is a research group at Carnegie Mellon University working on sustainability. Our research focuses on helping people find the perfect rental through machine learning and user research.

We sometimes study how our members use EDigs in order to learn how to build software support for successful social communities.

Screenshot of edigs.org showing a mobile app, Facebook and Twitter feeds, and information about EDigs.

Nonvisual Interaction Techniques at the Keyboard Surface

Rushil Khurana, Duncan McIsaac, Elliot Lockerman, Jennifer Mankoff: Nonvisual Interaction Techniques at the Keyboard Surface. CHI 2018, To Appear

A table (shown on screen). Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the position of the cell and its content are read aloud.

Web user interfaces today leverage many common GUI design patterns, including navigation bars and menus (hierarchical structure), tabular content presentation, and scrolling. These visual-spatial cues enhance the interaction experience of sighted users. However, the linear nature of screen translation tools currently available to blind users makes it difficult to understand or navigate these structures. We introduce Spatial Region Interaction Techniques (SPRITEs) for nonvisual access: a novel method for navigating two-dimensional structures using the keyboard surface. SPRITEs 1) preserve spatial layout, 2) enable bimanual interaction, and 3) improve the end user experience. We used a series of design probes to explore different methods for keyboard surface interaction. Our evaluation of SPRITEs shows that three times as many participants were able to complete spatial tasks with SPRITEs as with their preferred current technology.
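A minimal sketch of the keyboard-surface mapping described above (key assignments and the speech call are illustrative stand-ins, not the SPRITEs codebase): the number row selects a column, the leftmost column of keys selects a row, and each selection is spoken aloud.

```python
# Hedged sketch: the number row maps to table columns and the leftmost
# key column maps to table rows. speak() stands in for text-to-speech.

NUMBER_ROW = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "0"]
LEFT_COLUMN = ["q", "a", "z"]  # assumed top-to-bottom row keys

def speak(text):
    print("SPEAK:", text)  # stand-in for a text-to-speech call

class TableNavigator:
    def __init__(self, table):
        self.table = table          # list of rows, each a list of cell strings
        self.row, self.col = 0, 0   # top-left cell selected by default
        self.announce()

    def announce(self):
        speak(f"row {self.row + 1}, column {self.col + 1}: "
              f"{self.table[self.row][self.col]}")

    def on_key(self, key):
        if key in NUMBER_ROW and NUMBER_ROW.index(key) < len(self.table[0]):
            self.col = NUMBER_ROW.index(key)
        elif key in LEFT_COLUMN and LEFT_COLUMN.index(key) < len(self.table):
            self.row = LEFT_COLUMN.index(key)
        else:
            return
        self.announce()

nav = TableNavigator([["Name", "Age"], ["Jill", "34"], ["Sam", "29"]])
nav.on_key("2")  # right hand selects the second column
nav.on_key("a")  # left hand selects the second row
```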

Talk [Slides]:

Sample Press:

KOMO Radio | New screen reader method helps blind, low-vision users browse complex web pages

Device helps blind, low-vision users better browse web pages (Allen Cone)

Graph showing task completion rates for different kinds of tasks in our user study
A user is searching a table (shown on screen) for the word ‘Jill’. Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the number of occurrences of the search query in the selected column or row is read aloud. When the query is found, the position and content of the cell are read aloud.

Expressing and Reusing Design Intent in 3D Models

Megan K. Hofmann, Gabriella Han, Scott E. Hudson, Jennifer Mankoff. Greater Than the Sum of Its PARTs: Expressing and Reusing Design Intent in 3D Models. CHI 2018, To Appear.

With the increasing popularity of consumer-grade 3D printing, many people are creating, and even more using, objects shared on sites such as Thingiverse. However, our formative study of 962 Thingiverse models shows a lack of re-use of models, perhaps due to the advanced skills needed for 3D modeling. An end-user programming perspective on 3D modeling is needed. Our framework (PARTs) empowers amateur modelers to graphically specify design intent through geometry. PARTs includes a GUI, a scripting API, and an exemplar library of assertions, which test design expectations, and integrators, which act on intent to create geometry. PARTs lets modelers integrate advanced, model-specific functionality into designs so that they can be re-used and extended without programming. In two workshops, we show that PARTs helps people create 3D printable models and modify existing models more easily than a standard tool does.

Picture of 3D models and a printout
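To illustrate the two concepts at the heart of PARTs, here is a toy, hypothetical sketch of an assertion (testing a design expectation) and an integrator (acting on intent to produce geometry); the real system exposes these through a GUI and scripting API for 3D modeling, not this Python interface.

```python
# Conceptual sketch only, not the PARTs API: a toy parametric hook model
# with one assertion and one integrator attached to it.

class HookModel:
    def __init__(self, hook_width_mm, rod_diameter_mm):
        self.hook_width_mm = hook_width_mm
        self.rod_diameter_mm = rod_diameter_mm

def assert_fits_rod(model, clearance_mm=0.5):
    """Assertion: the hook opening must clear the rod it hangs on."""
    needed = model.rod_diameter_mm + clearance_mm
    if model.hook_width_mm < needed:
        raise ValueError(
            f"hook opening {model.hook_width_mm} mm is smaller than the "
            f"{needed} mm needed to fit the rod")

def integrate_label(model, text):
    """Integrator: act on intent by producing extra geometry (a label)."""
    # A real modeling tool would emit label geometry sized to the hook.
    return {"type": "embossed_label", "text": text,
            "width_mm": model.hook_width_mm}

hook = HookModel(hook_width_mm=26, rod_diameter_mm=25)
assert_fits_rod(hook)                      # passes: 26 >= 25.5
print(integrate_label(hook, "kitchen"))    # description of added geometry
```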

Hypertension recognition through overnight Heart Rate Variability sensing

Ni, H., Cho, S., Mankoff, J., & Yang, J. (2017). Automated recognition of hypertension through overnight continuous HRV monitoring. Journal of Ambient Intelligence and Humanized Computing, 1-13.

Hypertension (chronically high blood pressure) is a common disease. Since hypertension often has no warning signs or symptoms, many cases remain undiagnosed. Untreated or sub-optimally controlled hypertension may lead to cardiovascular, cerebrovascular, and renal morbidity and mortality, along with dysfunction of the autonomic nervous system. Therefore, it could be quite valuable to predict or provide early warnings about hypertension. Heart rate variability (HRV) analysis has emerged as the most valuable non-invasive test for assessing autonomic nervous system function, and it has great potential for detecting hypertension. However, HRV indicators may be subtle and present at random, resulting in two challenges: how to support continuous monitoring for hours at a time while remaining unobtrusive, and how to efficiently analyze the collected data to minimize data collection and user burden. In this paper, we present a machine learning-based approach for detecting hypertension, using a waist belt continuous sensing system that is worn overnight. Using data from 24 hypertension patients and 24 healthy controls, we demonstrate that our approach can differentiate hypertension patients from healthy controls with 93.33% accuracy. This is a promising approach for performing hypertension classification in the field, and we plan to improve its performance using data from a larger number of hypertensive subjects monitored with the proposed pervasive sensors.
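The sketch below illustrates the shape of such a pipeline with standard time-domain HRV features and an off-the-shelf classifier; the feature choices, classifier, and synthetic data are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch: extract time-domain HRV features from overnight RR
# intervals, then cross-validate a binary hypertension classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hrv_features(rr_ms):
    """Time-domain HRV features from a sequence of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return [
        rr.mean(),                     # mean RR interval
        rr.std(ddof=1),                # SDNN
        np.sqrt(np.mean(diffs ** 2)),  # RMSSD
        np.mean(np.abs(diffs) > 50),   # pNN50
    ]

# X: one overnight recording per subject; y: 1 = hypertensive, 0 = control.
# Synthetic recordings and labels for illustration only.
rng = np.random.default_rng(1)
X = np.array([hrv_features(rng.normal(800, 40, 5000)) for _ in range(48)])
y = np.array([1] * 24 + [0] * 24)

clf = SVC(kernel="rbf")
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```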

The Tangible Desktop

Mark S. Baldwin, Gillian R. Hayes, Oliver L. Haimson, Jennifer Mankoff, Scott E. Hudson: The Tangible Desktop: A Multimodal Approach to Nonvisual Computing. TACCESS 10(3): 9:1-9:28 (2017)

Audio-only interfaces, facilitated through text-to-speech screen reading software, have been the primary mode of computer interaction for blind and low-vision computer users for more than four decades. During this time, the advances that have made visual interfaces faster and easier to use, from direct manipulation to skeuomorphic design, have not been paralleled in nonvisual computing environments. The screen reader–dependent community is left with no alternative way to engage with our rapidly advancing technological infrastructure. In this article, we describe our efforts to understand the problems that exist with audio-only interfaces. Based on observing screen reader use for 4 months at a computer training school for blind and low-vision adults, we identify three problem areas within audio-only interfaces: ephemerality, linear interaction, and unidirectional communication. We then evaluate a multimodal approach to computer interaction called the Tangible Desktop that addresses these problems by moving semantic information from the auditory to the tactile channel. Our evaluation demonstrated that, among novice screen reader users, the Tangible Desktop improved task completion times by an average of 6 minutes compared to traditional audio-only computer systems.

Also see: Mark S. Baldwin, Jennifer Mankoff, Bonnie A. Nardi, Gillian R. Hayes: An Activity Centered Approach to Nonvisual Computer Interaction. ACM Trans. Comput. Hum. Interact. 27(2): 12:1-12:27 (2020)

Uncertainty in Measurement

Kim, J., Guo, A., Yeh, T., Hudson, S. E., & Mankoff, J. (2017, June). Understanding Uncertainty in Measurement and Accommodating its Impact in 3D Modeling and Printing. In Proceedings of the 2017 Conference on Designing Interactive Systems (pp. 1067-1078). ACM.

3D printing enables everyday users to augment objects around them with personalized adaptations. There has been a proliferation of 3D models available on sharing platforms supporting this. If a model is parametric, a novice modeler can obtain a custom model simply by entering a few parameters (e.g., in the Customizer tool on Thingiverse.com). In theory, such custom models could fit any real world object one intends to augment. But in practice, a printed model seldom fits on the first try; multiple iterations are often necessary, wasting a considerable amount of time and material. We argue that parameterization or scaling alone is not sufficient for customizability, because users must correctly measure an object to specify parameters.

In a study of attempts to measure length, angle, and diameter, we demonstrate measurement errors as a significant (yet often overlooked) factor that adversely impacts the adaptation of 3D models to existing objects, requiring increased iteration. Images taken from our study are shown below.

We argue for a new design principle, accommodating measurement uncertainty, that designers as well as novices should begin to consider. We offer two strategies, modular joints and buffer insertion, to help designers build models that are robust to measurement uncertainty. Examples are shown below.
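As a small illustration of buffer insertion (with an assumed, illustrative error magnitude rather than our measured error distributions), a designer can size an opening so that it still fits when the user's measurement is somewhat off:

```python
# Hedged sketch of the "buffer insertion" idea: size a printed opening to
# tolerate expected measurement error. The 1.0 mm default is illustrative.

def buffered_opening(measured_mm, expected_error_mm=1.0, squeeze_mm=0.0):
    """Widen an opening so it still fits if the measurement was off.

    measured_mm:       user-provided measurement of the target object
    expected_error_mm: how far off such measurements tend to be (assumed)
    squeeze_mm:        compliance offered by a flexible buffer material
    """
    return measured_mm + expected_error_mm - squeeze_mm

# A phone measured at 71.3 mm wide gets a 72.3 mm rigid opening; with a
# 0.8 mm compressible buffer insert, the opening can stay closer to the
# measurement while still absorbing the expected error.
print(buffered_opening(71.3))
print(buffered_opening(71.3, squeeze_mm=0.8))
```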