The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect

Improving mobile keyboard typing speed becomes more valuable as more tasks move to mobile settings. Autocorrect is a powerful way to reduce the time it takes to manually fix typing errors, which increases typing speed. However, recent user studies of autocorrect uncovered an unexplored side effect: participants’ aversion to typing errors despite autocorrect. We present the first computational model of typing on keyboards with autocorrect, which enables precise study of expert typists’ aversion to typing errors on such keyboards. Unlike empirical typing studies that last days, our model evaluates the effects of typists’ aversion to typing errors for any autocorrect accuracy in seconds. We show that typists’ aversion to typing errors adds a self-imposed limit on upper-bound typing speeds, which decreases the value of highly accurate autocorrect. Our findings motivate future designs of keyboards with autocorrect that reduce typists’ aversion to typing errors in order to increase typing speeds.

The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect. Nikola Banovic, Ticha Sethapakdi, Yasasvi Hari, Anind K. Dey, Jennifer Mankoff. MobileHCI 2019.
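To make the intuition concrete, here is a minimal sketch in Python of how a self-imposed cap on typing errors limits speed regardless of autocorrect accuracy. The speed-accuracy tradeoff function and all parameters are invented for illustration; this is not the paper's actual model.

```python
# Illustrative sketch only: the speed-accuracy tradeoff below is invented,
# not the model from the paper. A typist averse to errors caps their raw
# error rate at `e` no matter how good autocorrect is, so higher autocorrect
# accuracy only trims correction time rather than raising the speed ceiling.

import numpy as np

def raw_error_rate(wpm, base_wpm=30.0, k=0.0005):
    """Hypothetical tradeoff: typing faster than a comfortable pace causes errors."""
    return k * max(0.0, wpm - base_wpm) ** 1.5

def expected_speed(autocorrect_accuracy, e, fix_penalty_s=2.0):
    """Best effective WPM for a typist who refuses to exceed error rate e."""
    best = 0.0
    for wpm in np.arange(20.0, 100.0, 0.5):
        if raw_error_rate(wpm) > e:        # self-imposed limit, pre-autocorrect
            continue
        residual = raw_error_rate(wpm) * (1.0 - autocorrect_accuracy)
        time_per_word = 60.0 / wpm + residual * fix_penalty_s
        best = max(best, 60.0 / time_per_word)
    return best

for acc in (0.0, 0.5, 0.9, 1.0):
    print(f"accuracy={acc:.1f}: {expected_speed(acc, e=0.02):.1f} WPM")
```

Because the cap binds before autocorrect is applied, the estimated speed barely moves as accuracy goes from 0 to 1 in this toy model, mirroring the flat empirical bars in the figure below.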

A picture of a Samsung phone. The screen says: Block 2. Trial 6 of 10. this camera takes nice photographs. The user has begun typing with errors: "this camera tankes l". Error correction offers 'tankes', 'tankers', and 'takes', and a soft keyboard is shown beneath.

An example mobile device with a soft keyboard: A) text entry area, which in our study contained study progress, the current phrase to transcribe, and an area for transcribed characters, B) automatically suggested words, and C) a miniQWERTY soft keyboard with autocorrect.

A bar plot showing typing speed (WPM, y-axis) against accuracy (0 to 1). The bars start at 32 WPM (for accuracy 0) and remain at approximately 32 WPM (for accuracy 1).
Our model estimated expected mean typing speeds (lines) for different levels of typing error rate aversion (e) compared to mean empirical typing speed with automatic correction and suggestion (bar plot) in WPM across Accuracy. Error bars represent 95% confidence intervals.
Four bar plots showing error rates in the uncorrected, corrected, autocorrected, and manually corrected conditions. Error rates for uncorrected rise from approximately 0 to 0.05 as accuracy increases; error rates for corrected fall from 0.10 to 0.005 as accuracy goes from 0 to 1; error rates for autocorrected rise from 0 to about 0.1 as accuracy goes from 0 to 1; and error rates for manually corrected are variable but all below 0.05 as accuracy goes from 0 to 1.
Median empirical error rates across Accuracy in session 3 with automated correction and suggestion. Error bars represent minimum and maximum error rate values, and dots represent outliers.

KnitPick: Programming and Modifying Complex Knitted Textures for Machine and Hand Knitting

Knitting creates complex, soft objects with unique and controllable texture properties that can be used to create interactive objects. However, little work addresses the challenges of using knitted textures. We present KnitPick: a pipeline for interpreting pre-existing hand-knitting texture patterns into a directed-graph representation of knittable structures (KnitGraphs), which can be output to machine- and hand-knitting instructions. Using KnitPick, we contribute a measured and photographed data set of 300 knitted textures. Based on findings from this data set, we contribute two algorithms for manipulating KnitGraphs. KnitCarving shapes a graph while respecting a texture, and KnitPatching combines graphs with disparate textures while maintaining a consistent shape. Using these algorithms and the textures in our data set, we are able to create three knitting-based interactions: roll, tug, and slide. KnitPick is the first system to bridge the gap between hand- and machine-knitting when creating complex knitted textures.

KnitPick: Programming and Modifying Complex Knitted Textures for Machine and Hand Knitting. Megan Hofmann, Lea Albaugh, Ticha Sethapakdi, Jessica Hodgins, Scott E. Hudson, James McCann, Jennifer Mankoff. UIST 2019. To appear.
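As a rough illustration of the KnitGraph idea, a knitted texture can be held as a directed graph whose nodes are loops and whose edges record yarn order and which loops are pulled through which. The attribute and edge names below are our assumptions for the sketch, not KnitPick's actual API.

```python
# Minimal sketch of a KnitGraph-style structure (attribute names are
# assumptions, not KnitPick's actual API): loops are nodes; "yarn" edges
# follow the yarn from loop to loop, and "stitch" edges connect each loop
# to the parent loop it is pulled through.

import networkx as nx

def stockinette(width, height):
    g = nx.DiGraph()
    prev_loop = None
    for row in range(height):
        for col in range(width):
            loop = (row, col)
            g.add_node(loop, pull_direction="BtF")  # knit: pulled back-to-front
            if prev_loop is not None:
                g.add_edge(prev_loop, loop, kind="yarn")
            if row > 0:
                g.add_edge((row - 1, col), loop, kind="stitch")
            prev_loop = loop
    return g

g = stockinette(4, 3)
print(g.number_of_nodes(), g.number_of_edges())  # 12 loops; 11 yarn + 8 stitch edges
```

An operation like KnitCarving can then be framed as deleting a column of loop nodes and rewiring the yarn and stitch edges so the remaining graph stays knittable.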

A picture of a KnitSpeak file, which is compiled into a KnitGraph (which can be modified using carving and patching) and then compiled to knitout, which can be printed on a knitting machine. Below the graph is a picture of different sorts of lace textures supported by KnitPick.
KnitPick converts KnitSpeak into KnitGraphs, which can be carved, patched, and output to knitted results.
A photograph of the table with our data measurement setup, along with piles of patches that are about to be measured and have recently been measured. One patch is attached to the rods and clips used for stretching.
Data set measurement setup, including camera, scale, and stretching rig
A series of five images, each progressively skinnier than the previous. Each image is a knitted texture with 4 stars on it. They are labeled (a) original swatch (b) 6 columns removed (c) 9 columns removed (d) 12 columns removed (e) 15 columns removed
The above images show a progression from the original Star texture to the same texture with 15 columns removed by texture carving. These photographs were shown to crowd-workers who rated their similarity. Even with a whole repetition width removed from the Stars, the pattern remains a recognizable star pattern.

Passively-sensed Behavioral Correlates of Discrimination Events in College Students

A deeper understanding of how discrimination impacts psychological health and well-being of students would allow us to better protect individuals at risk and support those who encounter discrimination. While the link between discrimination and diminished psychological and physical well-being is well established, existing research largely focuses on chronic discrimination and long-term outcomes. A better understanding of the short-term behavioral correlates of discrimination events could help us to concretely quantify the experience, which in turn could support policy and intervention design. In this paper we specifically examine, for the first time, what behaviors change and in what ways in relation to discrimination. We use actively-reported and passively-measured markers of health and well-being in a sample of 209 first-year college students over the course of two academic quarters. We examine changes in indicators of psychological state in relation to reports of unfair treatment in terms of five categories of behaviors: physical activity, phone usage, social interaction, mobility, and sleep. We find that students who encounter unfair treatment become more physically active, interact more with their phone in the morning, make more calls in the evening, and spend more time in bed on the day of the event. Some of these patterns continue the next day.

Passively-sensed Behavioral Correlates of Discrimination Events in College Students. Yasaman S. Sefidgar, Woosuk Seo, Kevin S. Kuehn, Tim Althoff, Anne Browning, Eve Ann Riskin, Paula S. Nurius, Anind K. Dey, Jennifer Mankoff. CSCW 2019. To appear.

A bar plot sorted by number of reports, with about 100 reports of unfair treatment based on national origin, 90 based on intelligence, 70 based on gender, 60 based on appearance, 50 on age, 45 on sexual orientation, 35 on major, 30 on weight, 30 on height, 20 on income, 10 on disability, 10 on religion, and 10 on learning.
Breakdown of 448 reports of unfair treatment by type. National, Orientation, and Learning refer to ancestry or national origin, sexual orientation, and learning disability respectively. See Table 3 for details of all categories. Participants were able to report multiple incidents of unfair treatment, possibly of different types, in each report. As described in the paper, we do not have data on unfair treatment based on race.
A heat plot showing sensor data collected by day in five categories: activity, screen, location, Fitbit, and calls.
A heat plot showing compliance with sensor data collection. Sensor data availability for each day of the study is shown in terms of the number of participants whose data is available on a given day. Weeks of the study are marked on the horizontal axis, while different sensors appear on the vertical axis. Important calendar dates (e.g., start/end of the quarter and exam periods) are highlighted, as are the weeks of daily surveys. The brighter the cells for a sensor, the larger the number of people contributing data for that sensor. Event-based sensors (e.g., calls) are not as bright as continuously sampled sensors (e.g., location), as expected. There was a technical issue in the data collection application in the middle of the study, visible as a dark vertical line around the beginning of April.
A diagram showing survey compliance, organized by week of study. One line shows compliance with the large surveys given at pre, mid, and post, which drops from 99% to 94% to 84%. The other line shows average weekly compliance with EMAs, which rises to 93% in the second week and then drops slowly (with some variability) to 89%.
Timeline and completion rate of pre, mid, and post questionnaires as well as EMA surveys. The y-axis shows completion rates and is narrowed to the range 50-100%. The completion rates of the pre, mid, and post questionnaires are percentages of the original pool of 209 participants, whereas EMA completion rates are based on the 176 participants who completed the study. EMA completion rates are computed as the average completion rate of the surveys administered in a given week of the study. School-related events (i.e., start and end of quarters as well as exam periods) are marked. Dark blue bars (Daily Survey) show the weeks when participants answered surveys every day, four times a day.
Bar plot showing the significance of morning screen use, calls, minutes asleep, time in bed, range of activities, number of steps, anxiety, depression, and frustration on the day before, the day of, and the day after unfair treatment. All but minutes asleep are significant at p = .05 or below on the day of discrimination, but this drops off afterwards.
Patterns of feature significance from the day before to two days after the discrimination event. The shortest bars represent the highest significance values (e.g., depressed and frustrated on day 0; depressed on day 1; morning screen use on day 2). There are no significant differences the day before. Most short-term relationships exist on the day of the event, and a few appear on the next day (day 1). On the third day, a single significant difference, repeated from the first day, is observed.

Designing in the Public Square

A Makapo paddler in a one-person outrigger canoe (OC1) with the final version of CoOP attached.

Design in the Public Square: Supporting Cooperative Assistive Technology Design Through Public Mixed-Ability Collaboration (CSCW 2019)

Mark S. Baldwin, Sen H. Hirano, Jennifer Mankoff, Gillian Hayes

From the white cane to the smartphone, technology has been an effective tool for broadening blind and low vision participation in a sighted world. In the face of this increased participation, individuals with visual impairments remain on the periphery of most sight-first activities. In this paper, we describe a multi-month public-facing co-design engagement with an organization that supports blind and low vision outrigger paddling. Using a mixed-ability design team, we developed an inexpensive cooperative outrigger paddling system, called CoOP, that shares control between sighted and visually impaired paddlers. The results suggest that public design, a DIY (do-it-yourself) stance, and attentiveness to shared physical experiences represent key strategies for creating assistive technologies that support shared experiences.

A close-up of version three of the CoOP system mounted to the rudder assembly, and the transmitter used to control the rudder (right corner).
Five iterations of the CoOP system, each progressively less bulky and more integrated (for example, the first is strapped on, while the last is fully integrated).
The design evolution of the CoOP system in order of iteration from left to right.

The Future of Access Technologies

Picture of a 3D printed arm with backscatter sensing technology attached to it.

Sieg 322, M/W 9-10:20

Access technology (AT) has the potential to increase autonomy, and improve millions of people’s ability to live independently. This potential is currently under-realized because the expertise needed to create the right AT is in short supply and the custom nature of AT makes it difficult to deliver inexpensively. Yet computers’ flexibility and exponentially increasing power have revolutionized and democratized access technologies. In addition, by studying access technology, we can gain valuable insights into the future of all user interface technology.

In this course we will focus on two primary domains for access technologies: Access to the world (first half of the class) and Access to computers (second half of class). Students will start the course by learning some basic physical computing capabilities so that they have the tools to build novel access technologies. We will focus on creating AT using sensors and actuators that can be controlled/sensed with a mobile device. The largest project in the class will be an open ended opportunity to explore access technology in more depth. 

Class will meet 9-10:20 M/W

Class Syllabus

Private Class Canvas Website

Tentative Schedule

Week 1 (9/25 ONLY): Introduction

Week 1 (10/2 ONLY): Introduction

Week 2 (10/7; 10/9): 3D Printing & Laser Cutting

Week 3 (10/14; 10/16): Physical Computing

In class: Connect simple LED circuit to a phone

Pair Project: Build a Better Button (Due 10/28)

Week 4 (10/21; 10/23): Disability Studies

  • Critical perspectives on disability, assistive technology, and how the two relate
  • Methodological discussion
  • Disability Studies reading due

Week 5 (10/28; 10/30): Input

  • Characterizing the performance of input devices
  • Digital techniques for adapting to user input capabilities
  • Voice control
  • Eye Gaze
  • Passively Sensed Information
  • Project Proposals for second half project (Details of requirements TBD)

Week 6 (11/4; 11/6): Output

  • Braille displays
  • Alternative tactile displays
  • Vibration
  • Visual displays for the deaf
  • Ambient Displays & Calm Computing

Week 7 (11/13 ONLY): Applications

  • Exercise & Recreation
  • Navigation & Maps
  • Programming and Computation
  • Reflection on role of User Research in Successful AT

Week 8 (11/18; 11/20): The Web

Learn about “The Web,” how access technologies interact with the Web, and how to make accessible web pages.

Google Video on Practical Web Accessibility — this video provides a great overview of the Web and how to make web content accessible. Highly recommended as a supplement to what we will cover in class.

WebAim.org — WebAIM has long been a leader in providing information and tutorials on making the Web accessible. A great source where you can read about accessibility issues, making content accessible, etc.

Solo Assignment: Make An Accessible Web Page  (due for in-class grading on 11/18)

Week 9 (11/25; 11/27):  Screen Readers

  • Building a screen reader (NVDA, …)
  • Building an accessible app (works with a screen reader)
  • Paradigms for Nonvisual Input
  • Advanced Issues:
    • Optical Character Recognition
    • Image Labeling
    • Image description
    • Audio Description for Video
  • Test each other’s accessible pages
  • Mid-project Reports (Requirements TBD)

Week 10 (12/2):  Other Computer Accessibility Challenges

  • Low Bandwidth Input
  • Reading Assistance
  • Mousing Assistance
  • Macros
  • Expert Tasks
  • Volunteer Activity due

————–

Interesting topics to consider (e.g. from Jeff’s class)

Transcoding

Topics:

  • Transcoding content to make it more accessible
  • Middleware

3D Printed Wireless Analytics

Picture of a 3D printed arm with backscatter sensing technology attached to it.

Wireless Analytics for 3D Printed Objects. Vikram Iyer, Justin Chan, Ian Culhane, Jennifer Mankoff, Shyam Gollakota. UIST 2018. [PDF]

We created a wireless physical analytics system that works with commonly available conductive plastic filaments. Our design enables various data capture and wireless physical analytics capabilities for 3D printed objects, without the need for electronics.

We make three key contributions:

(1) We demonstrate room scale backscatter communication and sensing using conductive plastic filaments.

(2) We introduce the first backscatter designs that detect a variety of bi-directional motions and support linear and rotational movements. An example is shown below.

(3) As shown in the image below, we enable data capture and storage for later retrieval when outside the range of the wireless coverage, using a ratchet and gear system.

We validate our approach by wirelessly detecting the opening and closing of a pill bottle, capturing the joint angles of a 3D printed e-NABLE prosthetic hand, and building an insulin pen that can store information to track its use outside the range of a wireless receiver.
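As a hypothetical sketch of how bi-directional motion could be decoded on the receiver side, consider a quadrature-style scheme over two switch states recovered from the backscattered signal. This is our own illustration; the paper's actual antenna and gear encoding may differ.

```python
# Hypothetical receiver-side decoder (the paper's actual antenna/gear
# encoding may differ): two contact switches A and B are toggled out of
# phase by a moving gear, and the order of state transitions gives direction.

QUAD_STEP = {  # (previous (A, B), current (A, B)) -> signed quarter-step
    ((0, 0), (0, 1)): +1, ((0, 1), (1, 1)): +1,
    ((1, 1), (1, 0)): +1, ((1, 0), (0, 0)): +1,
    ((0, 0), (1, 0)): -1, ((1, 0), (1, 1)): -1,
    ((1, 1), (0, 1)): -1, ((0, 1), (0, 0)): -1,
}

def decode(samples):
    """Accumulate signed quarter-steps from a stream of (A, B) states."""
    position = 0
    for prev, cur in zip(samples, samples[1:]):
        position += QUAD_STEP.get((prev, cur), 0)  # unchanged states add 0
    return position

# Forward for a full cycle plus two steps, then one reverse step -> prints 5
stream = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (0, 1), (1, 1), (0, 1)]
print(decode(stream))
```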

Selected Media

6 of the most amazing things that were 3D-printed in 2018 (Erin Winick, MIT Technology Review, 12/24/2018)

Researchers develop 3D printed objects that can track and store how they are used (Sarah McQuate), UW Press release. 10/9/2018

Assistive Objects Can Track Their Own Use (Elizabeth Montalbano), Design News. 11/14/2018

People

Students

Vikram Iyer
Justin Chan
Ian Culhane

Faculty

Jennifer Mankoff
Shyam Gollakota

Contact: printedanalytics@cs.washington.edu

Interactiles

The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. In addition, the investigation of tactile solutions for large touchscreens may not address the challenges on mobile devices. We therefore present Interactiles, a low-cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate its improvement in task performance and the user experience it enables with people who are blind or have low vision.

Xiaoyi Zhang, Tracy Tran, Yuqian Sun, Ian Culhane, Shobhit Jain, James Fogarty, Jennifer Mankoff: Interactiles: 3D Printed Tactile Interfaces to Enhance Mobile Touchscreen Accessibility. ASSETS 2018. To appear. [PDF]

Figure 2. Floating windows created for the number pad (left), scrollbar (right), and control button (bottom right). The windows can be transparent; we use colors for demonstration.

Figure 4. Average task completion times of all tasks in the study.
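A simplified sketch of the event routing such floating windows might perform: touches that land on a printed widget are translated into widget interactions, while all other touches pass through to the app beneath. The region names and coordinates here are invented for illustration; the real system uses Android floating windows and the platform accessibility APIs.

```python
# Invented illustration of Interactiles-style touch routing; region names
# and pixel coordinates are hypothetical.

REGIONS = {
    "number_pad": ((0, 200), (800, 1200)),   # (x range, y range) in pixels
    "scrollbar": ((680, 720), (200, 1000)),
}

def route_touch(x, y):
    """Return the widget a touch belongs to, or pass it through to the app."""
    for name, ((x0, x1), (y0, y1)) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "pass_through"

print(route_touch(700, 600))   # -> scrollbar
print(route_touch(400, 400))   # -> pass_through
```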

Nonvisual Interaction Techniques at the Keyboard Surface

Rushil Khurana, Duncan McIsaac, Elliot Lockerman, Jennifer Mankoff. Nonvisual Interaction Techniques at the Keyboard Surface. CHI 2018. To appear.

A table (shown on screen). Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top-left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the position of the cell and its content are read aloud.

Web user interfaces today leverage many common GUI design patterns, including navigation bars and menus (hierarchical structure), tabular content presentation, and scrolling. These visual-spatial cues enhance the interaction experience of sighted users. However, the linear nature of the screen translation tools currently available to blind users makes it difficult to understand or navigate these structures. We introduce Spatial Region Interaction Techniques (SPRITEs) for nonvisual access: a novel method for navigating two-dimensional structures using the keyboard surface. SPRITEs 1) preserve spatial layout, 2) enable bimanual interaction, and 3) improve the end user experience. We used a series of design probes to explore different methods for keyboard surface interaction. Our evaluation of SPRITEs shows that three times as many participants were able to complete spatial tasks with SPRITEs as with their preferred current technology.
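The table interaction described in the figure captions can be sketched as a simple key-to-cell mapping. This is illustrative only: the number-row-to-column mapping comes from the captions, while the specific row keys and speech strings are our assumptions.

```python
# Illustrative sketch of the SPRITEs table mapping described in the figure
# captions: number-row keys pick a column, leftmost letter keys pick a row
# (the exact row keys are an assumption), and the selection is spoken.

NUMBER_ROW = "1234567890"  # maps to table columns 0..9
ROW_KEYS = "qaz"           # assumed leftmost-column keys -> table rows 0..2

def select(table, key, state):
    """Update the (row, col) selection for a key press; return speech output."""
    row, col = state
    if key in NUMBER_ROW:
        col = NUMBER_ROW.index(key)
    elif key in ROW_KEYS:
        row = ROW_KEYS.index(key)
    state[:] = [row, col]
    return f"row {row + 1}, column {col + 1}: {table[row][col]}"

table = [["name", "age"], ["Jill", "34"], ["Omar", "29"]]
state = [0, 0]                 # top-left cell selected by default
for key in "2az":              # select column 2, then rows 2 and 3
    print(select(table, key, state))
```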

Talk [Slides]:

Sample Press:

KOMO Radio | New screen reader method helps blind, low-vision users browse complex web pages

Device helps blind, low-vision users better browse web pages (Allen Cone)

Graph showing task completion rates for different kinds of tasks in our user study

A user is searching a table (shown on screen) for the word ‘Jill’. Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top-left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the number of occurrences of the search query in the respective column or row is read aloud. When the query is found, the position and content of the cell are read out loud.

The Tangible Desktop

Mark S. Baldwin, Gillian R. Hayes, Oliver L. Haimson, Jennifer Mankoff, Scott E. Hudson: The Tangible Desktop: A Multimodal Approach to Nonvisual Computing. TACCESS 10(3): 9:1-9:28 (2017)

Audio-only interfaces, facilitated through text-to-speech screen reading software, have been the primary mode of computer interaction for blind and low-vision computer users for more than four decades. During this time, the advances that have made visual interfaces faster and easier to use, from direct manipulation to skeuomorphic design, have not been paralleled in nonvisual computing environments. The screen reader–dependent community is left with no alternative means of engaging with our rapidly advancing technological infrastructure. In this article, we describe our efforts to understand the problems that exist with audio-only interfaces. Based on observing screen reader use for 4 months at a computer training school for blind and low-vision adults, we identify three problem areas within audio-only interfaces: ephemerality, linear interaction, and unidirectional communication. We then evaluated a multimodal approach to computer interaction called the Tangible Desktop that addresses these problems by moving semantic information from the auditory to the tactile channel. Our evaluation demonstrated that among novice screen reader users, the Tangible Desktop improved task completion times by an average of 6 minutes when compared to traditional audio-only computer systems.


Uncertainty in Measurement

Examples of 3d printed objects that are robust to measurement uncertainty.

Kim, J., Guo, A., Yeh, T., Hudson, S. E., & Mankoff, J. (2017, June). Understanding Uncertainty in Measurement and Accommodating its Impact in 3D Modeling and Printing. In Proceedings of the 2017 Conference on Designing Interactive Systems (pp. 1067-1078). ACM.

3D printing enables everyday users to augment objects around them with personalized adaptations. Sharing platforms now host a proliferation of 3D models that support this. If a model is parametric, a novice modeler can obtain a custom model simply by entering a few parameters (e.g., in the Customizer tool on Thingiverse.com). In theory, such custom models could fit any real-world object one intends to augment. But in practice, a printed model seldom fits on the first try; multiple iterations are often necessary, wasting a considerable amount of time and material. We argue that parameterization or scaling alone is not sufficient for customizability, because users must correctly measure an object to specify parameters.

In a study of attempts to measure length, angle, and diameter, we demonstrate that measurement error is a significant (yet often overlooked) factor that adversely impacts the adaptation of 3D models to existing objects, requiring increased iteration. Images taken from our study are shown below.

We argue for a new design principle, accommodating measurement uncertainty, that designers as well as novices should begin to consider. We offer two strategies, modular joints and buffer insertion, to help designers build models that are robust to measurement uncertainty. Examples are shown below.
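As a minimal numeric sketch of the buffer-insertion idea (our own illustration, not the paper's algorithm), one can size an opening from repeated measurements plus an uncertainty allowance, and let a compliant buffer such as a flexible lip absorb the resulting slack:

```python
# Minimal sketch of the "buffer insertion" idea (illustrative, not the
# paper's algorithm): size an opening from repeated measurements so the
# printed part still fits when measurements are off, and fill the slack
# with a compliant buffer (e.g., a flexible lip or foam).

import statistics

def opening_diameter(measurements_mm, k=2.0, clearance_mm=0.2):
    """Mean measurement + k standard deviations + print clearance."""
    mean = statistics.mean(measurements_mm)
    sd = statistics.stdev(measurements_mm)
    return mean + k * sd + clearance_mm

measured = [31.8, 32.3, 31.9, 32.6]  # four attempts to measure one diameter
d = opening_diameter(measured)
print(f"print opening at {d:.2f} mm; buffer absorbs up to {d - min(measured):.2f} mm of slack")
```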