The Limits of Expert Text Entry Speed

Improving mobile keyboard typing speed becomes more valuable as more tasks move to mobile settings. Autocorrect is a powerful way to reduce the time it takes to manually fix typing errors, which in turn increases typing speed. However, recent user studies of autocorrect uncovered an unexplored side effect: participants’ aversion to typing errors persists despite autocorrect. We present the first computational model of typing on keyboards with autocorrect, which enables precise study of expert typists’ aversion to typing errors on such keyboards. Unlike empirical typing studies that last days, our model evaluates the effects of typists’ aversion to typing errors for any autocorrect accuracy in seconds. We show that typists’ aversion to typing errors places a self-imposed limit on upper-bound typing speeds, which decreases the value of highly accurate autocorrect. Our findings motivate future designs of keyboards with autocorrect that reduce typists’ aversion to typing errors to increase typing speeds.

The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect. Nikola Banovic, Ticha Sethapakdi, Yasasvi Hari, Anind K. Dey, and Jennifer Mankoff. MobileHCI 2019.

A picture of a Samsung phone. The screen shows the study prompt “Block 2. Trial 6 of 10.” and the phrase to transcribe, “this camera takes nice photographs.” The user has begun typing with errors (“this camera tankes l”); the suggestion bar offers ‘tankes’, ‘tankers’, and ‘takes’ above a soft keyboard.

An example mobile device with a soft keyboard: A) text entry area, which in our study contained study progress, the current phrase to transcribe, and an area for transcribed characters, B) automatically suggested words, and C) a miniQWERTY soft keyboard with autocorrect.

A bar plot showing typing speed (WPM, y-axis) against accuracy (0 to 1). The bars start at about 32 WPM for accuracy 0 and remain at approximately 32 WPM at accuracy 1.
Our model estimated expected mean typing speeds (lines) for different levels of typing error rate aversion (e) compared to mean empirical typing speed with automatic correction and suggestion (bar plot) in WPM across Accuracy. Error bars represent 95% confidence intervals.
Four bar plots showing uncorrected, corrected, autocorrected, and manually corrected error rates. Uncorrected error rates range from approximately 0 to 0.05 as accuracy increases; corrected error rates fall from about 0.10 to 0.005 as accuracy goes from 0 to 1; autocorrected error rates range from 0 to about 0.1 as accuracy goes from 0 to 1; and manually corrected error rates are variable but all below 0.05 across the accuracy range.
Median empirical error rates across Accuracy in session 3 with automated correction and suggestion. Error bars represent minimum and maximum error rate values, and dots represent outliers.
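
To make the shape of this trade-off concrete, here is a minimal sketch in Python (illustrative only, not the model from the paper). It assumes a hypothetical per-word cost in which the chance of an error grows with typing speed, autocorrect silently fixes a fraction `accuracy` of errors, and an aversion weight `e` inflates the subjective cost of making any error, even one autocorrect would fix; the function name `expected_wpm` and every constant are assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the model from the paper.
import numpy as np

def expected_wpm(accuracy, e, fix_time=2.0, base_error=0.02, k=2.5):
    """Return the typing speed (WPM) that minimizes expected per-word cost."""
    speeds = np.linspace(10, 80, 500)                       # candidate speeds in WPM
    time_per_word = 60.0 / speeds                           # seconds to type one word
    p_error = np.clip(base_error * (speeds / 10.0) ** k, 0.0, 1.0)  # errors grow with speed
    p_visible = p_error * (1.0 - accuracy)                  # errors autocorrect misses
    # Cost: typing time + time to fix visible errors + aversion penalty for
    # making any error at all (even one that autocorrect would fix).
    cost = time_per_word + p_visible * fix_time + e * p_error * fix_time
    return speeds[np.argmin(cost)]

for acc in (0.0, 0.5, 0.9, 1.0):
    print(f"accuracy={acc:.1f}  e=0: {expected_wpm(acc, 0.0):.1f} WPM"
          f"  e=2: {expected_wpm(acc, 2.0):.1f} WPM")
```

With e = 0 the chosen speed climbs toward the maximum as accuracy improves, while a larger e keeps the typist slow even when autocorrect is nearly perfect, mirroring the self-imposed limit described above.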

Automatically Tracking and Executing Green Actions

We believe that self-reporting is a limiting factor in the original vision of StepGreen.org, and this component of our research has begun to explore alternatives. For example, we showed that financial data can be used to extract footprint information [1], and in collaboration with researchers at Intel and the University of Washington, we used a mobile device to track and visualize green transportation behavior in the UbiGreen project (published at CHI 2009 [2]). We have also worked on algorithms to predict the indoor location and home arrival times of residential building occupants, so as to automatically minimize thermostat use [3, 4]. Finally, we moved from individual behavioral remedies toward structural remedies by exploring tools that could help tenants pick greener apartments [5].

[1] J. Schwartz, J. Mankoff, H. Scott Matthews. Reflections of everyday activity in spending data. In Proceedings of CHI 2009.  (Note). (pdf)

[2] J. Froehlich, T. Dillahunt, P. Klasnja, J. Mankoff, S. Consolvo, B. Harrison, and J. A. Landay. UbiGreen: Investigating a Mobile Tool for Tracking and Supporting Green Transportation Habits. In Proceedings of CHI 2009. (Full paper) (pdf)

[3] C. Koehler, N. Banovic, I. Oakley, J. Mankoff, and A. K. Dey. Indoor-ALPS: An Adaptive Indoor Location Prediction System. In Proceedings of UbiComp 2014.

[4] C. Koehler, B. D. Ziebart, J. Mankoff, and A. K. Dey. TherML: Occupancy Prediction for Thermostat Control. In Proceedings of UbiComp 2013.

[5] J. Mankoff, D. Onafuwa, K. Early, N. Vyas, and V. Kamath Cannanure. Understanding the Needs of Prospective Tenants. In Proceedings of COMPASS 2018.

Aversion to Typing Errors

Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry

Text entry is an increasingly important activity for mobile device users. As a result, increasing the text entry speed of expert typists is an important design goal for physical and soft keyboards. Mathematical models that predict text entry speed can help with keyboard design and optimization. Making typing errors when entering text is inevitable. However, current models do not consider how typists themselves reduce the risk of making typing errors (and lower error frequency) by typing more slowly. We demonstrate that users respond to costly typing errors by reducing their typing speed to minimize typing errors. We present a model that estimates the effect of this risk aversion to errors on typing speed. We estimate the magnitude of this speed change, and show that disregarding the adjustments expert typists make to reduce typing errors leads to overly optimistic estimates of maximum errorless expert typing speeds.
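
One hedged way to formalize this trade-off (a sketch under simple assumptions, not the formulation from the paper): suppose an expert typist chooses an inter-key interval $t$, the per-keystroke error probability $p(t)$ falls as $t$ grows, fixing an error takes $c_{\mathrm{fix}}$ seconds, and the typist weights error costs by an aversion factor $\lambda \ge 1$. The typist then settles on

$$ t^{*} = \arg\min_{t}\; \bigl[\, t + \lambda\, p(t)\, c_{\mathrm{fix}} \,\bigr], $$

so a larger $\lambda$ shifts the optimum toward slower, more careful typing, and treating typists as risk-neutral ($\lambda = 1$) overestimates the maximum errorless speed they would actually choose.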

Nikola Banovic, Varun Rao, Abinaya Saravanan, Anind K. Dey, and Jennifer Mankoff. 2017. Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA.

Modeling & Generating Routines

Leveraging Human Routine Models to Detect and Generate Human Behaviors

An ability to detect behaviors that negatively impact people’s wellbeing and show people how they can correct those behaviors could enable technology that improves people’s lives. Existing supervised machine learning approaches to detect and generate such behaviors require lengthy and expensive data labeling by domain experts. In this work, we focus on the domain of routine behaviors, where we model routines as a series of frequent actions that people perform in specific situations. We present an approach that bypasses labeling each behavior instance that a person exhibits. Instead, we weakly label instances using people’s demonstrated routine. We classify and generate new instances based on the probability that they belong to the routine model. We illustrate our approach on an example system that helps drivers become aware of and understand their aggressive driving behaviors. Our work enables technology that can trigger interventions and help people reflect on their behaviors when those behaviors are likely to negatively impact them.
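
As a rough illustration of the weak-labeling idea (a minimal sketch, not the paper’s method), suppose a routine is summarized by a simple model of which action a person usually takes in each situation, estimated from their demonstrated behavior; new instances are then labeled by how likely they are under that model, and the same conditional distributions could be sampled to generate routine-like instances. The data format, function names, and threshold below are all assumptions made for illustration.

```python
# Illustrative sketch only -- not the paper's method. Assumes a routine is
# summarized by per-situation action frequencies, and that instances likely
# under that model are weakly labeled "routine" (e.g., typical driving).
from collections import Counter, defaultdict
import math

def fit_routine_model(demonstrations):
    """Estimate P(action | situation) from demonstrated behavior instances."""
    counts = defaultdict(Counter)
    for instance in demonstrations:              # instance: [(situation, action), ...]
        for situation, action in instance:
            counts[situation][action] += 1
    return {s: {a: c / sum(ctr.values()) for a, c in ctr.items()}
            for s, ctr in counts.items()}

def log_likelihood(model, instance, floor=1e-6):
    """Average log-probability of an instance under the routine model."""
    logp = [math.log(model.get(s, {}).get(a, floor)) for s, a in instance]
    return sum(logp) / len(logp)

def weak_label(model, instance, threshold=-2.0):
    """Label an instance 'routine' if it is likely under the routine model."""
    return "routine" if log_likelihood(model, instance) >= threshold else "non-routine"

# Toy example: situations are speeds, actions are accelerate/brake/coast.
demos = [[("slow", "accelerate"), ("fast", "coast"), ("fast", "brake")]]
model = fit_routine_model(demos)
print(weak_label(model, [("slow", "accelerate"), ("fast", "coast")]))
```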

Nikola Banovic, Anqi Wang, Yanfeng Jin, Christie Chang, Julian Ramos, Anind K. Dey, and Jennifer Mankoff. 2017. Leveraging Human Routine Models to Detect and Generate Human Behaviors. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA.

Modeling Human Routines

Modeling and Understanding Human Routine Behavior

Human routines are blueprints of behavior, which allow people to accomplish their purposeful repetitive tasks and activities. People express their routines through actions that they perform in the particular situations that triggered those actions. An ability to model routines and understand the situations in which they are likely to occur could allow technology to help people improve their bad habits, inexpert behavior, and other suboptimal routines. In this project we explore generalizable routine modeling approaches that encode patterns of routine behavior in ways that allow systems, such as smart agents, to classify, predict, and reason about human actions under the inherent uncertainty present in human behavior. Such technologies can have a positive effect on society by making people healthier, safer, and more efficient in their routine tasks.


Modeling and Understanding Human Routine Behavior
Nikola Banovic, Tofi Buzali, Fanny Chevalier, Jennifer Mankoff, and Anind K. Dey
In Proceedings of the 2016 ACM annual conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA.
Honorable Mention Award

Supporting Navigation in the Wild for the Blind

Sighted individuals often develop significant knowledge about their environment through what they can visually observe. In contrast, individuals who are visually impaired mostly acquire such knowledge about their environment through information that is explicitly related to them. Our work examines the practices that visually impaired individuals use to learn about their environments and the associated challenges. In the first of our two studies, we uncover four types of information needed to master and navigate the environment. We detail how individuals’ context impacts their ability to learn this information, and outline requirements for independent spatial learning. In a second study, we explore how individuals learn about places and activities in their environment. Our findings show that users not only learn information to satisfy their immediate needs, but also to enable future opportunities – something existing technologies do not fully support. From these findings, we discuss future research and design opportunities to assist the visually impaired in independent spatial learning.

Uncovering information needs for independent spatial learning for users who are visually impaired. Nikola Banovic, Rachel L. Franz, Khai N. Truong, Jennifer Mankoff, and Anind K. Dey. In Proceedings of the 15th international ACM SIGACCESS conference on Computers and accessibility (ASSETS ’13). ACM, New York, NY, USA, Article 24, 8 pages. (pdf)

Nikola Banovic (PhD, co-advised with Anind Dey)

Nikola is an alumnus of the group, as of 2018 an assistant professor in Computer Science at the University of Michigan. He did his PhD work under Jennifer Mankoff and Anind Dey, developing new models of human routine behaviors to inform the design of smart agents that help people develop good routines. His projects included helping aggressive drivers improve their driving routine to become less aggressive, and helping students develop routines that balance their academic success with their health and wellbeing.