Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry
Text entry is an increasingly important activity for mobile device users. As a result, increasing text entry speed of expert typists is an important design goal for physical and soft keyboards. Mathematical models that predict text entry speed can help with keyboard design and optimization. Making typing errors when entering text is inevitable. However, current models do not consider how typists themselves reduce the risk of making typing errors (and lower error frequency) by typing more slowly. We demonstrate that users respond to costly typing errors by reducing their typing speed to minimize typing errors. We present a model that estimates the effects of risk aversion to errors on typing speed. We estimate the magnitude of this speed change, and show that disregarding the adjustments to typing speed that expert typists use to reduce typing errors leads to overly optimistic estimates of maximum errorless expert typing speeds.
Nikola Banovic, Varun Rao, Abinaya Saravanan, Anind K. Dey, and Jennifer Mankoff. 2017. Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry. (To appear) In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA.
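As a rough, self-contained illustration of the underlying speed-accuracy tradeoff (the logistic error curve and cost values below are hypothetical, not taken from the paper): a typist picks the entry speed that minimizes raw typing time plus the expected cost of fixing errors.

```python
import math

# Toy illustration of the speed-accuracy tradeoff (hypothetical parameters,
# not the model from the paper): a typist choosing an entry speed that
# balances raw typing time against the expected cost of correcting errors.

def expected_time_per_word(speed_wpm, error_cost_s=2.0):
    """Expected seconds to correctly enter one word at a given speed (WPM)."""
    base_time = 60.0 / speed_wpm  # seconds per word if no errors occur
    # Hypothetical error model: error probability grows with speed.
    p_error = 1.0 / (1.0 + math.exp(-(speed_wpm - 60.0) / 10.0))
    return base_time + p_error * error_cost_s

def optimal_speed(candidate_speeds):
    """The candidate speed minimizing expected time per word."""
    return min(candidate_speeds, key=expected_time_per_word)

# A risk-averse typist settles well below their maximum errorless speed.
best = optimal_speed(range(20, 121))
```

Under these made-up parameters the optimum lands around 45 WPM even though faster errorless typing is possible, which mirrors the abstract's point: ignoring the slowdown typists adopt to avoid errors yields overly optimistic speed estimates.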
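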
Leveraging Human Routine Models to Detect and Generate Human Behaviors
An ability to detect behaviors that negatively impact people’s wellbeing and show people how they can correct those behaviors could enable technology that improves people’s lives. Existing supervised machine learning approaches to detect and generate such behaviors require lengthy and expensive data labeling by domain experts. In this work, we focus on the domain of routine behaviors, where we model routines as a series of frequent actions that people perform in specific situations. We present an approach that bypasses labeling each behavior instance that a person exhibits. Instead, we weakly label instances using people’s demonstrated routine. We classify and generate new instances based on the probability that they belong to the routine model. We illustrate our approach on an example system that helps drivers become aware of and understand their aggressive driving behaviors. Our work enables technology that can trigger interventions and help people reflect on their behaviors when those behaviors are likely to negatively impact them.
Nikola Banovic, Anqi Wang, Yanfeng Jin, Christie Chang, Julian Ramos, Anind K. Dey, and Jennifer Mankoff. 2017. Leveraging Human Routine Models to Detect and Generate Human Behaviors. (To appear) In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA.
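A minimal sketch of the weak-labeling idea in the driving example (the situations, actions, and probabilities below are hypothetical, not the paper's learned model): a routine is summarized as P(action | situation), and an instance is labeled as belonging to the routine when its likelihood under the model clears a threshold.

```python
import math

# Sketch of probability-based weak labeling (hypothetical probabilities,
# not the actual routine model). A routine maps situations to action
# probabilities; an instance is a sequence of (situation, action) pairs.

ROUTINE = {  # P(action | situation), as demonstrated by a calm driver
    "merging": {"signal": 0.9, "cut_off": 0.1},
    "red_light": {"stop": 0.95, "run": 0.05},
}

def log_likelihood(instance):
    """Log-probability of a (situation, action) sequence under the routine."""
    return sum(math.log(ROUTINE[situation][action])
               for situation, action in instance)

def weak_label(instance, threshold=math.log(0.5)):
    """True if the instance plausibly belongs to the demonstrated routine."""
    return log_likelihood(instance) >= threshold

calm = [("merging", "signal"), ("red_light", "stop")]
aggressive = [("merging", "cut_off"), ("red_light", "run")]
```

Here `weak_label(calm)` holds while `weak_label(aggressive)` does not, so atypical (e.g., aggressive) instances can be flagged for reflection without anyone hand-labeling each behavior instance.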
Watch-ya-doin is an experience-sampling framework for longitudinal data collection and analysis. Our system consists of a smartwatch and an Android device working unobtrusively to track data. Our goal is to train on and recognize a specific activity over time. We use a simple wrist-worn accelerometer to predict eating behavior and other activities. Such sensors are inexpensive to deploy and easy to maintain: with our application running, the watch's battery lasts a full week.
Our primary application area is AT abandonment. About 700,000 people in the United States have an upper limb amputation, and about 6.8 million face fine motor and/or arm dexterity limitations. Assistive technology (AT), ranging from myo-electric prosthetics to passive prosthetics to a variety of orthotics, can aid rehabilitation and improve independence and the ability to perform everyday tasks. Yet AT is not used to its full potential, with abandonment rates ranging from 23% to 90% for prosthetics users, and high abandonment of orthotics as well. Given the cost of these devices, this represents an enormous waste of the financial investment in developing, fabricating, and providing each device, and it can also lead to frustration, insufficient rehabilitation, increased risk of limb-loss-associated co-morbidities, and an overall reduced quality of life for the recipient.
To address this, we need objective and accurate information about AT use. Current data come primarily from questionnaires or skill testing during office visits. Apart from being limited by subjectivity and evaluator bias, survey tools are also ill-suited to estimating quality of use. A patient may, more or less accurately, report using their AT for a certain number of hours a day, but this does not indicate which tasks it was used for, making it difficult to evaluate how appropriate or helpful the device was. In addition, neither reported use time nor skill testing is sufficient to predict abandonment once AT is deployed.
Our next steps include generalizing our approach to AT (such as upper limb prosthetics) and expanding it to include a wider variety of tracked activities. In addition, we will develop a longitudinal data set that includes examples of abandonment. This will allow the creation of algorithms that can characterize the type and quality of use over the lifecycle of AT and predict abandonment.
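To make the sensing pipeline concrete, here is a minimal sketch (not the deployed Watch-ya-doin code; the windows and labels are invented): segment wrist accelerometer magnitudes into windows, extract simple features, and classify each window with a nearest-centroid rule.

```python
import statistics

# Minimal activity-recognition sketch (hypothetical data, not our deployed
# system): window-level features plus a nearest-centroid classifier.

def features(window):
    """Mean and standard deviation of acceleration magnitude in a window."""
    return (statistics.mean(window), statistics.pstdev(window))

def train_centroids(labeled_windows):
    """labeled_windows maps an activity label to a list of example windows."""
    centroids = {}
    for label, windows in labeled_windows.items():
        feats = [features(w) for w in windows]
        centroids[label] = tuple(statistics.mean(f[i] for f in feats)
                                 for i in range(2))
    return centroids

def classify(window, centroids):
    """Label of the centroid closest to this window's features."""
    f = features(window)
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(f, centroids[label])))
```

Training on a rhythmic "eating" window and a still "rest" window is enough for the rule to separate similarly shaped new windows; a real deployment trains on many windows per activity, but the structure is the same.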
Human routines are blueprints of behavior, which allow people to accomplish their purposeful repetitive tasks and activities. People express their routines through actions that they perform in the particular situations that triggered those actions. An ability to model routines and understand the situations in which they are likely to occur could allow technology to help people improve their bad habits, inexpert behavior, and other suboptimal routines. In this project we explore generalizable routine modeling approaches that encode patterns of routine behavior in ways that allow systems, such as smart agents, to classify, predict, and reason about human actions under the inherent uncertainty present in human behavior. Such technologies can have a positive effect on society by making people healthier, safer, and more efficient in their routine tasks.
Modeling and Understanding Human Routine Behavior
Nikola Banovic, Tofi Buzali, Fanny Chevalier, Jennifer Mankoff, and Anind K. Dey
In Proceedings of the 2016 ACM annual conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA.
In recent years, surveys have been shifting online, offering the possibility for adaptive questions, where later questions depend on responses to earlier questions. We present a general framework for dynamically ordering questions, based on previous responses, to engage respondents, improving survey completion and imputation of unknown items. Our work considers two scenarios for data collection from survey-takers. In the first, we want to maximize survey completion (and the quality of any necessary imputations), so we focus on ordering questions to engage the respondent and ideally collect all the information we seek, or at least the information that most characterizes the respondent, so that imputed values will be accurate. In the second scenario, our goal is to give the respondent a personalized prediction based on information they provide. Since it is possible to give a reasonable prediction with only a subset of questions, we are not concerned with motivating the user to answer all questions. Instead, we want to order questions so that the user provides the information that most reduces the uncertainty of our prediction, while not being too burdensome to answer.
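The second scenario can be sketched with a toy greedy rule (hypothetical survey data, not our actual framework): ask next whichever question is expected to most reduce the entropy of the quantity we want to predict.

```python
import math
from collections import Counter

# Toy sketch of uncertainty-driven question ordering (invented data):
# greedily pick the question whose answer most reduces target entropy.

ROWS = [  # past respondents: answers to q0..q2 plus the target to predict
    {"q0": 1, "q1": 0, "q2": 1, "target": "A"},
    {"q0": 1, "q1": 1, "q2": 0, "target": "A"},
    {"q0": 0, "q1": 0, "q2": 1, "target": "B"},
    {"q0": 0, "q1": 1, "q2": 0, "target": "B"},
]

def entropy(rows):
    """Shannon entropy (bits) of the target over a set of respondents."""
    counts = Counter(r["target"] for r in rows)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def expected_entropy(rows, question):
    """Average entropy of the target remaining after asking `question`."""
    groups = {}
    for r in rows:
        groups.setdefault(r[question], []).append(r)
    return sum(len(g) / len(rows) * entropy(g) for g in groups.values())

def next_question(rows, unasked):
    """Greedy choice: the question with the lowest expected remaining entropy."""
    return min(unasked, key=lambda q: expected_entropy(rows, q))
```

In this toy table, q0 perfectly splits the targets while q1 and q2 are uninformative, so the greedy rule asks q0 first; after each answer, the rows consistent with the respondent are kept and the selection repeats.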
Hospitalized children on continuous oxygen monitors generate >40,000 data points per patient each day. These data are presented without context and do not reveal trends over time, two techniques proven to improve comprehension and use. Management of oxygen in hospitalized patients is suboptimal: premature infants spend >40% of each day outside of evidence-based oxygen saturation ranges, and weaning oxygen is delayed in infants with bronchiolitis who are physiologically ready. Data visualizations may improve user knowledge of data trends and inform better decisions in managing supplemental oxygen delivery.
First, we studied the workflows and breakdowns for nurses and respiratory therapists (RTs) in the supplemental oxygen delivery of infants with respiratory disease. Second, using end-user design, we developed a data display that informs decision-making in this context. Our ultimate goal is to improve the overall work process using a combination of visualization and machine learning.
The goal of the Stepgreen project is to leverage Internet-scale technologies to create opportunities for reduced energy consumption. The original vision of the project was to leverage existing online social networks to encourage individual change. Since then the project has broadened to include a number of other ideas. We have explored the impact of demographics on energy use practices, studied the value of empathetic figures such as a polar bear for motivation, and explored organizational-level planning. We have also developed mobile technologies that can provide feedback about green actions on the go.
Try StepGreen.org out: The Stepgreen.org website provides a mechanism for allowing individuals to report on and track their environmental impact. It includes a visualization that can be displayed on an individual’s social networking web page. Go to Stepgreen.org and see for yourself how we leverage social networks to engage individuals in green behaviors.
Learn about our software products: Stepgreen is a service that we hope to share with non-profits that encourage behavior change, and it includes an open API you can use to build your own clients for encouraging green behavior. Please contact us at email@example.com if you are interested in collaborating with us.
Tawanna Dillahunt, Jennifer Mankoff, Eric Paulos. Understanding Conflict Between Landlords and Tenants: Implications for Energy Sensing and Feedback. Ubicomp ’10. (full paper)(pdf)
Jennifer Mankoff, Susan R. Fussell, Tawanna Dillahunt, Rachel Glaves, Catherine Grevet, Michael Johnson, Deanna Matthews, H. Scott Matthews, Robert McGuire, Robert Thompson. StepGreen.org: Increasing Energy Saving Behaviors via Social Networks. ICWSM ’10. (full paper) (pdf, video of talk)
C. Grevet, J. Mankoff, S. D. Anderson. Design and Evaluation of a Social Visualization Aimed at Encouraging Sustainable Behavior. In Proceedings of HICSS 2010. (full paper) (pdf)
T. Dillahunt, J. Mankoff, E. Paulos, S. Fussell. It’s Not All About “Green”: Energy Use in Low-Income Communities. In Proceedings of Ubicomp 2009. (Full paper) (pdf)
J. Froehlich, T. Dillahunt, P. Klasnja, J. Mankoff, S. Consolvo, B. Harrison, J. A. Landay, UbiGreen: Investigating a Mobile Tool for Tracking and Supporting Green Transportation Habits. In Proceedings of CHI 2009. (Full paper) (pdf)
J. Schwartz, J. Mankoff, H. Scott Matthews. Reflections of everyday activity in spending data. In Proceedings of CHI 2009. (Note). (pdf)
Jennifer Mankoff, Deanna Matthews, Susan R. Fussell and Michael Johnson. Leveraging Social Networks to Motivate Individuals to Reduce their Ecological Footprints. HICSS 2007. (pdf)
Rachael Nealer, Christopher Weber, H. Scott Matthews and Chris Hendrickson. Energy and Environmental Impacts of Consumer Purchases: A Case Study on Grocery Purchases. ISSST 2010
Dillahunt, T., Becker, G., Mankoff, J. and Kraut, R. “Motivating Environmentally Sustainable Behavior Changes with a Virtual Polar Bear.” Pervasive 2008 workshop on Pervasive Persuasive Technology and Environmental Sustainability. (pdf)
Johnson, M., Fussell, S., Mankoff, J., Matthews, D., and Setlock, L. “When Users Pledge to Take Green Actions, Are They Solving a Decision Problem?” INFORMS Fall 2008 Conference. (ppt)
Johnson, M., Fussell, S., Mankoff, J., and Matthews, D. “How Does Problem Representation Influence Decision Performance and Attitudes?” INFORMS Fall 2007 Conference. Abstract
Johnson, M.P. 2006. “Public Participation and Decision Support Systems: Theory, Requirements, and Applications.” For presentation at Association of Public Policy Analysis and Management Fall Conference, Madison, WI, November 3, 2006. (pdf)
Over the past decade and a half, corporations and academia have invested considerable time and money in the realization of ubiquitous computing. Yet design approaches that yield ecologically valid understandings of ubiquitous computing systems, which can help designers make design decisions based on how systems perform in the context of actual experience, remain rare. The central question underlying this article is: What barriers stand in the way of real-world, ecologically valid design for ubicomp?
Using a literature survey and interviews with 28 developers, we illustrate how issues of sensing and scale cause ubicomp systems to resist iteration, prototype creation, and ecologically valid evaluation. In particular, we found that developers have difficulty creating prototypes that are both robust enough for realistic use and able to handle ambiguity and error and that they struggle to gather useful data from evaluations because critical events occur infrequently, because the level of use necessary to evaluate the system is difficult to maintain, or because the evaluation itself interferes with use of the system. We outline pitfalls for developers to avoid as well as practical solutions, and we draw on our results to outline research challenges for the future. Crucially, we do not argue for particular processes, sets of metrics, or intended outcomes, but rather we focus on prototyping tools and evaluation methods that support realistic use in realistic settings that can be selected according to the needs and goals of a particular developer or researcher.