eDigs

Jennifer Mankoff, Dimeji Onafuwa, Kirstin Early, Nidhi Vyas, Vikram Kamath:
Understanding the Needs of Prospective Tenants. COMPASS 2018: 36:1-36:10

eDigs is a research group at Carnegie Mellon University working on sustainability. Our research focuses on helping people find the right rental through machine learning and user research.

We sometimes study how our members use eDigs in order to learn how to build software support for successful social communities.

Screenshot of edigs.org showing a mobile app, Facebook and Twitter feeds, and information about the project.

Nonvisual Interaction Techniques at the Keyboard Surface

Rushil Khurana, Duncan McIsaac, Elliot Lockerman, Jennifer Mankoff: Nonvisual Interaction Techniques at the Keyboard Surface. CHI 2018, to appear.

A table (shown on screen). Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the position of the cell and its content are read aloud.

Web user interfaces today leverage many common GUI design patterns, including navigation bars and menus (hierarchical structure), tabular content presentation, and scrolling. These visual-spatial cues enhance the interaction experience of sighted users. However, the linear nature of the screen translation tools currently available to blind users makes it difficult to understand or navigate these structures. We introduce Spatial Region Interaction Techniques (SPRITEs) for nonvisual access: a novel method for navigating two-dimensional structures using the keyboard surface. SPRITEs 1) preserve spatial layout, 2) enable bimanual interaction, and 3) improve the end user experience. We used a series of design probes to explore different methods for keyboard surface interaction. Our evaluation of SPRITEs shows that three times as many participants were able to complete spatial tasks with SPRITEs as with their preferred current technology.
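To make the keyboard-surface mapping concrete, here is a minimal Python sketch of the idea described above. It is not the SPRITEs implementation; the key bindings, table contents, and print-based “speech” output are illustrative stand-ins.

    # Illustrative sketch of keyboard-surface table navigation in the
    # spirit of SPRITEs; not the actual implementation. Number-row keys
    # ('1'-'9') select a column, left-edge keys ('q','a','z') select a
    # row, and each selection is "spoken" (print stands in for TTS).

    TABLE = [
        ["Name", "Role", "Office"],
        ["Jill", "Engineer", "Room 201"],
        ["Ravi", "Designer", "Room 305"],
    ]
    COLUMN_KEYS = "123456789"   # keyboard number row -> table columns
    ROW_KEYS = "qaz"            # leftmost key column -> table rows

    def announce(text):
        print(text)  # a real system would route this to a screen reader

    def navigate(keys):
        row, col = 0, 0         # top left cell is selected by default
        announce(f"({row}, {col}): {TABLE[row][col]}")
        for key in keys:
            if key in COLUMN_KEYS and int(key) - 1 < len(TABLE[0]):
                col = int(key) - 1
            elif key in ROW_KEYS and ROW_KEYS.index(key) < len(TABLE):
                row = ROW_KEYS.index(key)
            else:
                continue
            announce(f"({row}, {col}): {TABLE[row][col]}")

    navigate("2az")  # select column 2, then rows 2 and 3, as in the caption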

Talk [Slides]:

Sample Press:

KOMO Radio | New screen reader method helps blind, low-vision users browse complex web pages

Device helps blind, low-vision users better browse web pages, by Allen Cone

Graph showing task completion rates for different kinds of tasks in our user study
A user is searching a table (shown on screen) for the word ‘Jill’. Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the number of occurrences of the search query in the respective column or row is read aloud. When the query is found, the position and content of the cell are read aloud.

Expressing and Reusing Design Intent in 3D Models

Megan K. Hofmann, Gabriella Han, Scott E. Hudson, Jennifer Mankoff: Greater Than the Sum of Its PARTs: Expressing and Reusing Design Intent in 3D Models. CHI 2018, to appear.

With the increasing popularity of consumer-grade 3D printing, many people are creating, and even more using, objects shared on sites such as Thingiverse. However, our formative study of 962 Thingiverse models shows a lack of re-use of models, perhaps due to the advanced skills needed for 3D modeling. An end-user programming perspective on 3D modeling is needed. Our framework (PARTs) empowers amateur modelers to graphically specify design intent through geometry. PARTs includes a GUI, a scripting API, and an exemplar library of assertions, which test design expectations, and integrators, which act on intent to create geometry. PARTs lets modelers integrate advanced, model-specific functionality into designs so that they can be re-used and extended without programming. In two workshops, we show that PARTs helps users create 3D-printable models and modify existing models more easily than with a standard tool.
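The assertion/integrator split can be illustrated with a small sketch. The Python API below is hypothetical (PARTs itself is a GUI and scripting framework for 3D models); it only shows the pattern of testing a design expectation and acting on a declared intent.

    # Hypothetical sketch of the assertion/integrator pattern in PARTs;
    # names and API are illustrative, not the published tool. An assertion
    # tests a design expectation; an integrator generates geometry that
    # carries out a declared intent.

    class Model:
        def __init__(self, width, height, wall):
            self.width, self.height, self.wall = width, height, wall

    def assert_printable_wall(model, min_wall=1.2):
        """Assertion: walls must be thick enough to print (mm)."""
        if model.wall < min_wall:
            raise ValueError(f"wall {model.wall} mm is below {min_wall} mm")

    def integrate_label(model, text):
        """Integrator: act on a 'labeled' intent by emitting label geometry."""
        # A real integrator would produce solid geometry; we return a stub.
        return {"type": "embossed_label", "text": text,
                "position": (model.width / 2, model.height)}

    m = Model(width=40, height=20, wall=2.0)
    assert_printable_wall(m)           # the design expectation holds
    print(integrate_label(m, "v2"))    # geometry generated from intent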

Picture of 3D models and a printout

Hypertension recognition through overnight Heart Rate Variability sensing

Ni, H., Cho, S., Mankoff, J., & Yang, J. (2017). Automated recognition of hypertension through overnight continuous HRV monitoring. Journal of Ambient Intelligence and Humanized Computing, 1-13.

Hypertension, a chronic condition of elevated blood pressure, is a common disease. Since hypertension often has no warning signs or symptoms, many cases remain undiagnosed. Untreated or sub-optimally controlled hypertension may lead to cardiovascular, cerebrovascular, and renal morbidity and mortality, along with dysfunction of the autonomic nervous system. Therefore, it could be quite valuable to predict or provide early warnings about hypertension. Heart rate variability (HRV) analysis has emerged as the most valuable non-invasive test for assessing autonomic nervous system function, and has great potential for detecting hypertension. However, HRV indicators may be subtle and present at random, resulting in two challenges: how to support continuous monitoring for hours at a time while remaining unobtrusive, and how to efficiently analyze the collected data to minimize data collection and user burden. In this paper, we present a machine learning-based approach for detecting hypertension, using a waist belt continuous sensing system that is worn overnight. Using data from 24 hypertension patients and 24 healthy controls, we demonstrate that our approach can differentiate hypertension patients from healthy controls with 93.33% accuracy. This is a promising approach to hypertension classification in the field, and we expect to further improve its performance as larger numbers of hypertensive subjects are monitored with the proposed pervasive sensors.
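For readers unfamiliar with HRV-based classification, the sketch below shows the general recipe: compute time-domain HRV features from RR intervals, then train a supervised classifier. The features, classifier choice, and synthetic data are standard illustrative stand-ins, not the paper’s pipeline. It requires numpy and scikit-learn.

    # Illustrative HRV-based classification sketch; the features and
    # classifier are common choices, not necessarily those in the paper.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def hrv_features(rr_intervals_ms):
        """Common time-domain HRV features from RR intervals (ms)."""
        rr = np.asarray(rr_intervals_ms, dtype=float)
        diff = np.diff(rr)
        sdnn = rr.std()                       # overall variability
        rmssd = np.sqrt(np.mean(diff ** 2))   # short-term variability
        pnn50 = np.mean(np.abs(diff) > 50)    # fraction of large changes
        return [rr.mean(), sdnn, rmssd, pnn50]

    # X: one overnight feature vector per subject (here random placeholder
    # data; a real study would use the belt sensor's RR series).
    rng = np.random.default_rng(0)
    X = np.array([hrv_features(rng.normal(850, s, 500))
                  for s in rng.uniform(20, 80, 48)])
    y = (np.arange(48) < 24).astype(int)  # placeholder labels: 1 = hypertensive
    scores = cross_val_score(RandomForestClassifier(), X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")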

The Tangible Desktop

Mark S. Baldwin, Gillian R. Hayes, Oliver L. Haimson, Jennifer Mankoff, Scott E. Hudson: The Tangible Desktop: A Multimodal Approach to Nonvisual Computing. TACCESS 10(3): 9:1-9:28 (2017)

Audio-only interfaces, facilitated through text-to-speech screen reading software, have been the primary mode of computer interaction for blind and low-vision computer users for more than four decades. During this time, the advances that have made visual interfaces faster and easier to use, from direct manipulation to skeuomorphic design, have not been paralleled in nonvisual computing environments. The screen reader-dependent community is left with no alternatives for engaging with our rapidly advancing technological infrastructure. In this article, we describe our efforts to understand the problems that exist with audio-only interfaces. Based on observing screen reader use for four months at a computer training school for blind and low-vision adults, we identify three problem areas within audio-only interfaces: ephemerality, linear interaction, and unidirectional communication. We then evaluated a multimodal approach to computer interaction called the Tangible Desktop that addresses these problems by moving semantic information from the auditory to the tactile channel. Our evaluation demonstrated that, among novice screen reader users, the Tangible Desktop improved task completion times by an average of 6 minutes compared to traditional audio-only computer systems.

Also see: Mark S. Baldwin, Jennifer Mankoff, Bonnie A. Nardi, Gillian R. Hayes: An Activity Centered Approach to Nonvisual Computer Interaction. ACM Trans. Comput. Hum. Interact. 27(2): 12:1-12:27 (2020)

Uncertainty in Measurement

Kim, J., Guo, A., Yeh, T., Hudson, S. E., & Mankoff, J. (2017, June). Understanding Uncertainty in Measurement and Accommodating its Impact in 3D Modeling and Printing. In Proceedings of the 2017 Conference on Designing Interactive Systems (pp. 1067-1078). ACM.

3D printing enables everyday users to augment objects around them with personalized adaptations. Sharing platforms now host a proliferation of 3D models that support this. If a model is parametric, a novice modeler can obtain a custom model simply by entering a few parameters (e.g., in the Customizer tool on Thingiverse.com). In theory, such custom models could fit any real-world object one intends to augment. But in practice, a printed model seldom fits on the first try; multiple iterations are often necessary, wasting a considerable amount of time and material. We argue that parameterization or scaling alone is not sufficient for customizability, because users must correctly measure an object to specify parameters.

In a study of attempts to measure length, angle, and diameter, we demonstrate measurement errors as a significant (yet often overlooked) factor that adversely impacts the adaptation of 3D models to existing objects, requiring increased iteration. Images taken from our study are shown below.

We argue for a new design principle, accommodating measurement uncertainty, that designers as well as novices should begin to consider. We offer two strategies, modular joints and buffer insertion, to help designers build models that are robust to measurement uncertainty. Examples are shown below.
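As a rough illustration of buffer insertion, the sketch below pads a printed opening by a multiple of the expected measurement error, so that an imperfect user measurement still fits on the first print. The error estimate and dimensions are hypothetical.

    # Sketch of the buffer-insertion idea: pad a printed opening by the
    # expected measurement error so an imperfect measurement still fits
    # on the first print. The numbers are hypothetical.

    def buffered_diameter(measured_mm, measurement_sd_mm, k=2.0):
        """Opening diameter padded by k standard deviations of error."""
        return measured_mm + k * measurement_sd_mm

    # A user measures a curtain rod at 25.1 mm with calipers; suppose
    # typical measurement error has a 0.4 mm standard deviation.
    print(buffered_diameter(25.1, 0.4))  # -> 25.9 mm printed opening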


Making the field of computing more inclusive for people with disabilities

Lazar, J., Churchill, E. F., Grossman, T., Van der Veer, G., Palanque, P., Morris, J. S., & Mankoff, J. (2017). Making the field of computing more inclusive. Communications of the ACM, 60(3), 50-59.

More accessible conferences, digital resources, and ACM SIGs will lead to greater participation by more people with disabilities. Improving conference and online material accessibility has been an ongoing project that I’ve been lucky enough to help with. This effort, led by a wide set of people, is currently spearheaded by the SIGCHI Accessibility Community (also on Facebook) and is summarized in a recent Interactions blog post.

A Beam robot Jen is using to attend a conference

Volunteer AT Fabricators

Perry-Hill, J., Shi, P., Mankoff, J. & Ashbrook, D. Understanding Volunteer AT Fabricators: Opportunities and Challenges in DIY-AT for Others in e-NABLE. Accepted to CHI 2017

We present the results of a study of e-NABLE, a distributed, collaborative volunteer effort to design and fabricate upper-limb assistive technology devices for limb-different users. Informed by interviews with 14 stakeholders in e-NABLE, including volunteers and clinicians, we discuss differences and synergies between these groups with respect to motivations, skills, and perceptions of risks inherent in the project. We found that both groups are motivated to be involved in e-NABLE by the ability to use their skills to help others, and that their skill sets are complementary, but that their different perceptions of risk may result in uneven outcomes or missed expectations for end users. We offer four opportunities for design and technology to enhance the stakeholders’ abilities to work together.

A variety of 3D-printed upper-limb assistive technology devices designed and produced by volunteers in the e-NABLE community. Photos were taken by the fourth author in the e-NABLE lab on RIT’s campus.

Tactile Interfaces to Appliances

Anhong Guo, Jeeeun Kim, Xiang ‘Anthony’ Chen, Tom Yeh, Scott E. Hudson, Jennifer Mankoff, & Jeffrey P. Bigham, Facade: Auto-generating Tactile Interfaces to Appliances, In Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems (CHI’17), Denver, CO (To appear)

Common appliances have shifted toward flat interface panels, making them inaccessible to blind people. Although blind people can label appliances with Braille stickers, doing so generally requires sighted assistance to identify the original functions and apply the labels. We introduce Facade – a crowdsourced fabrication pipeline to help blind people independently make physical interfaces accessible by adding a 3D printed augmentation of tactile buttons overlaying the original panel. Facade users capture a photo of the appliance with a readily available fiducial marker (a dollar bill) for recovering size information. This image is sent to multiple crowd workers, who work in parallel to quickly label and describe elements of the interface. Facade then generates a 3D model for a layer of tactile and pressable buttons that fits over the original controls. Finally, a home 3D printer or commercial service fabricates the layer, which is then aligned and attached to the interface by the blind person. We demonstrate the viability of Facade in a study with 11 blind participants.
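The size-recovery step can be sketched simply: because a US dollar bill has a known physical size (about 156 x 66 mm), its pixel width in the photo yields a scale factor for converting crowd-labeled pixel measurements to millimeters. The pixel values below are made-up example inputs.

    # Sketch of Facade's size-recovery idea: a dollar bill in the photo
    # acts as a fiducial of known physical size, so pixel measurements
    # of the appliance panel can be converted to millimeters.

    BILL_WIDTH_MM = 156.0  # approximate width of a US dollar bill

    def mm_per_pixel(bill_width_px):
        return BILL_WIDTH_MM / bill_width_px

    def button_size_mm(button_px, bill_width_px):
        scale = mm_per_pixel(bill_width_px)
        return tuple(round(v * scale, 1) for v in button_px)

    # Crowd workers outline a button as 52 x 52 pixels; the bill spans
    # 624 pixels in the same photo.
    print(button_size_mm((52, 52), 624))  # -> (13.0, 13.0) mm tactile button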


Aversion to Typing Errors

Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry

Text entry is an increasingly important activity for mobile device users. As a result, increasing the text entry speed of expert typists is an important design goal for physical and soft keyboards. Mathematical models that predict text entry speed can help with keyboard design and optimization. Making typing errors when entering text is inevitable; however, current models do not consider how typists reduce the risk of errors by typing more slowly. We demonstrate that users respond to costly typing errors by reducing their typing speed. We present a model that estimates the effects of this risk aversion on typing speed, estimate the magnitude of the speed change, and show that disregarding the adjustments expert typists make to avoid errors leads to overly optimistic estimates of maximum errorless expert typing speeds.

Nikola Banovic, Varun Rao, Abinaya Saravanan, Anind K. Dey, and Jennifer Mankoff. 2017. Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry. (To appear) In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA.
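A toy version of this speed/error tradeoff helps make the argument concrete: if the probability of an error falls as the inter-key interval grows, and each error carries a fixed correction cost, then the expected time per keystroke is minimized at a slower typing speed when errors are costlier. The error function and cost values below are illustrative, not the paper’s fitted model.

    # Toy model of the speed/error tradeoff described above; the error
    # curve and correction costs are illustrative assumptions.
    import numpy as np

    def expected_time_per_char(iki_ms, error_cost_ms):
        """Expected cost of one keystroke at a given inter-key interval."""
        p_error = np.exp(-iki_ms / 80.0)   # faster typing -> more errors
        return iki_ms + p_error * error_cost_ms

    ikis = np.linspace(60, 400, 500)       # candidate inter-key intervals
    for cost in (200, 2000):               # cheap vs. costly corrections
        best = ikis[np.argmin(expected_time_per_char(ikis, cost))]
        print(f"error cost {cost} ms -> optimal interval {best:.0f} ms")

Running this prints a longer optimal inter-key interval for the costlier error, matching the paper’s observation that typists slow down when errors are expensive to correct.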