Making the field of computing more inclusive for people with disabilities

Lazar, J., Churchill, E. F., Grossman, T., Van der Veer, G., Palanque, P., Morris, J. S., & Mankoff, J. (2017). Making the field of computing more inclusive. Communications of the ACM, 60(3), 50-59.

More accessible conferences, digital resources, and ACM SIGs will lead to greater participation by more people with disabilities. Improving the accessibility of conferences and online materials has been an ongoing project that I’ve been lucky enough to help with. This effort, led by a wide set of people, is currently spearheaded by the SIGCHI Accessibility Community (also on Facebook) and is summarized in a recent Interactions blog post.

 

A Beam Robot Jen is using to attend a conference

Volunteer AT Fabricators

Perry-Hill, J., Shi, P., Mankoff, J., & Ashbrook, D. Understanding Volunteer AT Fabricators: Opportunities and Challenges in DIY-AT for Others in e-NABLE. Accepted to CHI 2017.

We present the results of a study of e-NABLE, a distributed, collaborative volunteer effort to design and fabricate upper-limb assistive technology devices for limb-different users. Informed by interviews with 14 stakeholders in e-NABLE, including volunteers and clinicians, we discuss differences and synergies among each group with respect to motivations, skills, and perceptions of risks inherent in the project. We found that both groups are motivated to be involved in e-NABLE by the ability to use their skills to help others, and that their skill sets are complementary, but that their different perceptions of risk may result in uneven outcomes or missed expectations for end users. We offer four opportunities for design and technology to enhance the stakeholders’ abilities to work together.

A variety of 3D-printed upper-limb assistive technology devices designed and produced by volunteers in the e-NABLE community. Photos were taken by the fourth author in the e-NABLE lab on RIT’s campus.

Tactile Interfaces to Appliances

Anhong Guo, Jeeeun Kim, Xiang ‘Anthony’ Chen, Tom Yeh, Scott E. Hudson, Jennifer Mankoff, & Jeffrey P. Bigham, Facade: Auto-generating Tactile Interfaces to Appliances, In Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems (CHI’17), Denver, CO (To appear)

Common appliances have shifted toward flat interface panels, making them inaccessible to blind people. Although blind people can label appliances with Braille stickers, doing so generally requires sighted assistance to identify the original functions and apply the labels. We introduce Facade – a crowdsourced fabrication pipeline to help blind people independently make physical interfaces accessible by adding a 3D printed augmentation of tactile buttons overlaying the original panel. Facade users capture a photo of the appliance with a readily available fiducial marker (a dollar bill) for recovering size information. This image is sent to multiple crowd workers, who work in parallel to quickly label and describe elements of the interface. Facade then generates a 3D model for a layer of tactile and pressable buttons that fits over the original controls. Finally, a home 3D printer or commercial service fabricates the layer, which is then aligned and attached to the interface by the blind person. We demonstrate the viability of Facade in a study with 11 blind participants.
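The size-recovery step can be illustrated with a short sketch. This is our own simplification (the function names are hypothetical, not Facade's code): a US dollar bill in the photo serves as a fiducial of known physical width, letting crowd-labeled pixel coordinates be converted into millimeters for the 3D-printed overlay.

```python
# Sketch of the fiducial-based size recovery. The bill's physical width is
# known, so one measurement in pixels yields a mm-per-pixel scale factor.

DOLLAR_WIDTH_MM = 155.96  # physical width of a US bill (6.14 inches)

def mm_per_pixel(bill_width_px):
    """Scale factor recovered from the fiducial marker."""
    return DOLLAR_WIDTH_MM / bill_width_px

def button_to_mm(button_px, scale):
    """Convert a crowd-labeled button rectangle (pixels) to millimeters."""
    x, y, w, h = button_px
    return (x * scale, y * scale, w * scale, h * scale)

scale = mm_per_pixel(bill_width_px=624)      # the bill spans 624 px here
button_mm = button_to_mm((100, 200, 40, 40), scale)
```

With the scale in hand, every labeled control can be positioned on the tactile overlay in real-world units.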


Aversion to Typing Errors

Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry

Text entry is an increasingly important activity for mobile device users. As a result, increasing text entry speed of expert typists is an important design goal for physical and soft keyboards. Mathematical models that predict text entry speed can help with keyboard design and optimization. Making typing errors when entering text is inevitable. However, current models do not consider how typists themselves reduce the risk of making typing errors (and lower error frequency) by typing more slowly. We demonstrate that users respond to costly typing errors by reducing their typing speed to minimize typing errors. We present a model that estimates the effects of risk aversion to errors on typing speed. We estimate the magnitude of this speed change, and show that disregarding the adjustments to typing speed that expert typists use to reduce typing errors leads to overly optimistic estimates of maximum errorless expert typing speeds.

Nikola Banovic, Varun Rao, Abinaya Saravanan, Anind K. Dey, and Jennifer Mankoff. 2017. Quantifying Aversion to Costly Typing Errors in Expert Mobile Text Entry. (To appear) In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA.
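The core tradeoff can be shown with a toy calculation. This is an illustrative simplification, not the paper's actual model: typing faster lowers the per-character time but raises the error probability (we assume a hypothetical exponential form), and each error costs extra correction time, so a risk-averse typist picks the speed minimizing expected total time.

```python
import math

def expected_time_per_char(speed, error_cost):
    """Expected seconds per character at a given speed (chars/sec).

    The error probability grows with speed (an assumed form, for
    illustration only); errors incur an extra correction cost.
    """
    p_error = 1 - math.exp(-0.08 * speed)
    return 1.0 / speed + p_error * error_cost

def best_speed(error_cost):
    """Grid-search the speed that minimizes expected time per character."""
    speeds = [s / 10 for s in range(10, 101)]  # 1.0 .. 10.0 chars/sec
    return min(speeds, key=lambda s: expected_time_per_char(s, error_cost))

print(best_speed(error_cost=0.5))  # faster when errors are cheap
print(best_speed(error_cost=3.0))  # slower when errors are costly
```

Even in this toy form, raising the cost of an error pushes the optimal speed down, which is the adjustment the paper shows naive models miss.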

Modeling & Generating Routines

Leveraging Human Routine Models to Detect and Generate Human Behaviors

An ability to detect behaviors that negatively impact people’s wellbeing and show people how they can correct those behaviors could enable technology that improves people’s lives. Existing supervised machine learning approaches to detect and generate such behaviors require lengthy and expensive data labeling by domain experts. In this work, we focus on the domain of routine behaviors, where we model routines as a series of frequent actions that people perform in specific situations. We present an approach that bypasses labeling each behavior instance that a person exhibits. Instead, we weakly label instances using people’s demonstrated routine. We classify and generate new instances based on the probability that they belong to the routine model. We illustrate our approach on an example system that helps drivers become aware of and understand their aggressive driving behaviors. Our work enables technology that can trigger interventions and help people reflect on their behaviors when those behaviors are likely to negatively impact them.

Nikola Banovic, Anqi Wang, Yanfeng Jin, Christie Chang, Julian Ramos, Anind K. Dey, and Jennifer Mankoff. 2017. Leveraging Human Routine Models to Detect and Generate Human Behaviors. (To appear) In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA.
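The weak-labeling idea can be sketched in a few lines. This is our own minimal simplification, not the paper's full model: the routine assigns each situation a probability distribution over actions, estimated from demonstrations, and a behavior instance is labeled "routine" when its likelihood under the model clears a threshold.

```python
from collections import Counter, defaultdict

def fit_routine(demonstrations):
    """Estimate P(action | situation) from (situation, action) pairs."""
    counts = defaultdict(Counter)
    for situation, action in demonstrations:
        counts[situation][action] += 1
    return {s: {a: n / sum(c.values()) for a, n in c.items()}
            for s, c in counts.items()}

def likelihood(model, instance):
    """Probability the routine model assigns to a behavior instance."""
    p = 1.0
    for situation, action in instance:
        p *= model.get(situation, {}).get(action, 0.0)
    return p

def weak_label(model, instance, threshold=0.5):
    """Weakly label an instance without any expert annotation."""
    return "routine" if likelihood(model, instance) >= threshold else "deviation"

# Toy driving demonstrations: braking at red lights is the routine.
demos = [("red_light", "brake"), ("red_light", "brake"),
         ("red_light", "accelerate"), ("green_light", "go")]
model = fit_routine(demos)
print(weak_label(model, [("red_light", "brake")]))       # routine
print(weak_label(model, [("red_light", "accelerate")]))  # deviation
```

No instance is ever hand-labeled; the demonstrated routine itself supplies the (weak) labels, which is what removes the expensive expert annotation step.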

3D Printing with Embedded Textiles



Stretching the Bounds of 3D Printing with Embedded Textiles

Textiles are an old and well developed technology that have many desirable characteristics. They can be easily folded, twisted, deformed, or cut; some can be stretched; many are soft. Textiles can maintain their shape when placed under tension and can even be engineered with variable stretching ability.

When combined, textiles and 3D printing open up new opportunities for rapidly creating rigid objects with embedded flexibility as well as soft materials imbued with additional functionality. We introduce a suite of techniques for integrating the two and demonstrate how the malleability, stretchability and aesthetic qualities of textiles can enhance rigid printed objects, and how textiles can be augmented with functional properties enabled by 3D printing.



Citation

Rivera, M.L., Moukperian, M., Ashbrook, D., Mankoff, J., & Hudson, S.E. 2017. Stretching the Bounds of 3D Printing with Embedded Textiles. To appear in the annual ACM Conference on Human Factors in Computing Systems (CHI ’17). [Paper]

Watch-ya-doin

Watch-ya-doin is an experience-sampling framework for longitudinal data collection and analysis. Our system consists of a smartwatch and an Android device working unobtrusively to track data. Our goal is to train on and recognize a specific activity over time. We use a simple wrist-worn accelerometer to predict eating behavior and other activities. These devices are inexpensive to deploy and easy to maintain: with our application, battery life is a full week.
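A back-of-the-envelope sketch of this kind of activity recognition (the feature set and centroids here are our own illustration, not the deployed system): slice the accelerometer stream into windows, compute simple statistics, and label each window with the nearest activity learned from training data.

```python
import math

def features(window):
    """Mean and standard deviation of accelerometer magnitude in a window."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    return (mean, std)

def nearest_centroid(centroids, window):
    """Label a window with the closest trained activity centroid."""
    f = features(window)
    return min(centroids, key=lambda label: math.dist(centroids[label], f))

# Hypothetical per-activity centroids in (mean, std) feature space.
centroids = {
    "eating":  (1.1, 0.4),   # small, repeated wrist movements
    "resting": (1.0, 0.05),  # nearly still
}
window = [(0.0, 0.7, 0.7), (0.9, 0.3, 0.5), (0.1, 0.6, 0.8)]
print(nearest_centroid(centroids, window))  # -> resting
```

In practice richer features and classifiers are used, but even this skeleton shows why a single wrist-worn accelerometer is enough to separate gross activity classes.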
Our primary application area is AT abandonment. About 700,000 people in the United States have an upper limb amputation, and about 6.8 million face fine motor and/or arm dexterity limitations [1]. Assistive technology (AT), ranging from myo-electric and passive prosthetics to a variety of orthotics, can aid rehabilitation and improve independence and the ability to perform everyday tasks. Yet AT is not used to its full potential, with abandonment rates ranging from 23% to 90% for prosthetics users, and high abandonment of orthotics as well. Given the cost of these devices, this represents an enormous waste of the financial investment in developing, fabricating, and providing them, and it can lead to frustration, insufficient rehabilitation, increased risk of limb-loss-associated co-morbidities, and overall a reduced quality of life for the recipient.
To address this, we need objective and accurate information about AT use. Current data comes primarily from questionnaires or from skill testing during office visits. Apart from being limited by subjectivity and evaluator bias, survey tools are also ill-suited to estimating quality of use. A patient may, more or less accurately, report using AT for a certain number of hours a day, but this does not indicate which tasks the device was used for, making it difficult to evaluate how appropriate or helpful it was. In addition, neither reported use time nor skill testing is sufficient to predict abandonment once AT is deployed.

Our next steps include generalizing our approach to AT (such as upper limb prosthetics) and expanding it to include a wider variety of tracked activities. In addition, we will develop a longitudinal data set that includes examples of abandonment. This will allow the creation of algorithms that can characterize the type and quality of use over the lifecycle of AT and predict abandonment.

[1] U.S. Census 2001

Printable Adaptations

Reprise: A Design Tool for Specifying, Generating, and Customizing 3D Printable Adaptations on Everyday Objects

Reprise is a tool for creating custom adaptive 3D printable designs that make everything from tools to zipper pulls easier to manipulate. Reprise’s library is based on a survey of about 3,000 assistive technology adaptations and life hacks drawn from textbooks on the topic as well as Thingiverse. Using Reprise, it is possible to specify a type of action (such as grasp or pull), indicate the direction of action on a 3D model of the object being adapted, parameterize the action in a simple GUI, specify an attachment method, and produce a 3D model that is ready to print.
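The specify-then-generate flow can be sketched as a small data model. The class and parameter names below are hypothetical, not Reprise's API: the designer picks an action, sets its direction and a GUI-tunable parameter, and chooses an attachment method, yielding a description from which a printable model would be generated.

```python
from dataclasses import dataclass

@dataclass
class Adaptation:
    action: str        # e.g. "grasp" or "pull"
    direction: str     # direction of action on the target object
    lever_mm: float    # GUI-tunable parameter (e.g. lever length)
    attachment: str    # e.g. "clip", "strap", "press-fit"

    def describe(self):
        """Human-readable summary of the adaptation to be generated."""
        return (f"{self.action} aid, {self.lever_mm:.0f} mm lever, "
                f"acting {self.direction}, attached by {self.attachment}")

zipper_pull = Adaptation(action="pull", direction="downward",
                         lever_mm=40, attachment="clip")
print(zipper_pull.describe())
```

The real tool, of course, goes further: it attaches this specification to a 3D model of the target object and emits print-ready geometry.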

Xiang ‘Anthony’ Chen, Jeeeun Kim, Jennifer Mankoff, Tovi Grossman, Stelian Coros, Scott Hudson (2016). Reprise: A Design Tool for Specifying, Generating, and Customizing 3D Printable Adaptations on Everyday Objects. Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST 2016) (pdf)


A Knitting Machine Compiler

 

A teddy bear wearing a knit hat, scarf (with pocket), and sweater

Although industrial knitting machines can automatically produce a wide range of garments, they are programmed through onerous means such as pixel-level image manipulation. This limits the potential for automation of knitted object design and re-use of object components, and narrows the audience able to design for these machines. Our contribution is a visual design interface for specifying objects in terms of tubes and sheets, and a compiler that can convert such an object into knittable machine instructions, handling knotty issues such as transfer planning (among needles) correctly. We demonstrate the range of objects our approach supports by example.
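A drastically simplified sketch of the lowering step (a toy of our own, not the actual compiler): a high-level tube specification is compiled into a flat list of per-round machine instructions by casting on one needle per stitch around the tube, knitting each round, then binding off.

```python
def compile_tube(circumference, rows):
    """Compile a knit tube spec into a flat list of machine instructions."""
    program = [f"cast-on {circumference} needles"]
    for r in range(rows):
        program.append(f"knit round {r + 1} ({circumference} stitches)")
    program.append("bind-off")
    return program

for line in compile_tube(circumference=8, rows=3):
    print(line)
```

The hard part the real compiler solves, and this toy omits entirely, is transfer planning: moving stitches between needle beds so that joins, splits, and shaping come out correct.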

A Compiler for 3D Machine Knitting (SIGGRAPH 2016). Jim McCann, Lea Albaugh, Vidya Narayanan, April Grow, Wojciech Matusik, Jennifer Mankoff, Jessica Hodgins

RapID — interactive RFID

RapID – A framework for fabricating low-latency interactive objects with RFID tags

RFID tags can be used to add inexpensive, wireless, batteryless sensing to objects. However, quickly and accurately estimating the state of an RFID tag is difficult. In this work, we show how to achieve low-latency manipulation and movement sensing with off-the-shelf RFID tags and readers. Our approach couples a probabilistic filtering layer with a Monte-Carlo-sampling-based interaction layer, preserving uncertainty in tag reads until it can be resolved in the context of interactions. This allows designers’ code to reason about inputs at a high level. We demonstrate the effectiveness of our approach with a number of interactive objects, along with a library of components that can be combined to make new designs.
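The filtering idea can be illustrated with a stripped-down sketch (our own simplification, not RapID's implementation): whether a tag was read in each reader cycle is noisy evidence about its state, for instance a finger covering the tag suppresses reads, so we maintain a probability over the state and update it with Bayes' rule rather than trusting any single read.

```python
# Assumed per-state read rates (illustrative numbers, not measured values).
P_READ = {"uncovered": 0.9, "covered": 0.2}

def update(belief, was_read):
    """One Bayesian update of P(covered) from a single reader cycle."""
    p_cov, p_unc = belief, 1.0 - belief
    like_cov = P_READ["covered"] if was_read else 1 - P_READ["covered"]
    like_unc = P_READ["uncovered"] if was_read else 1 - P_READ["uncovered"]
    num = like_cov * p_cov
    return num / (num + like_unc * p_unc)

belief = 0.5  # start unsure whether the tag is covered
for was_read in [False, False, True, False, False]:  # mostly missed reads
    belief = update(belief, was_read)
print(round(belief, 2))  # high probability the tag is being touched
```

Because the belief stays probabilistic until an interaction needs a decision, a single spurious read (the `True` above) shifts the estimate only slightly instead of flipping the detected state, which is what keeps the latency low without sacrificing accuracy.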

RapID: A Framework for Fabricating Low-Latency Interactive Objects with RFID Tags (CHI 2016, Page 5897). Andrew Spielberg, Alanson Sample, Scott E. Hudson, Jennifer Mankoff, James McCann
