The Future of Access Technologies

Picture of a 3D printed arm with backscatter sensing technology attached to it.

Sieg 322, M/W 9-10:20

Access technology (AT) has the potential to increase autonomy and improve millions of people’s ability to live independently. This potential is currently under-realized because the expertise needed to create the right AT is in short supply, and the custom nature of AT makes it difficult to deliver inexpensively. Yet computers’ flexibility and exponentially increasing power have revolutionized and democratized access technologies. In addition, by studying access technology, we can gain valuable insights into the future of all user interface technology.

In this course we will focus on two primary domains for access technologies: Access to the world (first half of the class) and Access to computers (second half of the class). Students will start the course by learning some basic physical computing skills so that they have the tools to build novel access technologies. We will focus on creating AT using sensors and actuators that can be controlled and sensed with a mobile device. The largest project in the class will be an open-ended opportunity to explore access technology in more depth.

Class will meet 9-10:20 M/W

Class Syllabus

Private Class Canvas Website

Tentative Schedule

Week 1 (9/25 and 10/2 ONLY): Introduction

Week 2  (10/7; 10/9): 3D Printing & Laser Cutting

Week 3 (10/14; 10/16): Physical Computing

In class: Connect a simple LED circuit to a phone (see the sketch below)

Pair Project: Build a Better Button (Due 10/28)
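To give a flavor of the in-class activity, here is a minimal sketch of button-and-LED logic. It assumes a Raspberry Pi with the gpiozero library rather than the phone-connected hardware we will actually use in class, and the pin numbers are placeholders.

```python
# Minimal sketch (not the course platform): toggle an LED with a button.
# Assumes a Raspberry Pi and the gpiozero library; pins are placeholders.
from signal import pause
from gpiozero import LED, Button

led = LED(17)       # LED wired to GPIO 17
button = Button(2)  # momentary switch wired to GPIO 2

button.when_pressed = led.toggle  # each press flips the LED state

pause()  # keep the script alive, waiting for button events
```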

Week 4 (10/21; 10/23): Disability Studies

  • Critical perspectives on disability, assistive technology, and how the two relate
  • Methodological discussion
  • Disability Studies reading due

Week 5 (10/28; 10/30): Input

  • Characterizing the performance of input devices (see the sketch after this list)
  • Digital techniques for adapting to user input capabilities
  • Voice control
  • Eye Gaze
  • Passively Sensed Information
  • Project Proposals for second half project (Details of requirements TBD)
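For the first topic above, a standard way to characterize input device performance is Fitts’s law. The sketch below computes the Shannon-formulation index of difficulty and a simple throughput estimate; the trial numbers are made up for illustration and are not course data.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(trials):
    """A simple throughput estimate: mean ID/MT (bits per second)
    over (distance, width, movement_time_s) trials."""
    return sum(index_of_difficulty(d, w) / t for d, w, t in trials) / len(trials)

# Illustrative (made-up) pointing trials: target distance and width in px, time in s.
trials = [(512, 64, 0.9), (256, 32, 0.8), (768, 96, 1.1)]
print(f"Throughput: {throughput(trials):.2f} bits/s")
```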

Week 6 (11/4; 11/6): Output

  • Braille displays (see the sketch after this list)
  • Alternative tactile displays
  • Vibration
  • Visual displays for the deaf
  • Ambient Displays & Calm Computing
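As a small illustration of what drives braille output, the sketch below maps a handful of letters to Unicode braille cells using their Grade 1 dot patterns. A real braille translator (for example, liblouis) handles the full alphabet, contractions, numbers, and punctuation.

```python
# Minimal sketch: render a few letters as Unicode braille cells.
# Only a handful of Grade 1 letter patterns are included here.
DOTS = {          # letter -> raised dots (1-6)
    "a": [1], "b": [1, 2], "c": [1, 4],
    "d": [1, 4, 5], "e": [1, 5], "f": [1, 2, 4],
}

def to_braille(text):
    cells = []
    for ch in text.lower():
        dots = DOTS.get(ch, [])
        # U+2800 is the blank cell; raised dot n sets bit (n - 1).
        cells.append(chr(0x2800 + sum(1 << (d - 1) for d in dots)))
    return "".join(cells)

print(to_braille("decaf"))  # ⠙⠑⠉⠁⠋
```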

Week 7 (11/13 ONLY): Applications

  • Exercise & Recreation
  • Navigation & Maps
  • Programming and Computation
  • Reflection on role of User Research in Successful AT

Week 8 (11/18; 11/20): The Web

Learn about “The Web,” how access technologies interact with the Web, and how to make accessible web pages.

Google Video on Practical Web Accessibility — this video provides a great overview of the Web and how to make web content accessible. Highly recommended as a supplement to what we will cover in class.

WebAim.org — WebAIM has long been a leader in providing information and tutorials on making the Web accessible. A great source where you can read about accessibility issues, making content accessible, etc.
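As a taste of the kind of automated check accessibility tools perform, here is a minimal sketch that flags images with no alt attribute, using Python’s built-in HTML parser on a made-up page snippet. It illustrates only one check; tools like WAVE or axe cover far more, and nothing here substitutes for the WebAIM guidance above.

```python
# Minimal sketch: flag <img> tags that have no alt attribute.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "<unknown>"))

page = """
<img src="arm.jpg" alt="3D printed arm with backscatter sensor">
<img src="decoration.png">
"""

checker = AltTextChecker()
checker.feed(page)
for src in checker.missing:
    print(f"Missing alt text: {src}")
```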

Solo Assignment: Make An Accessible Web Page  (due for in-class grading on 11/18)

Week 9 (11/25; 11/27):  Screen Readers

  • Building a screen reader (NVDA, …); see the sketch after this list
  • Building an accessible app (one that works with a screen reader)
  • Paradigms for Nonvisual Input
  • Advanced Issues:
    • Optical Character Recognition
    • Image Labeling
    • Image description
    • Audio Description for Video
  • Test each other’s accessible pages
  • Mid-project Reports (Requirements TBD)
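The sketch referenced above is a toy version of what a screen reader does: walk an accessibility tree and announce each element’s role and name. Real screen readers such as NVDA read this information from platform accessibility APIs and send it to a speech synthesizer or braille display; here the tree is made up and the “speech” is just printed.

```python
# Toy accessibility tree, announced as "role, name" strings.
from dataclasses import dataclass, field

@dataclass
class Node:
    role: str
    name: str = ""
    children: list = field(default_factory=list)

page = Node("document", "Course home", [
    Node("heading", "The Future of Access Technologies"),
    Node("link", "Class Syllabus"),
    Node("button", "Submit project proposal"),
])

def announce(node, depth=0):
    label = f"{node.role}, {node.name}" if node.name else node.role
    print("  " * depth + label)   # swap print() for a TTS call in practice
    for child in node.children:
        announce(child, depth + 1)

announce(page)
```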

Week 10 (12/2):  Other Computer Accessibility Challenges

  • Low Bandwidth Input (see the sketch after this list)
  • Reading Assistance
  • Mousing Assistance
  • Macros
  • Expert Tasks
  • Volunteer Activity due
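The sketch referenced above illustrates why low-bandwidth input is a central challenge: with a single switch and step scanning over a letter grid, even a short word costs many switch presses. The grid layout and cost model are simplifying assumptions for illustration, not a real AAC layout.

```python
# Count switch presses to type a word with single-switch step scanning
# over a simple letter grid (rows of five letters).
import string

GRID = [list(string.ascii_lowercase[i:i + 5]) for i in range(0, 26, 5)]

def scan_cost(word):
    """Presses assuming step scanning: advance to the row, select it,
    advance to the column, then select the letter."""
    presses = 0
    for ch in word:
        for r, row in enumerate(GRID):
            if ch in row:
                presses += r + row.index(ch) + 2
                break
    return presses

print(scan_cost("hello"), "switch presses to type 'hello'")
```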

————–

Interesting topics to consider (e.g. from Jeff’s class)

Transcoding

Topics:

  • Transcoding content to make it more accessible
  • Middleware

“Occupational Therapy is Making”: Clinical Rapid Prototyping and Digital Fabrication

Splint that has been 3D printed in a material of an appropriate skin color and fit to a client's hand.

Lyme Disease’s Impact

An ongoing and very personal thread of research that our group engages in (due to my own journey with Lyme disease, which I occasionally blog about here) is research into the impacts of Lyme disease and opportunities for supporting patients with Lyme disease. From a patient perspective, Lyme disease is as tough to deal with as many other better-known conditions [1].

Lyme disease can be difficult to navigate because of the disagreements about its diagnosis and the disease process. In addition, it is woefully underfunded and understudied, given that the CDC estimates around 300,000 new cases occur per year (similar to the rate of breast cancer) [2].

Bar chart showing that Lyme disease is woefully understudied.

As an HCI researcher, I started out trying to understand the relationship that Lyme Disease patients have with digital technologies. For example, we studied the impact of conflicting information online on patients [3] and how patients self-mediate the accessibility of online content [4]. It is my hope to eventually begin exploring technologies that can improve quality of life as well.

However, one thing patients need right away is peer-reviewed evidence about the impact that Lyme disease has on patients (e.g. [1]) and the value of treatment for patients (e.g. [2]). Here, as a technologist, the opportunity is to work with big data (thousands of patient reports) to unpack trends and model outcomes in new ways. That research is still in the formative stages, but in our most recent publication [2] we use straightforward subgroup analysis to demonstrate that treatment effectiveness is not adequately captured simply by looking at averages.

This chart shows that there is a large subgroup (about a third) of respondents to our survey who reported positive response to treatment, even though the average response was not positive.
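A toy example (with made-up numbers, not our survey data) shows why subgroup analysis matters here: an overall mean change near zero can coexist with roughly a third of respondents improving substantially.

```python
# Illustration only: a flat overall mean can hide a subgroup that improved.
from statistics import mean

# Made-up change in a symptom score after treatment (+ = improved).
responses = [+3, +4, +2, +3,                      # ~1/3 improve markedly
             0, -1, 0, +1, -2, 0, -1, -1]         # the rest change little

improved = [r for r in responses if r >= 2]

print(f"Overall mean change: {mean(responses):.2f}")            # looks unimpressive
print(f"Share improving by >= 2 points: {len(improved) / len(responses):.0%}")
print(f"Mean change within that subgroup: {mean(improved):.2f}")
```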

There are many opportunities and much need for further data analysis here, including documenting the impact of differences such as gender on treatment (and on access to treatment), and developing interventions that help patients track symptoms, manage interactions with and among doctors, and navigate accessibility and access issues.

[1] Johnson, L., Wilcox, S., Mankoff, J., & Stricker, R. B. (2014). Severity of chronic Lyme disease compared to other chronic conditions: a quality of life survey. PeerJ, 2, e322.

[2] Johnson, L., Shapiro, M. & Mankoff, J. Removing the mask of average treatment effects in chronic Lyme Disease research using big data and subgroup analysis.

[3] Mankoff, J., Kuksenok, K., Kiesler, S., Rode, J. A., & Waldman, K. (2011, May). Competing online viewpoints and models of chronic illness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 589-598). ACM.

[4] Kuksenok, K., Brooks, M., & Mankoff, J. (2013, April). Accessible online content creation by end users. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 59-68). ACM.

 

Xin Liu

Xin is a first-year Ph.D. student with Jennifer Mankoff and Shwetak Patel in the Paul G. Allen School of Computer Science & Engineering at the University of Washington – Seattle. Prior to joining UW, he obtained a Bachelor’s degree in computer science from the University of Massachusetts Amherst in 2018. While at UMass Amherst, he received a 21st Century Leaders Award, a Rising Researcher Award, and an Outstanding Undergraduate Achievements Award. He is interested in using wearable sensing, human-computer interaction, and machine learning to advance healthcare.

Website: https://homes.cs.washington.edu/~xliu0/

Volunteer AT Fabricators

Perry-Hill, J., Shi, P., Mankoff, J. & Ashbrook, D. Understanding Volunteer AT Fabricators: Opportunities and Challenges in DIY-AT for Others in e-NABLE. Accepted to CHI 2017

We present the results of a study of e-NABLE, a distributed, collaborative volunteer effort to design and fabricate upper-limb assistive technology devices for limb-different users. Informed by interviews with 14 stakeholders in e-NABLE, including volunteers and clinicians, we discuss differences and synergies among each group with respect to motivations, skills, and perceptions of risks inherent in the project. We found that both groups are motivated to be involved in e-NABLE by the ability to use their skills to help others, and that their skill sets are complementary, but that their different perceptions of risk may result in uneven outcomes or missed expectations for end users. We offer four opportunities for design and technology to enhance the stakeholders’ abilities to work together.

A variety of 3D-printed upper-limb assistive technology devices designed and produced by volunteers in the e-NABLE community. Photos were taken by the fourth author in the e-NABLE lab on RIT’s campus.

Tactile Interfaces to Appliances

Anhong Guo, Jeeeun Kim, Xiang ‘Anthony’ Chen, Tom Yeh, Scott E. Hudson, Jennifer Mankoff, & Jeffrey P. Bigham, Facade: Auto-generating Tactile Interfaces to Appliances, In Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems (CHI’17), Denver, CO (To appear)

Common appliances have shifted toward flat interface panels, making them inaccessible to blind people. Although blind people can label appliances with Braille stickers, doing so generally requires sighted assistance to identify the original functions and apply the labels. We introduce Facade – a crowdsourced fabrication pipeline to help blind people independently make physical interfaces accessible by adding a 3D printed augmentation of tactile buttons overlaying the original panel. Facade users capture a photo of the appliance with a readily available fiducial marker (a dollar bill) for recovering size information. This image is sent to multiple crowd workers, who work in parallel to quickly label and describe elements of the interface. Facade then generates a 3D model for a layer of tactile and pressable buttons that fits over the original controls. Finally, a home 3D printer or commercial service fabricates the layer, which is then aligned and attached to the interface by the blind person. We demonstrate the viability of Facade in a study with 11 blind participants.
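The size-recovery step can be illustrated with simple arithmetic: a US dollar bill has a known width, so measuring it in pixels yields a millimeters-per-pixel scale for the rest of the photo. The sketch below uses made-up pixel measurements and skips the marker detection and perspective correction that the actual pipeline performs.

```python
# Scale recovery from a known-size fiducial (a US dollar bill).
# Pixel values below are made up for illustration.
BILL_WIDTH_MM = 155.956  # a US bill is 6.14 in (155.956 mm) wide

def mm_per_pixel(bill_width_px):
    return BILL_WIDTH_MM / bill_width_px

def control_size_mm(control_px, bill_width_px):
    """Convert a labeled control's pixel size to physical millimeters."""
    scale = mm_per_pixel(bill_width_px)
    w_px, h_px = control_px
    return w_px * scale, h_px * scale

# Example: the bill spans 620 px; a labeled control spans 60 x 40 px.
print(control_size_mm((60, 40), 620))  # approximate physical size in mm
```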


Printable Adaptations

Shows someone placing a pen in a cap with two different types of adaptations.

Reprise: A Design Tool for Specifying, Generating, and Customizing 3D Printable Adaptations on Everyday Objects

Reprise is a tool for creating custom, 3D-printable adaptations that make it easier to manipulate everything from tools to zipper pulls. Reprise’s library is based on a survey of about 3,000 assistive technology adaptations and life hacks drawn from textbooks on the topic as well as Thingiverse. Using Reprise, it is possible to specify a type of action (such as grasp or pull), indicate the direction of action on a 3D model of the object being adapted, parameterize the action in a simple GUI, specify an attachment method, and produce a 3D model that is ready to print.
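To illustrate the underlying idea (this is not Reprise’s implementation), the sketch below turns a few parameters for a “grasp” adaptation into OpenSCAD source for a printable sleeve that slips over a cylindrical handle such as a pen or zipper pull. The clearance value and geometry are illustrative assumptions.

```python
# Illustration only: parameters -> OpenSCAD source for a simple grip sleeve.
def grasp_sleeve_scad(handle_diameter_mm, grip_diameter_mm, length_mm):
    inner_r = handle_diameter_mm / 2 + 0.3   # small clearance so the handle fits
    outer_r = grip_diameter_mm / 2
    return f"""
difference() {{
    cylinder(h={length_mm}, r={outer_r}, $fn=64);   // outer grip body
    cylinder(h={length_mm}, r={inner_r}, $fn=64);   // bore for the handle
}}
""".strip()

print(grasp_sleeve_scad(handle_diameter_mm=8, grip_diameter_mm=25, length_mm=40))
```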

Xiang ‘Anthony’ Chen, Jeeeun Kim, Jennifer Mankoff, Tovi Grossman, Stelian Coros, Scott Hudson (2016). Reprise: A Design Tool for Specifying, Generating, and Customizing 3D Printable Adaptations on Everyday Objects. Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST 2016) (pdf)


Helping Hands

Prosthetic limbs and assistive technology (AT) require customization and modification over time to effectively meet the needs of end users. Yet this process is typically costly and, as a result, abandonment rates are very high. Rapid prototyping technologies such as 3D printing have begun to alleviate this issue by making it possible to inexpensively and iteratively create general AT designs and prosthetics. However, for effective use, technology must be applied using design methods that support physical rapid prototyping and can accommodate the unique needs of a specific user. While most research has focused on the tools for creating fitted assistive devices, we focus on the requirements of a design process that engages the user and designer in the rapid iterative prototyping of prosthetic devices.

We present a case study of three participants with upper-limb amputations working with researchers to design prosthetic devices for specific tasks. Kevin wanted to play the cello, Ellen wanted to ride a hand-cycle (a bicycle for people with lower limb mobility impairments), and Bret wanted to use a table knife. Our goal was to identify requirements for a design process that can engage the assistive technology user in rapidly prototyping assistive devices that fill needs not easily met by traditional assistive technology. Our study made use of 3D printing and other playful and practical prototyping materials. We discuss materials that support on-the-spot design and iteration, dimensions along which in-person iteration is most important (such as length and angle), and the value of a supportive social network for users who prototype their own assistive technology. From these findings we argue for the importance of extensions in supporting modularity, community engagement, and relatable prototyping materials in the iterative design of prosthetics.

Photos

Project Files

https://www.thingiverse.com/thing:2365703

Project Publications

Helping Hands: Requirements for a Prototyping Methodology for Upper-limb Prosthetics Users

Reference:

Megan Kelly Hofmann, Jeffery Harris, Scott E. Hudson, and Jennifer Mankoff. 2016. Helping Hands: Requirements for a Prototyping Methodology for Upper-limb Prosthetics Users. In Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 525-534.

Making Connections: Modular 3D Printing for Designing Assistive Attachments to Prosthetic Devices

Reference:

Megan Kelly Hofmann. 2015. Making Connections: Modular 3D Printing for Designing Assistive Attachments to Prosthetic Devices. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (ASSETS ’15). ACM, New York, NY, USA, 353-354. DOI=http://dx.doi.org/10.1145/2700648.2811323

Supporting Navigation in the Wild for the Blind

Sighted individuals often develop significant knowledge about their environment through what they can visually observe. In contrast, individuals who are visually impaired mostly acquire such knowledge about their environment through information that is explicitly related to them. Our work examines the practices that visually impaired individuals use to learn about their environments and the associated challenges. In the first of our two studies, we uncover four types of information needed to master and navigate the environment. We detail how individuals’ context impacts their ability to learn this information, and outline requirements for independent spatial learning. In a second study, we explore how individuals learn about places and activities in their environment. Our findings show that users not only learn information to satisfy their immediate needs, but also to enable future opportunities – something existing technologies do not fully support. From these findings, we discuss future research and design opportunities to assist the visually impaired in independent spatial learning.

Uncovering information needs for independent spatial learning for users who are visually impaired. Nikola Banovic, Rachel L. Franz, Khai N. Truong, Jennifer Mankoff, and Anind K. Dey. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’13). ACM, New York, NY, USA, Article 24, 8 pages. (pdf)

Henny Admoni

Photo of Henny Admoni

Henny Admoni is a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University, where she works on assistive robotics and human-robot interaction with Siddhartha Srinivasa in the Personal Robotics Lab. Henny develops and studies intelligent robots that improve people’s lives by providing assistance through social and physical interactions. She studies how nonverbal communication, such as eye gaze and pointing, can improve assistive interactions by revealing underlying human intentions and increasing human-robot communication. Henny completed her PhD in Computer Science at Yale University with Professor Brian Scassellati. Her PhD dissertation was about modeling the complex dynamics of nonverbal behavior for socially assistive human-robot interaction. Henny holds an MS in Computer Science from Yale University, and a BA/MA joint degree in Computer Science from Wesleyan University. Henny’s scholarship has been recognized with awards such as the NSF Graduate Research Fellowship, the Google Anita Borg Memorial Scholarship, and the Palantir Women in Technology Scholarship.