Digital Fabrication in Medical Practice

Maker culture in health care is on the rise with the rapid adoption of consumer-grade fabrication technologies. However, little is known about the activity and resources involved in prototyping medical devices to improve patient care. In this paper, we characterize medical making based on a qualitative study of medical stakeholder engagement in physical prototyping (making) experiences. We examine perspectives from diverse stakeholders, including clinicians, engineers, administrators, and medical researchers. Through 18 semi-structured interviews with medical-makers in the US and Canada, we analyze making activity in medical settings. We find that medical-makers share strategies to address risks, define labor roles, and acquire resources by adapting traditional structures or creating new infrastructures. Our findings outline how medical-makers mitigate risks to patient safety, collaborate with local and global stakeholder networks, and overcome constraints of co-location and material practices. We recommend a clinician-aided software system, partially-open repositories, and a collaborative skill-share social network to extend their strategies in support of medical making.

“Point-of-Care Manufacturing”: Maker Perspectives on Digital Fabrication in Medical Practice. Udaya Lakshmi, Megan Hofmann, Stephanie Valencia, Lauren Wilcox, Jennifer Mankoff and Rosa Arriaga. CSCW 2019. To Appear.

A Venn diagram showing the domains of expertise of those we interviewed, including people from hospitals, universities, non-profits, VA networks, private practices, and government. We interviewed clinicians and facilitators in each of these domains, and there was a great deal of overlap, with participants falling into multiple categories. For example, one participant was in a VA network and in private practice, while another was at a university and also a non-profit.

Project 2: Build a Better Button

Learning Goals for the Project

  • Learn about circuit design
  • Learn how to communicate between an Arduino and your phone
  • Build a simple circuit that is enhanced by its connection to your phone

Basic Requirements for Project

Your project should demonstrate your ability to take input from at least one button (or other sensor) and connect it to some interesting service. Your focus should be on circuit design and Arduino programming. You don’t need to create a custom phone app, though you can, if you want, create a custom case or button using 3D printing.

You should make a case for why this is an assistive technology of some sort. For example, you could build a door opening sensor (using a button or proximity sensor) that causes your phone to announce the door was opened, or a single switch control for scrolling or tabbing through a web page, or a capacitive sensor that captures a log of how often a cane is used.
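To make the door example concrete, here is a minimal sketch (an illustration only, assuming a normally-open switch wired between digital pin 2 and ground; your wiring may differ). It detects the switch closing and reports the event over Serial; the software listed below can relay such an event to your phone:

```cpp
// Minimal door-opened detector: a normally-open switch between pin 2 and
// ground, using the Arduino's internal pull-up resistor. Prints an event
// over Serial; see the next section for software that can relay events
// to a phone.

const int SWITCH_PIN = 2;   // hypothetical wiring: switch to ground
int lastState = HIGH;       // HIGH = switch open (because of the pull-up)

void setup() {
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  int state = digitalRead(SWITCH_PIN);
  if (state != lastState) {
    if (state == LOW) {     // switch just closed: door opened
      Serial.println("door opened");
    }
    lastState = state;
    delay(50);              // crude debounce
  }
}
```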

There is some great software that can be connected to the Arduino, including 1Sheeld, AppInventor, Blynk, and IFTTT. Some work only for Android, others for both Android and iPhone.
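As one hedged example, here is how the door sketch above might push a phone notification using the legacy Blynk Arduino library on a WiFi-capable board (an ESP8266 in this sketch; the auth token and network credentials are placeholders, and the Blynk app needs its Notification widget added):

```cpp
// Variant of the door sketch that sends a push notification via Blynk.
// Assumes the legacy Blynk library and an ESP8266-based board; the token
// and WiFi credentials below are placeholders.
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>

char auth[] = "YOUR_BLYNK_TOKEN";  // placeholder
const int SWITCH_PIN = 4;          // hypothetical wiring: switch to ground
int lastState = HIGH;

void setup() {
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  Blynk.begin(auth, "YOUR_WIFI", "YOUR_PASSWORD");  // placeholders
}

void loop() {
  Blynk.run();                     // keep the Blynk connection alive
  int state = digitalRead(SWITCH_PIN);
  if (state != lastState) {
    if (state == LOW) {            // switch just closed: door opened
      Blynk.notify("Door opened"); // push notification to the phone
    }
    lastState = state;
    delay(50);                     // crude debounce
  }
}
```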

There are lots of really great examples online of Arduino-based projects, Arduino projects that involve smartphones, and Arduino projects that involve 3D printing or laser cutting. Many of them are too complex for the expectations of this project, though they might help to inspire final projects, or give you ideas for something simple you can do in a week.

Hand In

Create a Thingiverse or Instructables page for your project with a brief description of the project, a video, any 3D printed files, and a schematic for your circuit. Turn the URL in by email with the subject: Project 2. Be prepared to demo your project in class.

Points | Description
1 or 0 | Project uses physical computing to solve an accessibility problem
1 or 0 | Project communicates with your phone in some way
1 or 0 | Project includes a working circuit that you designed
1 or 0 | Project includes at least one button
1 or 0 | Project includes some kind of response to the button
1 or 0 | Thingiverse or Instructables page describes the project in a reproducible fashion

“Occupational Therapy is Making”

Assistive Technology

Instructor: Jennifer Mankoff (jmankoff@cs.cmu.edu)
Spring 2005

HCII, 3601 NSH, (W) +1 (412) 268-1295
Office hours: By Appointment & 1-2pm Thurs

Course Description

This class will focus on computer accessibility, including web and desktop computing, and research in the area of assistive technology.

The major learning goals from this course include:

  • Develop an understanding of the relationship between disability policy, the disability rights movement, and your role as a technologist. For example, we will discuss the pros and cons of, and the infrastructure involved in, supporting mainstream computer applications rather than creating new ones from scratch.
  • Develop a skill set for basic design and evaluation of accessible web pages and desktop applications.
  • Develop familiarity with technologies and research relating to accessibility, including a study of optimal font size and color for people with dyslexia, word-prediction aids, and a blind-accessible drawing program.
  • Develop familiarity with assistive technologies that use computation to increase the accessibility of the world in general. Examples include memory aids, sign-language recognition, and so on.

Requirements

Students will be expected to do service work with non-profits serving the local disabled community during one or two weekends at the start of the semester. This course has a project component, in which students will design, implement, and test software for people with disabilities. Additionally, students will read and report on research papers pertinent to the domain.

Grading will be based on service work (10%); the project (60%); and class participation, including your reading summary and the lecture you lead (30%).

Other relevant documents

Course Calendar | Assignments | Bibliography

Prerequisites

Prerequisites for this class are: familiarity with basic Human Computer Interaction material, or consent of the instructor (for undergraduate students).

It is recommended that you contact the instructor if you are interested in taking this class.

Lyme Disease’s Heterogeneous Impact

An ongoing, and very personal, thread of research that our group engages in (due to my own journey with Lyme disease, which I occasionally blog about here) is research into the impacts of Lyme disease and opportunities for supporting patients with Lyme disease. From a patient perspective, Lyme disease is as tough to deal with as many other better-known conditions [1].

Lyme disease can be difficult to navigate because of the disagreements about its diagnosis and the disease process. In addition, it is woefully underfunded and understudied, given that the CDC estimates around 300,000 new cases occur per year (similar to the rate of breast cancer) [2].

Bar chart showing that Lyme disease is woefully understudied.

As an HCI researcher, I started out trying to understand the relationship that Lyme disease patients have with digital technologies. For example, we studied the impact of conflicting information online on patients [3] and how patients self-mediate the accessibility of online content [4]. It is my hope to eventually begin exploring technologies that can improve quality of life as well.

However, one thing patients need right away is peer-reviewed evidence about the impact that Lyme disease has on patients (e.g. [1]) and the value of treatment for patients (e.g. [2]). Here, as a technologist, the opportunity is to work with big data (thousands of patient reports) to unpack trends and model outcomes in new ways. That research is still in the formative stages, but in our most recent publication [2] we use straightforward subgroup analysis to demonstrate that treatment effectiveness is not adequately captured simply by looking at averages.
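The intuition is easy to see with invented numbers (purely illustrative, not our survey data): if a third of respondents improve substantially while the rest get slightly worse, the overall mean hovers near zero even though a large subgroup clearly benefits. A minimal sketch:

```cpp
#include <iostream>
#include <numeric>
#include <vector>

// Toy illustration of why averages can mask subgroups. The numbers are
// invented for illustration; they are not the survey data.
double mean(const std::vector<double>& v) {
  return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

int main() {
  std::vector<double> responders(10, 3.0);      // a third improve markedly
  std::vector<double> nonresponders(20, -1.0);  // the rest decline slightly

  std::vector<double> all(responders);
  all.insert(all.end(), nonresponders.begin(), nonresponders.end());

  std::cout << "overall mean response:   " << mean(all) << "\n";         // ~0.33
  std::cout << "responder subgroup mean: " << mean(responders) << "\n";  // 3.0
}
```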

This chart shows that there is a large subgroup (about a third) of respondents to our survey who reported positive response to treatment, even though the average response was not positive.

There are many opportunities and much need for further data analysis here, including documenting the impact of differences such as gender on treatment (and access to treatment), and developing interventions that can help patients to track symptoms, manage interactions with and between doctors, and navigate accessibility and access issues.

[1] Johnson, L., Wilcox, S., Mankoff, J., & Stricker, R. B. (2014). Severity of chronic Lyme disease compared to other chronic conditions: a quality of life survey. PeerJ, 2, e322.

[2] Johnson, L., Shapiro, M. & Mankoff, J. Removing the mask of average treatment effects in chronic Lyme Disease research using big data and subgroup analysis.

[3] Mankoff, J., Kuksenok, K., Kiesler, S., Rode, J. A., & Waldman, K. (2011, May). Competing online viewpoints and models of chronic illness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 589-598). ACM.

[4] Kuksenok, K., Brooks, M., & Mankoff, J. (2013, April). Accessible online content creation by end users. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 59-68). ACM.


3D Printed Wireless Analytics

Wireless Analytics for 3D Printed Objects: Vikram Iyer, Justin Chan, Ian Culhane, Jennifer Mankoff, Shyam Gollakota. UIST, Oct. 2018 [PDF]

We created a wireless physical analytics system that works with commonly available conductive plastic filaments. Our design can enable various data capture and wireless physical analytics capabilities for 3D printed objects, without the need for electronics.

We make three key contributions:

(1) We demonstrate room scale backscatter communication and sensing using conductive plastic filaments.

(2) We introduce the first backscatter designs that detect a variety of bi-directional motions and support linear and rotational movements. An example is shown below.

(3) As shown in the image below, we enable data capture and storage for later retrieval when outside the range of the wireless coverage, using a ratchet and gear system.

We validate our approach by wirelessly detecting the opening and closing of a pill bottle, capturing the joint angles of a 3D printed e-NABLE prosthetic hand, and building an insulin pen that can store information to track its use outside the range of a wireless receiver.

Selected Media

6 of the most amazing things that were 3D-printed in 2018 (Erin Winick, MIT Technology Review, 12/24/2018)

Researchers develop 3D printed objects that can track and store how they are used (Sarah McQuate, UW press release, 10/9/2018)

Assistive Objects Can Track Their Own Use (Elizabeth Montalbano, Design News, 11/14/2018)

People

Students

Vikram Iyer
Justin Chan
Ian Culhane

Faculty

Jennifer Mankoff
Shyam Gollakota

Contact: printedanalytics@cs.washington.edu

Venkatesh Potluri

Venkatesh Potluri is a Ph.D. student at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He is advised by Prof. Jennifer Mankoff and Prof. Jon Froehlich. Venkatesh believes that technology, when designed right, empowers everybody to fulfill their goals and aspirations. His broad research goals are to upgrade accessibility to the ever-changing ways we interact with technology and to improve the independence and quality of life of people with disabilities. These goals stem from his personal experience as a researcher with a visual impairment. His research focus is to enable developers with visual impairments to perform a variety of programming tasks efficiently. Previously, he was a Research Fellow at Microsoft Research India, where his team was responsible for building CodeTalk, an accessibility framework and a plugin for better IDE accessibility. Venkatesh earned a master’s degree in Computer Science at the International Institute of Information Technology, Hyderabad, where his research was on audio rendering of mathematical content.

You can find more information about him at https://venkateshpotluri.me

Interactiles

The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. In addition, the investigation of tactile solutions for large touchscreens may not address the challenges on mobile devices. We therefore present Interactiles, a low-cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate its improvement in task performance and the user experience it enables with people who are blind or have low vision.

Xiaoyi Zhang, Tracy Tran, Yuqian Sun, Ian Culhane, Shobhit Jain, James Fogarty, Jennifer Mankoff: Interactiles: 3D Printed Tactile Interfaces to Enhance Mobile Touchscreen Accessibility. ASSETS 2018: To Appear [PDF]

Figure 2. Floating windows created for number pad (left), scrollbar (right) and control button (right bottom). The windows can be transparent; we use colors for demonstration.

Figure 4. Average task completion times of all tasks in the study.

Nonvisual Interaction Techniques at the Keyboard Surface

Rushil Khurana, Duncan McIsaac, Elliot Lockerman, Jennifer Mankoff. Nonvisual Interaction Techniques at the Keyboard Surface. CHI 2018, To Appear.

A table (shown on screen). Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the position of the cell and its content are read aloud.

Web user interfaces today leverage many common GUI design patterns, including navigation bars and menus (hierarchical structure), tabular content presentation, and scrolling. These visual-spatial cues enhance the interaction experience of sighted users. However, the linear nature of screen translation tools currently available to blind users makes it difficult to understand or navigate these structures. We introduce Spatial Region Interaction Techniques (SPRITEs) for nonvisual access: a novel method for navigating two-dimensional structures using the keyboard surface. SPRITEs 1) preserve spatial layout, 2) enable bimanual interaction, and 3) improve the end user experience. We used a series of design probes to explore different methods for keyboard surface interaction. Our evaluation of SPRITEs shows that three times as many participants were able to complete spatial tasks with SPRITEs as with their preferred current technology.
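To illustrate the core mapping (a simplification for exposition, not the SPRITEs implementation): number-row keys select a column, the leftmost column of keys selects a row, and the selected cell’s position and content are announced. A sketch:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Simplified illustration of the keyboard-surface table mapping described
// above: number-row keys ('1'..'9') select a column, and the leftmost
// physical key column ('q', 'a', 'z') selects a row. A screen reader would
// speak the selected cell; here we print it instead.
int main() {
  std::vector<std::vector<std::string>> table = {
      {"Name", "Role"}, {"Jill", "Engineer"}, {"Sam", "Designer"}};
  const std::string rowKeys = "qaz";  // maps to rows 0, 1, 2
  size_t row = 0, col = 0;            // top-left cell selected by default

  char key;
  while (std::cin >> key) {
    if (key >= '1' && key <= '9' &&
        static_cast<size_t>(key - '1') < table[0].size()) {
      col = key - '1';                // right hand picks the column
    } else if (rowKeys.find(key) != std::string::npos) {
      row = rowKeys.find(key);        // left hand picks the row
    } else {
      continue;                       // ignore unmapped keys
    }
    std::cout << "row " << row + 1 << ", column " << col + 1 << ": "
              << table[row][col] << "\n";  // "announce" position and content
  }
}
```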

Talk [Slides]:

Sample Press:

KOMO Radio | New screen reader method helps blind, low-vision users browse complex web pages

Device helps blind, low-vision users better browse web pages (Allen Cone)

Graph showing task completion rates for different kinds of tasks in our user study

A user is searching a table (shown on screen) for the word ‘Jill’. Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the number of occurrences of the search query in the respective column or row is read aloud. When the query is found, the position and content of the cell are read aloud.