3D Printed Wireless Analytics

Wireless Analytics for 3D Printed Objects. Vikram Iyer, Justin Chan, Ian Culhane, Jennifer Mankoff, Shyam Gollakota. UIST, Oct. 2018 [PDF]

We created a wireless physical analytics system that works with commonly available conductive plastic filaments. Our design enables data capture and wireless physical analytics for 3D printed objects, without the need for electronics.

We make three key contributions:

(1) We demonstrate room scale backscatter communication and sensing using conductive plastic filaments.

(2) We introduce the first backscatter designs that detect a variety of bi-directional motions and support linear and rotational movements. An example is shown below.

(3) As shown in the image below, we enable data capture and storage for later retrieval when outside the range of wireless coverage, using a ratchet and gear system.

We validate our approach by wirelessly detecting the opening and closing of a pill bottle, capturing the joint angles of a 3D printed e-NABLE prosthetic hand, and building an insulin pen that can store information to track its use outside the range of a wireless receiver.
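To make the sensing idea concrete, here is a minimal decoding sketch (ours, not the authors' implementation): we assume a 3D printed gear whose teeth toggle a conductive-filament antenna switch, producing pulses in the received backscatter signal, and we assume, hypothetically, that teeth of two different widths encode the direction of motion. All parameter values are made up for illustration.

```python
def decode_pulses(samples, threshold=0.5):
    """Group a thresholded backscatter amplitude trace into pulses.

    Returns a list of pulse widths (in samples). Each pulse corresponds
    to one gear tooth closing the antenna switch.
    """
    widths = []
    run = 0
    for s in samples:
        if s > threshold:
            run += 1
        elif run:
            widths.append(run)
            run = 0
    if run:
        widths.append(run)
    return widths


def estimate_motion(samples, teeth_per_mm=2.0, wide_cutoff=8):
    """Estimate displacement and direction from one trace.

    Hypothetical encoding: mostly-wide pulses mean 'forward',
    mostly-narrow mean 'backward'. Displacement follows from the
    tooth pitch of the printed gear.
    """
    widths = decode_pulses(samples)
    if not widths:
        return 0.0, None
    displacement_mm = len(widths) / teeth_per_mm
    wide = sum(1 for w in widths if w >= wide_cutoff)
    direction = "forward" if wide > len(widths) / 2 else "backward"
    return displacement_mm, direction
```

In the real system the pulses arrive via RF backscatter at a receiver; this sketch only shows the pulse-counting logic once a trace has been captured.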

Selected Media

Researchers develop 3D printed objects that can track and store how they are used (Sarah McQuate), UW Press release. 10/9/2018

Assistive Objects Can Track Their Own Use (Elizabeth Montalbano), Design News. 11/14/2018

People

Students

Vikram Iyer
Justin Chan
Ian Culhane

Faculty

Jennifer Mankoff
Shyam Gollakota

Contact: printedanalytics@cs.washington.edu

Interactiles

The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. In addition, the investigation of tactile solutions for large touchscreens may not address the challenges on mobile devices. We therefore present Interactiles, a low-cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate its improvement in task performance and the user experience it enables with people who are blind or have low vision.
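The hardware-to-software mapping can be sketched roughly as follows (a simplification we wrote for illustration; the region coordinates, control names, and actions are hypothetical, and the real system uses Android floating windows and the built-in screen reader rather than this hit-testing loop):

```python
# Hypothetical screen regions under the 3D-printed controls, in pixels,
# as (left, top, right, bottom). In Interactiles, transparent floating
# windows play this role.
REGIONS = {
    "number_1": (0, 900, 120, 1020),
    "number_2": (120, 900, 240, 1020),
    "scroll_up": (1000, 100, 1080, 300),
}

# Hypothetical app actions each control triggers.
ACTIONS = {
    "number_1": "type 1",
    "number_2": "type 2",
    "scroll_up": "scroll list up",
}


def dispatch(x, y):
    """Map a touch produced by a physical control to an app action."""
    for name, (left, top, right, bottom) in REGIONS.items():
        if left <= x < right and top <= y < bottom:
            return ACTIONS[name]
    return None  # touch did not land under any printed control
```

For example, a press on the printed number pad that lands at (60, 950) would be dispatched as "type 1".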

Xiaoyi Zhang, Tracy Tran, Yuqian Sun, Ian Culhane, Shobhit Jain, James Fogarty, Jennifer Mankoff: Interactiles: 3D Printed Tactile Interfaces to Enhance Mobile Touchscreen Accessibility. ASSETS 2018: To Appear [PDF]

Figure 2. Floating windows created for number pad (left), scrollbar (right) and control button (right bottom). The windows can be transparent; we use colors for demonstration.
Figure 4. Average task completion times of all tasks in the study.

Nonvisual Interaction Techniques at the Keyboard Surface

Rushil Khurana, Duncan McIsaac, Elliot Lockerman, Jennifer Mankoff: Nonvisual Interaction Techniques at the Keyboard Surface. CHI 2018, To Appear

A table (shown on screen). Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default the top left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the position of the cell and its content are read aloud.

Web user interfaces today leverage many common GUI design patterns, including navigation bars and menus (hierarchical structure), tabular content presentation, and scrolling. These visual-spatial cues enhance the interaction experience of sighted users. However, the linear nature of screen translation tools currently available to blind users makes it difficult to understand or navigate these structures. We introduce Spatial Region Interaction Techniques (SPRITEs) for nonvisual access: a novel method for navigating two-dimensional structures using the keyboard surface. SPRITEs 1) preserve spatial layout, 2) enable bimanual interaction, and 3) improve the end user experience. We used a series of design probes to explore different methods for keyboard surface interaction. Our evaluation of SPRITEs shows that three times as many participants were able to complete spatial tasks with SPRITEs as with their preferred current technology.
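The keyboard-to-table mapping described above can be sketched as follows (a simplification we wrote for illustration; the specific key bindings and the spoken-output format are hypothetical, not taken from the paper):

```python
# Hypothetical bindings: the number row selects a column, and keys from
# the leftmost column of the keyboard select a row.
COLUMN_KEYS = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "0"]
ROW_KEYS = ["tab", "capslock", "shift"]  # placeholder leftmost-column keys


class TableNavigator:
    """Select table cells bimanually from the keyboard surface."""

    def __init__(self, table):
        self.table = table
        self.row, self.col = 0, 0  # top left cell selected by default

    def on_key(self, key):
        """Update the selection and return the text to speak, if any."""
        if key in COLUMN_KEYS:
            self.col = COLUMN_KEYS.index(key)
        elif key in ROW_KEYS:
            self.row = ROW_KEYS.index(key)
        else:
            return None
        cell = self.table[self.row][self.col]
        # A real system would send this string to a text-to-speech engine.
        return f"row {self.row + 1}, column {self.col + 1}: {cell}"
```

With this sketch, pressing ‘2’ with the right hand announces the selected cell in column two, and a left-hand row key then moves the selection down while keeping the column.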

Talk [Slides]:

Sample Press:

KOMO Radio | New screen reader method helps blind, low-vision users browse complex web pages

Device helps blind, low-vision users better browse web pages. Allen Cone

Graph showing task completion rates for different kinds of tasks in our user study
A user is searching a table (shown on screen) for the word ‘Jill’. Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default the top left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the number of occurrences of the search query in the respective column or row is read aloud. When the query is found, the position and content of the cell are read aloud.

The Tangible Desktop

Mark S. Baldwin, Gillian R. Hayes, Oliver L. Haimson, Jennifer Mankoff, Scott E. Hudson:
The Tangible Desktop: A Multimodal Approach to Nonvisual Computing. TACCESS 10(3): 9:1-9:28 (2017)

Audio-only interfaces, facilitated through text-to-speech screen reading software, have been the primary mode of computer interaction for blind and low-vision computer users for more than four decades. During this time, the advances that have made visual interfaces faster and easier to use, from direct manipulation to skeuomorphic design, have not been paralleled in nonvisual computing environments. The screen reader–dependent community is left with no alternatives to engage with our rapidly advancing technological infrastructure. In this article, we describe our efforts to understand the problems that exist with audio-only interfaces. Based on observing screen reader use for 4 months at a computer training school for blind and low-vision adults, we identify three problem areas within audio-only interfaces: ephemerality, linear interaction, and unidirectional communication. We then evaluated a multimodal approach to computer interaction called the Tangible Desktop that addresses these problems by moving semantic information from the auditory to the tactile channel. Our evaluation demonstrated that among novice screen reader users, Tangible Desktop improved task completion times by an average of 6 minutes when compared to traditional audio-only computer systems.

Uncertainty in Measurement

Examples of 3d printed objects that are robust to measurement uncertainty.

Kim, J., Guo, A., Yeh, T., Hudson, S. E., & Mankoff, J. (2017, June). Understanding Uncertainty in Measurement and Accommodating its Impact in 3D Modeling and Printing. In Proceedings of the 2017 Conference on Designing Interactive Systems (pp. 1067-1078). ACM.

3D printing enables everyday users to augment objects around them with personalized adaptations. There has been a proliferation of 3D models available on sharing platforms supporting this. If a model is parametric, a novice modeler can obtain a custom model simply by entering a few parameters (e.g., in the Customizer tool on Thingiverse.com). In theory, such custom models could fit any real world object one intends to augment. But in practice, a printed model seldom fits on the first try; multiple iterations are often necessary, wasting a considerable amount of time and material. We argue that parameterization or scaling alone is not sufficient for customizability, because users must correctly measure an object to specify parameters.

In a study of attempts to measure length, angle, and diameter, we demonstrate measurement errors as a significant (yet often overlooked) factor that adversely impacts the adaptation of 3D models to existing objects, requiring increased iteration. Images taken from our study are shown below.

We argue for a new design principle—accommodating measurement uncertainty—that designers as well as novices should begin to consider. We offer two strategies, modular joints and buffer insertion, to help designers build models that are robust to measurement uncertainty. Examples are shown below.
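As one illustration of the buffer-insertion idea (our sketch, with made-up numbers; the paper's strategies are geometric design techniques, not a formula): size an opening for the worst-case true dimension, then let a compressible buffer absorb the resulting slack.

```python
def buffered_opening(measured_mm, uncertainty_mm, printer_tolerance_mm=0.2):
    """Size an opening so the printed part fits even if the measurement was off.

    Illustrative arithmetic only. If the user's measurement may err by
    +/- uncertainty_mm, print the opening large enough for the largest
    plausible true size, then fill the remaining gap with a compressible
    buffer (e.g., a flexible printed insert).

    Returns (opening size, worst-case gap the buffer must absorb).
    """
    opening = measured_mm + uncertainty_mm + printer_tolerance_mm
    # Worst case: the object is actually uncertainty_mm SMALLER than measured.
    max_gap = 2 * uncertainty_mm + printer_tolerance_mm
    return opening, max_gap
```

For a 30 mm measurement with 0.5 mm of expected error, this yields a 30.7 mm opening whose buffer must take up at most 1.2 mm of slack, so the part fits on the first print instead of requiring iteration.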


3D Printing with Embedded Textiles



Stretching the Bounds of 3D Printing with Embedded Textiles

Textiles are an old and well-developed technology with many desirable characteristics. They can be easily folded, twisted, deformed, or cut; some can be stretched; many are soft. Textiles can maintain their shape when placed under tension and can even be engineered with variable stretching ability.

When combined, textiles and 3D printing open up new opportunities for rapidly creating rigid objects with embedded flexibility as well as soft materials imbued with additional functionality. We introduce a suite of techniques for integrating the two and demonstrate how the malleability, stretchability and aesthetic qualities of textiles can enhance rigid printed objects, and how textiles can be augmented with functional properties enabled by 3D printing.

Click images below to see more detail:


Citation

Rivera, M.L., Moukperian, M., Ashbrook, D., Mankoff, J., Hudson, S.E. 2017. Stretching the Bounds of 3D Printing with Embedded Textiles. To appear in the annual ACM Conference on Human Factors in Computing Systems, CHI ‘17. [Paper]

Layered Fabric Printing

A Layered Fabric 3D Printer for Soft Interactive Objects. Huaishu Peng, Jennifer Mankoff, Scott E. Hudson, James McCann. CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015.


In work done collaboratively with Disney Research and led by Disney intern Huaishu Peng (of Cornell), we have begun to explore alternative material options for fabrication. Unlike traditional 3D printing, which uses hard plastic, this project made use of cloth (in the video shown above, felt). In addition to its aesthetic properties, fabric is deformable, and the degree of deformability can be controlled. Our printer, which works by gluing layers of laser-cut fabric to each other, also allows for dual material printing, meaning that layers of conductive fabric can be inserted. This allows fabric objects to easily support embedded electronics. This work has been in the news recently, and was featured at Adafruit, Futurity, Gizmodo, Geek.com, and TechCrunch, among others.


Jennifer Mankoff

Research | Students | Teaching | Bio | CV | Advice | Fun | Contact

Research

My work focuses on assistive technologies for access, health and wellness, and takes a multifaceted approach that includes machine learning, 3D printing, and tool building. At a high level, my goal is to tackle the technical challenges necessary for everyday individuals and communities to solve real-world problems (see all the Make4all projects).

Some of my most recent projects are described above.

Students

Current PhD students:

Yasaman Sefidgar; Megan Hofmann (co-advised with Scott Hudson); Mark Baldwin (co-advised with Gillian Hayes)

Former PhD Students:

Nikola Banovic (co-advised with Anind Dey); Christian Koehler; Sunyoung Kim; Scott Carter; Tara Matthews; Julia Schwarz; Tawanna Dillahunt; Amy Hurst; and Kirstin Early.

I love to work with undergraduate and masters students and have mentored more than I can count. My mentorship always tries to include career advice as well as project advice, whether or not students are going on to research. Many of the undergraduate students I advised have gone on to careers in research, including some current faculty (Julie Kientz, Gary Hsieh, Ruth Wylie). At least 50 other students who are alumni of my group are not currently listed on this page, but all made important contributions to my work over the years.

Recent Alumni I mentored/advised:

Additional alumni can be found on the People page.

Teaching

I love to teach, and have put significant time into curriculum development over the years.

CLASSES DEVELOPED FOR AND TAUGHT AT CMU
  • I am currently developing a new course on data centric computing, called The Data Pipeline. The course is accessible to novice programmers and includes a series of tutorials that can support independent online learning.
  • I helped to redesign the HCI Masters course User Centered Research and Evaluation, specifically bringing a real-world focus to our skills teaching around contextual inquiry.
  • I developed an online course specifically for folks who want to learn enough programming to prototype simple interfaces (targeted at our incoming masters students). The course is available free online at CMU’s Open Learning Initiative under “Media Programming”.
  • I developed and taught the Environment and Society course over the last five years. This was a project oriented course that took a very multifaceted look at the role of technology in solving environmental problems.
  • I helped to develop a reading course that is required for our PhD students to ensure that they have depth in technical HCI: CS Mini
  • Assistive Technology: I developed and taught one of the first Assistive Technology courses in the country (specifically from an HCI perspective), and I used a service learning model to do so. Original class
  • I have helped to revamp Process and Theory over the years, a skills course intended for our first year PhD students.

Bio

I received my Bachelor of Arts from Oberlin College, where I was a member of two great societies — FOO and ACM. I received my Ph.D. as a member of the Future Computing Environments research group in the College of Computing at Georgia Tech, where Gregory Abowd and Scott Hudson were my advisors. I then spent three formative years at UC Berkeley as an Assistant Professor working with the I/O group and 12 years at CMU before joining the faculty of the University of Washington. My “Academic genealogy” on the Abowd side.

Bio:  Jennifer Mankoff is the Richard E. Ladner Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research focuses on assistive technologies for equal access, health and wellness, and takes a multifaceted approach that includes machine learning, 3D printing, and tool building.  Jennifer applies a human-centered approach that combines empirical methods and technical innovation. For example, she has designed 3D-printed assistive technologies for people with disabilities.

Jennifer received her PhD at Georgia Tech, advised by Gregory Abowd and Scott Hudson, and her B.A. from Oberlin College. Her previous faculty positions include UC Berkeley’s EECS department and Carnegie Mellon’s HCI Institute. Jennifer has been recognized with an Alfred P. Sloan Fellowship, IBM Faculty Fellowship and Best Paper awards from ASSETS, CHI and Mobile HCI. Some supporters of her research include Autodesk, Google Inc., the Intel Corporation, IBM, Hewlett-Packard, Microsoft Corporation and the National Science Foundation.

Other Thoughts and Links

Best conference experience ever: The CHI2009 Straggler’s Seder


Things I love (below)

My kids, Kavi and Elena
Artwork I’ve done
My husband, Anind
My viola
My dogs: Demi, Nugget, Gryffin

Contact Information

Jennifer Mankoff
jmankoff [at] acm.org
206-685-3035
Paul G. Allen School of Computer Science & Engineering
University of Washington
Paul G. Allen Center
185 Stevens Way
Campus Box 352350
Seattle, WA 98195