Dynamic question ordering

In recent years, surveys have shifted online, opening the possibility of adaptive questionnaires in which later questions depend on earlier responses. We present a general framework for dynamically ordering questions based on previous responses, with the goals of engaging respondents, improving survey completion, and improving imputation of unanswered items. Our work considers two scenarios for data collection from survey-takers. In the first, we want to maximize survey completion (and the quality of any necessary imputations), so we order questions to keep the respondent engaged and to collect, ideally, all of the information we seek, or at least the information that best characterizes the respondent, so that imputed values will be accurate. In the second scenario, our goal is to give the respondent a personalized prediction based on the information they provide. Since a reasonable prediction is possible with only a subset of questions, we are not concerned with motivating the user to answer all questions. Instead, we order questions so that each answer most reduces the uncertainty of our prediction, while not being too burdensome to provide.
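The second scenario lends itself to a greedy, uncertainty-driven loop: at each step, ask the question whose answer is expected to most reduce the prediction's uncertainty. Below is a minimal sketch of that idea (an illustration only, not the FOCUS algorithm itself), under the simplifying assumption that the questions and the predicted quantity are jointly Gaussian with a known covariance matrix, so the conditional variance of the prediction has a closed form:

```python
import numpy as np

def next_question(cov, target, answered, candidates):
    """Pick the unanswered question whose answer most reduces the
    conditional variance of the target variable, assuming all
    variables are jointly Gaussian with covariance matrix `cov`."""
    def cond_var(known):
        known = sorted(known)
        if not known:
            return cov[target, target]
        s_kk = cov[np.ix_(known, known)]       # covariance among answered items
        s_tk = cov[target, known]              # cross-covariance with the target
        return cov[target, target] - s_tk @ np.linalg.solve(s_kk, s_tk)

    base = cond_var(answered)
    # Variance reduction from asking each candidate question.
    gains = {q: base - cond_var(set(answered) | {q}) for q in candidates}
    return max(gains, key=gains.get)

# Variable 0 is the prediction target; question 1 is highly
# correlated with it, question 2 barely at all.
cov = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.0],
                [0.1, 0.0, 1.0]])
print(next_question(cov, target=0, answered=set(), candidates={1, 2}))  # 1
```

A full system would also weigh response burden against the expected uncertainty reduction before choosing, as described above.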

Publications
Kirstin Early, Stephen E. Fienberg, Jennifer Mankoff. (2016). Test time feature ordering with FOCUS: Interactive predictions with minimal user burden. In Proceedings of the 2016 ACM Conference on Pervasive and Ubiquitous Computing. Honorable Mention: Top 5% of submissions. Talk slides.

3D printed attachments

Encore: 3D printed attachments

What happens when you want to 3D print something that must interact with the real world? The Encore project makes it possible to 3D print objects that attach to existing, real-world objects. Given an imported object and a chosen attachment method, Encore provides an interface that visualizes metrics describing the quality of the attachment. Once an attachment type and location are chosen, Encore helps produce the support structure necessary for attachment. Encore supports three main types of attachment: print-over, print-to-affix, and print-through.

Print-Over

Print-over attachments are printed directly on the existing object. This works well if the object is flat enough that the print head won’t encounter obstacles as it moves, and the object is made of a material that the printed material will easily adhere to. Encore helps by finding a rotation of the existing object that minimizes obstacles, and generating support material to hold the existing object in place.
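As a toy illustration of the orientation search (the model here is hypothetical, not Encore's actual geometric analysis), one could score each candidate orientation by how much of the surrounding geometry rises above the attachment surface, where the print head must travel, and pick the least obstructed option:

```python
def best_orientation(orientations, attach_height):
    """Choose how to place an existing object for print-over (toy model).
    `orientations` maps an orientation name to the heights of geometry
    surrounding the attachment site in that placement; we prefer the
    placement with the least material rising above the attachment
    surface, since that material would obstruct the print head."""
    def obstruction(heights):
        return sum(max(0.0, h - attach_height) for h in heights)
    return min(orientations, key=lambda o: obstruction(orientations[o]))

# With the handle up, an 8 mm feature towers over a 3 mm attachment plane.
sides = {"handle_up": [1.0, 2.0, 8.0], "handle_down": [1.0, 1.5, 2.0]}
print(best_orientation(sides, attach_height=3.0))  # handle_down
```

The real system additionally generates support material to hold the chosen placement steady, as noted above.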

Printing a magnet holder over a Teddy bear toy.

Left: printing an LED casing on a battery to make a simple torch; right: printing a handle to an espresso cup.

Print-to-Affix

An alternative that is useful when the existing object does not fit on the print bed is print-to-affix. In this approach, the attachment is designed to fit snugly against the existing object. It may be glued in place, or can include holes for a strap, such as a zip tie.

Left: printing a structure to make a glue gun stand; right: printing a reusable four-pack holder.

Print-Through

Finally, sometimes the attachment should be interlocked more loosely with the existing object. In this case, the process is to begin printing and pause the print partway through so that the existing object can be inserted. Encore can compute when this stopping point should be (and whether it is possible).
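As a toy illustration of the kind of check involved (the geometry here is hypothetical, not Encore's actual analysis), suppose the existing object threads through a vertical opening in the printed part: the print should pause on the last layer below the opening's roof, and insertion is only feasible if the object is thinner than the opening:

```python
import math

def pause_point(opening_bottom_z, opening_top_z, layer_height, object_thickness):
    """Choose when to pause a print-through job (toy model): pause at the
    layer where the opening's roof would start to print, and report
    whether the object still fits through the opening at that point."""
    # Number of full layers printed before the roof begins.
    pause_layer = math.floor(opening_top_z / layer_height)
    clearance = opening_top_z - opening_bottom_z
    feasible = object_thickness < clearance
    return pause_layer, feasible

# A 4 mm opening, 0.5 mm layers: pause after layer 12 to slip in a 3 mm object.
print(pause_point(2.0, 6.0, 0.5, 3.0))  # (12, True)
```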

A name tag printed through a pair of scissors

A bracelet printed through a charm

Encore the Design Tool

Encore is implemented in WebGL. It supports importing a model of an existing object and selecting an attachment type, and then lets the user click to indicate where the attachment should go. Given this information, it uses geometric analysis to compute metrics for the quality of the attachment, such as attachability and strength. Encore visualizes these metrics as a heat map so that the user can adjust the attachment point.
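To give a rough intuition for one such metric (an illustrative stand-in, not Encore's actual computation), a per-vertex attachability score for print-over could favor surface regions whose normals face the print direction, since overhanging regions are hard to print onto:

```python
import numpy as np

def attachability(normals, print_dir=(0.0, 0.0, 1.0)):
    """Toy per-vertex attachability score: unit surface normals facing
    the print direction score near 1, overhangs score 0. The scores
    could then be mapped to colors for a heat-map visualization."""
    d = np.asarray(print_dir, dtype=float)
    d /= np.linalg.norm(d)
    return np.clip(np.asarray(normals, dtype=float) @ d, 0.0, 1.0)

normals = [[0, 0, 1], [1, 0, 0], [0, 0, -1]]  # up-facing, vertical, overhang
print(attachability(normals))  # [1. 0. 0.]
```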


Encore visualizes which parts of a wrench are more attachable when printing over a handle.

More Examples


Using print-to-affix to make a trophy from an egg holder

Using print-over to make a minion keychain

Using print-over to add a hanger to a screwdriver handle

Using print-through to make a key ring.

Using print-to-affix to make a battery case
Xiang ‘Anthony’ Chen, Stelian Coros, Jennifer Mankoff, Scott Hudson (2015). Encore: 3D Printed Augmentation of Everyday Objects with Printed-Over, Affixed and Interlocked Attachments. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (UIST 2015).

Helping Hands

Prosthetic limbs and assistive technology (AT) require customization and modification over time to effectively meet the needs of end users. Yet this process is typically costly and, as a result, abandonment rates are very high. Rapid prototyping technologies such as 3D printing have begun to alleviate this issue by making it possible to inexpensively and iteratively create general AT designs and prosthetics. However, for effective use, the technology must be applied with design methods that support physical rapid prototyping and can accommodate the unique needs of a specific user. While most research has focused on tools for creating fitted assistive devices, we focus on the requirements of a design process that engages the user and designer in the rapid iterative prototyping of prosthetic devices.

We present a case study of three participants with upper-limb amputations working with researchers to design prosthetic devices for specific tasks. Kevin wanted to play the cello, Ellen wanted to ride a hand-cycle (a bicycle for people with lower-limb mobility impairments), and Bret wanted to use a table knife. Our goal was to identify requirements for a design process that can engage the assistive technology user in rapidly prototyping assistive devices that fill needs not easily met by traditional assistive technology. Our study made use of 3D printing and other playful and practical prototyping materials. We discuss materials that support on-the-spot design and iteration, the dimensions along which in-person iteration is most important (such as length and angle), and the value of a supportive social network for users who prototype their own assistive technology. From these findings, we argue for the importance of supporting modularity, community engagement, and relatable prototyping materials in the iterative design of prosthetics.

Photos

Project Files

https://www.thingiverse.com/thing:2365703

Project Publications

Helping Hands: Requirements for a Prototyping Methodology for Upper-limb Prosthetics Users

Reference:

Megan Kelly Hofmann, Jeffery Harris, Scott E. Hudson, Jennifer Mankoff. 2016. Helping Hands: Requirements for a Prototyping Methodology for Upper-limb Prosthetics Users. In Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 525-534.

Making Connections: Modular 3D Printing for Designing Assistive Attachments to Prosthetic Devices

Reference:

Megan Kelly Hofmann. 2015. Making Connections: Modular 3D Printing for Designing Assistive Attachments to Prosthetic Devices. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (ASSETS ’15). ACM, New York, NY, USA, 353-354. DOI=http://dx.doi.org/10.1145/2700648.2811323

Supporting Navigation in the Wild for the Blind

Sighted individuals often develop significant knowledge about their environment through what they can visually observe. In contrast, individuals who are visually impaired mostly acquire such knowledge about their environment through information that is explicitly related to them. Our work examines the practices that visually impaired individuals use to learn about their environments and the associated challenges. In the first of our two studies, we uncover four types of information needed to master and navigate the environment. We detail how individuals’ context impacts their ability to learn this information, and outline requirements for independent spatial learning. In a second study, we explore how individuals learn about places and activities in their environment. Our findings show that users not only learn information to satisfy their immediate needs, but also to enable future opportunities – something existing technologies do not fully support. From these findings, we discuss future research and design opportunities to assist the visually impaired in independent spatial learning.

Uncovering information needs for independent spatial learning for users who are visually impaired. Nikola Banovic, Rachel L. Franz, Khai N. Truong, Jennifer Mankoff, and Anind K. Dey. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’13). ACM, New York, NY, USA, Article 24, 8 pages. (pdf)

Layered Fabric Printing

A Layered Fabric 3D Printer for Soft Interactive Objects. Huaishu Peng, Jennifer Mankoff, Scott E. Hudson, James McCann. In CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015.

In work done collaboratively with Disney Research and led by Disney intern Huaishu Peng (of Cornell), we have begun to explore alternative material options for fabrication. Unlike traditional 3D printing, which uses hard plastic, this project made use of cloth (in the video shown above, felt). In addition to its aesthetic properties, fabric is deformable, and the degree of deformability can be controlled. Our printer, which works by gluing layers of laser-cut fabric to each other, also allows for dual-material printing, meaning that layers of conductive fabric can be inserted. This allows fabric objects to easily support embedded electronics. This work has been in the news recently, and was featured at Adafruit, Futurity, Gizmodo, Geek.com, and TechCrunch, among others.

James McCann

James McCann [research page] [disney page] is an Associate Research Scientist at Disney Research Pittsburgh. He develops systems and interfaces that operate in real time and build user intuition, with a current focus on textile artifact design and manufacturing. He also makes video games as TCHOW llc, including the recent releases “Rktcr” and “Rainbow”. He obtained his PhD in 2010 from Carnegie Mellon University.

Jessica Hodgins

Jessica Hodgins is a Professor in the Robotics Institute and Computer Science Department at Carnegie Mellon University. She is also VP of Research, Disney Research, running research labs in Pittsburgh, Los Angeles, and Boston. Prior to moving to Carnegie Mellon in 2000, she was an Associate Professor and Assistant Dean in the College of Computing at Georgia Institute of Technology. She received her Ph.D. in Computer Science from Carnegie Mellon University in 1989. Her research focuses on computer graphics, animation, and robotics, with an emphasis on generating and analyzing human motion. She has received an NSF Young Investigator Award, a Packard Fellowship, and a Sloan Fellowship. She was editor-in-chief of ACM Transactions on Graphics from 2000 to 2002 and ACM SIGGRAPH Papers Chair in 2003. In 2010, she was awarded the ACM SIGGRAPH Computer Graphics Achievement Award.

Henny Admoni

Henny Admoni is a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University, where she works on assistive robotics and human-robot interaction with Siddhartha Srinivasa in the Personal Robotics Lab. Henny develops and studies intelligent robots that improve people’s lives by providing assistance through social and physical interactions. She studies how nonverbal communication, such as eye gaze and pointing, can improve assistive interactions by revealing underlying human intentions and increasing human-robot communication. Henny completed her PhD in Computer Science at Yale University with Professor Brian Scassellati. Her PhD dissertation was about modeling the complex dynamics of nonverbal behavior for socially assistive human-robot interaction. Henny holds an MS in Computer Science from Yale University, and a BA/MA joint degree in Computer Science from Wesleyan University. Henny’s scholarship has been recognized with awards such as the NSF Graduate Research Fellowship, the Google Anita Borg Memorial Scholarship, and the Palantir Women in Technology Scholarship.

Sidd Srinivasa

Siddhartha Srinivasa is the Finmeccanica Associate Professor at The Robotics Institute at Carnegie Mellon University. He works on robotic manipulation, with the goal of enabling robots to perform complex manipulation tasks under uncertainty and clutter, with and around people. To this end, he founded and directs the Personal Robotics Lab, and co-directs the Manipulation Lab. He has been a PI on the Quality of Life Technologies NSF ERC, DARPA ARM-S, and the CMU CHIMP team in the DARPA DRC.

Sidd is also passionate about building end-to-end systems (HERB, ADA, HRP3, CHIMP, Andy, among others) that integrate perception, planning, and control in the real world. Understanding the interplay between system components has helped produce state-of-the-art algorithms for object recognition and pose estimation (MOPED) and dense 3D modeling (CHISEL, now used by Google Project Tango).
Sidd received a B.Tech in Mechanical Engineering from the Indian Institute of Technology Madras in 1999, an MS in 2001, and a PhD in 2005 from the Robotics Institute at Carnegie Mellon University. He played badminton and tennis for IIT Madras, captained the CMU squash team, and likes to run ultra marathons.