Tactile maps can help people who are blind or have low vision navigate and familiarize themselves with unfamiliar locations. Because tactile maps have limited space for representation, they are ideally created with an individual's unique needs and abilities in mind. However, existing tools for generating tactile maps do not support significant customization. We present Maptimizer, a system that generates tactile maps customized to a user's preferences and requirements while keeping the maps simplified and easy to read. Maptimizer uses a two-stage optimization process to pair representations with geographic information and then tune those representations to present that information more clearly. In a user study with six blind/low-vision participants, Maptimizer helped participants more successfully and efficiently identify locations of interest in unknown areas. These results demonstrate the utility of optimization techniques and generative design in complex accessibility domains that require significant customization by the end user.
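As a rough illustration of what such a two-stage process can look like, the sketch below first pairs map layers with tactile representations according to user preference scores, then tunes rendering parameters for legibility. The layer names, representation types, parameter ranges, and scoring functions are all illustrative assumptions, not Maptimizer's actual implementation.

```python
# Hypothetical two-stage optimization sketch in the spirit of Maptimizer.
# All names, scores, and parameter ranges below are illustrative assumptions.
from itertools import product

FEATURES = ["roads", "buildings", "water"]              # geographic layers (assumed)
REPRESENTATIONS = ["raised_line", "texture", "point_symbol"]
PARAMS = {"line_width_mm": [0.8, 1.2, 1.6], "spacing_mm": [3, 5, 8]}

def pairing_score(assignment, user_prefs):
    """Stage 1: how well each representation suits each feature for this user."""
    return sum(user_prefs.get((feat, rep), 0) for feat, rep in assignment.items())

def legibility_score(assignment, params):
    """Stage 2: placeholder for a tactile-legibility model (clutter, spacing, ...)."""
    penalty = 0.1 * len(set(assignment.values()))        # fewer distinct encodings = simpler
    return params["spacing_mm"] / params["line_width_mm"] - penalty

def optimize(user_prefs):
    # Stage 1: choose the representation-to-feature pairing the user rates best.
    best_assign = max(
        (dict(zip(FEATURES, combo)) for combo in product(REPRESENTATIONS, repeat=len(FEATURES))),
        key=lambda a: pairing_score(a, user_prefs),
    )
    # Stage 2: tune rendering parameters of that pairing for legibility.
    best_params = max(
        (dict(zip(PARAMS, vals)) for vals in product(*PARAMS.values())),
        key=lambda p: legibility_score(best_assign, p),
    )
    return best_assign, best_params

# Example: a user who strongly prefers raised lines for roads and texture for water.
prefs = {("roads", "raised_line"): 2, ("water", "texture"): 1}
print(optimize(prefs))
```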
We present an interactive design system for knitting that allows users to create template patterns that can be fabricated using an industrial knitting machine. Our interactive design tool is novel in that it gives direct control over key knitting design axes identified in our formative study, and does so consistently across variations of an input parametric template geometry. This is achieved with two key technical advances. First, we present an interactive meshing tool that lets users build a coarse quadrilateral mesh that adheres to their knit design guidelines. This solution ensures consistency across the parameter space for further customization over shape variations and avoids helices, promoting knittability. Second, we lift and formalize low-level machine-knitting constraints to the level of this coarse quad mesh. This enables us not only to guarantee hand- and machine-knittability, but also to provide automatic design assistance through auto-completion and suggestions. We demonstrate these capabilities through a set of fabricated examples that illustrate the effectiveness of our approach in creating a wide variety of objects and in interactively exploring the space of design variations.
Our interactive design system helps users explore key design axes for knitting to generate highly customized patterns from input shape templates; e.g., a seamless yoke dress with princess-cut apparent seams (a), and drop shoulder dresses with textures on the arms and skirt (b–d). The output of our system is a knit pattern template that lets users vary the shape while preserving the design, for example, creating a child’s dress with short sleeves (d) that matches an adult dress (b), or varying skirt texture and angle, and sleeve knitting direction (c). The system guarantees that all results and variations are machine knittable.
Overview of our framework. (a) Triangle meshes from a parametric template (the system deals with a single mesh at a time). (b) Input triangle mesh with user annotations of composition, layout, and direction guidelines. (c) Generated quad mesh patches, which are consistent across template variations. (d) Quad mesh annotated for knitting the body tube in the round using short rows to curve the tube. Blue lines indicate seams. The same design applies to all template variations (two shown here). (e) Duck knit with short rows. (f) Quad mesh annotated with different textures and orientations; the body is knit as seamed sheets with decreases. (g) Duck knit with textures and a large head from template (f).
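To give a flavor of what lifting machine-knitting constraints to the coarse quad mesh can mean in practice, here is a minimal, hypothetical sketch that checks two quad-level rules: quads in the same course must share a knitting direction, and rows must advance by exactly one across wale edges (so rows stack rather than spiral into helices). The data layout and the specific rules are assumptions for illustration, not the system's actual formalization.

```python
# Hypothetical quad-level knittability checks; data layout and rules are assumed.
quads = {
    "q0": {"course_dir": +1, "row": 0},
    "q1": {"course_dir": +1, "row": 0},
    "q2": {"course_dir": -1, "row": 1},    # next row knit back the other way
}
course_neighbors = [("q0", "q1")]           # quads sharing an edge within one row
wale_neighbors = [("q0", "q2"), ("q1", "q2")]

def consistent_courses(quads, course_neighbors):
    """Quads in the same course must be knit in the same direction."""
    return all(quads[a]["course_dir"] == quads[b]["course_dir"]
               for a, b in course_neighbors)

def rows_advance(quads, wale_neighbors):
    """Across a wale edge, the row index must step by exactly one (no helix jumps)."""
    return all(abs(quads[a]["row"] - quads[b]["row"]) == 1
               for a, b in wale_neighbors)

print(consistent_courses(quads, course_neighbors) and rows_advance(quads, wale_neighbors))
```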
Smartphone overuse is related to a variety of issues such as lack of sleep and anxiety. We explore the application of Self-Affirmation Theory to just-in-time smartphone overuse intervention. We present TypeOut, a just-in-time intervention technique that integrates two components: an in-situ typing-based unlock process to improve user engagement, and self-affirmation-based typing content to enhance effectiveness. We hypothesize that the combination of typing and self-affirmation content can better reduce smartphone overuse. We conducted a 10-week within-subject field experiment (N=54) and compared TypeOut against two baselines: one that only shows the self-affirmation content (a common notification-based intervention), and one that only requires typing non-semantic content (a state-of-the-art method). TypeOut reduced app usage by over 50%, and both app opening frequency and usage duration by over 25%, significantly outperforming both baselines. TypeOut can potentially be used in other domains where an intervention may benefit from integrating self-affirmation exercises with an engaging just-in-time mechanism.
TypeOut: Leveraging Just-in-Time Self-Affirmation for Smartphone Overuse Reduction. Xuhai Xu, Tianyuan Zou, Xiao Han, Yanzhang Li, Ruolin Wang, Tianyi Yuan, Yuntao Wang, Yuanchun Shi, Jennifer Mankoff, and Anind K. Dey. 2022. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). ACM, New York, NY, USA.
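A minimal sketch of TypeOut-style just-in-time intervention logic is shown below: when a monitored app is opened, the user must type a self-affirmation sentence before the app unlocks. The app names, prompts, and matching rule are assumptions for illustration, not the study's implementation.

```python
# Hypothetical TypeOut-style unlock flow; app names, prompts, and matching rule are assumed.
import random

TARGET_APPS = {"social_feed", "short_video"}
AFFIRMATIONS = [
    "I value the time I spend with my family and friends.",
    "I am proud of the progress I made on my goals today.",
]

def on_app_open(app_name, read_typed_input):
    if app_name not in TARGET_APPS:
        return True                        # not monitored: open immediately
    prompt = random.choice(AFFIRMATIONS)   # self-affirmation content
    typed = read_typed_input(prompt)       # in-situ typing-based unlock
    return typed.strip() == prompt         # unlock only after the sentence is typed

# Example: simulate a user who types the prompt correctly.
print(on_app_open("social_feed", lambda prompt: prompt))
```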
In order for “human-centered research” to include all humans, we need to make sure that research practices are accessible for both participants and researchers with disabilities. Yet, people rarely discuss how to make common methods accessible. We interviewed 17 accessibility experts who were researchers or community organizers about their practices. Our findings emphasize the importance of considering accessibility at all stages of the research process and across different dimensions of studies like communication, materials, time, and space. We explore how technology or processes could reflect a norm of accessibility and offer a practical structure for planning accessible research.
Passive mobile sensing for the purpose of human state modeling is a fast-growing area. It has been applied to a wide range of behavior-related problems, including physical and mental health monitoring, affective computing, activity recognition, and routine modeling. However, despite the emerging literature investigating a wide range of application scenarios, there is little work focusing on the lessons researchers have learned or on guidance for researchers new to this approach. How do researchers conduct these types of studies? Is there any established common practice for applying mobile sensing across different application areas? What pain points and needs do they frequently encounter? Answering these questions is an important step in the maturing of this growing sub-field of ubiquitous computing and can benefit a wide range of audiences. It can educate researchers who have a growing interest in this area but little to no previous experience. Intermediate researchers may also find the results a helpful reference for improving their skills. Moreover, it can shed light on design guidelines for a future toolkit that could streamline these research processes. In this paper, we fill this gap and answer these questions by conducting semi-structured interviews with ten experienced researchers from four countries to understand their practices and pain points when conducting passive mobile sensing research. Our results reveal a common pipeline that researchers have adopted and identify major challenges that do not appear in published work but that researchers often encounter. Based on the interview results, we discuss practical suggestions for novice researchers and high-level design principles for a toolkit that can accelerate passive mobile sensing research.
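The sketch below illustrates, under simplifying assumptions, the general shape of such a pipeline: collect raw sensor streams, segment them into windows, extract per-window features, and train a model against self-reported labels. The toy data, window size, features, and model choice are placeholders for illustration, not recommendations drawn from the interviewed researchers.

```python
# Hedged sketch of a passive-sensing pipeline:
# collection -> windowing -> feature extraction -> modeling. All details assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    """Per-window summary statistics over a raw sensor stream (e.g., accelerometer)."""
    return [window.mean(), window.std(), window.min(), window.max()]

def build_dataset(streams, labels, window_len=60):
    X, y = [], []
    for stream, label in zip(streams, labels):
        for start in range(0, len(stream) - window_len + 1, window_len):
            X.append(extract_features(stream[start:start + window_len]))
            y.append(label)
    return np.array(X), np.array(y)

# Toy data standing in for days of phone sensor readings per participant.
rng = np.random.default_rng(0)
streams = [rng.normal(loc, 1.0, 600) for loc in (0.0, 0.5, 1.0, 1.5)]
labels = [0, 0, 1, 1]                      # e.g., low vs. high self-reported stress

X, y = build_dataset(streams, labels)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.score(X, y))
```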
Mental health of UW students during Spring 2020 varied tremendously: the challenges of online learning during the pandemic were entwined with social isolation, family demands and socioeconomic pressures. In this context, individual differences in coping mechanisms had a big impact. The findings of this paper underline the need for interventions oriented towards problem-focused coping and suggest opportunities for peer role modeling.
Heterogeneity in individuals’ levels of anxiety (reported in ESM). Individual trajectories of anxiety are shown in different line types and colors (dotted versus solid lines represent different participants). Although the mean level of anxiety is 1 on a scale of 0–4, the significant variation in responses invites examination of individuals and subgroups.
This mixed-method study examined the experiences of college students during the COVID-19 pandemic through surveys, experience sampling data collected over two academic quarters (Spring 2019 n1 = 253; Spring 2020 n2 = 147), and semi-structured interviews with 27 undergraduate students.
There were no marked changes in mean levels of depressive symptoms, anxiety, stress, or loneliness between 2019 and 2020, or over the course of the Spring 2020 term. Students in both the 2019 and 2020 cohorts who indicated psychosocial vulnerability at the initial assessment showed worse psychosocial functioning throughout the entire Spring term relative to other students. However, rates of distress increased faster in 2020 than in 2019 for these individuals. Across individuals, homogeneity of variance tests and multi-level models revealed significant heterogeneity, suggesting the need to examine not just means but also the variation in individuals’ experiences.
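For readers unfamiliar with multi-level models, the hedged sketch below shows the general form of such an analysis: repeated anxiety ratings nested within participants, with a random intercept per participant capturing between-person heterogeneity. The variable names and toy data are illustrative, not the study's dataset.

```python
# Illustrative multi-level (mixed-effects) model; variable names and data are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_days = 30, 20
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_days),
    "day": np.tile(np.arange(n_days), n_participants),
})
person_effect = rng.normal(0, 0.8, n_participants)      # between-person heterogeneity
df["anxiety"] = (1.0 + person_effect[df["participant"]]
                 + 0.02 * df["day"] + rng.normal(0, 0.3, len(df)))

# A random intercept for each participant captures individual differences in level.
model = smf.mixedlm("anxiety ~ day", df, groups=df["participant"]).fit()
print(model.summary())
```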
Thematic analysis of interviews characterizes these varied experiences, describing the contexts for students’ challenges and strategies. This analysis highlights the interweaving of psychosocial and academic distress: challenges such as isolation from peers, lack of interactivity with instructors, and difficulty adjusting to family needs took both an emotional and an academic toll. Strategies for adjusting to this new context included initiating remote study and hangout sessions with peers, as well as self-learning. In these and other strategies, students used technologies in different ways and for different purposes than they had previously. Complementing these qualitative insights about adaptive responses, quantitative findings showed that students who used more problem-focused forms of coping reported fewer mental health symptoms over the course of the pandemic, even though they perceived their stress as more severe.
Example quotes:
“I like to build things and stuff like that. I like to see it in person and feel it. So the fact that everything was online…. I’m just basically reading all the time. I just couldn’t learn that way”
“Insomnia has been pretty hard for me… I would spend a lot of time lying in bed not doing anything when I had a lot of homework to do the next day. So then I would become stressed about whether I’ll be able to finish that homework or not.”
“It was challenging … being independent and then being pushed back home. It’s a huge change because now you have more rules again”
“For a few of my classes I feel like actually [I] was self-learning because sometimes it’s hard to sit through hours of lectures and watch it.”
“I would initiate… we have a study group chat and every day I would be like ‘Hey I’m going to be on at this time starting at this time.’ So then I gave them time to all have the room open for Zoom and stuff. Okay and then any time after that they can join and then said I [would] wait like maybe 30 minutes or even an hour…. And then people join and then we work maybe … till midnight, a little bit past midnight”
Megan is a PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University. She is advised by Prof. Jennifer Mankoff of the University of Washington and Prof. Scott E. Hudson. She completed her bachelor’s degree in Computer Science at Colorado State University in 2017. She is an NSF Fellow and a Center for Machine Learning and Health Fellow. During her undergraduate degree, Megan’s research was advised by Dr. Jaime Ruiz and Prof. Amy Hurst.
Her research focuses on creating computer-aided design and fabrication tools that expand the digital fabrication process with new materials. She uses participatory observation and participatory design methods to study assistive technology and digital fabrication among many stakeholders (people with disabilities, caregivers, and clinicians).
Visual semantics provide spatial information like size, shape, and position, which is necessary to understand and efficiently use interfaces and documents. Yet little is known about whether blind and low-vision (BLV) technology users want to interact with visual affordances, and, if so, in which task scenarios. In this work, through semi-structured and task-based interviews, we explore preferences, interest levels, and use of visual semantics among BLV technology users across two device platforms (smartphones and laptops), and across information-seeking tasks and interactions common in apps and web browsing. Findings show that participants could benefit from access to visual semantics for collaboration, navigation, and design. To learn this information, our participants used trial and error, sighted assistance, and features in existing screen-reading technology such as touch exploration. Finally, we found that missing information and inconsistent screen reader representations of user interfaces hinder learning. We discuss potential applications and future work to equip BLV users with the information necessary to engage with visual semantics.
Past research on on-body interaction typically requires custom sensors, limiting its scalability and generalizability. We propose EarBuddy, a real-time system that leverages the microphone in commercial wireless earbuds to detect tapping and sliding gestures near the face and ears. We developed a design space to generate 27 valid gestures and conducted a user study (N=16) to select the eight gestures that were optimal for both human preference and microphone detectability. We collected a dataset of those eight gestures (N=20) and trained deep learning models for gesture detection and classification. Our optimized classifier achieved an accuracy of 95.3%. Finally, we conducted a user study (N=12) to evaluate EarBuddy’s usability. Our results show that EarBuddy can facilitate novel interaction and that users feel very positively about the system. EarBuddy provides a new eyes-free, socially acceptable input method that is compatible with commercial wireless earbuds and has the potential for scalability and generalizability.
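As a rough illustration of the classification stage, the sketch below feeds a spectrogram-like segment derived from the earbud microphone to a small convolutional network over eight gesture classes. The architecture, input shape, and feature choice are assumptions for illustration; they are not EarBuddy's actual models.

```python
# Hypothetical gesture classifier sketch; architecture and input shape are assumed.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, spectrogram):            # (batch, 1, mel_bins, frames)
        return self.classifier(self.features(spectrogram))

# Example: classify one spectrogram segment flagged by a separate gesture detector.
logits = GestureCNN()(torch.randn(1, 1, 64, 64))
print(logits.argmax(dim=1))                    # predicted gesture index
```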
We present HulaMove, a novel interaction technique that leverages the movement of the waist as a new eyes-free and hands-free input method for both the physical world and the virtual world. We first conducted a user study (N=12) to understand users’ ability to control their waist. We found that users could easily discriminate eight shifting directions and two rotating orientations, and quickly confirm actions by returning to the original position (quick return). We developed a design space with eight gestures for waist interaction based on the results and implemented an IMU-based real-time system. Using a hierarchical machine learning model, our system could recognize waist gestures at an accuracy of 97.5%. Finally, we conducted a second user study (N=12) for usability testing in both real-world scenarios and virtual reality settings. Our usability study indicated that HulaMove significantly reduced interaction time by 41.8% compared to a touch screen method, and greatly improved users’ sense of presence in the virtual world. This novel technique provides an additional input method when users’ eyes or hands are busy, accelerates users’ daily operations, and augments their immersive experience in the virtual world.
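A hedged sketch of what a hierarchical recognizer of this kind might look like appears below: a first-stage model decides whether an IMU window contains a waist gesture at all, and a second-stage model classifies which of the eight gestures it is. The features and model choices are illustrative assumptions, not HulaMove's implementation. Splitting detection from classification keeps the always-on stage cheap and lets the classifier train only on windows that actually contain gestures.

```python
# Hypothetical hierarchical waist-gesture recognizer; features and models are assumed.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC

def imu_features(window):
    """Summary statistics over a (samples x 6) accelerometer + gyroscope window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

class HierarchicalWaistRecognizer:
    def __init__(self):
        self.detector = GradientBoostingClassifier()   # stage 1: gesture vs. no gesture
        self.classifier = SVC()                        # stage 2: which of the 8 gestures

    def fit(self, windows, labels):                    # label -1 means "no gesture"
        X = np.array([imu_features(w) for w in windows])
        y = np.array(labels)
        self.detector.fit(X, y != -1)
        self.classifier.fit(X[y != -1], y[y != -1])
        return self

    def predict(self, window):
        x = imu_features(window).reshape(1, -1)
        if not self.detector.predict(x)[0]:
            return -1                                  # no gesture detected
        return int(self.classifier.predict(x)[0])
```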