Venkatesh Potluri is a Ph.D. student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He is advised by Prof. Jennifer Mankoff and Prof. Jon Froehlich. Venkatesh believes that technology, when designed right, empowers everybody to fulfill their goals and aspirations. His broad research goals are to keep accessibility in step with the ever-changing ways we interact with technology and to improve the independence and quality of life of people with disabilities. These goals stem from his personal experience as a researcher with a visual impairment. His research focus is to enable developers with visual impairments to perform a variety of programming tasks efficiently. Previously, he was a Research Fellow at Microsoft Research India, where his team was responsible for building CodeTalk, an accessibility framework and plugin for better IDE accessibility. Venkatesh earned a master’s degree in Computer Science at the International Institute of Information Technology, Hyderabad, where his research was on audio rendering of mathematical content.
Xin is a first-year Ph.D. student working with Jennifer Mankoff and Shwetak Patel in the Paul G. Allen School of Computer Science & Engineering at the University of Washington – Seattle. Prior to joining UW, he obtained a Bachelor’s degree in computer science from the University of Massachusetts Amherst in 2018. While at UMass Amherst, he received the 21st Century Leaders Award, the Rising Researcher Award, and the Outstanding Undergraduate Achievements Award. He is interested in using wearable sensing, human-computer interaction, and machine learning to advance healthcare.
Orson is a Ph.D. student working with Jennifer Mankoff and Anind K. Dey in the Information School at the University of Washington – Seattle. Prior to joining UW, he obtained his Bachelor’s degree in Industrial Engineering (major) and Computer Science (minor) from Tsinghua University in 2018. While at Tsinghua, he received a Best Paper Honorable Mention Award (CHI 2018), a Person of the Year Award, and Outstanding Undergraduate Awards. His research focuses on two aspects at the intersection of human-computer interaction, ubiquitous computing, and machine learning: 1) modeling human behavior, such as routine behavior, and 2) novel interaction techniques.
The Accessibility Seminar (CSE 590W) is taught most quarters. This fall (2018), it will be taught at 2:30 on Wednesdays. The focus will be on the intersection of fabrication and assistive technology.
The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. In addition, the investigation of tactile solutions for large touchscreens may not address the challenges on mobile devices. We therefore present Interactiles, a low-cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate its improvement in task performance and the user experience it enables with people who are blind or have low vision.
Xiaoyi Zhang, Tracy Tran, Yuqian Sun, Ian Culhane, Shobhit Jain, James Fogarty, Jennifer Mankoff: Interactiles: 3D Printed Tactile Interfaces to Enhance Mobile Touchscreen Accessibility. ASSETS 2018: To Appear. [PDF]
Figure 2. Floating windows created for the number pad (left), scrollbar (right), and control button (bottom right). The windows can be transparent; we use colors for demonstration.
Figure 4. Average task completion times of all tasks in the study.
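For readers curious how touches on the 3D-printed hardware might become app commands, below is a minimal, hypothetical Python sketch (not the published Interactiles code) of the idea described in the abstract: transparent floating windows cover the screen regions under the printed number pad, scrollbar, and control button, and touches landing in those regions are translated into high-level app manipulations. All names, coordinates, and actions here are illustrative assumptions.

```python
# Hypothetical sketch, not the Interactiles implementation.
# Illustrates mapping touches captured under 3D-printed tactile parts to app actions.
from dataclasses import dataclass

@dataclass
class Region:
    """Screen-space rectangle covered by one 3D-printed tactile part (assumed layout)."""
    name: str
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.width and
                self.y <= py < self.y + self.height)

# One region per tactile part: number pad, scrollbar, control button (cf. Figure 2).
REGIONS = [
    Region("number_pad", x=0, y=900, width=400, height=500),
    Region("scrollbar", x=1000, y=200, width=80, height=900),
    Region("control_button", x=1000, y=1200, width=80, height=200),
]

def dispatch_touch(px: int, py: int) -> str:
    """Map a touch under a tactile part to a high-level app manipulation."""
    for region in REGIONS:
        if region.contains(px, py):
            if region.name == "scrollbar":
                # Position along the bar could become a proportional scroll.
                fraction = (py - region.y) / region.height
                return f"scroll to {fraction:.0%}"
            if region.name == "number_pad":
                return "enter digit at pad cell"
            return "activate control button"
    return "pass touch through to the app / screen reader"
```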
EDigs is a research group at Carnegie Mellon University working on sustainability. Our research focuses on helping people find the perfect rental through machine learning and user research.
We sometimes study how our members use EDigs in order to learn how to build software support for successful social communities.
A table (shown on screen). Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top-left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the position of the cell and its content are read aloud.
Web user interfaces today leverage many common GUI design patterns, including navigation bars and menus (hierarchical structure), tabular content presentation, and scrolling. These visual-spatial cues enhance the interaction experience of sighted users. However, the linear nature of the screen translation tools currently available to blind users makes it difficult to understand or navigate these structures. We introduce Spatial Region Interaction Techniques (SPRITEs) for nonvisual access: a novel method for navigating two-dimensional structures using the keyboard surface. SPRITEs 1) preserve spatial layout, 2) enable bimanual interaction, and 3) improve the end user experience. We used a series of design probes to explore different methods for keyboard surface interaction. Our evaluation of SPRITEs shows that three times as many participants were able to complete spatial tasks with SPRITEs as with their preferred current technology.
Graph showing task completion rates for different kinds of tasks in our user study
A user is searching a table (shown on screen) for the word ‘Jill’. Columns are mapped to the number row of the keyboard and rows to the leftmost column of keys. (1) By default, the top-left cell is selected. (2) The right hand presses the ‘2’ key, selecting the second column. (3) The left hand selects the next row. (4) The left hand selects the third row. In each case, the number of occurrences of the search query in the respective column or row is read aloud. When the query is found, the position and content of the cell are read aloud.
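Below is a minimal, hypothetical Python sketch (not the SPRITEs implementation) of the keyboard-surface mapping described above: keys on the number row select columns, keys in the leftmost column of the keyboard select rows, and each selection is announced. The table contents, the key lists, and the speak() helper are illustrative assumptions.

```python
# Hypothetical sketch of keyboard-surface table navigation, for illustration only.

TABLE = [
    ["Name", "Age", "City"],
    ["Jill", "34", "Seattle"],
    ["Omar", "29", "Austin"],
]

NUMBER_ROW = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "0"]  # maps to columns
LEFT_COLUMN = ["`", "tab", "caps", "shift"]                      # maps to rows

def speak(text: str) -> None:
    print(text)  # stand-in for screen-reader speech output

selected_row, selected_col = 0, 0  # the top-left cell is selected by default

def on_key(key: str) -> None:
    """Right hand picks a column on the number row; left hand picks a row."""
    global selected_row, selected_col
    if key in NUMBER_ROW and NUMBER_ROW.index(key) < len(TABLE[0]):
        selected_col = NUMBER_ROW.index(key)
    elif key in LEFT_COLUMN and LEFT_COLUMN.index(key) < len(TABLE):
        selected_row = LEFT_COLUMN.index(key)
    else:
        return
    cell = TABLE[selected_row][selected_col]
    speak(f"row {selected_row + 1}, column {selected_col + 1}: {cell}")

# Example: press '2' (second column), then the second row key.
on_key("2")
on_key("tab")
```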
Hi, I’m Yuqian Sun, an exchange student from the University of Tokyo, Japan. I’m interested in how technology can work with human cognition to persuade and, as a result, change human behavior. My research fields are human-computer interaction and ubiquitous computing. I’m currently working on the SPRITEs and Interactiles projects.
With the increasing popularity of consumer-grade 3D printing, many people are creating, and even more using, objects shared on sites such as Thingiverse. However, our formative study of 962 Thingiverse models shows a lack of re-use of models, perhaps due to the advanced skills needed for 3D modeling. An end-user programming perspective on 3D modeling is needed. Our framework (PARTs) empowers amateur modelers to graphically specify design intent through geometry. PARTs includes a GUI, a scripting API, and an exemplar library of assertions, which test design expectations, and integrators, which act on intent to create geometry. PARTs lets modelers integrate advanced, model-specific functionality into designs, so that they can be re-used and extended without programming. In two workshops, we show that PARTs helps users create 3D-printable models and modify existing models more easily than with a standard tool.
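To make the assertion/integrator idea concrete, here is a small, hypothetical Python sketch; it is not the actual PARTs API, and every class and function name is an illustrative assumption. An assertion tests a design expectation (for example, a minimum wall thickness), while an integrator acts on declared intent to add geometry (for example, a screw hole), so a shared model can carry its intent along when someone re-uses it.

```python
# Hypothetical sketch of assertions and integrators, for illustration only.

class Assertion:
    """Tests a design expectation against the model."""
    def check(self, model) -> bool:
        raise NotImplementedError

class MinWallThickness(Assertion):
    def __init__(self, minimum_mm: float):
        self.minimum_mm = minimum_mm

    def check(self, model) -> bool:
        # A real check would inspect the mesh; here we read a stored property.
        return model.get("wall_thickness_mm", 0.0) >= self.minimum_mm

class Integrator:
    """Acts on declared intent to generate geometry, e.g. a mounting hole."""
    def apply(self, model) -> None:
        raise NotImplementedError

class ScrewHole(Integrator):
    def __init__(self, diameter_mm: float, position):
        self.diameter_mm = diameter_mm
        self.position = position

    def apply(self, model) -> None:
        model.setdefault("features", []).append(
            {"type": "hole", "diameter_mm": self.diameter_mm, "at": self.position})

# A downstream user re-using a shared model could run its bundled intent:
model = {"wall_thickness_mm": 2.4}
assertions = [MinWallThickness(2.0)]
integrators = [ScrewHole(diameter_mm=3.0, position=(10, 0, 0))]

for a in assertions:
    assert a.check(model), "design expectation violated"
for i in integrators:
    i.apply(model)
```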
My name is Ying Wang and I am a junior double majoring in Computer Science and Applied & Computational Mathematical Sciences. I am interested in the communication between nature, humans, and technology. I am fascinated by the unlimited potential and profound meaning revealed by data communication, and by how human-centered design plays an essential role in bridging the gap between human expression and technological realization. I am currently working on the Don’t Touch My Belly project in the lab.