Domain-Specific Metaheuristic Optimization

For non-technical domain experts and designers, it can be a substantial challenge to create designs that meet domain-specific goals. This presents an opportunity to create specialized tools that produce optimized designs in the domain. However, implementing domain-specific optimization methods requires a rare combination of programming and domain expertise. Creating flexible design tools with re-configurable optimizers that can tackle a variety of problems in a domain requires even more domain and programming expertise. We present OPTIMISM, a toolkit that enables programmers and domain experts to collaboratively implement the optimization component of design tools. OPTIMISM supports the implementation of metaheuristic optimization methods by factoring them into easy-to-implement, reusable components: objectives that measure desirable qualities in the domain, modifiers that make useful changes to designs, design and modifier selectors that determine how the optimizer steps through the search space, and stopping criteria that determine when to return results. Implementing optimizers with OPTIMISM shifts the burden of domain expertise from programmers to domain experts.

Megan Hofmann, Nayha Auradkar, Jessica Birchfield, Jerry Cao, Autumn G. Hughes, Gene S.-H. Kim, Shriya Kurpad, Kathryn J. Lum, Kelly Mack, Anisha Nilakantan, Margaret Ellen Seehorn, Emily Warnock, Jennifer Mankoff, Scott E. Hudson: OPTIMISM: Enabling Collaborative Implementation of Domain Specific Metaheuristic Optimization. CHI 2023: 709:1-709:19
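To make this factoring concrete, here is a minimal hill-climbing sketch in Python. All names and the toy objective are hypothetical illustrations of the component types, not the OPTIMISM API, and for simplicity the modifier selector is folded into applying every modifier at each step.

```python
import random

# Hypothetical component names; a sketch of the factoring described
# above, not the actual OPTIMISM API.

def objective(design):
    """Objective: score a desirable domain quality (here, closeness of the sum to 100)."""
    return -abs(sum(design) - 100)

def modifier(design):
    """Modifier: make a small, potentially useful change to a design."""
    new = list(design)
    new[random.randrange(len(new))] += random.choice([-1, 1])
    return new

def select_design(candidates):
    """Design selector: choose which candidate to step to next (greedy here)."""
    return max(candidates, key=objective)

def should_stop(step, best):
    """Stopping criterion: decide when to return results."""
    return step >= 1000 or objective(best) == 0

def optimize(seed):
    best, step = seed, 0
    while not should_stop(step, best):
        # Keeping the current best in the candidate pool makes the climb monotone.
        candidates = [modifier(best) for _ in range(8)] + [best]
        best = select_design(candidates)
        step += 1
    return best

print(optimize([10] * 5))  # steps toward a design whose values sum to 100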

PSST: Enabling Blind or Visually Impaired Developers to Author Sonifications of Streaming Sensor Data

Venkatesh Potluri, John Thompson, James Devine, Bongshin Lee, Nora Morsi, Peli De Halleux, Steve Hodges, and Jennifer Mankoff. 2022. PSST: Enabling Blind or Visually Impaired Developers to Author Sonifications of Streaming Sensor Data. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST ’22). Association for Computing Machinery, New York, NY, USA, Article 46, 1–13. https://doi.org/10.1145/3526113.3545700

We present the first toolkit that equips blind and visually impaired (BVI) developers with the tools to create accessible data displays. Called PSST (Physical Computing Streaming Sensor data Toolkit), it enables BVI developers to understand the data generated by sensors, from a mouse to a micro:bit physical computing platform. Because they assumed visual abilities, earlier efforts to make physical computing accessible failed to address the need for BVI developers to access sensor data. PSST enables BVI developers to understand real-time, real-world sensor data by providing control over what to display, when to display it, and how to display it. PSST supports filtering based on raw or calculated values, highlighting, and transformation of data. Output formats include tonal sonification, nonspeech audio files, speech, and SVGs for laser cutting. We validate PSST through a series of demonstrations and a user study with BVI developers.

The demo video can be found here: https://youtu.be/UDIl9krawxg.
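As a rough illustration of the control PSST gives over what, when, and how to display sensor data, here is a small Python sketch of a filter-transform-sonify pipeline. The function names, the simulated sensor, and the pitch mapping are all assumptions made for the sketch, not PSST's actual API.

```python
import random

# Hypothetical pipeline: transform a simulated stream, filter on a
# calculated value, then map the survivors onto pitches for sonification.

def sensor_stream(n=20):
    """Stand-in for a live sensor (e.g., one accelerometer axis)."""
    for _ in range(n):
        yield random.uniform(-2.0, 2.0)

def smooth(readings, window=3):
    """Transformation: moving average over the raw stream."""
    buf = []
    for r in readings:
        buf.append(r)
        yield sum(buf[-window:]) / min(len(buf), window)

def above_threshold(readings, threshold=0.5):
    """Filter on a calculated value: pass only readings worth highlighting."""
    for r in readings:
        if abs(r) > threshold:
            yield r

def to_pitch(value, lo=-2.0, hi=2.0, f_min=220.0, f_max=880.0):
    """Map a reading onto a frequency range for tonal sonification."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return f_min * (f_max / f_min) ** t  # exponential scale sounds even to the ear

for reading in above_threshold(smooth(sensor_stream())):
    print(f"{reading:+.2f} -> {to_pitch(reading):.0f} Hz")
```

Replacing the final print loop with a tone generator, an audio file, or speech output would correspond to choosing among the output formats described above.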

What Do We Mean by “Accessibility Research”?

Accessibility research has grown substantially in the past few decades, yet there has been no literature review of the field. To understand current and historical trends, we created and analyzed a dataset of accessibility papers appearing at CHI and ASSETS since ASSETS’ founding in 1994. Our findings highlight areas that have received disproportionate attention and those that are underserved; for example, over 43% of papers in the past 10 years are on accessibility for blind and low-vision people. We also capture common study characteristics, such as the roles of disabled and nondisabled participants as well as sample sizes (e.g., a median of 13 for participant groups with disabilities and older adults). We close by critically reflecting on gaps in the literature and offering guidance for future work in the field.

What Do We Mean by “Accessibility Research”? A Literature Survey of Accessibility Papers in CHI and ASSETS from 1994 to 2019. Kelly Mack, Emma McDonnell, Dhruv Jain, Lucy Lu Wang, Jon E. Froehlich, and Leah Findlater. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 371, 1–18.

Megan Hofmann

Megan is a PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University. She is advised by Prof. Jennifer Mankoff of the University of Washington and Prof. Scott E. Hudson. She completed her bachelor's degree in Computer Science at Colorado State University in 2017. She is an NSF Fellow and a Center for Machine Learning and Health Fellow. During her undergraduate degree, Megan's research was advised by Dr. Jaime Ruiz and Prof. Amy Hurst.

Her research focuses on creating computer-aided design and fabrication tools that expand the digital fabrication process with new materials. She uses participatory observation and participatory design methods to study assistive technology and digital fabrication among many stakeholders (people with disabilities, caregivers, and clinicians).

Visit Megan’s homepage at https://www.megan-hofmann.com/publications/.

Research

Some recent projects

BLV Understanding of Visual Semantics


Venkatesh Potluri, Tadashi E. Grindeland, Jon E. Froehlich, Jennifer Mankoff: Examining Visual Semantic Understanding in Blind and Low-Vision Technology Users. CHI 2021: 35:1-35:14

Visual semantics provide spatial information like size, shape, and position, which are necessary to understand and efficiently use interfaces and documents. Yet little is known about whether blind and low-vision (BLV) technology users want to interact with visual affordances, and, if so, for which task scenarios. In this work, through semi-structured and task-based interviews, we explore preferences, interest levels, and use of visual semantics among BLV technology users across two device platforms (smartphones and laptops), and information seeking and interactions common in apps and web browsing. Findings show that participants could benefit from access to visual semantics for collaboration, navigation, and design. To learn this information, our participants used trial and error, sighted assistance, and features in existing screen reading technology like touch exploration. Finally, we found that missing information and inconsistent screen reader representations of user interfaces hinder learning. We discuss potential applications and future work to equip BLV users with necessary information to engage with visual semantics.

Assistive Technology

Instructor: Jennifer Mankoff (jmankoff@cs.cmu.edu)
Spring 2005

HCII, 3601 NSH, (W) +1 (412) 268-1295
Office hours: By Appointment & 1-2pm Thurs

Course Description

This class will focus on computer accessibility, including web and desktop computing, and research in the area of assistive technology.

The major learning goals from this course include:

  • Develop an understanding of the relationship between disability policy, the disability rights movement, and your role as a technologist. For example, we will discuss the pros and cons of, and the infrastructure involved in, supporting mainstream computer applications rather than creating new ones from scratch.
  • Develop a skill set for basic design and evaluation of accessible web pages and desktop applications.
  • Develop familiarity with technologies and research relating to accessibility, including a study of optimal font size and color for people with dyslexia, word-prediction aids, a blind-accessible drawing program, and more.
  • Develop familiarity with assistive technologies that use computation to increase the accessibility of the world in general. Examples include memory aids, sign-language recognition, and so on.

Requirements

Students will be expected to do service work with non-profits serving the local disabled community during one to two weekends at the start of the semester. This course has a project component, in which students will design, implement, and test software for people with disabilities. Additionally, students will read and report on research papers pertinent to the domain.

Grading will be based on service work (10%); the project (60%); and class participation, including your reading summary and the lecture you lead (30%).

Other relevant documents

  • Course Calendar
  • Assignments
  • Bibliography

Prerequisites

Prerequisites for this class: familiarity with basic Human-Computer Interaction material, or consent of the instructor (for undergraduate students).

It is recommended that you contact the instructor if you are interested in taking this class.

Venkatesh Potluri

Venkatesh Potluri is a Ph.D. student at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He is advised by Prof. Jennifer Mankoff and Prof. Jon Froehlich. Venkatesh believes that technology, when designed right, empowers everybody to fulfill their goals and aspirations. His broad research goals are to upgrade accessibility to the ever-changing ways we interact with technology and to improve the independence and quality of life of people with disabilities. These goals stem from his personal experience as a researcher with a visual impairment. His research focus is to enable developers with visual impairments to perform a variety of programming tasks efficiently. Previously, he was a Research Fellow at Microsoft Research India, where his team was responsible for building CodeTalk, an accessibility framework and plugin for better IDE accessibility. Venkatesh earned a master's degree in Computer Science at the International Institute of Information Technology Hyderabad, where his research was on audio rendering of mathematical content.

You can find more information about him at https://venkateshpotluri.me

Interactiles

The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. In addition, tactile solutions investigated for large touchscreens may not address the challenges of mobile devices. We therefore present Interactiles, a low-cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate the improvements in task performance and user experience it enables for people who are blind or have low vision.

Xiaoyi Zhang, Tracy Tran, Yuqian Sun, Ian Culhane, Shobhit Jain, James Fogarty, Jennifer Mankoff: Interactiles: 3D Printed Tactile Interfaces to Enhance Mobile Touchscreen Accessibility. ASSETS 2018: To Appear [PDF]

Figure 2. Floating windows created for number pad (left), scrollbar (right) and control button (right bottom). The windows can be transparent; we use colors for demonstration.

Figure 4. Average task completion times of all tasks in the study.
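To make the hardware-to-software mapping concrete, here is a minimal Python sketch of the core idea: touches that land under a physical overlay element are routed to an app action (in the real system, the floating windows shown in Figure 2 play this role). The region coordinates and actions below are invented for illustration; this is not Interactiles' implementation.

```python
# Hypothetical coordinates and actions: route touches that land under a
# 3D-printed overlay element to an app manipulation.

OVERLAY_REGIONS = {
    "number_pad_5": ((120, 300), (180, 360)),   # (x0, y0), (x1, y1) in screen pixels
    "scrollbar_up": ((680, 100), (720, 160)),
}

ACTIONS = {
    "number_pad_5": lambda: print("type digit 5"),
    "scrollbar_up": lambda: print("scroll up one step"),
}

def dispatch(x, y):
    """Route a raw touch to the overlay element it falls inside, if any."""
    for name, ((x0, y0), (x1, y1)) in OVERLAY_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            ACTIONS[name]()
            return name
    return None

dispatch(150, 330)  # lands inside the number pad's "5" key -> "type digit 5"
```

In practice the dispatch step would be driven by the platform's touch events rather than a direct function call.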

The Tangible Desktop

Mark S. Baldwin, Gillian R. Hayes, Oliver L. Haimson, Jennifer Mankoff, Scott E. Hudson: The Tangible Desktop: A Multimodal Approach to Nonvisual Computing. TACCESS 10(3): 9:1-9:28 (2017)

Audio-only interfaces, facilitated through text-to-speech screen reading software, have been the primary mode of computer interaction for blind and low-vision computer users for more than four decades. During this time, the advances that have made visual interfaces faster and easier to use, from direct manipulation to skeuomorphic design, have not been paralleled in nonvisual computing environments. The screen reader–dependent community is left with no alternatives to engage with our rapidly advancing technological infrastructure. In this article, we describe our efforts to understand the problems that exist with audio-only interfaces. Based on observing screen reader use for 4 months at a computer training school for blind and low-vision adults, we identify three problem areas within audio-only interfaces: ephemerality, linear interaction, and unidirectional communication. We then evaluated a multimodal approach to computer interaction called the Tangible Desktop that addresses these problems by moving semantic information from the auditory to the tactile channel. Our evaluation demonstrated that, among novice screen reader users, the Tangible Desktop improved task completion times by an average of 6 minutes when compared to traditional audio-only computer systems.

Also see: Mark S. Baldwin, Jennifer Mankoff, Bonnie A. Nardi, Gillian R. Hayes: An Activity Centered Approach to Nonvisual Computer Interaction. ACM Trans. Comput. Hum. Interact. 27(2): 12:1-12:27 (2020)