Disability studies and assistive technology are two related fields
that have long shared common goals: understanding the
experience of disability and identifying and addressing relevant
issues. Despite these common goals, there are some important
differences in what professionals in these fields consider
problems, perhaps related to the lack of connection between the
fields. To help bridge this gap, we review some of the key
literature in disability studies. We present case studies of two
research projects in assistive technology and discuss how the
field of disability studies influenced that work, led us to identify
new or different problems relevant to the field of assistive
technology, and helped us to think in new ways about the
research process and its impact on the experiences of individuals
who live with disability. We also discuss how the field of
disability studies has influenced our teaching and highlight
some of the key publications and publication venues from which
our community may want to draw more deeply in the future.
Over the past decade and a half, corporations and academies have invested considerable time and money in the realization of ubiquitous computing. Yet design approaches that yield ecologically valid understandings of ubiquitous computing systems, which can help designers make design decisions based on how systems perform in the context of actual experience, remain rare. The central question underlying this article is, What barriers stand in the way of real-world, ecologically valid design for ubicomp?
Using a literature survey and interviews with 28 developers, we illustrate how issues of sensing and scale cause ubicomp systems to resist iteration, prototype creation, and ecologically valid evaluation. In particular, we found that developers have difficulty creating prototypes that are both robust enough for realistic use and able to handle ambiguity and error, and that they struggle to gather useful data from evaluations because critical events occur infrequently, because the level of use necessary to evaluate the system is difficult to maintain, or because the evaluation itself interferes with use of the system. We outline pitfalls for developers to avoid as well as practical solutions, and we draw on our results to outline research challenges for the future. Crucially, we do not argue for particular processes, sets of metrics, or intended outcomes. Rather, we focus on prototyping tools and evaluation methods that support realistic use in realistic settings and that can be selected according to the needs and goals of a particular developer or researcher.
Scott Carter has worked in industrial research for over a decade building and evaluating mobile and multimedia systems to support remote and collocated workers. He is currently a Staff Research Scientist at Toyota Research Institute as well as the Administrative Editor of the Human-Computer Interaction (HCI) Journal. Before that he was a Principal Scientist at FX Palo Alto Laboratory, Inc. He holds a PhD in CS (with an HCI focus) from UC Berkeley.
Tara Matthews is currently at Google Research, where her research focuses on privacy and security. Previously, she was a Research Staff Member in the USER group at IBM Almaden Research Center, where she focused on how people collaborate at work and on designing better support tools. Her interests and expertise include awareness, visualization, evaluation, and CSCW. She has served on organizing and program committees for major HCI conferences. She holds a PhD in Computer Science from the University of California, Berkeley, where her thesis was titled “Evaluation of Ambient Displays,” and is a BID alum.