Disability studies and assistive technology are two related fields
that have long shared common goals: understanding the
experience of disability and identifying and addressing relevant
issues. Despite these common goals, there are some important
differences in what professionals in these fields consider to be
problems, perhaps owing to the lack of connection between the
fields. To help bridge this gap, we review some of the key
literature in disability studies. We present case studies of two
research projects in assistive technology and discuss how the
field of disability studies influenced that work, led us to identify
new or different problems relevant to the field of assistive
technology, and helped us to think in new ways about the
research process and its impact on the experiences of individuals
who live with disability. We also discuss how the field of
disability studies has influenced our teaching and highlight
some of the key publications and publication venues from which
our community may want to draw more deeply in the future.
Over the past decade and a half, corporations and academia have invested considerable time and money in the realization of ubiquitous computing. Yet design approaches that yield ecologically valid understandings of ubiquitous computing systems, which can help designers make design decisions based on how systems perform in the context of actual experience, remain rare. The central question underlying this article is: what barriers stand in the way of real-world, ecologically valid design for ubicomp?
Using a literature survey and interviews with 28 developers, we illustrate how issues of sensing and scale cause ubicomp systems to resist iteration, prototype creation, and ecologically valid evaluation. In particular, we found that developers have difficulty creating prototypes that are both robust enough for realistic use and able to handle ambiguity and error, and that they struggle to gather useful data from evaluations because critical events occur infrequently, because the level of use necessary to evaluate the system is difficult to maintain, or because the evaluation itself interferes with use of the system. We outline pitfalls for developers to avoid, as well as practical solutions, and we draw on our results to identify research challenges for the future. Crucially, we do not argue for particular processes, sets of metrics, or intended outcomes; rather, we focus on prototyping tools and evaluation methods that support realistic use in realistic settings and that can be selected according to the needs and goals of a particular developer or researcher.