Tawanna Dillahunt

Tawanna Dillahunt is an Associate Professor at the University of Michigan’s School of Information (UMSI) and holds a courtesy appointment with the Electrical Engineering and Computer Science (EECS) department. Before starting as an Assistant Professor, she was a Presidential Postdoctoral Fellow at UMSI from January 2013 to July 2014. She also leads the Social Innovations Group at UMSI, and her research interests are in the areas of human-computer interaction, ubiquitous computing, and social computing. She is primarily interested in identifying needs and opportunities to further explore how theories from the social sciences can be used to design technologies that have a positive impact on group and individual behavior. With the narrowing of the digital divide and the ubiquity of smart devices and mobile hotspots in common places in the U.S. (e.g., libraries, community centers, and even McDonald’s), she sees an urgent need to explore the use of these technologies for those who stand to gain the most from these resources. Therefore, she designs, builds, enhances, and deploys innovative technologies that solve real-world problems, particularly in underserved communities.

Tawanna holds an M.S. and a Ph.D. in Human-Computer Interaction from Carnegie Mellon University, an M.S. in Computer Science from the Oregon Graduate Institute School of Science and Engineering (now part of the Oregon Health and Science University in Portland, OR), and a B.S. in Computer Engineering from North Carolina State University. She was also a software engineer at Intel Corporation for several years.

Disability Studies and Accessible Technology Creation

Jennifer Mankoff, Gillian R. Hayes, Devva Kasnitz:
Disability studies as a source of critical inquiry for the field of assistive technology. ASSETS 2010: 3-10

Disability studies and assistive technology are two related fields that have long shared common goals: understanding the experience of disability and identifying and addressing relevant issues. Despite these common goals, there are some important differences in what professionals in these fields consider problems, perhaps related to the lack of connection between the fields. To help bridge this gap, we review some of the key literature in disability studies. We present case studies of two research projects in assistive technology and discuss how the field of disability studies influenced that work, led us to identify new or different problems relevant to the field of assistive technology, and helped us to think in new ways about the research process and its impact on the experiences of individuals who live with disability. We also discuss how the field of disability studies has influenced our teaching and highlight some of the key publications and publication venues from which our community may want to draw more deeply in the future.

Exiting the cleanroom: On ecological validity and ubiquitous computing

Carter, Scott, Jennifer Mankoff, Scott R. Klemmer, and Tara Matthews. “Exiting the cleanroom: On ecological validity and ubiquitous computing.” Human–Computer Interaction 23, no. 1 (2008): 47-99.

Over the past decade and a half, industry and academia have invested considerable time and money in the realization of ubiquitous computing. Yet design approaches that yield ecologically valid understandings of ubiquitous computing systems, which can help designers make design decisions based on how systems perform in the context of actual experience, remain rare. The central question underlying this article is: What barriers stand in the way of real-world, ecologically valid design for ubicomp?

Using a literature survey and interviews with 28 developers, we illustrate how issues of sensing and scale cause ubicomp systems to resist iteration, prototype creation, and ecologically valid evaluation. In particular, we found that developers have difficulty creating prototypes that are both robust enough for realistic use and able to handle ambiguity and error, and that they struggle to gather useful data from evaluations because critical events occur infrequently, because the level of use necessary to evaluate the system is difficult to maintain, or because the evaluation itself interferes with use of the system. We outline pitfalls for developers to avoid as well as practical solutions, and we draw on our results to outline research challenges for the future. Crucially, we do not argue for particular processes, sets of metrics, or intended outcomes; rather, we focus on prototyping tools and evaluation methods that support realistic use in realistic settings and that can be selected according to the needs and goals of a particular developer or researcher.