Why study the future?

I have asked myself that question numerous times over the last several years. Why years? Because the paper I will be presenting at CHI 2013 (Looking Past Yesterday’s Tomorrow: Using Futures Studies Methods to Extend the Research Horizon) is the fifth iteration of an idea that began at CHI 2009 and was submitted in its initial form to CHI 2011 and 2012, then DIS 2012, Ubicomp 2012, and finally CHI 2013 (which, I think, makes it the winner for most iterations of any paper I have ever submitted). Each submission sparked long and informative reviews and led to major revisions (in one case, even a new study), except for the last one, which was accepted.

I am telling this story for two reasons. First, I want to explore what drove me to lead this effort despite the difficulty of succeeding. Second, I want to explore what I learned from the process that might help others publishing difficult papers.

The idea for this paper came from the frustration I felt with the near-term, often narrow view that so much research seems to take. The world is changing at a very rapid pace (consider the changes that have occurred since 1913, one year before the First World War, or even since 1963, 50 years ago). Those changes are not just computational, but social, political, economic, and environmental. There is no evidence that the pace is slowing, and it seems to me that research meant to be relevant in 10, 15, or 20 years ought to take the fact of change into account. This, with the help of numerous students and colleagues, led me to investigate how, exactly, one might go about considering all of those factors (and how they are changing) when doing research. The answer my co-authors and I found (perhaps one of many) was the established, critical-theoretic field of Futures Studies. Aside from the specific methods developed by that field, perhaps the most useful insight it provides is that the future is not one thing that we must divine (and of course cannot, with any accuracy, truly know). Rather, it is necessary to forecast multiple potential futures and to understand that they are simply an extension of human understanding of the present and of present trends. These forecasts then become a way of reflecting critically on what is (and what is known) and, at the same time, a way of choosing where to aim as we move forward.

[Figure: HCI research combines deductive reasoning with inductive reasoning (left side). Futures Studies methods (right side) can contribute to enhanced abductive reasoning.]

That was perhaps the most surprising insight of this work for me: for the first time, I had a set of techniques that allowed me to tackle the ethical issues inherent in science. I believe that scientists and inventors have a duty to think through the impact (negative or positive) that their discoveries may have, but I had never before had a satisfactory answer about how to do so. By forecasting and critiquing potential futures, I believe we can gain information that can guide us in our application of technology.

While this represented a valuable personal journey (and an opportunity to learn from and collaborate with a wonderful anthropologist, Jen Rode, and designer, Haakon Faste), the work is foreign to me and far afield from what I normally spend my time on as a researcher. I might have given up on it entirely had it not been for my sabbatical, which I used as a chance to push my limits and try new things, finish off old work that was stagnating, and explore new ideas.

Even so, the paper seemed to raise more hackles than I wanted. The first CHI submission (and maybe the second) had real problems that reviewers rightly called out, but after that it seemed that we were refining a piece of work that would never quite fit its readers’ needs. Sending it to DIS gave me an inside view of how hard it is for me to understand design thinking: reviewers had a seemingly endless list of things we hadn’t thought of, theoretical stances we had not accounted for, and, of course, critiques of the study we had added (which was, truly, a limited piece of work, included more as a demonstration and to assuage expectations than as a primary contribution). Ubicomp reviewers questioned its relevance to their conference, but by then we had a more balanced set of work and a carefully honed presentation of the study.

And, finally, here we are at the third CHI submission, and the fifth submission overall. Interestingly, the reviewers this time around seemed to come from the field of Futures Studies itself. I know very little else about what happened, except this: the paper was accepted, with only minor revisions requested. I do wonder whether this is a paper that belongs at a conference, or whether we should have sent it to a journal long ago. (We didn’t, because I have the impression that very few people read journal articles these days, while a CHI presentation is likely to reach a reasonable audience.) I’ve given one presentation on the topic already, and the audience responded with interest and engagement, so I’m hopeful that we made the right choice.

In the end, I’m not sure what to take away from all this. Was it the luck of the draw (specifically, the field from which the reviewers were drawn)? Are paper acceptances like statistical tests: if you try enough times, are you likely to get something that looks significant? My guess is that a combination of luck (and many tries) was part of it, but that the gains we made in understanding the topic (and the audience) were real and just as important.