Tag Archives: research

CHI Trip Report: Visualization, Fabrication and Accessibility

I regularly take notes when I attend a conference, especially when attending for the first time in several years, when there is so much to absorb and enjoy! I have been much less consistent about sharing those notes publicly, but this year I’m particularly trying to do so for the sake of the many people who can’t attend in person or at all, as well as all of us who can only go to one session when so many are worth attending!

It would be a very long blog post if I summarized all of the great accessibility work, as that is an increasingly large subset of CHI work. I’ll focus on things relevant to areas that are of growing importance to me — visualization and fabrication, along with a brief section on some accessibility highlights. To learn more about the experience of attending remotely versus in person, see my other trip report.

Visualization Sessions

I attempted to attend a mix of paper sessions and other conversations around data and accessibility. In the SIG on “Data as Human Centered Design Material” I had a number of interesting conversations. I spoke with Alex Bowyer, who studies personal data use. One important use of data he mentioned is to “create evidence”; it is interesting to think about “data for activism” in this context. Another interesting conversation from that SIG centered on how to summarize complex data for retrospective interviews, and the accessibility concerns there. A related challenge is how to design apps that use data effectively in the field or in live use, where accessibility concerns again arise. Further, how do you properly generalize machine learning and visualizations? How do you improve start up, and scale well? How do you support customization of both visualizations and machine learning?

The session on accessible visualization was my next dip into visualization. The first talk was about audio data narratives for BLV users. Their study highlighted how audio narrative can surface features that might otherwise be hard to hear, through strategies like foreshadowing that break up the basic sonification. The second talk covered 1-DOF haptic techniques for graph understanding by BLV users. The authors explored the value of static 3D printed charts and moveable haptic cues for helping with graph understanding tasks such as comparison. An interesting technique, though sonification also held its own in their data. An interesting question to me is whether a dynamic chart with a piece of flexible material over it (to “smooth” the line) would be better than the slider in terms of the experience (similar to the 3D printed chart, but dynamic). Next, VoxLens presented a retroactive fix for a range of charts that are inaccessible to screen reader users. The authors highlighted the state of online visualization accessibility today (which is truly terrible) and then provided a real solution: a combination of sonification, high-level summaries, and NLP-based queries, all supported automatically given a simple JavaScript configuration describing the axes and a pointer to the chart object. It would be interesting to see them take advantage of ideas from a related paper on proactive visualization exploration support agents. The next presentation focused on improving color patterns to address color vision deficiency. The final paper in the session focused on infographic accessibility.
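As a toy illustration of the kind of mapping these sonification tools rely on (a hypothetical sketch of the general technique, not VoxLens’s or the audio-narrative paper’s actual implementation), here is the core idea of turning data values into pitch:

```python
def value_to_pitch(value, lo, hi, f_min=220.0, f_max=880.0):
    """Map a data value linearly onto a frequency range (Hz).

    This is the basic mapping behind many chart sonifications:
    higher data values produce higher pitches, so a rising trend
    is heard as a rising tone.
    """
    if hi == lo:  # flat data: use the middle of the pitch range
        return (f_min + f_max) / 2
    t = (value - lo) / (hi - lo)  # normalize to [0, 1]
    return f_min + t * (f_max - f_min)

data = [3, 7, 5, 10]
lo, hi = min(data), max(data)
pitches = [value_to_pitch(v, lo, hi) for v in data]
# the lowest value maps to 220 Hz, the highest to 880 Hz
```

Strategies like foreshadowing then layer structure on top of this basic scan, for example by playing a compressed preview of the series before the full sonification.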

Some additional papers also looked at accessibility. Joyner et al. looked at visualization accessibility in the wild. They analyzed visualizations and surveyed and interviewed practitioners, finding that the vast majority were accessibility novices; 30-40% did not think accessibility was their job, and 71% could think of reasons to eliminate accessibility features even though they acknowledged accessibility was important. They also highlight some difficulties in creating accessible visualizations, such as uncertainty about what to do and how to do it (for example, how to deal with filters), as well as a lack of organizational support and a lack of tool support. “ComputableViz” supports composition of visualizations, such as union, difference, and intersection. The authors discuss the potential for this approach to make a visualization more cognitively accessible. The intermediate representation used in this work is a relational database derived from the Vega-Lite specification; I think this has great potential for other accessibility applications, including better description, change monitoring, end-user authoring of sonifications, and more. Finally, MyMove is a new method for collecting activity data from older adults.
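To make that intermediate-representation idea concrete, here is a hypothetical sketch (my own illustration, not the ComputableViz authors’ code) of flattening a minimal Vega-Lite-style spec into relational rows that can then be queried, composed, or described:

```python
def spec_to_rows(spec):
    """Flatten a minimal Vega-Lite-style spec into relational rows.

    Each row pairs a data record with the visual encoding applied to it,
    making the chart's content queryable (e.g., for description, change
    monitoring, or sonification) rather than locked inside rendering code.
    """
    x_field = spec["encoding"]["x"]["field"]
    y_field = spec["encoding"]["y"]["field"]
    rows = []
    for record in spec["data"]["values"]:
        rows.append({
            "mark": spec["mark"],
            "x": record[x_field],
            "y": record[y_field],
        })
    return rows

spec = {
    "mark": "bar",
    "data": {"values": [{"month": "Jan", "sales": 4},
                        {"month": "Feb", "sales": 7}]},
    "encoding": {"x": {"field": "month"}, "y": {"field": "sales"}},
}
rows = spec_to_rows(spec)
# rows[0] == {"mark": "bar", "x": "Jan", "y": 4}
```

Once the chart lives in a table like this, set operations (union, difference, intersection) and textual description both become straightforward database queries, which is what makes the representation so promising for accessibility.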

Two studies made use of simulation. I call them out because of their thoughtfulness in doing so — simulation can have negative impacts and is usually not an appropriate substitute for working with people with disabilities. One study modified visualizations to simulate color deficiency on published visualizations and then crowdsourced large scale data about their accessibility. I think this is a reasonable application of that technique for two reasons: (1) the benefits of data at this scale are high and (2) the specific disability and task structure are unlikely to create bias either in the study data or in the participants (i.e. negatively influence their opinion of disability). Another good example of a study which used hearing people instead of DHH people was ProtoSound. This allowed them to collect data about the accuracy of non-speech sound recognition by their system. However they made use of DHH input throughout the design process.

I also want to highlight research that I thought had interesting applications here, though these were not accessibility papers: “Data Every Day” was interesting because of its emphasis on personalized data tracking, including customization, data entry, and authorship, all things that are under-explored in accessible visualization research. “Cicero” allows transformations of visualizations to be specified declaratively, thus making visualization changes computationally available, which creates many interesting opportunities. “CrossData” is an NLP interface to a data table which provides a fascinating model for authorship and exploration of data. Although the authors didn’t mention this, I think this could be a great way to author alt text for charts. “Diff In The Loop” highlights the ways that data changes during data science programming tasks as code changes. The authors explored a variety of ways to represent this (all visual), but the work highlights why change understanding is so important. It also raises issues such as what time scale to show changes over, which would be relevant to any responsive visualization interaction task as well. Fan et al.’s work (VADER lab) on addressing deception made me wonder whether accessible charts (and which chart accessibility techniques) can perform well on visual literacy tests. Cambo & Gergle’s paper on model positionality and computational reflexivity has immediate implications for disability, and particularly highlights the importance not only of whether disability data is even collected but also of things like who annotates such data. Finally, Metaphorical Visualizations translates graph data into arbitrary metaphorical spaces. Although the authors focus on visual metaphors, this could be valuable for audio as well.
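As a sketch of why declaratively specified transformations are so powerful (a hypothetical illustration, not Cicero’s actual rule syntax), consider an edit rule expressed as plain data and applied to a chart spec:

```python
def apply_rule(spec, rule):
    """Apply a declarative edit rule (a path plus a new value) to a spec.

    Because the rule is data rather than code, it can be generated,
    inspected, or rewritten by other tools; e.g., an accessibility layer
    could enlarge fonts or swap color encodings without touching any
    rendering code.
    """
    target = spec
    for key in rule["path"][:-1]:
        target = target.setdefault(key, {})
    target[rule["path"][-1]] = rule["value"]
    return spec

chart = {"mark": "line", "config": {"font": {"size": 10}}}
rule = {"action": "set", "path": ["config", "font", "size"], "value": 18}
apply_rule(chart, rule)
# chart["config"]["font"]["size"] is now 18; the rest of the spec is untouched
```

This computational availability of changes is exactly what opens the accessibility opportunities mentioned above, such as programmatically generating a high-contrast or sonified variant of an existing chart.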

There were several tangible visualizations, again relevant to but not targeting accessibility. One paper presented shape-changing displays for sound zones; STRAIDE used interactive objects mounted on strings that could be actuated to move vertically over a large range of heights (apparently 4ft or so) to support rapid prototyping of moveable physical displays. Sauvé et al. studied how people physicalize and label data, exploring the relation between the type and location of labels, which may have implications for intuitive design of nonvisual physical charts. Making Data Tangible is an important survey paper that explores cross-disciplinary considerations in data physicalization.

Finally, a look at applications, where we see a mix including some accessibility-focused papers. CollabAlly can represent collaboration activities within a document, and provided an interesting synergy with an earlier talk the same week, Co11ab, which provided realtime audio signals about collaborators. A dive into what makes tables inaccessible suggests concrete, viable solutions (and check out some other interesting work by this team). Graphiti demonstrated a thoughtfully designed approach to interacting with graph data extracted dynamically with computer vision, a complex and intricate exploratory visualization task. Xiaojun Ma and her students created a system for communicating about data analysis by automatically producing presentation slides from a Jupyter notebook; working jointly and iteratively with the automation, starting from a source notebook, could make it easier to make those slides accessible. ImageExplorer vocalizes auto-generated captions of an image to help BLV people identify errors. I wonder what one might need to change in this approach for auto-generated (or just badly written) chart captions. Two important learnings from ImageExplorer were the value of the hierarchical, spatial navigation it supported, and the need to support both text and spatial navigation. Cocomix focused on making comics, not visualizations, accessible to BLV people, but I think the many innovations in this paper have lots of applications for change description and dashboard description. Saha et al. discuss visualizing urban accessibility, exploring how different types of secondary information and visualization needs vary for different stakeholders.

Fabrication Work

The vast majority of fabrication work this year did not directly engage with accessibility. That said, I saw a lot of potential for future work in this space.

I first attended the workshop, Reimagining Systems for Learning Hands-on Creative and Maker Skills. We discussed a wide variety of topics around who makes, how people make, and what we might want to contribute to and learn about those experiences as a community. We talked about who is included in making as well. Some provocative discussions happened around whether digital tools or physical tools are a better starting place for learning; how we engage with diverse communities without “colonizing” those spaces; whether we should “claim” diverse crafts as making or not; the ways in which new technologies can inhibit learning (everyone makes a keychain tag); and the need to educate not only students but educators. Another interesting discussion happened around how time plays out in making and adds challenges when things take time to print, grow, etc. Other topics included the digital divide in access to maker spaces (i.e. what may be available in rural communities), and tutorial accessibility and whether/how to improve it. Some of the interesting people I learned more about include: Jun Gong (formerly at MIT, now at Apple; does everything from interaction technique design to metamaterials); Erin Higgins (introducing youth in Baltimore City to making); and Fraser Anderson (whose work includes instrumenting maker spaces to better understand maker activities). I also want to highlight some students I already knew about here at UW 🙂 including my own student Aashaka Desai (working on machine embroidered tactile graphics) and Daniel Campos Zamora, who is interested in the intersection of fabrication and access (ask him about his mobile 3D printer :).

The first fabrication session I made it to was a panel that placed fabrication up against VR/AR/haptics. Hrvoje Benko and Michael Nebeling were in “camp AR/VR” and listed challenges such as interaction devices, adaptive UIs, and so on. Valkyrie Savage, in “camp fabrication,” talked of interaction and sensing (as well as the sensory systems); Huaishu Peng & Patrick Baudisch talked about fabrication + interaction, small artifacts, and AR/VR. The wide-ranging discussion mentioned machine reliability; accessibility and 3D printing as a public utility; using AR/VR to bridge the gap between a physical object that almost fills a need and what is actually needed (repurposing); and business models (i.e. Amazon vs Fab). I will focus on what I see as accessibility challenges in the arguments made. Can we have anything delivered to our door in 2 hrs? I would claim that our process today works for mass manufacturing but is not customizable to people with disabilities. Next, a potential flaw of AR/VR is its dependence on electricity: if we are to make meaningful, useful objects, physicality is essential for people who either cannot count on having power, or who depend upon a solution being available all the time. Another concern is bridging from accessible consumption to accessible, inclusive authoring tools. As Valkyrie Savage argued, physicality addresses multiple types of inclusion better than VR/AR. Lastly, materials were discussed briefly, though not from an accessibility perspective (though material properties are critical to successful access support in my mind).

Moving on to individual papers, I was excited to see multiple people talking about the importance of programming language support for parametric modeling and physical computing (an interest of my own). For example, Tom Veuskens’ ideas about combining WYSIWYG and code for parametric modeling look highly intuitive and fun to use. Valkyrie Savage talked about the importance of this for novice learners and laid out the idea of a Jupyter-notebook-style approach to physical computing design. Veuskens et al. also provide a beautiful summary of the variety of code-based approaches to supporting re-use in the literature. Another interesting and, I think, related set of conversations happened around how you understand making activities. An example is Colin Yeung’s work on tabletop time machines. The concept of being able to go back and forward in time, ideally with linkages to physical actions, project state, and digital file state, is really interesting from an accessibility perspective. Relatedly, Rasch, Kosch and Feger discuss how AR can support learning of physical computing. Finally, Clara Rigaud discussed the potential harms of automatic documentation.

In the domain of modeling tools, accessibility did not come up much. While many papers mentioned custom modeling tools, none of the talks mentioned accessibility for authors/modelers, nor did they report on studies with disabled people. The most obvious challenge here is BLV accessibility; but to the extent that generative design is simplifying the modeling experience and is computationally controllable, I think there are some very easy ways to improve on this situation. I was actually most intrigued by a non-fabrication paper’s relevance to both fabrication and accessibility: HAExplorer, which the fabrication person in me sees as an interesting model for understanding the mechanics of motion in any mechanical system, not just biomechanics. In addition, visualizations of mechanics raise the possibility of accessible visualizations of mechanics.

The haptics session had multiple potential accessibility applications. FlexHaptics provides a general method and design tool for designing haptic effects using fabrication, including in interactive controllers, while ShapeHaptics supports design of passive haptics using a combination of springs and sliders which together can control the force profile of a 1D slider. An interesting accessibility side effect of this approach is the ability to modify existing objects by adding both physical and audio effects at key moments, such as when pliers close. In some configurations, the components can also be easily swapped, allowing for manual selection of experiences based on context. ReCompFig supports dynamic changes to kinematics using a combination of rods and cables (which can be loosened or tightened by a motor). They create effects like a joystick, a pixelated display with varying stiffness, and a range of haptic effects such as holding a glass of liquid or a stiff bar. 3D printed soft sensors use a flexible lattice structure to support resistive soft sensor design.

Interesting materials included Embr, which combines hand embroidery with thermochromic ink; the authors characterize 12 of the most popular embroidery stitches for this purpose and support simulation and design. The results are beautiful to look at, and I wonder if there is potential for creating shared multi-ability experiences, such as combining capacitive touch-based audio output with simultaneous visual feedback. I loved multiple aspects of FabricINK, both its sustainable approach (recycling e-ink) and the way it opens the door to fine grained displays on many surfaces. Luo et al. present pneumatic actuators combined with machine knitting. ElectroPop uses static charge combined with cut mylar sheets to create 3D shapes (it would be interesting to know how well these would hold up to exploratory touch). SlabForge supports design (for manual production) of slab-based ceramics. Finally, the Logic Bonbon provides a metamaterial edible interaction platform. It was nice to see the range of materials+X being presented, and lots of interesting potential applications of these low-level capabilities.

Another grouping of papers explored capabilities that could increase the accessibility of objects with additional information. M. Doga Dogan et al created InfraredTags, which allow embedding of infrared QR codes in 3D printed objects. Although not a focus of the paper, this has accessibility value (for example an audio description could be embedded). This does require an IR camera. Pourjafarian et al present Print-A-Sketch, which supports accurate printing on arbitrary surfaces, including reproductions of small icons as well as scanning and line drawing. Although not a focus of the work, it would be very interesting to think about how this could be used to create tactile and interactive graphics, or to annotate existing documents and objects for increased accessibility. It would be interesting to think about inks with other kinds of properties than conductive (such as raised inks) as well.

Although my theme is accessibility, disability justice is an intersectional issue that must consider other aspects of design. In that vein, I want to highlight two lovely examples of sustainable design: ReClaym, which creates clay objects from personal composting, and Light In Light Out, which harvests and manipulates natural light computationally.

There were a few papers that focused on accessibility, all in the applications space. Roman combines object augmentation with a robotic manipulator to support a wide range of manipulation tasks not easily performed by robotic arms. The robot is augmented with a motorized magnetic gripper, while the target object is augmented with a device that can translate rotary motion into an appropriate motion and force profile. Mobiot is a system that supports the creation of custom print and assembly instructions for IoT mobile actuators by leveraging recognition of objects in a known model database combined with demonstration. My student Megan Hofmann presented Maptimizer, a system that creates optimized 3D printed maps, which was very synergistic with 3D printed street crossings for mobility training. TronicBoards support accessible electronic design. FoolProofJoints improves ease of assembly, which, although not tested with disabled participants, seems to me to have direct accessibility benefits. One final application was adaptations for supporting accessible drone piloting, involving software, physical control, and posture adaptations. The authors supported multiple disabilities (and multiply disabled people), and open sourced their toolkit.

Other Disability/Accessibility Highlights

Dreaming Disability Justice Workshop: This workshop discussed academic inclusion and the importance of research at the intersection of race and disability that is strongly influenced by both perspectives in conjunction (as opposed to a “this and that” model), as well as not erasing that history (e.g. see these examples of Black, disabled, powerful women). Some of the interesting people I learned more about include: Cella Sum (politics of care in disability spaces); Frank Elavsky (accessible visualization); Harsha Bala (an anthropologist); Erika Sam (Design Researcher at Microsoft); and Tamana Motahar (a PhD student studying personal informatics and social computing for empowering marginalized populations worldwide).

On Tuesday morning I attended Accessibility and Aging. The first talk explored the experience of older adults sharing the work of financial management. Joint accounts and power of attorney are both problematic mechanisms; banking assistance needs fall along a spectrum that these are too blunt for. These concerns seem common to many domains (e.g. health portals, social media accounts, etc.). Next, a paper I collaborated on, about accounting for access needs in research design, was presented by Kelly Mack. How can we empower researchers to anticipate the broad range of disabilities they might encounter? Anticipate (make things accessible from the beginning) and Adjust as needed, then end by Reflecting on things like power dynamics and access conflicts. Finally, Christina Harrington’s award winning paper addressed communication breakdowns in voice-based health information seeking by Black older adults. Her paper addressed critical concerns such as what dialects are supported by voice systems.

In reviewing other accessibility-related talks, I want to highlight the importance of end-user and multi-stakeholder authoring of interactions and experiences. Dai et al. discussed relational maintenance in AAC conversation between users and caregivers. Seita et al. show how DHH and speaking people can effectively collaborate when using a visual collaboration system like Miro. Ang et al. discuss how video conferencing systems can better support the combination of speaking, signing, and interpreting in mixed ASL/spoken-language video conferencing calls. One concern (the difficulty of dynamically adjusting communication) reminded me of challenges I’ve experienced with online conferencing as well. Another mixed-stakeholder case is Waycott et al.’s study of staff’s role in supporting positive VR experiences for older adults. A unique case of multi-stakeholder interaction is Choi’s analysis of disabled video bloggers’ interaction with content curation algorithms and how algorithms could better support identity negotiation and audience selection.

I also want to highlight some interaction design work. I loved the WoZ study of ASL control of voice assistants by Glasser et al. It got right to the heart of an important unsolved problem, and in the process shed light on challenges we must solve for ASL as an input language for all sorts of automated contexts. Zhao et al. described user-defined above-shoulder gesturing for motor-impaired interaction design, highlighting the importance of end user control over the interaction language for things like social acceptability and comfort. Nomon is an innovative approach to single switch input; it might be amenable to smooth pursuit eye tracking as well.

To summarize: an inspiring event! I was glad to be able to review it so thoroughly, thanks to all the online talks, which I watched asynchronously.

FABulous CHI 2016

At the CHI 2016 conference this week, there was a slew of presentations on the topic of fabrication. First of course I have to highlight our own Megan Hofmann, who presented our paper, Helping Hands, a study of participatory design of assistive technology that highlights the role of rapid prototyping techniques and 3D printing. In addition, Andrew Spielberg (MIT & Disney intern) presented RapID (a best paper), a joint project with Disney Research Pittsburgh which explores the use of RFID tags as a platform for rapidly prototyping interactive physical devices, leveraging probabilistic modeling to support rapid recognition of tag coverage and motion.

I was really excited by the breadth and depth of the interest at CHI in fabrication, which went far beyond these two papers. Although I only attended by robot (perhaps a topic for another blog post), attending got me to comb through the proceedings looking for things to go to — and there were far more than I could possibly find time for! Several papers looked qualitatively at the experiences and abilities of novices, from Hudson’s paper on newcomers to 3D printing to Booth’s paper on problems end users face constructing working circuits (video; couldn’t find a pdf) to Bennett’s study of e-NABLE hand users and identity.

There were a number of innovative technology papers, including Ramaker’s Rube Goldbergesque RetroFab, Groeger’s HotFlex (which supports post-processing at relatively low temperatures), and Peng’s incremental printing while modifying. These papers fill two full sessions (I have only listed about half). Other interesting and relevant presentations (not from this group) included a slew of fabrication papers, a study of end user programming of physical interfaces, and studies of assistive technology use including the e-NABLE community.

Two final papers I have to call out because they are so beautiful: Kazi’s Chronofab (look at the video) and a study in patience, Efrat’s Hybrid Bricolage for designing hand-smocked artifacts (video, again can’t find the paper online).

AMIA trip report

I have been curious about AMIA for some time, and was even invited to be part of a panel submission to it this year. So when I realized it was only a few hours’ drive away, I took advantage of the closeness to plan a last minute trip. It has been an interesting experience and well worth the attendance. Although it is a very large conference, the people attending seem to be friendly and open, and I was welcomed in particular by two wonderful women I met, Bonnie Kaplan and Patti Brennan. The sessions are an intriguing combination of computer science, medicine, and clinical practice (with the quality of/knowledge about each varying based on the expertise/presence of appropriate collaborators). I attended sessions on Monday, Tuesday, and Wednesday. The theme that stood out to me more than any other across my experiences here was the great diversity of stakeholders that are (and especially that should be) considered in the design of effective health IT. Some very interesting observations came out of the large scale analysis of clinical data that I saw discussed on Monday. For example, there is a lot of attention being paid to data privacy (although one person cautioned that this is commonly misunderstood: “uniqueness is not synonymous with being identified”) and particularly how to scrub data so that it can “get past IRB” for further analysis. One interesting approach taken by N. Shah (Learning Practice-based Evidence from Unstructured Clinical Notes; Session S22) is to extract the terms (features) and use those instead of the data itself. Of course a limitation here is that you have to think of features ahead of time.

Another interesting topic that came up repeatedly is the importance of defining the timeline of the data as much as the timeline of the person. Questions that need to be answered include: what is time zero in the data being analyzed (and what might be missing as a result); what is the exit cause, or end moment, for each person in the database (and who is being left out / what is the bias as a result?); and the observation that in general “sick people make more data.” To this I would add that if you attempt to address these biases by collecting information, there is potentially selection bias in the subjects and an impact from the burden of sensing on the data producer. Connected to this is the ongoing question of the benefits and problems of a single unique identifier as a way of connecting health information.

A last observation from Monday is the question of what public data sets are out there that we should make ourselves aware of. For example, MIT has a big data medical initiative (also see http://groups.csail.mit.edu/medg/) which may have a clinical notes data set associated with it (I am still looking for this).

On Tuesday I started the day with S44: Year in Review (D. Masys). I missed the very start of it, but came in when he was talking about studies of IT’s use in improving clinical practice, such as a study showing that reminding clinicians to do their work better improves patient outcomes (“physician alerts” “embedded in EHR systems” etc.), or maybe just improves process, with the observation that we should measure both. Interestingly to me, the question of also improving process and outcomes by organizing the work of caregivers (and reminding them of things) was missing from this discussion.

Dr. Masys then moved on to explore unexpected consequences of IT that had been published: adding virtual reality caused “surgeon blindness” to some information; missed lab results in another study; and alert fatigue in another (drug-drug interaction alerts suffer from 90% overrides…). Given the difficulty of publishing negative results, it would be interesting to explore this particular set of work for tips. It was also interesting to hear his critique of questionable results, particularly the repeated mentions of Hawthorne effects, because so many interventions are compared to care as usual (rather than an equal-intensity control condition). Another way of phrasing this is to ask at what cost the intervention works (and/or how we “adjust for the intensity of the intervention”).

Another category Dr. Masys explored of interest to me was health applications of mobile electronics. Briefly, one study looked at chronic widespread pain … reduced ‘catastrophizing’; four looked at text messaging: text messaging vs telephone appointment reminders; the effectiveness of a short message reminder in increasing followup compliance; the text4baby mobile health program; and the Cameroon mobile phone SMS (CAMPS) trial (PLoS One).

Dr. Masys then moved on to the practice of clinical informatics and bioinformatics (out of “the world of RCTs”). This focused on new methods that might be interesting. I particularly want to follow up on one of the few studies that looked at multiple stakeholders, which had the goal of reducing unintended negative consequences; the use of registries to do low cost, very large trials; the use of a private key derived from DNA data to encrypt that same data; the creation of a 2D barcode summarizing patient genetic variants that affect the dose or choice of a drug; and a demonstration that diagnostic accuracy was as good on a tiny mobile phone screen as on a big screen.

The last category reviewed by Dr. Masys was editors’ choice publications from JAMIA and the Journal of Biomedical Informatics, plus the Diana Forsyth award. Almost all of these seem worth reviewing in more depth. Particularly notable were the JAMIA articles: scientific research in the age of omics (which explores the need to increase the accountability of scientists for the quality of their research); web-scale pharmacovigilance (which used public search engine logs to detect novel drug-drug interactions); and CPOEs decrease medication errors (a meta study that basically concluded, without realizing it, that CPOEs would work better if we had only applied basic principles from contextual inquiry!). The JBI articles included Rothman, who developed a continuous measure of patient condition that predicted hospital re-admission and mortality independent of disease (how does this compare with patient reported health status?); Weiskopf, who documented the relative incompleteness of EHR data across the charts he studied; Friedman’s overview of NLP state of the art and prospects for significant progress (a workshop summary); Post’s article on tools for analytics of EHR data; and Valizadegan’s article on learning classification models from multiple experts who may disagree (of interest given my interest in multiple viewpoints).

Next, I attended a panel about Diana Forsyth (obit; some pubs; edited works), an ethnographer who had a big impact on the field of medical informatics (and others as well). She has passed away, and perhaps only a small number of people read her work, but it had an enormous influence on those who encountered her writing on methods, research topics, and so on. One panelist compared her to Arthur Kleinman (who helped to make the distinction between the abstraction of disease and the human experience of illness; between treatment and healing). Some of the most interesting parts of the discussion focused on how the field is changing over time, prompted by a question of Katie Siek’s — for example, first getting data into the computer, then computers into the hospitals, now making them work for people correctly, and what comes after that? Another interesting comment was about the authority of the physician being in part based on their ability to diagnose (which conveys all sorts of societal benefits). This points to the role of the physician (when a diagnosis doesn’t exist, human creativity is especially needed) versus IT (which can handle more well defined situations). However, with respect to healing, maybe the power of physicians is in listening as much as diagnosing (also something computers can’t do, right?). Other topics that came up included the importance of the patient voice and patient empowerment/participation.

After lunch with a friend from high school I attended S66 (User centered design for patients and clinicians). In line with the hopes of the Forsyth panel I saw a mixture of techniques here, including qualitative analysis. Unfortunately, what I did not see was technology innovation (something that may point to a difference in vocabulary regarding what “user centered design” means). However the qualitative methods seemed strong. One interesting talk explored the issues in information transfer from the hospital to home health care nurses. A nice example of some of the breakdowns that occur between stakeholders in the caregiver community. More and more, however, I find myself wondering why so much of the work here only focuses on caregivers with degrees of some sort in medicine (as opposed to the full ecology of caregivers). I was pleased to see low-income settings represented, exploring the potential of mobile technology to help with reminders to attend appointments and other reminders; and a series of 3 studies on health routines culminating in a mobile snack application (published at PervasiveHealth) by Katie Siek & collaborators. One nice aspect of this project was that the same application had differing interfaces for different stakeholders (e.g. teenagers vs parents).

I started to attend the crowdsourcing session after the break, but it did not appear to have much in terms of actual crowdsourcing. An open area for health informatics? Instead I went on to S71, family health history & health literacy. The most interesting paper in the session, to me, looked at health literacy in low SES communities (by many co-authors including Suzanne Bakken). In particular, they have data from 4500 households which they would like to visualize back to the participants to support increased health literacy. Their exploration of visualization options was very detailed and user centered and resulted in the website GetHealthyHeights.org (which doesn’t seem to be alive at the moment). However I have concerns about the very general set of goals with respect to what they hope people will get out of the visualizations. It would be interesting to explore whether there’s a higher level narrative that can be provided to help with this. Similarly, does it make sense to present “typical” cases rather than specific data?

On Wednesday I began in S86: late breaking abstracts on machine learning in relation to EMRs. This session had some interesting exploration of tools as well as some patient focused work. One study looked at prediction of mobility improvements for older adults receiving home health care, by subgrouping 270k patients and looking at factors associated with the subgroups. Steps included de-identification; standardizing data; accounting for confounding factors; dividing into subgroups; and then using data mining to look at factors that affected individual and group scores, via clustering and pattern mining. An interesting take on what is part of the data “pipeline” that goes beyond some of the things I’ve been thinking are needed for lowering barriers to data science. Another looked at decision support for pre-operative medication management (an interesting problem when I consider some of the difficulties faced by the many doctors coordinating my mother-in-law’s care recently). This work was heuristic in nature (a surprising amount of work here still focuses on heuristics over more statistically based approaches). From this work I noticed another trend, however: the need to connect many different types of information together (such as published work on drugs, clinical notes, and patient history).
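The subgroup-then-analyze step described above can be sketched in a few lines. Everything below (the field names, the values, the choice of "mobility gain" as the score) is invented for illustration and not taken from the study:

```python
from collections import defaultdict
from statistics import mean

# Toy patient records; field names and values are invented.
patients = [
    {"age_band": "75+",   "lives_alone": True,  "mobility_gain": 0.2},
    {"age_band": "75+",   "lives_alone": True,  "mobility_gain": 0.3},
    {"age_band": "75+",   "lives_alone": False, "mobility_gain": 0.6},
    {"age_band": "65-74", "lives_alone": False, "mobility_gain": 0.8},
]

def subgroup(records, keys):
    """Partition records by a tuple of categorical attributes."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in keys)].append(r)
    return groups

# Compare the score of interest across the resulting subgroups.
groups = subgroup(patients, ("age_band", "lives_alone"))
summary = {k: mean(r["mobility_gain"] for r in v) for k, v in groups.items()}
```

The real pipeline would put de-identification, standardization, and confounder adjustment before this step, and pattern mining after it, but the partition-and-compare core is the same.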

The last session I attended was S92, one of the few sessions focused specifically on patients (and not very well attended…). The first talk was about creating materials for patient consumption, supporting access to EHRs, 2-way secure messaging, and customized healthcare recommendations. They focused especially on summarizing medication information concisely. The second was about a national network for comparative effectiveness. Maybe this is the crowdsourcing of health IT? This was focus group based research (a surprisingly popular method across AMIA given how little support there is for this method in HCI) exploring user attitudes about data sharing. Interesting that the work presented here ignored a long history of research in trust in computing, e.g. from Cliff Nass, the e-commerce literature, and so on. However, the data was nicely nuanced in exploring a variety of ethical issues and acknowledging the relative sophistication of the group members with respect to these issues. The issues raised are complex — who benefits, who owns the data, how would the bureaucracy function, how to manage authorization given that studies aren’t always known yet (and opt-in vs opt-out approaches). I wonder how a market for research would function (think Kickstarter, but I donate my data instead of money…). The next paper looked at what predicted people thinking EHRs are important both for themselves and their providers, and through a disparities lens.

The closing plenary was given by Mary Czerwinski (pubs) from Microsoft Research. I always enjoy her talks and this was no exception. Her focus was on her work with affective systems, relating to stress management. Her presentation included a system for giving clinicians feedback about their empathy in consults with patients, as well as a system for giving parents reminders when they were too stressed to remember the key interactions that could help their ADHD kids. Interestingly, in the parent case, (1) the training itself is helpful and (2) the timing is really important — you need to predict that a stress situation is building in order to intervene successfully (I would love to use this at home :). She ended by talking about a project submitted to CHI 2014 that used machine learning to make stress management suggestions based on things people already do (e.g. visit your favorite social network; take a walk; etc.). One of the most interesting questions was whether emotional state could predict mistake making in coding data (or other tasks).

Would I go back a second time? Maybe … It is a potentially valuable setting for networking with physicians; the technical work is deep enough to be of interest (though the data sets are not as broad as I’d like to see). It’s a field that seems willing to accept HCI and to grow and change over time. And the people are really great. The publishing model is problematic (high acceptance rates; archival) and I think at times had an impact on the phase of the work that was presented. What was missing from this conference? Crowdsourcing, quantified self research, patient websites like PatientsLikeMe, patient produced data (such as support group posts), significant interactive technology innovation outside the hospital silo. In the end, the trip was definitely worthwhile.

Some observations about people who might be interesting to other HCI professionals interested in healthcare. For example, I noticed that MITRE had a big presence here, perhaps because of their recent federally funded research center. In no particular order here are some people I spoke with and/or heard about while at AMIA 2013:

Patti Brennan (some pubs) is the person who introduced me to or told me about many of the people below, and generally welcomed me to AMIA. She studies health care in the home and takes a multi-stakeholder perspective on this. A breath of fresh air in a conference that has been very focused on things that happen inside the physician/hospital silo.

Bonnie Kaplan is at the Center for Medical Informatics in the Yale School of Medicine. Her research focuses on “Ethical, legal, social, and organizational issues involving information technologies in health care, including electronic health and medical records, privacy, and changing roles of patients and clinicians.”

Mike Sanders from www.seekersolutions.com, which is providing support for shared information between nurses, caregivers & patients, based in B.C. (Canada).

Amy Franklin from UT Health Sciences Center, has done qualitative work exploring unplanned decision making using ethnographic methods. Her focus seems to be primarily on caregivers, though the concepts might well transfer to patients.

Dave Kaufman is a cognitive scientist at ASU who studies, among other things, HCI and health, including “conceptual understanding of biomedical information and decision making by lay people.” His studies of mental models and miscommunication in the context of patient handoff seem particularly relevant to the question of how the multi-stakeholder system involved in dealing with illness functions.

Paul Tang (Palo Alto Medical Foundation) is a national leader in the area of electronic health records and patient-facing access to healthcare information.

Danny Sands (bio; some pubs) – doctor; entrepreneur; founded the Society for Participatory Medicine; focuses on doctor-patient communication and related tools; studies ways to improve e.g. patient-doctor email communication.

Dave deBronkart (e-patient Dave, whose primary physician was Dr. Sands during his major encounter with the healthcare system), best summarized in his TED talk “Let Patients Help” (here’s his blog post on AMIA 2013).

George Demiris from University of Washington studies “design and evaluation of home based technologies for older adults and patients with chronic conditions and disabilities, smart homes and ambient assisted living applications and the use of telehealth in home care and hospice.” His projects seem focused on elders both healthy and sick. One innovative project explored the use of Skype to bring homebound patients into discussions by the hospice team.
Mary Goldstein, who worked on temporal visualization of patient data (KNAVE-II) and generally “studies innovative methods of implementing evidence-based clinical practice guidelines for quality improvement,” including decision support.

Mark Musen studies “mechanisms by which computers can assist in the development of large, electronic biomedical knowledge bases. Emphasis is placed on new methods for the automated generation of computer-based tools that end-users can use to enter knowledge of specific biomedical content.” He created the Protégé knowledge base framework and ontology editing system.

Carol Friedman does “both basic and applied research in the area of natural language processing, specializing in the medical domain,” including creating the MedLEE system (“a general natural language extraction and encoding system for the clinical domain”). Her overview of NLP paper was mentioned in the year in review above.

Suzanne Bakken (pubs) has been doing very interesting work in low income communities around Columbia, in which she is particularly interested in communicating the data back to the data producers rather than just focusing on its use for data consumers.

Henry Feldman (pubs), who was an IT professional prior to becoming a physician, has some very interesting thoughts on open charts, leading to the “Open Notes” project.

Bradley Malin (pubs) is a former CMU student focused on privacy who has moved into the health domain and is currently faculty at Vanderbilt. His work provides a welcome and necessary theoretical dive into exactly how private various approaches to de-identifying patient data are. For example, his 2010 JAMIA article showed that “More than 96% of 2800 patients’ records are shown to be uniquely identified by their diagnosis codes with respect to a population of 1.2 million patients.”
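The uniqueness computation behind a result like Malin's is easy to illustrate: count how many records share each exact set of diagnosis codes, and report the fraction that are one-of-a-kind. The records below are toy data, not drawn from his study:

```python
from collections import Counter

# Toy records: each record is the set of diagnosis codes on one chart.
# (The codes are invented; Malin's analysis used real populations.)
records = [
    frozenset({"E11", "I10"}),
    frozenset({"E11", "I10"}),   # two charts share this code set
    frozenset({"J45"}),
    frozenset({"C50", "E11"}),
]

counts = Counter(records)
unique_fraction = sum(counts[r] == 1 for r in records) / len(records)
# here 2 of the 4 records have a one-of-a-kind code set
```

At the scale of a 1.2-million-patient population, code sets are long enough that almost every record becomes its own equivalence class, which is what drives the 96% figure.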

Jina Huh (pubs) studies social media for health. One of her recent publications looked at health video loggers as a source of social support for patients. She shares an interest with me in integrating clinical perspectives into peer-produced data.
Katie Siek (pubs), who recently joined the faculty at Indiana, does a combination of HCI and health research mostly focusing on pervasive computing technologies. One presentation by her group at AMIA this year focused on a mobile snacking advice application that presented different views to different stakeholders.
Madhu Reddy (some pubs) trained at UC Irvine under Paul Dourish and Wanda Pratt and brings a qualitative perspective to AMIA (he was on the Diana Forsyth panel, for instance). He studies “collaboration and coordination in healthcare settings.”
Kathy Kim who spoke in the last session I attended about her investigations of patient views on a large data sharing network to support research, but also does work that is very patient centered (e.g. mobile platforms for youth).
Steve Downs, who works in decision support as well as policy around “how families and society value health outcomes in children.”
Chris Gibbons (some pubs) who focuses on health disparity (e.g. barriers to inclusion in clinical trials and the potential of eHealth systems).

Data Collection & Analytics Tools?

I have become fascinated recently with the question of the role that data has in supporting analysis, action, and reflection. Actually, it would be more accurate to say that I’ve become aware recently that this is an intrinsic driver in much of the work I do, and thus it has become something I want to reflect on more directly. In this post, I want to explore some of the tools others have already built that might support analytics, machine learning, and so on. If you know of something I’ve missed, feel free to share it in the comments! So, in no particular order:

  • Hazy provides a small handful of key primitives for data analysis. These include Victor, which “uses RDBMS to solve a large class of statistical data analysis problems (supervised machine learning using incremental gradient algorithms),” and WisCi (/ DeepDive, its successor), which is “an encyclopedia powered by machines, for the people.” RapidMiner is a similar tool that has been used by thousands. It is open source and supports data analysis and mining.
  • Protégé is “a suite of tools to construct domain models and knowledge-based applications with ontologies” including visualization and manipulation
  • NELL learns over time from the web. It has been running since 2010 and has “accumulated over 50 million candidate beliefs.”
  • Ohmage and Ushahidi are open source citizen sensing platforms (think citizen based data collection). Both support mobile and web based data entry. This stands in contrast to things like Mechanical Turk, which is a for-pay service; games and other dual-impact systems like Peekaboom (von Ahn et al.), which can label objects in an image using crowd labor; or systems like Kylin (Hoffmann et al.), which simultaneously accelerates community content creation and information extraction.
  • WEKA and LightSide support GUI based machine learning (WEKA requires some expertise and comes with a whole textbook, while LightSide is built on WEKA but simplifies aspects of it and specializes in mining textual data). For more text mining support, check out Coh-Metrix, which “calculates the coherence of texts on a wide range of measures. It replaces common readability formulas by applying the latest in computational linguistics and linking this to the latest research in psycholinguistics.” Similarly, LIWC supports linguistic analysis (not free) by providing a dictionary and a way to compare new text against that dictionary, measuring the presence of 70 language dimensions from negative emotions to causal words.
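The dictionary based approach behind a tool like LIWC is simple to sketch: count what fraction of a text's words fall into each hand-built category. The categories and word lists below are invented stand-ins (LIWC's real, licensed dictionary covers roughly 70 dimensions):

```python
import re

# Invented stand-in categories; real LIWC ships a curated dictionary.
CATEGORIES = {
    "negative_emotion": {"sad", "angry", "hurt", "worried"},
    "causal": {"because", "hence", "therefore", "since"},
}

def category_rates(text):
    """Fraction of words in `text` that fall into each category."""
    words = re.findall(r"[a-z']+", text.lower())
    return {cat: sum(w in vocab for w in words) / len(words)
            for cat, vocab in CATEGORIES.items()}

rates = category_rates("I was sad and worried because the test results were late")
# 2 of the 11 words are negative-emotion words, 1 is a causal word
```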

Deployed tools and products aside, there is also a bunch of research in this area, ranging from early work such as a CAPpella (Dey et al.) and Screen Crayons (Olsen et al.) to more recent systems: Gestalt (Patel et al.) “allows developers to implement a classification pipeline,” and Kietz et al. use an analysis of RapidMiner’s many data analysis traces to automatically predict optimal KDD-Workflows.

von Ahn, L., Liu, R., & Blum, M. (2006). Peekaboom: A game for locating objects in images. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2006). ACM.

Hoffmann, R., Amershi, S., Patel, K., Wu, F., Fogarty, J., & Weld, D. S. (2009, April). Amplifying community content creation with mixed initiative information extraction. In Proceedings of the 27th international conference on Human factors in computing systems (pp. 1849-1858). ACM.

Dey, A. K., Hamid, R., Beckmann, C., Li, I., & Hsu, D. (2004, April). a CAPpella: Programming by demonstration of context-aware applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 33-40). ACM.

Olsen Jr., D. R., Taufer, T., & Fails, J. A. (2004). ScreenCrayons: Annotating anything. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology. ACM.

Patel, K., Bancroft, N., Drucker, S. M., Fogarty, J., Ko, A. J., & Landay, J. A. (2010). Gestalt: Integrated support for implementation and analysis in machine learning. In Proceedings of UIST 2010 (pp. 37-46). ACM.

Kietz, J.-U., et al. (2012). Designing KDD-workflows via HTN-planning. doi:10.3233/978-1-61499-098-7-1011

Search and Rescue and Probability Theory

A man and a dog together belaying down a rock face
Canine Search and Rescue (photo from AMRG website)

I spent a fascinating evening with the Allegheny Mountain Rescue Group today. This is a well run organization that provides free help for search and rescue efforts in the Pittsburgh area and beyond. I was in attendance because my kids and I were looking for a way to give Gryffin (our new puppy) a job in life beyond “pet” and we love to work in the outdoors. Canine search and rescue sounded like a fascinating way to do this and we wanted to learn more. During the meeting, I discovered a team of well-organized, highly trained, passionate and committed individuals that has a healthy influx of new people interested in taking part and a strong core of experienced people who help to run things. The discussions of recent rescues were at times heart rending, and very inspiring.

Later in the evening during a rope training session I started asking questions and soon learned much more about how a search operates. I discovered that about a third of searches end in mystery. Of those for which the outcome is known, there is about an even split between finding people who are injured, fine, or have died. Searches often involve multiple organizations simultaneously, and it is actually preferable to create teams that mix people from different search organizations rather than having a team that always works together. Some searches may involve volunteers as well. A large search may have as many as 500 volunteers, and if the target of the search may still be alive, it may go day and night. Searches can last for days. And this is what led me to one of the most unexpected facts of the evening.

I asked: How do you know when a search is over? And the answer I got was that a combination of statistics and modeling is used to decide this in a fairly sophisticated fashion. A search is broken up into multiple segments, and a probability is associated with each segment (that the person who is lost is in a segment). When a segment is searched, the type of search (human only, canine, helicopter, etc.) and locations searched, along with a field report containing details that may not be available otherwise are used to update the probability that a person is in that segment (but was missed) or absent from that segment. Finally, these probabilities are combined using a spreadsheet(s?) to help support decision making about whether (and how) to proceed. According to the people I was speaking with, a lot of this work is done by hand because it is faster than entering data in and dealing with more sophisticated GIS systems (though typically a computer is available at the search’s base, which may be something like a trailer with a generator). GPS systems may be used as well to help searchers with location information and/or track dogs.
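The bookkeeping described above is essentially a Bayesian update over segments. Here is a minimal sketch, assuming a single "probability of detection" (POD) number per search of a segment; the segment names and POD value are invented:

```python
# Segment probabilities before and after a search, Bayes-style.
def update_after_search(priors, searched, pod):
    """priors: {segment: P(subject is there)}.  pod: the search's
    probability of detection, P(found | subject in searched segment)."""
    posterior = dict(priors)
    posterior[searched] *= (1 - pod)      # searched, but not found
    total = sum(posterior.values())       # renormalize over all segments
    return {seg: p / total for seg, p in posterior.items()}

priors = {"A": 0.5, "B": 0.3, "C": 0.2}
after = update_after_search(priors, "A", pod=0.8)  # e.g. a canine sweep of A
# probability mass shifts away from A toward the unsearched segments
```

In practice the POD would differ by search type (human only, canine, helicopter) and would be adjusted by details from the field report, which is part of why the teams find hand calculation in a spreadsheet workable.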

Some of the challenges mentioned were the presence of conflicting information; the variability in how reliable different human searchers are; the fact that terrain may not be flat or easily represented in two dimensions; the speed of computer modeling; the difficulty of producing exact estimates of how different searchers affect the probability of finding someone; and the variable skill levels of searchers (and the need to organize large numbers of searchers, at times partly untrained). When I raised the possibility of finding technology donations such as more GPS systems, I was also told that it is critical that any technology, especially technology intended for use in the field, be ultra simple to use (there is no time to mess with it), and consistent (i.e. searchers can all be trained once on the same thing).

Although this blog post summarizes what was really just a brief (maybe hour long) conversation with two people, the conversation had me thinking about research opportunities. The need for good human centered design is clear here, as is the value of being able to provide technology that can support probabilistic analysis and decision making. Although it sounds like they are not in use currently, predictive models could be applicable, and apparently a fair amount of data is gathered about each search (and searches are relatively frequent). Certainly visualization opportunities exist as well. Indeed, a recent VAST publication (Malik et al., 2011) looked specifically at visual analytics and its role in maritime resource allocation (across multiple search and rescue operations).

But the thing that especially caught my attention is the need to handle uncertain information in the face of both ignorance and conflict. I have been reading recently about Dempster-Shafer theory, which is useful when fusing multiple sources of data that may not be easily modeled with standard probabilities. Dempster-Shafer theory assigns a probability mass to each piece of evidence, and is able to explicitly model ignorance. It is best interpreted as producing information about the provability of a hypothesis, which means that at times it may produce a high certainty for something that is unlikely (but more provable than the alternatives). For example, suppose two people disagree about something (which disease someone has, for instance), but share a small point of agreement (perhaps both people have a low-confidence hypothesis that the problem is a brain tumor) that is highly improbable from the perspective of both individuals (one of whom suspects migraines, the other a concussion).  That small overlap will be the most certain outcome of combining their belief models in Dempster-Shafer theory, so a brain tumor, although both doctors agree it is unlikely, would be considered most probable by the theory.
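This counterintuitive behavior is easy to reproduce. Below is a minimal sketch of Dempster's rule of combination restricted to singleton hypotheses (the full theory assigns mass to sets of hypotheses, which is how it models ignorance explicitly); the numbers recreate the two-doctor example above:

```python
from collections import defaultdict
from itertools import product

def combine(m1, m2):
    """Dempster's rule for two mass functions over singleton hypotheses."""
    joint = defaultdict(float)
    conflict = 0.0
    for (h1, p1), (h2, p2) in product(m1.items(), m2.items()):
        if h1 == h2:
            joint[h1] += p1 * p2
        else:
            conflict += p1 * p2           # mass on hypotheses that cannot coexist
    # Normalize away the conflicting mass.
    return {h: p / (1.0 - conflict) for h, p in joint.items()}

doctor1 = {"migraine": 0.99, "tumor": 0.01}
doctor2 = {"concussion": 0.99, "tumor": 0.01}
combined = combine(doctor1, doctor2)
# the shared but unlikely hypothesis ("tumor") ends up with all the mass
```

Because migraine and concussion never co-occur across the two mass functions, nearly all the mass is conflict and gets normalized away, leaving the tiny shared overlap as the certain conclusion.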

One next obvious step would be to do some qualitative research and get a better picture of what really goes on in a search and rescue operation. Another possible step would be to collect a data set from one or more willing organizations (assuming the data exists) and explore algorithms that could aid decision making or make predictions using the existing data. Or then again, one could start by collecting GPS devices (I am sure some of you out there must have some sitting in a box that you could donate) and explore whether there are enough similar ones (android + google maps?) to meet the constraint of easy use and trainability. I don’t know yet whether I will pick up this challenge, but I certainly hope I have the opportunity to. It is a fascinating, meaningful, and technically interesting opportunity.

Malik, A., Maciejewski, R., Maule, B., & Ebert, D. S. (2011). A visual analytics process for maritime resource allocation and risk assessment. In the 2011 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 221-230.

Why study the future?

I have asked myself that question numerous times over the last several years. Why years? Because the paper that I will be presenting at CHI 2013 (Looking Past Yesterday’s Tomorrow: Using Futures Studies Methods to Extend the Research Horizon) is the 5th iteration of an idea that began at CHI 2009, was submitted in its initial form to CHI 2011 and 2012, then DIS 2012, Ubicomp 2012, and finally CHI 2013 (and is, I think, the winner for most iterations of any one paper I’ve ever submitted). Each submission sparked long and informative reviews and led to major revisions (in one case even a new study), excepting the last one (which was accepted).

I am telling this story for two reasons. First, I want to explore what drove me to lead this effort despite the difficulty of succeeding. Second, I want to explore what I learned from the process that might help others publishing difficult papers. Continue reading Why study the future?

More technology?

I just came across a call to arms by Kristina Höök, “A Cry for More Tech at CHI!” in Interactions this month. I was so glad to see her writing about this and I hope the article bears fruit. She talks about the ways in which technology can inspire design — I would argue also enable design — and why alternate forms of publications should be given archival value as one way of supporting tech research (such as videos, demos, etc). Imagine if videos at CHI were valued as much as videos at SIGGRAPH!

I don’t want to say much more because I hope you go and read what she had to say, but I will note that her sentiment doesn’t just apply to CHI. How about other conferences? And a question I’ve been asking myself recently — what can I do to encourage more tech in my own research group? The answer, I’ve discovered, is to uncover what technology research means to me. That I will say more about.

As I mentioned in a recent post about my sabbatical goals, I have spent some time recently trying to reposition myself. I am and will always be driven by real-world problems, usually ones that come out of personal experience. However, if that is the sole driving force in my work, why am I not a sociologist? Or a politician? Or an anthropologist? Why don’t I run a non-profit? Why am I a computer scientist? The answer that I keep coming around to is that I like to build re-usable solutions to problems, solutions that are (ideally) bigger than the problem I started with. In addition, I believe that technology has value in part because it can solve problems in new ways, sometimes better ways, if we are innovative about how we use it, rigorous, and willing to push the boundary of what technology is capable of. So I am also driven by a wish to build systems, hard systems, systems that do things that have not been done before or create new ways of doing things. In fact, when I look back over the technical work I’ve done, after years of trying (for every job search, tenure review, and so on) I think I finally have put my finger on the unifying theme in my work.

I have always, in some way or another, built what I think of now as¬†data-driven interfaces. I’m certainly not the first person to use this term. Nor is it confined to my area (HCI). But to me it describes one of the most important roles that technology has to play in the world. Many of the most revolutionary impacts of technology have centered around its ability to show information (think of spreadsheets and visualization tools), share information (think of everything from email to the web to facebook) and process information (think of the work in context-aware and ubiquitous computing, machine learning, and so on). And my own inspiration has been similar — my honors thesis as an undergraduate centered around exposing what was going on inside of programs; my PhD work on managing the uncertainty that arises when recognizing sensed input; my PhD students have developed/are developing tools for building ambient and peripheral displays (a form of visualization), rapidly prototyping and field testing ubicomp apps (providing essential data about¬†what¬†information belongs in the application in the first place), measuring and predicting which users have difficulty with an interface (a type of information processing), handling uncertain input within the user interface toolkit, and sharing information about energy use in the home. Of course when you have a hammer, everything looks like a nail (and there’s many aspects of each of these projects that I left out so they would look more nail-shaped) but I do believe there’s a theme here.

Having recognized the theme, a natural question that I, as a toolkit builder, find myself asking is: What is hard about building these sorts of interfaces, especially in situations where we expect people to use the resulting applications? I think there are many unsolved problems, along the entire pipeline from deciding what data to use all the way through to acting on it in some way. Ideally, this should all be put together, in a fashion that scaffolds the process as much as possible and enables communication of constraints and other information from one end of the pipeline to the other and back.

The idea of a pipeline for data-driven, interactive systems leads to a whole host of interesting questions I hope to begin answering. What should be communicated within the system? What should be communicated to the user? How does all of this change the way we construct toolkits at the input level? What about the output level? What abilities might we want to give end users with respect to data-driven interfaces? How do we help people build effective classifiers when they have not studied machine learning? How do we help people to select and integrate visualization techniques? What new sensors can we construct and what should we sense? When and how might we involve people (i.e. the crowd) in gathering, labeling, extracting features from, interpreting, even visualizing information? How do we trade this off with machines? And finally, how does the interactive nature of the end systems affect the way we should answer any of these questions?


Sabbatical goals, revisited

I am more than halfway through my sabbatical now, and it seems appropriate to revisit the goals I had for my sabbatical, and possibly set some new ones before it’s too late to make changes. I’m going to start with those original goals (in gray), and add comments and thoughts.

Learn about other ways of thinking through sustainability. I want to take the time to deeply explore my own beliefs about sustainability, cross-cultural understandings of sustainability, and how both relate to my chosen field. I am planning on spending at least an hour a week just thinking and writing and reading about ethical/social/planetary issues relating to sustainability. I am also planning on teaching my course on sustainability in both of my sabbatical locations. Total time commitment: 5-6 hours per week.

Outcome: So far this has been a success, although not exactly as I had imagined. I just completed a UbiComp submission with Indian co-authors based on interviews in India and a survey deployed in India and in high- and low-income U.S. communities. It was fascinating to explore this data, and it definitely caused me to rethink the role of technology in sustainability. Relatedly, I also submitted a grant proposal with several other U.S. faculty intended to explore automated techniques for affecting energy use across three different continents. Much of this was made possible by collaborations developed with the wonderful folks at IBM Bangalore. I am also teaching my environmental class here in Zürich, and although it is just starting, it is interesting to see the differences here. Finally, I have been blogging about sustainability in a non-academic fashion in an attempt to explore basic beliefs and perspectives on sustainability from a radically new perspective, and recently wrote an article for Interactions based on my blog post questioning the basic assumptions underlying HCI work in sustainability. Not sure if it adds up to an hour a week, but overall I’m happy with the progress on this topic so far.

Expand my toolbox. I want to learn more about hardware and machine learning (I’ve posted about this before on this blog). My current plan is to take a class on machine learning (I have a handy virtual one with me, or I can sign up wherever I am) and teach myself hardware using slides from a CMU class and hands-on experimentation. I figure if I spend 2-3 hours per week on each (in parallel if possible, in series otherwise) I should make good progress on this over the year. Total time commitment: 4-6 hours per week.

Again, this has been a success. I completed the Stanford Online Machine Learning class this fall and have been leveraging what I learned in my student advising. Overall, I found the class enlightening, though the homework was a little too easy to solve without complete understanding. Still, it did help deepen my understanding of the algorithms and of methods for improving accuracy and so on. It was highly focused on statistical techniques, and is nicely complemented by the second course I am taking (much more slowly), Carolyn Rosé’s applied machine learning. Now that one ML class is done, I am also working on my knowledge of hardware using slides of Scott Hudson’s and related materials, and have gotten through several Arduino projects. So not complete, but I’m pretty happy with this. Extra bonus: I can squeeze some hardware work into family time, as the kids love to be included in it.

Finish hanging projects. I have three projects that require analysis only and two to three projects that require writing code. I plan on doing these for the most part in series, unless I am able to recruit local talent to help with the latter. It’s possible they won’t all get done, but I hope at least some will! Estimated time commitment: 4-6 hours per week. Start new projects that I’ve already thought about. I have two in mind. Estimated time commitment: 4-6 hours per week if done in series.

I wish I’d written down which specific projects I intended to work on! For the most part this has not been a great success. I have completed one (writing) project, and recruited students to work on others. However, recruiting students in Switzerland has not worked out well, and I’ve had varying success reaching any complete goal with the students I recruited in India. There is still time to accomplish this goal, but I think I am unlikely to complete anywhere near six. For my own sake when looking back on this six months from now, I will name the projects this time around: Futures — done; Search tools — making progress; Cosmo/viewpoint extraction — making progress; Mechanical Turk web accessibility — stagnating; Diabetes + Lyme analysis — stagnating [but I could tackle this without additional students]; Macro energy audits — stagnating; Using routines to reduce energy use — done; Lo-fi presence — making progress.

Write a large NSF proposal [already started]. Estimated time commitment: 1 hour per week through November.

Sadly, this petered out at the last minute, though I cannot take the blame for it. On the upside, I led a group of five other PIs in writing a medium proposal (mentioned above). It was a fascinating experience in herding faculty which I’m not sure I ever want to repeat! :)

Continue supporting students. Estimated time commitment: 3-4 hours per week of meetings, 1 hour per week of prep & planning. Meet new people, start new projects, develop new ideas. Estimated time: 4-6 hours per week.

Both successes (though you might ask my current students if they agree :P). Amazing how Skype can shorten the distance between places. I’m now holding meetings across three continents each week, a sign of the new collaborations I’ve begun. Along the way I’ve also given 7 talks and counting. But what I’m most pleased about in this arena is the opportunity it’s given me to rethink my own research agenda and try to explore new ways of positioning myself. I have begun to realize that while I am driven by applications that matter, this has in some ways obscured other things that I care about. It has also affected my ability to recruit a certain type of student: folks for whom programming, building things, and solving hard technical problems are as important as the applications they enable. So, through public speaking engagements, I have been exploring a new way of presenting my work, one that emphasizes the enabling middleware that makes possible the creation of applications addressing real-world problems with technology. More on this in a future post, as this is running long, but I consider the opportunity to rethink research approaches, goals, and so on to be a major benefit of sabbatical.

So what next? Certainly, at a minimum, more of the same. What I’ve been trying to do is working, and I plan to continue for the next few months. I still need to forge stronger collaborations with some Swiss colleagues, and am actively working on that. I also need to step back and ask which hanging projects are worth pursuing and what the best approach to doing so is. And of course I need to keep in mind that this is my chance to rest and rejuvenate. By the end of April I will have been deeply involved in writing about 120 pages of text, taught most of 4 classes since the start of last summer, advised or mentored about 13 people (in one-on-ones) across two continents throughout most of the sabbatical, and learned 1.5 new foreign languages. In between, I’ve also been taking time to travel, relax, spend time with my kids, and continue fighting for my health. I plan to continue that, as I wish to arrive home not only inspired and educated but also healthy and well rested for the start of the fall semester.

A different academic model

Zurich in the snow, from our apartment

The next phase of our sabbatical is in Zürich, Switzerland, where we’ve been since the beginning of January. There hasn’t been much to post here because, I suppose, things feel so familiar. We have a “group” to be part of, thanks to our wonderful host, Friedemann Mattern, which makes a big difference in how integrated we are into the university community. The university setting itself is somehow much more familiar than in India, perhaps for the same reason: in Hyderabad we had to work to ensure that our office was near those of other faculty, and to actively pursue integration with the department. Here, we still have to actively pursue potential collaborations, but this is facilitated by the support that Friedemann and his group have given us.

ETH is also familiar in the sense that it functions like most other universities I’ve been part of over the years: a home ground for students, an organizer of talks on a wide breadth of topics, a place to discuss and teach and learn. One thing that differs from American universities is the structure of the department. The model here is one person per area. For example, a friend at the University of Zürich is the only person in Human Computer Interaction in her department, and is expected to carry the entire field. Critical mass is built across all of computer science, not within sub-areas. Instead of hiring more faculty in an area, one recruits a productive and diverse set of postdocs, doctoral students, masters students, and so on who work together to make the area a success. This is the polar opposite of a place like Carnegie Mellon, where entire departments are formed around sub-fields.

One of the more interesting things about being on sabbatical is the opportunity to rethink and think through who I am as a researcher. I am frequently given the opportunity to speak about my work to a variety of audiences, and I have written a number of different talks over the year attempting to summarize my work in assistive technology, my work in sustainability, overarching themes for the technical aspects of my work, and deeper questions about the value of the projects that I have chosen to do. Along the way, I have studied machine learning (I will have to write about this, as I took the Stanford ML course last fall) and am now studying hardware in more depth. I also finally finished a paper on the value of futurism (or rather Futures Studies) in guiding research (an enormous stretch for me, as it is primarily what I would consider a design/thought paper) and an article for Interactions questioning the focus of sustainable human computer interaction research, based on a recent blog post on the topic.

To me, the ability to see and think about new models for academia as a whole, my own research, and everything in between, is one of the most valuable things about this year away. It’s a chance to rethink, question, and consider what works, what should be done, and what will make a difference.

Scanning books to explore the future role of technology with respect to climate change

I have been reading up on the discipline of futurism, which, academically speaking, provides methodological hints for exploring what may happen in the future. One of those approaches, monitoring, and in particular environmental scanning (looking for signs or indicators in large volumes of relevant information), can be of value [1].

Inspired by that and by the work of Dourish & Bell in “Reading science fiction alongside ubiquitous computing” [2], I have begun to work my way through a collection of science fiction, science, and non-fiction books that look forward into the future with respect to climate change. In choosing books, I explored a combination of indie fiction, activist monologues, mainstream science fiction, and scientific writing. In reading these books, I am particularly focusing on how they portray science (or what they say about it) and its interaction with other trends.

I have not had time to read many of the books I’ve found yet, but I have started on a few: Forty Signs of Rain (Robinson) depicts scientists at the NSF in the near future as they try to rethink the role of the organization as the climate reaches a tipping point; The Ultimate Choice (Hinsley) depicts a society in which population growth has eaten up the land needed to grow food and the government must either watch its people slowly starve or take more drastic measures; Smart Mobs: The Next Social Revolution (Rheingold) discusses the interaction between technology and the social nature of human society (but has only one paragraph that mentions climate change); The Windup Girl (Bacigalupi) portrays a world already changed for the worse by genetic engineering but relatively untouched by climate change; Flood (Baxter) posits a world slowly drowning in oceans rising at an exponentially increasing rate (non-anthropogenic climate change).

Some of these are books I just happened to read recently for pleasure or research, while others were selected specifically for this project. All of them take very different perspectives on what may happen with the climate, and equally different perspectives on technology. For example, Robinson’s book is set in a future whose technologies are exactly those of today, while Hinsley posits an Orwellian society in which most citizens are lucky to have food and a map is a technological luxury. Television, of course, still exists, but only the state has modern technologies (and those, again, match today’s). In Rheingold’s book, the purpose is to explore cutting-edge technology, the role of applications, and their interaction with culture and society (specifically, how to avoid threats to liberty, dignity, and quality of life while enabling the promise of these technologies). Bacigalupi creates a rich portrayal of a world whose technological innovations are newly engineered species (semi-human and others) that are at times indistinguishable from (or competitive with) existing species. In Baxter’s world, technology is more prominent than in the other books, despite the fact that technological innovation has (almost) halted (after the creation of the ultimate music player, one which connects almost magically with the brain, without cords) due to the focus on surviving the disastrous consequences of the unstoppable flood. The most prominent technology in the novel is a 3D projection system that can render the earth and illustrate the coming catastrophe. Perhaps equally important are the solar-powered mobile phones that allow climate scientists to “convene” virtually once each year as they bear witness to the disastrous flood overtaking the earth.

At this point, I’m still trying to process all of this. I think it is interesting that technology and climate are so divorced in most of these novels. The exception is technology’s role in science in the Robinson and Baxter novels (supporting communication between scientists, visualizations for non-scientists, and so on). None of the books discuss technologies intended to help save energy or otherwise influence behavior; homes are not smart in these books and phones are just phones for the most part. I wonder if the non-fiction books about dealing with climate change are any more likely to mention technology. I suspect not: there is an aspect of Luddism to some of the climate non-fiction that would make such a focus unlikely.

[1] Bell, W. (2003). Foundations of Futures Studies: Human Science for a New Era: History, Purposes, and Knowledge (Volume 1). Transaction Publishers.

[2] Dourish, P. & Bell, G. (2008). “Resistance is Futile”: Reading Science Fiction Alongside Ubiquitous Computing. Personal and Ubiquitous Computing. In press.