
AMIA trip report

I have been curious about AMIA for some time, and was even invited to be part of a panel submission to it this year. So when I realized it was only a few hours’ drive away, I took advantage of the closeness to plan a last-minute trip. It has been an interesting experience and well worth the attendance. Although it is a very large conference, the people attending seem friendly and open, and I was welcomed in particular by two wonderful women I met, Bonnie Kaplan and Patti Brennan. The sessions are an intriguing combination of computer science, medicine, and clinical practice (with the quality of, and knowledge about, each varying based on the expertise and presence of appropriate collaborators). I attended sessions on Monday, Tuesday, and Wednesday. The theme that stood out to me more than any other across my experiences here was the great diversity of stakeholders that are (and especially that should be) considered in the design of effective health IT.

Some very interesting observations came out of the large-scale analysis of clinical data that I saw discussed on Monday. For example, there is a lot of attention being paid to data privacy (although one person noted a common misunderstanding here: uniqueness is not synonymous with being identified), and particularly to how to scrub data so that it can “get past the IRB” for further analysis. One interesting approach, taken by N. Shah (Learning Practice-based Evidence from Unstructured Clinical Notes; Session S22), is to extract the terms (features) and use those instead of the data itself. Of course, a limitation here is that you have to think of the features ahead of time.
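To make the idea concrete, here is a minimal sketch of that kind of feature extraction in Python, assuming a predefined vocabulary of clinical terms; the vocabulary and notes are invented for illustration, and this is not Shah’s actual pipeline:

```python
# Minimal sketch (not N. Shah's actual method): turn free-text clinical notes
# into a term/feature matrix so that only the features, never the raw notes,
# leave the secure environment. Vocabulary and notes are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer

# The features must be chosen ahead of time -- the limitation noted above.
vocabulary = ["diabetes", "metformin", "hypertension", "smoker"]

notes = [
    "Pt with type 2 diabetes, started on metformin.",
    "History of hypertension; denies being a smoker.",
]

vectorizer = CountVectorizer(vocabulary=vocabulary, lowercase=True)
X = vectorizer.fit_transform(notes)   # rows: notes, columns: predefined terms

# Share only the de-identified feature matrix, not the notes themselves.
print(vectorizer.get_feature_names_out())
print(X.toarray())
```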

Another interesting topic that came up repeatedly is the importance of defining the timeline of the data as much as the timeline of the person. Questions that need to be answered include: what is time zero in the data being analyzed (and what might be missing as a result); what is the exit cause, or end moment, for each person in the database (and who is being left out / what is the bias as a result?); and the observation that, in general, “sick people make more data.” To this I would add that if you attempt to address these biases by collecting additional information, you potentially introduce selection bias in the subjects as well as the burden of sensing on the data producer. Connected to this are the ongoing questions about the benefits and problems of a single unique identifier as a way of connecting health information.

A last observation from Monday is the question of what public data sets are out there that we should make ourselves aware of. For example, MIT has a big-data medical initiative (also see http://groups.csail.mit.edu/medg/) that may have a clinical notes data set associated with it (I am still looking for this).

On Tuesday I started the day with S44: Year in Review (D. Masys). I missed the very start of it, but came in when he was talking about studies of IT’s use in improving clinical practice, such as a study showing that reminding clinicians to do their work better (“physician alerts” “embedded in EHR systems,” etc.) improves patient outcomes, or maybe just improves process, with the observation that we should measure both. Interestingly to me, the question of also improving process and outcomes by organizing the work of caregivers (and reminding them of things) was missing from this discussion.

Dr. Masys then moved on to explore unexpected consequences of IT that had been published: adding virtual reality caused “surgeon blindness” to some information in one study; missed lab results appeared in another, and alert fatigue in another (drug-drug interaction alerts suffer from 90% overrides…). Given the difficulty of publishing negative results, it would be interesting to explore this particular set of work for tips. It was also interesting to hear his critique of questionable results, particularly the repeated mentions of Hawthorne effects, because so many interventions are compared to care as usual (rather than an equal-intensity control condition). Another way of phrasing this is to ask at what cost the intervention works (and/or how we “adjust for the intensity of the intervention”).

Another category Dr. Masys explored of interest to me was health applications of mobile electronics. Briefly, one study looked at chronic widespread pain and reduced ‘catastrophizing’; four looked at text messaging: text messaging vs. telephone appointment reminders; the effectiveness of a short message reminder in increasing follow-up compliance; the Text4baby mobile health program; and the Cameroon mobile phone SMS (CAMPS) trial (PLoS ONE).

Dr. Masys then moved on to the practice of clinical informatics and bioinformatics (out of “the world of RCTs”). This focused on new methods that might be interesting. I particularly want to follow up on: one of the few studies that looked at multiple stakeholders, which had the goal of reducing unintended negative consequences; the use of registries to do low-cost, very large trials; the use of a private key derived from DNA data to encrypt that same data; the creation of a 2D barcode summarizing the patient genetic variants that affect the dose or choice of a drug; and a demonstration that diagnostic accuracy was as good on a tiny mobile phone screen as on a big screen.

The last category reviewed by Dr. Masys was editors’ choice publications from JAMIA and the Journal of Biomedical Informatics, plus the Diana Forsyth award. Almost all of these seem worth reviewing in more depth, particularly the JAMIA articles: scientific research in the age of omics (which explores the need to increase the accountability of scientists for the quality of their research); web-scale pharmacovigilance (which used public search engine logs to detect novel drug-drug interactions); and CPOEs decrease medication errors (a meta-study that basically concluded, without realizing it, that CPOEs would work better if we had only applied basic principles from contextual inquiry!). Also worth a look are the JBI articles by Rothman, who developed a continuous measure of patient condition that predicted hospital re-admission and mortality independent of disease (how does this compare with patient-reported health status?); Weiskopf, who documented the relative incompleteness of EHR data across the charts studied; Friedman’s overview of NLP state of the art and prospects for significant progress (a summary of a workshop); Post’s article on tools for analytics of EHR data; and Valizadegan’s article on learning classification models from multiple experts who may disagree (of interest given my interest in multiple viewpoints).

Next, I attended a panel about Diana Forsyth (obit; some pubs; edited works), an ethnographer who had a big impact on the field of medical informatics (and others as well). She has passed away, and perhaps only a small number of people read her work, but it had an enormous influence on those who encountered her writing on methods, research topics, and so on. She was compared by one panelist to Arthur Kleinman (who helped to make the distinction between the abstraction of disease and the human experience of illness; between treatment and healing). Some of the most interesting parts of the discussion focused on how the field is changing over time, prompted by a question of Katie Siek’s: for example, first getting data into the computer, then computers into the hospitals, now making them work for people correctly, and what comes after that? Another interesting comment was about the authority of the physician being based in part on their ability to diagnose (which conveys all sorts of societal benefits). This points to the role of the physician (when a diagnosis doesn’t exist, human creativity is especially needed) versus IT (which can handle more well-defined situations). However, with respect to healing, maybe the power of physicians is in listening as much as diagnosing (also something computers can’t do, right?). Other topics that came up included the importance of the patient voice and patient empowerment/participation.

After lunch with a friend from high school I attended S66 (User-centered design for patients and clinicians). In line with the hopes of the Forsyth panel, I saw a mixture of techniques here, including qualitative analysis. Unfortunately, what I did not see was technology innovation (something that may point to a difference in vocabulary regarding what “user-centered design” means). However, the qualitative methods seemed strong. One interesting talk explored the issues in information transfer from the hospital to home health care nurses, a nice example of some of the breakdowns that occur between stakeholders in the caregiver community. More and more, however, I find myself wondering why so much of the work here focuses only on caregivers with degrees of some sort in medicine (as opposed to the full ecology of caregivers). I was pleased to see low-income settings represented, exploring the potential of mobile technology to help with reminders to attend appointments and other reminders, and a series of three studies on health routines culminating in a mobile snack application (published at PervasiveHealth) by Katie Siek & collaborators. One nice aspect of this project was that the same application had differing interfaces for different stakeholders (e.g. teenagers vs. parents).

I started to attend the crowdsourcing session after the break, but it did not appear to have much in terms of actual crowdsourcing. An open area for health informatics? Instead I went on to S71, family health history & health literacy. The most interesting paper in the session, to me, looked at health literacy in low-SES communities (by many co-authors including Suzanne Bakken). In particular, they have data from 4,500 households which they would like to visualize back to the participants to support increased health literacy. Their exploration of visualization options was very detailed and user-centered, and resulted in the website GetHealthyHeights.org (which doesn’t seem to be alive at the moment). However, I have concerns about the very general set of goals with respect to what they hope people will get out of the visualizations. It would be interesting to explore whether there’s a higher-level narrative that can be provided to help with this. Similarly, does it make sense to present “typical” cases rather than specific data?

On Wednesday I began in S86: late-breaking abstracts on machine learning in relation to EMRs. This session had some interesting exploration of tools as well as some patient-focused work. One study looked at prediction of mobility improvements for older adults receiving home health care, by subgrouping 270k patients and looking at factors associated with the subgroups. Steps included de-identifying the data; standardizing it; accounting for confounding factors; dividing patients into subgroups; and then using data mining (clustering and pattern mining) to look at factors that affected individual and group scores. An interesting take on what is part of the data “pipeline” that goes beyond some of the things I’ve been thinking are needed for lowering barriers to data science. Another study looked at decision support for pre-operative medication management (an interesting problem when I consider some of the difficulties faced by the many doctors coordinating my mother-in-law’s care recently). This work was heuristic in nature (a surprising amount of work here still focuses on heuristics over more statistically based approaches). From this work I noticed another trend, however: the need to connect many different types of information together (such as published work on drugs, clinical notes, and patient history).
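As a much-simplified illustration of that kind of subgrouping pipeline, here is a minimal sketch in Python; the data and column names are invented, this is not the authors’ actual method, and real work would also include the de-identification and confounder-adjustment steps:

```python
# Rough sketch of a subgroup-then-mine pipeline (not the authors' method).
# The data frame stands in for a de-identified extract of home health care
# episodes; all values are invented for illustration.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

episodes = pd.DataFrame({
    "age":               [82, 67, 75, 90, 71, 88, 79, 64],
    "baseline_mobility": [2.0, 3.5, 2.5, 1.0, 3.0, 1.5, 2.0, 4.0],
    "num_comorbidities": [5, 2, 4, 6, 3, 5, 4, 1],
    "mobility_improved": [0, 1, 1, 0, 1, 0, 0, 1],   # outcome of interest
})

features = ["age", "baseline_mobility", "num_comorbidities"]

# Standardize so no single feature's scale dominates the clustering.
X = StandardScaler().fit_transform(episodes[features])

# Divide patients into subgroups (clustering stands in for the subgrouping step).
episodes["subgroup"] = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(X)

# Look at the factors and outcomes associated with each subgroup.
print(episodes.groupby("subgroup")[features + ["mobility_improved"]].mean())
```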

The last session I attended was S92, one of the few sessions focused specifically on patients (and not very well attended…). The first talk was about creating materials for patient consumption, supporting access to EHRs, two-way secure messaging, and customized healthcare recommendations. They focused especially on summarizing medication information concisely. The second was about a national network for comparative effectiveness. Maybe this is the crowdsourcing of health IT? This was focus-group-based research (a surprisingly popular method across AMIA, given how little support there is for this method in HCI) exploring user attitudes about data sharing. It was interesting that the work presented here ignored a long history of research on trust in computing, e.g. from Cliff Nass, the e-commerce literature, and so on. However, the data was nicely nuanced in exploring a variety of ethical issues and acknowledging the relative sophistication of the group members with respect to these issues. The issues raised are complex: who benefits, who owns the data, how would the bureaucracy function, and how do we manage authorization given that studies aren’t always known yet (and opt-in vs. opt-out approaches)? I wonder how a market for research would function (think Kickstarter, but I donate my data instead of money…). The next paper looked at what predicted people thinking EHRs are important both for themselves and for their providers, and through a disparities lens.

The closing plenary was given by Mary Czerwinski (pubs) from Microsoft Research. I always enjoy her talks and this was no exception. Her focus was on her work with affective systems, relating to stress management. Her presentation included a system for giving clinicians feedback about their empathy in consults with patients, as well as a system for giving parents reminders when they were too stressed to remember the key interactions that could help their ADHD kids. Interestingly, in the parent case, (1) the training itself is helpful and (2) the timing is really important: you need to predict that a stress situation is building in order to intervene successfully (I would love to use this at home :). She ended by talking about a project submitted to CHI 2014 that used machine learning to make stress management suggestions based on things people already do (e.g. visit your favorite social network, take a walk, etc.). One of the most interesting questions was whether emotional state could predict mistake-making in coding data (or other tasks).

Would I go back a second time? Maybe… It is a potentially valuable setting for networking with physicians; the technical work is deep enough to be of interest (though the data sets are not as broad as I’d like to see). It’s a field that seems willing to accept HCI and to grow and change over time. And the people are really great. The publishing model is problematic (high acceptance rates; archival), and I think it had an impact at times on the phase of the work that was presented. What was missing from this conference? Crowdsourcing, quantified-self research, patient websites like PatientsLikeMe, patient-produced data (such as support group posts), and significant interactive technology innovation outside the hospital silo. In the end, the trip was definitely worthwhile.

Some observations about people who might be interesting to other HCI professionals interested in healthcare. For example, I noticed that MITRE had a big presence here, perhaps because of their recent federally funded research center. In no particular order here are some people I spoke with and/or heard about while at AMIA 2013:


Patti Brennan (some pubs) is the person who introduced me to or told me about many of the people below, and generally welcomed me to AMIA. She studies health care in the home and takes a multi-stakeholder perspective on this. A breath of fresh air in a conference that has been very focused on things that happen inside the physician/hospital silo.

Bonnie Kaplan is at the center for medical informatics in the Yale school of medicine. Her research focuses on “Ethical, legal, social, and organizational issues involving information technologies in health care, including electronic health and medical records, privacy, and changing roles of patients and clinicians.”

Mike Sanders from www.seekersolutions.com, which is providing support for shared information between nurses, caregivers & patients, based in B.C. (Canada).

Amy Franklin from UT Health Sciences Center, has done qualitative work exploring unplanned decision making using ethnographic methods. Her focus seems to be primarily on caregivers, though the concepts might well transfer to patients.

Dave Kaufman is a cognitive scientist at ASU who studies, among other things, HCI and health, including “conceptual understanding of biomedical information and decision making by lay people.” His studies of mental models and miscommunication in the context of patient handoff seem particularly relevant to the question of how the multi-stakeholder system involved in dealing with illness functions.

Paul Tang (Palo Alto Medical Foundation) is a national leader in the area of electronic health records and patient-facing access to healthcare information.

Danny Sands (bio; some pubs) – doctor; entrepreneur; co-founded the Society for Participatory Medicine; focuses on doctor-patient communication and related tools; studies ways to improve, e.g., patient-doctor email communication.

Dave deBronkart (e-patient Dave, whose primary physician was Dr. Sands during his major encounter with the healthcare system) is best summarized in his TED talk “Let Patients Help” (here’s his blog post on AMIA 2013).

George Demiris from the University of Washington studies the “design and evaluation of home based technologies for older adults and patients with chronic conditions and disabilities, smart homes and ambient assisted living applications and the use of telehealth in home care and hospice.” His projects seem focused on elders, both healthy and sick. One innovative project explored the use of Skype to bring homebound patients into the hospice team’s discussions.
Mary Goldstein worked on temporal visualization of patient data (KNAVE-II) and generally “studies innovative methods of implementing evidence-based clinical practice guidelines for quality improvement,” including decision support.

Mark Musen studies “mechanisms by which computers can assist in the development of large, electronic biomedical knowledge bases. Emphasis is placed on new methods for the automated generation of computer-based tools that end-users can use to enter knowledge of specific biomedical content.” and has created the Protégé knowledge base framework and ontology editing system.

Carol Friedman does “both basic and applied research in the area of natural language processing, specializing in the medical domain” including creating the MedLEE system (“a general natural language extraction and encoding system for the clinical domain”). Her overview of NLP paper was mentioned in the year in review above.

Suzanne Bakken (pubs) has been doing very interesting work in low-income communities around Columbia, in which she is particularly interested in communicating the data back to the data producers rather than just focusing on its use for data consumers.

Henry Feldman (pubs), who was an IT professional prior to becoming a physician, has some very interesting thoughts on open charts, leading to the “Open Notes” project.

Bradley Malin (pubs) is a former CMU student focused on privacy who has moved into the health domain and is currently faculty at Vanderbilt. His work provides a welcome and necessary theoretical dive into exactly how private various approaches to de-identifying patient data are. For example, his 2010 JAMIA article showed that “More than 96% of 2800 patients’ records are shown to be uniquely identified by their diagnosis codes with respect to a population of 1.2 million patients.”
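To make the notion of being “uniquely identified by diagnosis codes” concrete, here is a toy illustration (not Malin’s method, and with made-up data) of measuring what fraction of records carry a combination of codes that no other record shares:

```python
# Toy illustration of re-identification risk via diagnosis-code uniqueness.
# The records below are invented; a real analysis would measure uniqueness
# against the full reference population, not just the shared sample.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "dx_codes": [
        "250.00|401.9",   # diabetes + hypertension
        "250.00|401.9",   # same combination: not unique
        "493.90",         # asthma
        "715.90|401.9",   # osteoarthritis + hypertension
        "331.0",          # Alzheimer's
    ],
})

# A record is "unique" if no other record shares its exact code combination.
counts = records["dx_codes"].map(records["dx_codes"].value_counts())
unique_fraction = (counts == 1).mean()
print(f"{unique_fraction:.0%} of records are unique w.r.t. their diagnosis codes")
```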


Jina Huh (pubs) studies social media for health. One of her recent publications looked at health video loggers as a source of social support for patients. She shares an interest with me in integrating clinical perspectives into peer-produced data.
Katie Siek (pubs), who recently joined the faculty at Indiana, does a combination of HCI and health research, mostly focusing on pervasive computing technologies. One presentation by her group at AMIA this year focused on a mobile snacking advice application that presented different views to different stakeholders.
Madhu Reddy (some pubs) trained at UC Irvine under Paul Dourish and Wanda Pratt and brings a qualitative perspective to AMIA (he was on the Diana Forsyth panel, for instance). He studies “collaboration and coordination in healthcare settings.”
Kathy Kim spoke in the last session I attended about her investigations of patient views on a large data-sharing network to support research; she also does work that is very patient-centered (e.g. mobile platforms for youth).
Steve Downs works in decision support as well as policy around “how families and society value health outcomes in children.”
Chris Gibbons (some pubs) focuses on health disparities (e.g. barriers to inclusion in clinical trials and the potential of eHealth systems).