Infant Oxygen Monitoring

Hospitalized children on continuous oxygen monitors generate more than 40,000 data points per patient each day. As currently displayed, these data provide no context and do not reveal trends over time, two techniques proven to improve comprehension and use. Management of oxygen in hospitalized patients is suboptimal: premature infants spend more than 40% of each day outside evidence-based oxygen saturation ranges, and oxygen weaning is delayed in infants with bronchiolitis who are physiologically ready to wean. Data visualizations may improve users' knowledge of data trends and inform better decisions in managing supplemental oxygen delivery.
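To make the scale concrete, here is a minimal Python sketch of the kind of summary such a display might compute: the fraction of a day's SpO2 samples falling outside a target saturation range. The sampling rate, the synthetic data, and the 91-95% target range are illustrative assumptions, not clinical guidance.

import numpy as np

# Illustrative only: synthetic SpO2 readings at one sample every two seconds
# (43,200/day, consistent with the >40,000 points per day cited above).
rng = np.random.default_rng(0)
spo2 = np.clip(93 + rng.normal(0, 3, size=43_200), 70, 100)

# Assumed target range for illustration; actual evidence-based ranges
# depend on the patient population and institution.
low, high = 91, 95
outside = (spo2 < low) | (spo2 > high)
print(f"Time outside target range: {outside.mean():.0%}")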

First, we studied the workflows of, and breakdowns experienced by, nurses and respiratory therapists (RTs) delivering supplemental oxygen to infants with respiratory disease. Second, using end-user design, we developed a data display that informs decision-making in this context. Our ultimate goal is to improve the overall work process using a combination of visualization and machine learning.

Visualization mockup for displaying O2 saturation over time to nurses.

Probabilistic Input

Natural, sensed, and touch-based input is increasingly being integrated into devices. Along the way, both custom and more general solutions have been developed for dealing with the uncertainty associated with these forms of input. However, traditional interactive infrastructure makes it difficult to provide dynamic, flexible, and continuous feedback about that uncertainty. Our contribution is a general architecture designed to support continual feedback about uncertainty.

Our architecture tracks multiple interfaces, one for each plausible and differentiable sequence of input that the user may have intended. We present a method for reducing the number of alternative interfaces and fusing the remaining possibilities into a single interface that both communicates uncertainty and allows for disambiguation.

Rather than tracking a single interface state (as is currently done in most UI toolkits), we keep track of several possible interfaces. Each possible interface represents a state that the interface might be in. The likelihood of each possible interface is updated based on user inputs and our knowledge of user behavior. Feedback to the user is rendered by first reducing the set of possible interfaces to a representative set, then fusing interface alternatives into a single interface, which is then rendered.
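As a rough illustration of this track/update/reduce/fuse loop, the sketch below keeps a weighted set of possible interface states. The class and function names, the dict-based state, the fixed top-k cutoff, and the caller-supplied likelihood function are all our own simplifications for illustration, not the API from the papers below.

from dataclasses import dataclass

@dataclass
class PossibleInterface:
    """One hypothesis about the interface state the user intended to reach."""
    state: dict     # e.g. {"button": "save", "pressed": True}
    weight: float   # current likelihood of this hypothesis

def update(hypotheses, input_likelihood):
    """Re-weight each possible interface by how well it explains the new
    (uncertain) input event, then renormalize the weights."""
    for h in hypotheses:
        h.weight *= input_likelihood(h.state)
    total = sum(h.weight for h in hypotheses) or 1.0
    for h in hypotheses:
        h.weight /= total
    return hypotheses

def reduce_alternatives(hypotheses, k=4):
    """Reduction step: keep only the k most likely alternatives
    (a fixed cutoff is a simplification of the approach in the papers)."""
    return sorted(hypotheses, key=lambda h: h.weight, reverse=True)[:k]

def fuse(hypotheses):
    """Fusion step: collapse the alternatives into one renderable
    description, e.g. drawing each with opacity proportional to weight."""
    return [(h.state, h.weight) for h in hypotheses]

# An ambiguous touch that probably, but not certainly, hit "save":
hyps = [PossibleInterface({"button": "save"}, 0.5),
        PossibleInterface({"button": "delete"}, 0.5)]
hyps = update(hyps, lambda s: 0.8 if s["button"] == "save" else 0.2)
print(fuse(reduce_alternatives(hyps)))  # [({'button': 'save'}, 0.8), ...]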


Julia Schwarz, Jennifer Mankoff, Scott E. Hudson: An Architecture for Generating Interactive Feedback in Probabilistic User Interfaces. CHI 2015: 2545-2554

Julia Schwarz, Jennifer Mankoff, Scott E. Hudson: Monte Carlo Methods for Managing Interactive State, Action and Feedback Under Uncertainty. UIST 2011: 235-244

Julia Schwarz, Scott E. Hudson, Jennifer Mankoff, Andrew D. Wilson: A Framework for Robust and Flexible Handling of Inputs with Uncertainty. UIST 2010: 47-56

Replacing ‘Wave to Engage’ with ‘Intent to Interact’

Schwarz, J., Marais, C., Leyvand, T., Hudson, S., Mankoff, J. Combining Body Pose, Gaze and Motion to Determine Intention to Interact in Vision-Based Interfaces. In Proceedings of the 32nd Annual SIGCHI Conference on Human Factors in Computing Systems (Toronto, Canada, April 26 – May 1, 2014). CHI ’14. ACM, New York, NY.

paper | video summary | slides

Vision-based interfaces, such as those made popular by the Microsoft Kinect, suffer from the Midas Touch problem: every user motion can be interpreted as an interaction. In response, we developed an algorithm that combines facial features, body pose, and motion to approximate a user's intention to interact with the system. We show how this can be used to determine when to pay attention to a user's actions and when to ignore them. To demonstrate the value of our approach, we present results from a 30-person lab study conducted to compare four engagement algorithms in single- and multi-user scenarios. We found that combining intention to interact with a “raise an open hand in front of you” gesture yielded the best results. This combined approach offers a 12% improvement in accuracy and a 20% reduction in time to engage over a baseline “wave to engage” gesture currently used on the Xbox 360.
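As a sketch of the underlying idea, the snippet below combines per-frame features into a single engagement score and attends to the user only when the score crosses a threshold. The feature set, the linear weighting, and the threshold are placeholder assumptions standing in for the learned model described in the paper.

from dataclasses import dataclass

@dataclass
class UserFeatures:
    # Per-frame features a vision pipeline might extract; all fields,
    # weights, and the threshold below are illustrative placeholders.
    facing_screen: float   # 0..1, e.g. derived from head orientation
    body_toward: float     # 0..1, torso orientation toward the sensor
    motion: float          # 0..1, normalized recent limb motion

def intent_score(f: UserFeatures) -> float:
    """A linear combination standing in for the paper's trained model."""
    return 0.5 * f.facing_screen + 0.3 * f.body_toward + 0.2 * f.motion

def should_engage(f: UserFeatures, threshold: float = 0.6) -> bool:
    # Attend to a user's gestures only when intent is high enough,
    # mitigating the Midas Touch problem described above.
    return intent_score(f) >= threshold

print(should_engage(UserFeatures(0.9, 0.8, 0.3)))  # True: user appears engaged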