Final Project

The goal of your final project is to explore an accessibility issue in more depth than you’ve been able to do in our projects so far. In choosing this project, you may want to draw from personal expertise, literature, or user data should you have access to it.

Your final project will have three phases:

Proposal

  • Proposal: Your proposal should be a slide deck of 5 slides that describe your
    • promise: How the world will be better based on your project.
    • obstacle: Why we don’t have this already.
    • solution: How you will achieve the promise. This will most likely be primarily technical, such as a novel device.
    • related work: At least 5 references showing some evidence for the importance of this problem.
    • timeline: A timeline showing that the project is feasible.
  • Development: We will check in on projects during part of class and/or office hours on a weekly basis to provide guidance about progress on the milestones laid out in your timeline.

Midterm Writeup

Midway through the project you will turn in a brief update on your project. This should include an up-to-date written version of your promise, obstacle, and solution (1-3 paragraphs) and a related work section, also updated based on feedback (3-4 paragraphs). The total should be less than a page.

Final Project Writeup

The final 2-page report should be in the 2-column CHI template format: https://chi2020.acm.org/authors/chi-proceedings-format/

Requirements for this are below. In addition, you should follow the writing guidelines put out by SIGACCESS for writing about disability.

In addition, you will participate in a poster session.

Poster

Your poster should cover the same basic items as your report, but in much less depth. It should have a section highlighting the key goals of the project, images of what you did and/or pictures that convey study results (if you ran a study), and some explanation of how you accomplished things, as well as a mention of how a disability studies perspective informed your project.

It does not need a related work section. You will want to give it a big title and put your names on it.

Written Document

The report should cover these main sections:

  • Introduction — 1-3 paragraphs: Present the promise/obstacle/solution for your project — what problem are you solving and why is it important to solve it? This can re-use text from your midterm report.
  • Related Work — 3-4 paragraphs: Discuss relevant work that closely connects with your project. This can re-use text from your midterm report.
  • Methodology — about 1 page: What did you do in your project? If you worked with participants: how many people, and what did they do? If you implemented a system or designed something, what did you build or design?
  • Disability Studies Perspective — 1 paragraph: How did a disability studies perspective inform your project?
  • Conclusions — 1-2 paragraphs: Describe what you learned and how it can be extended or built on in the future.
  • Personal reflection — 1-2 paragraphs, individual and handed in separately: Describe what you personally learned from this project, and what your individual contributions were to the team.

Important notes and considerations

  • Language: You will be expected to use best practices in language and presentation. Here is the SIGACCESS guide on this.
  • The things we have emphasized in this class, namely a disability studies perspective and physical building, should be featured in your project as much as possible.
    • With respect to disability studies, you should think critically about whether and how your project empowers and gives agency to people with disabilities, as well as the extent to which it engages with the larger structural issues around the problem you’re trying to solve.
    • With respect to physical computing, a physical build is not required, but if you go in a different direction you should have a rationale and get approval from the instructor.
  • If you don’t have personal experience justifying the choice of problem, it is important to find studies involving people with disabilities that help justify your proposed work. It is not feasible to do a full iterative design cycle in this project (and not necessarily an ethical use of the time of people with disabilities), but it is equally important not to come in with a ‘hero complex’ and simply believe you know what people need.
  • Your project can include designing and piloting a study, but only if you already have significant experience in this domain, since we haven’t really taught that aspect of accessibility in this class. Better to spend time on skills you learned here! In addition, given the number of weeks available, be careful not to overcommit (e.g., creating a significant novel device and running a lengthy study!).

3D printing on the Ultimaker

Cura is the software you should use. It has built-in slicing, runs on Macs and Windows, and has pre-configured options for all Ultimaker models in the add-a-printer dialogue (instructions for adding a printer).

You will need to first export your model as an STL from OpenSCAD: render it (not just preview), then choose 3D print (the menu option just under Render). You may need to debug your model. The result will be an STL file.
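
If you want something quick to try, here is a minimal OpenSCAD sketch in the spirit of the thin wall box mentioned below (the dimensions are arbitrary placeholders, not part of the calibration set); it renders cleanly and exports to a small, fast-printing STL:

    // thin_wall_box.scad: a quick single-walled open box for test prints
    wall = 0.8;                              // wall thickness in mm
    difference() {
        cube([20, 20, 10]);                  // outer shell: 20 x 20 x 10 mm
        translate([wall, wall, wall])
            cube([20 - 2*wall, 20 - 2*wall, 10]);  // hollow it out; top stays open
    }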

When you load an STL file into Cura, you then prepare your print. There are MANY options to consider, which are documented in detail on the ‘Mastering Cura’ webpage. Keep an eye on the predicted print time.

You saw in class how to start a print. First, save to GCODE from Cura. Then bring it to the Ultimaker. The Ultimaker resources I am linking to are part of a series (look for the arrows at the bottom right and left of each page) that walks you through everything you need to make that first print. I’d recommend trying this out with something really small from the essential calibration set in our drive, such as the thin wall box. It should be something that prints in 20 minutes or less. You can also experiment with settings such as rafts and brims in that small format.

Please see the Slides about Printer Operation (accessible to people in the class only) for more detail.

Vivian G Motti (Visitor)

Vivian Genaro Motti, Assistant Professor, Information Sciences and Technology. Photo by: Ron Aira/Creative Services/George Mason University

I am an Assistant Professor of Human-Computer Interaction at George Mason University, where I lead the Human-Centric Design Lab. In Fall 2019, I am a visiting scholar at the University of Washington’s Paul G. Allen School of Computer Science and Engineering. My research interests involve the design and evaluation of smartwatch applications to assist young adults with neurodiverse conditions. More specifically, I focus on how wearable applications can assist neurodiverse individuals with self-regulation, executive functions, and activities of daily living.

I am also interested in usable privacy for smart home devices, wearables, accessibility, and mHealth.

For additional information, please visit my website: www.vivianmotti.org

Kelly Avery Mack

Avery is a PhD student in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. They are advised by Prof. Jennifer Mankoff. They completed their bachelor’s in Computer Science at the University of Illinois at Urbana-Champaign in 2019, where Prof. Aditya Parameswaran and Prof. Karrie Karahalios advised them. They are an NSF Fellow and an ARCS Scholar.

Their research focuses on applying computer science to create or improve technologies that serve people with disabilities. Their current work focuses on 1) representation of people with disabilities in digital technologies like avatars and generative AI tools, and 2) how to support people with fluctuating access needs like neurodiverse people and people with chronic or mental health conditions. 

Visit Avery’s homepage at https://kmack3.github.io.

Detecting Loneliness

Feelings of loneliness are associated with poor physical and mental health. Detection of loneliness through passive sensing on personal devices can lead to the development of interventions aimed at decreasing rates of loneliness.

Doryab, Afsaneh, et al. “Identifying Behavioral Phenotypes of Loneliness and Social Isolation with Passive Sensing: Statistical Analysis, Data Mining and Machine Learning of Smartphone and Fitbit Data.” JMIR mHealth and uHealth 7.7 (2019): e13209.

Objective: The aim of this study was to explore the potential of using passive sensing to infer levels of loneliness and to identify the corresponding behavioral patterns.

Methods: Data were collected from smartphones and Fitbits (Flex 2) of 160 college students over a semester. The participants completed the University of California, Los Angeles (UCLA) loneliness questionnaire at the beginning and end of the semester. For classification purposes, the scores were categorized into high (questionnaire score>40) and low (≤40) levels of loneliness. Daily features were extracted from both devices to capture activity and mobility, communication and phone usage, and sleep behaviors. The features were then averaged to generate semester-level features. We used 3 analytic methods: (1) statistical analysis to provide an overview of loneliness in college students, (2) data mining using the Apriori algorithm to extract behavior patterns associated with loneliness, and (3) machine learning classification to infer the level of loneliness and the change in levels of loneliness using an ensemble of gradient boosting and logistic regression algorithms with feature selection in a leave-one-student-out cross-validation manner.

Results: The average loneliness score from the presurveys and postsurveys was above 43 (presurvey SD 9.4 and postsurvey SD 10.4), and the majority of participants fell into the high loneliness category (scores above 40) with 63.8% (102/160) in the presurvey and 58.8% (94/160) in the postsurvey. Scores greater than 1 standard deviation above the mean were observed in 12.5% (20/160) of the participants in both pre- and postsurvey scores. The majority of scores, however, fell between 1 standard deviation below and above the mean (pre=66.9% [107/160] and post=73.1% [117/160]).

Our machine learning pipeline achieved an accuracy of 80.2% in detecting the binary level of loneliness and an 88.4% accuracy in detecting change in the loneliness level. The mining of associations between classifier-selected behavioral features and loneliness indicated that compared with students with low loneliness, students with high levels of loneliness were spending less time outside of campus during evening hours on weekends and spending less time in places for social events in the evening on weekdays (support=17% and confidence=92%). The analysis also indicated that more activity and less sedentary behavior, especially in the evening, was associated with a decrease in levels of loneliness from the beginning of the semester to the end of it (support=31% and confidence=92%).

Conclusions: Passive sensing has the potential for detecting loneliness in college students and identifying the associated behavioral patterns. These findings highlight intervention opportunities through mobile technology to reduce the impact of loneliness on individuals’ health and well-being.

News: Smartphones and Fitbits can spot loneliness in its tracks, Science 101

Use 3D printing to make something Accessible (Due 10/16)

The goal of this assignment is for you to develop basic familiarity with OpenSCAD. Your goal is to create a model of something that makes an object or task more accessible for you or someone else. To keep this problem within reason for a first assignment, you should focus on things that are fairly simple to model. You should work in pairs on this assignment.

Examples would be a tactile label for something (such as a luggage tag), a guide (to make moving something along a path easier), or a lever (to make rotating something easier).

  • Your solution should be correctly sized (i.e. try to measure the thing you are modifying to make sure that your printed object is appropriately sized).
  • You should use a simple method to attach things, such as a zip tie (which simply requires small holes) or glue; see the sketch after this list.
  • Your object should be small (printable in 20 minutes to 2 hours).
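
As one concrete (hypothetical) example of the kind of model this assignment asks for, here is a minimal OpenSCAD sketch of a tactile luggage tag; the dimensions and label text are placeholders you would replace with measurements of the real object:

    // tactile_tag.scad: a tactile luggage tag with raised lettering and a zip-tie hole
    // Measure the real handle or strap first and adjust these placeholder dimensions.
    difference() {
        union() {
            cube([60, 25, 2]);                        // tag body: 60 x 25 x 2 mm
            translate([14, 8, 2])
                linear_extrude(height = 1)            // raised, touch-readable lettering
                    text("AB", size = 10);
        }
        translate([6, 12.5, -1])                      // -1 so the hole cuts all the way through
            cylinder(h = 4, d = 4, $fn = 32);         // small hole for a zip-tie attachment
    }

Printed flat, something this size should land comfortably within the 20-minute-to-2-hour window.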

You should create a Thingiverse “thing” which represents your object, with a picture of your final object in use, your OpenSCAD file, and a picture of your model, along with a brief explanation of what problem it solves, how to correctly size it, and how it attaches to or interacts with the real world. If you remixed something else on Thingiverse, be sure to correctly attribute it (by creating a remix).

You should submit the link to your Thingiverse “thing” on Canvas.

You should also print it out to demo in class. Here is a page with information about using the Ultimaker printers. This slide deck about 3D printing also has lots of information.

The grading rubric for this assignment is as follows. When points are 1 or 0, this is pass/fail (no nuance). When points are 0-3, use the following scale: 0 – Not done; 1 – Short, shallow solution; 2 – Good solution; 3 – Outstanding solution.

Points  | Description                                                     | Comments (by grader)
0-3     | Create a 3D model that solves a problem                         |
1 or 0  | Learn how to correctly size a model                             |
1 or 0  | Apply an appropriate attachment method                          |
1 or 0  | Learn the pipeline: create a 3D printed object from your model  |
0-3     | Describe how a model should be used                             |

Setting up your BlueFruit

Setting up your Bluefruit is fairly straightforward, but there are a couple of things you will need to do. They are (almost) all documented on the Adafruit website’s Bluefruit page. Some things you will need to do:

  • Install the Arduino software
  • Open Preferences and put ‘https://adafruit.github.io/arduino-board-index/package_adafruit_index.json’ in the Additional Boards Manager URLs field
  • Open Tools>Board…>Boards Manager
  • Click on Adafruit nRF52 and click ‘Install’
  • Quit and re-open the Arduino IDE
  • Check if you have succeeded. You should be able to select the Bluefruit board from the Boards menu, select the correct port from Tools>Port, and upload a sketch (a minimal test sketch appears below)!
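
If you want something to upload for that first test, a minimal blink sketch like this one (not Bluetooth-specific; just a sanity check that the toolchain works) will flash the on-board LED:

    // Minimal test sketch: blink the on-board LED once per second.
    void setup() {
      pinMode(LED_BUILTIN, OUTPUT);    // LED_BUILTIN is defined by the board package
    }

    void loop() {
      digitalWrite(LED_BUILTIN, HIGH);
      delay(500);                      // on for half a second
      digitalWrite(LED_BUILTIN, LOW);
      delay(500);                      // off for half a second
    }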

OS-specific Install Instructions

  • If you are on a Mac, you will additionally need to install the USB to UART bridge drivers provided by Silabs. Be sure (within 30 minutes of install) to approve it in the Security and Privacy settings for your Mac (you’ll see a button for this below “Allow apps downloaded from…”).
  • If you are on Windows, you may need to install a driver (I did not have to on the Windows machine in our classroom).
  • Check if you have succeeded. You should see a USB port in your Arduino Ports menu

Additional Software

  • You will need the Bluefruit LE Connect app on your phone; here is the Adafruit page describing how to install it
  • You’ll want the Bluefruit libraries and sample code. Go to Tools>Manage Libraries and search for bluefruit. Install the Adafruit BluefruitLE nRF51 suite. A minimal example sketch appears below.
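
As a taste of where this ends up, here is a minimal sketch, assuming an nRF52-based Bluefruit board (whose bluefruit.h BLE library ships with the board package installed above, rather than with the nRF51 library). It advertises a BLE UART service and echoes back anything you type in the Bluefruit LE Connect app; the device name is a placeholder:

    // ble_echo.ino: advertise a BLE UART service and echo received bytes.
    // Assumes an nRF52-based Bluefruit board; bluefruit.h comes with the
    // Adafruit nRF52 board package installed earlier.
    #include <bluefruit.h>

    BLEUart bleuart;  // over-the-air UART service

    void setup() {
      Bluefruit.begin();
      Bluefruit.setName("ClassBluefruit");   // placeholder name; pick your own
      bleuart.begin();

      // Advertise the UART service so Bluefruit LE Connect can find us
      Bluefruit.Advertising.addFlags(BLE_GAP_ADV_FLAGS_LE_ONLY_GENERAL_DISC_MODE);
      Bluefruit.Advertising.addTxPower();
      Bluefruit.Advertising.addService(bleuart);
      Bluefruit.ScanResponse.addName();
      Bluefruit.Advertising.start(0);        // 0 = advertise forever
    }

    void loop() {
      // Echo anything received over BLE back to the sender
      while (bleuart.available()) {
        uint8_t c = (uint8_t)bleuart.read();
        bleuart.write(c);
      }
    }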

Sketches used in class

You can download the sketches used in our class from our class Google Drive, in the arduino folder.

The Limits of Expert Text Entry Speed

Improving mobile keyboard typing speed increases in value as more tasks move to a mobile setting. Autocorrect is a powerful way to reduce the time it takes to manually fix typing errors, which results in typing speed increase. However, recent user studies of autocorrect uncovered an unexplored side-effect: participants’ aversion to typing errors despite autocorrect. We present the first computational model of typing on keyboards with autocorrect, which enables precise study of expert typists’ aversion to typing errors on such keyboards. Unlike empirical typing studies that last days, our model evaluates the effects of typists’ aversion to typing errors for any autocorrect accuracy in seconds. We show that typists’ aversion to typing errors adds a self-imposed limit on upper bound typing speeds, which decreases the value of highly accurate autocorrect. Our findings motivate future designs of keyboards with autocorrect that reduce typists’ aversion to typing errors to increase typing speeds.

The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect. Nikola Banovic, Ticha Sethapakdi, Yasasvi Hari, Anind K. Dey, Jennifer Mankoff. MobileHCI 2019.

A picture of a Samsung phone. The screen says: Block 2. Trial 6 of 10. this camera takes nice photographs. The user has begun typing, with errors: "this camera tankes l". Error correction offers 'tankes', 'tankers', and 'takes', and a soft keyboard is shown before that.

An example mobile device with a soft keyboard: A) text entry area, which in our study contained study progress, the current phrase to transcribe, and an area for transcribed characters, B) automatically suggested words, and C) a miniQWERTY soft keyboard with autocorrect.

A bar plot showing typing speed (WPM, y axis) against accuracy (0 to 1). The bars start at 32 WPM (for 0 accuracy) and go up to approx 32 (for accuracy of 1).
Our model estimated expected mean typing speeds (lines) for different levels of typing error rate aversion (e) compared to mean empirical typing speed with automatic correction and suggestion (bar plot) in WPM across Accuracy. Error bars represent 95% confidence intervals.
4 bar plots showing error rate in uncorrected, corrected, autocorrected, and manually corrected conditions. Error rates for uncorrected are (approximately) 0 to 0.05 as accuracy increases; error rates for corrected are .10 to .005 as accuracy goes from 0 to 1. Error rates are 0 to about .1 for autocorrected as accuracy goes from 0 to 1. Error rates are variable but all below 0.05 for manual correction as accuracy goes from 0 to 1.
Median empirical error rates across Accuracy in session 3 with automated correction and suggestion. Error bars represent minimum and maximum error rate values, and dots represent outliers

KnitPick: Manipulating Texture

Knitting creates complex, soft objects with unique and controllable texture properties that can be used to create interactive objects. However, little work addresses the challenges of using knitted textures. We present KnitPick: a pipeline for interpreting pre-existing hand-knitting texture patterns into a directed-graph representation of knittable structures (KnitGraphs), which can be output to machine- and hand-knitting instructions. Using KnitPick, we contribute a measured and photographed data set of 300 knitted textures. Based on findings from this data set, we contribute two algorithms for manipulating KnitGraphs. KnitCarving shapes a graph while respecting a texture, and KnitPatching combines graphs with disparate textures while maintaining a consistent shape. Using these algorithms and textures in our data set, we are able to create three knitting-based interactions: roll, tug, and slide. KnitPick is the first system to bridge the gap between hand- and machine-knitting when creating complex knitted textures.

KnitPick: Programming and Modifying Complex Knitted Textures for Machine and Hand Knitting. Megan Hofmann, Lea Albaugh, Ticha Sethapakdi, Jessica Hodgins, Scott E. Hudson, James McCann, Jennifer Mankoff. UIST 2019. The KnitPick data set can be found here.

A picture of a KnitSpeak file, which is compiled into a KnitGraph (which can be modified using carving and patching) and then compiled to knitout, which can be printed on a knitting machine. Below the graph is a picture of different sorts of lace textures supported by KnitPick.
KnitPick converts KnitSpeak into KnitGraphs which can be carved, patched and output to knitted results
A photograph of the table with our data measurement setup, along with piles of patches that are about to be measured and have recently been measured. One patch is attached to the rods and clips used for stretching.
Data set measurement setup, including camera, scale, and stretching rig
A series of five images, each progressively skinnier than the previous. Each image is a knitted texture with 4 stars on it. They are labeled (a) original swatch (b) 6 columns removed (c) 9 columns removed (d) 12 columns removed (e) 15 columns removed
The above images show a progression from the original Star texture to the same texture with 15 columns removed by texture carving. These photographs were shown to crowd-workers who rated their similarity. Even with a whole repetition width removed from the Stars, the pattern remains a recognizable star pattern.

Passively-sensing Discrimination

See the UW News article featuring this study!

A deeper understanding of how discrimination impacts psychological health and well-being of students would allow us to better protect individuals at risk and support those who encounter discrimination. While the link between discrimination and diminished psychological and physical well-being is well established, existing research largely focuses on chronic discrimination and long-term outcomes. A better understanding of the short-term behavioral correlates of discrimination events could help us to concretely quantify the experience, which in turn could support policy and intervention design. In this paper we specifically examine, for the first time, what behaviors change and in what ways in relation to discrimination. We use actively-reported and passively-measured markers of health and well-being in a sample of 209 first-year college students over the course of two academic quarters. We examine changes in indicators of psychological state in relation to reports of unfair treatment in terms of five categories of behaviors: physical activity, phone usage, social interaction, mobility, and sleep. We find that students who encounter unfair treatment become more physically active, interact more with their phone in the morning, make more calls in the evening, and spend less time in bed on the day of the event. Some of these patterns continue the next day.

Passively-sensed Behavioral Correlates of Discrimination Events in College Students. Yasaman S. Sefidgar, Woosuk Seo, Kevin S. Kuehn, Tim Althoff, Anne Browning, Eve Ann Riskin, Paula S. Nurius, Anind K Dey, Jennifer Mankoff. CSCW 2019.

A bar plot sorted by number of reports, with about 100 reports of unfair treatment based on national origin, 90 based on intelligence, 70 based on gender, 60 based on appearance, 50 on age, 45 on sexual orientation, 35 on major, 30 on weight, 30 on height, 20 on income, 10 on disability, 10 on religion, and 10 on learning
Breakdown of 448 reports of unfair treatment by type. National, Orientation, and Learning refer to ancestry or national origin, sexual orientation, and learning disability respectively. See Table 3 for details of all categories. Participants were able to report multiple incidents of unfair treatment, possibly of different types, in each report. As described in the paper, we do not have data on unfair treatment based on race.
A heatplot showing sensor data collected by day in 5 categories: Activity, screen, locations, fitbit, and calls.
A heatplot showing compliance with sensor data collection. Sensor data availability for each day of the study is shown in terms of the number of participants whose data is available on a given day. Weeks of the study are marked on the horizontal axis while different sensors appear on the vertical axis. Important calendar dates (e.g., start / end of the quarter and exam periods) are highlighted as are the weeks of daily surveys. The brighter the cells for a sensor the larger the number of people contributing data for that sensor. Event-based sensors (e.g., calls) are not as bright as sensors continuously sampled (e.g., location) as expected. There was a technical issue in the data collection application in the middle of study, visible as a dark vertical line around the beginning of April.
A diagram showing compliance in surveys, organized by week of study. One line shows compliance with the large surveys given at pre, mid, and post, which drops from 99% to 94% to 84%. The other line shows average weekly compliance with EMAs, which goes up in the second week to 93% but then drops slowly (with some variability) to 89%.
Timeline and completion rate of pre, mid, and post questionnaires as well as EMA surveys. The Y axis shows the completion rates and is narrowed to the range 50-100%. The completion rates of the pre, mid, and post questionnaires are percentages of the original pool of 209 participants, whereas EMA completion rates are based on the 176 participants who completed the study. EMA completion rates are computed as the average completion rate of the surveys administered in a certain week of the study. School-related events (i.e., start and end of quarters as well as exam periods) are marked. Dark blue bars (Daily Survey) show the weeks when participants answered surveys every day, four times a day.
Barplot showing significance of morning screen use, calls, minutes asleep, time in bed, range of activities, number of steps, anxiety, depression, and frustration on the day before, of, and after unfair treatment. All but minutes asleep are significant at p=.05 or below on the day of discrimination, but this drops off after.
Patterns of feature significance from the day before to two days after the discrimination event. The shortest bars represent the highest significance values (e.g., depressed and frustrated on day 0; depressed on day 1; morning screen use on day 2). There are no significant differences the day before. Most short-term relationships exist on the day of the event, and a few appear on the next day (day 1). On the third day, one significant difference, repeated from the first day, is observed.