Anukriti Kumar, Kate Glazko, Yueran Sun, Mark Harniss, Lucy Lu Wang, and Jennifer Mankoff: "Beyond Readability Metrics: Plain Language Priorities in Disability Advocacy Organizations." FAccT 2026.
NOTE: A plain language summary comes first. The original abstract follows below.
Plain language is important for people who have trouble understanding complex writing. For example, people with disabilities may use plain language. With plain language, people can still get important information, such as information about health or policy. Because of this, many groups that support disabled people share information in plain language.
This is hard work, and we want to make it easier. But first, we needed to find out what these groups do. We wanted to know how experts make plain language.
- We talked to experts in three groups that support disabled people.
- We collected examples of plain language that people made.
- We also tried using AI to make plain language, and asked experts what they thought.
- Experts often use scores, such as reading difficulty, to check plain language. We studied how all texts did on many different scores.
To our surprise, no score said that every plain language example experts shared with us was easy to read. AI was also not good at plain language, but it could help an expert get started. We think people need better tools for checking plain language, and better ways to support experts and communities in meeting their own needs.
Original abstract:
Plain language materials enable people with intellectual and developmental disabilities (IDD) to access critical information about policy, healthcare, and civic participation. Disability advocacy organizations routinely produce these materials, yet we know little about how practitioners approach this work, what standards guide their judgments, or whether current evaluation metrics align with their priorities. Through focus groups and interviews with 11 practitioners across three U.S. disability advocacy organizations, individual walkthroughs where practitioners evaluated AI-simplified documents, and systematic analysis of 33 pairs of original and simplified documents from four organizations using 28 readability metrics, we document plain language production as specialized expertise requiring policy knowledge, community accountability, and multi-stage validation processes. Practitioners who use AI tools report treating outputs as provisional starting points requiring complete human verification, rather than treating the tools as autonomous producers of publication-ready content. Organization-produced documents averaged a Flesch-Kincaid Grade Level of 10.2, exceeding all published guideline targets ranging from 3rd to 8th grade, yet practitioners described these materials as successfully meeting community needs. This suggests that published text simplification guidelines may not capture dimensions practitioners and communities consider essential for high-stakes accessibility work. Based on our findings, we propose design principles for text simplification tools that center verification and transparency rather than automation, and call for evaluation frameworks that complement automated metrics with practitioner expertise and community accountability mechanisms.
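For readers unfamiliar with the Flesch-Kincaid Grade Level cited above, here is a minimal sketch of how that metric is computed. The FKGL formula itself is standard (0.39 * words-per-sentence + 11.8 * syllables-per-word - 15.59); the syllable counter below is a rough vowel-group heuristic and an assumption of this sketch, since the paper does not say which of the many FKGL implementations it used for its 28-metric analysis.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowel letters; count at least 1.
    This heuristic over- or under-counts some words (e.g. silent e's)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fkgl(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    Higher values correspond to harder text (roughly U.S. school grades)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_sent = max(1, len(sentences))
    n_words = max(1, len(words))
    return 0.39 * (n_words / n_sent) + 11.8 * (syllables / n_words) - 15.59

if __name__ == "__main__":
    sample = ("Plain language helps people get important information. "
              "Short sentences are easier to read.")
    print(f"FKGL: {fkgl(sample):.1f}")
```

Even on deliberately simple text, a crude syllable heuristic can inflate the score, which illustrates the paper's broader point: a single readability number is a coarse proxy for whether a document actually meets community needs.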
