Ashley Shew is an associate professor of science, technology, and society at Virginia Tech, and specializes in disability studies and technology ethics.
Earlier this year, I was interviewed by a reporter about large language models (LLMs, like ChatGPT) and disability. I talked about the many concerns the disability community might have about LLMs not providing very good information, and about how biases against disabled people will be repeated and amplified in what LLMs spit out. As we know from critical data researchers like Damien P. Williams, Joy Buolamwini, and Meredith Broussard (among others), LLMs draw from existing data to synthesize responses. Existing information about disabled people is ableist, written by non-disabled people, sometimes in an authoritative scientific register, and reflects all kinds of misguided and malignant beliefs about disability (like the philosophy of eugenics broadly, or debunked theories about particular disabilities, theories that have done real harm). Existing information both leaves disabled people out when we don’t fit categories well and distorts what our experiences are like. Algorithms go: garbage in, garbage out.
Even if we got less garbagy data when it pertains to disability, there’s so much out there that’s already being sucked up in ways that amplify and reflect disability bias, and ableism ensures that even some of our new data is hot trash. I think here of the way autistic critics like Ann Memmott and Rua Williams highlight problematic current therapies and technologies for autistic people, and the work of my dissertating friend Jim Tillett and researcher Monique Botha, whose literature reviews show the ways in which researchers express misconceptions about particular disabilities, echo biases against disabled people, and presume a lack on the part of autistic people. I am heartened when I see disabled people on social media roast the breathless university PR pieces on the latest-greatest-ableist innovations aimed at disabled people (often developed without talking to us).
At the end of the interview—where I also talked about some cool uses of AI that I’ve heard about from disabled friends—the reporter asked what my greatest worry is related to the new technology. I replied that I’m most worried about the high environmental cost of LLMs. We can’t afford to waste the water that LLMs consume; we are being encouraged to use an app (and become data for the companies that make it) that requires enormous amounts of water and stresses local power grids. The environmental impact of LLMs is startling and should make us reconsider some applications and institute environmental policies in their place (see these articles from 2022, 2023, and 2024, for examples); this is a widely known problem as well as an issue of corporate and public responsibility that we shouldn’t let power brokers cripwash away by appealing to the disability-friendly uses some companies might imagine. The reporter, a fellow disabled person, chuckled. They said that they shouldn’t have assumed my greatest worry about a technology would be related to disability.
They understand: disabled people care about a lot of things, and about each other. Many of us are acutely aware of our interdependence with others and with the environment, and of how we are more vulnerable in emergencies; emergency planning doesn’t plan for our survival. Climate change and resource scarcity will, like so many things, hurt disabled people first and on a greater scale than they will hurt nondisabled people. Disabled people were vocal about our vulnerability during the devastating wildfires in California and called for more attention to this issue. Disabled people are active in organizing and planning for community protection and “the right to be rescued.” But this issue is not new in disability advocacy: it stretches from before Hurricane Katrina in 2005, when being disabled carried devastating consequences, to continuing advocacy over Covid-19 protections and long-Covid recognition and planning.
The Principles of Disability Justice, developed by Patty Berne and other queer and trans Black, Latinx, and Indigenous disabled people in the arts collective Sins Invalid, include a commitment to cross-movement organizing, interdependence, and cross-disability solidarity. Disability justice recognizes that a lot of our current infrastructure for disability falls short of supporting the vast majority of disabled people: disability rights didn’t take us far enough. The principles also include a commitment to leadership of the most impacted.
Since publishing my book Against Technoableism—which doesn’t specifically address artificial intelligence, but has resources for thinking about many new technologies—I’ve been keeping tabs on a series of stories that haunt my sometimes-haunted body. I love stories. In fact, my book focuses on the stories disabled people tell about technology that rarely get represented or told outside of disabled circles.
The stories that I’ve been drawing on since my book came out should serve as touchstones in our conversations about artificial intelligence, corporate and commercial technologies, and cyborgs made vulnerable by our existing infrastructures.
Touchstones for Cyborg Concerns
Aboulifa, Ariana, and Henry Claypool. “The Vast Surveillance Network That Traps Thousands of Disabled Medicaid Recipients.” Slate, July 26, 2023.
In this article, Aboulifa and Claypool lay out an important issue that has not gotten the press it should—and I knew about it mostly from the activism of my colleague Dom Evans. Electronic Visit Verification (EVV) is a system whereby Medicaid recipients of attendant care services are required to have a cell phone that reports to their state their location and the times of day when their attendant workers log in and log out. Aboulifa and Claypool explain:
While EVV was initially required as an alleged attempt to prevent public benefits fraud, whatever preventative benefit it may provide (most of which seems, at this point, to still be largely speculative) is largely outweighed by its detriments. On the financial front, according to the Guardian, the state of Arkansas secured only three convictions for personal care-services fraud in 2020, recovering a total of $1,930; as of mid-2021, EVV had cost the state $5.7 million to implement.
Because EVV collects data about people’s whereabouts and activities, some users feel especially vulnerable and are changing the patterns of their lives. Aboulifa and Claypool worry about cyberattacks and data breaches that could put disabled people who need to use EVV at additional risk. Huge worries also exist about how the extra data that states (and their contracted third-party vendors) collect through EVV will be used, including to cut people’s assistance hours or to justify institutionalization. As with issues that are specifically about AI, it’s easy to see how lower expectations about privacy for disabled recipients of services lead to power dynamics that can cause harm.
Strickland, Eliza, and Mark Harris. “Their Bionic Eyes Are Now Obsolete and Unsupported.” IEEE Spectrum, February 15, 2022.
Strickland and Harris detail the case of Second Sight’s Argus II Retinal Implant, an implant that was abruptly discontinued in 2019, without any warning to implantees, and whose manufacturer nearly went bankrupt in 2021. These journalists write:
More than 350 other blind people around the world with Second Sight’s implants in their eyes find themselves in a world in which the technology that transformed their lives is just another obsolete gadget. One technical hiccup, one broken wire, and they lose their artificial vision, possibly forever. To add injury to insult: A defunct Argus system in the eye could cause medical complications or interfere with procedures such as MRI scans, and it could be painful or expensive to remove.
Strickland and Harris interview many users and discuss the hard decisions implantees have to navigate, and the cost of such navigation, after the obsolescence of their implants. The way market forces work on disability technologies makes cyborgs vulnerable. Even when a new technology is desired, cyborgs can be made subject to additional issues, failures, and hassle for their adoption of that technology—and be left worse off.
Friedner, Michele. “Who Pays the Price When Cochlear Implants Go Obsolete?” Sapiens, March 29, 2023.
Friedner, who uses a cochlear implant, brings us the case of cochlear implant users in India whose implants were paid for by the Indian government. A few years later, the families of the implantees (who were under six years old when they received the implants) were notified that upgrades were compulsory because the company had decided to stop servicing the old models. A medical anthropologist, Friedner proposes the term planned abandonment to describe the situation these lower-income families now face as their children lose the ability to communicate smoothly and participate in the ways they have learned. One father interviewed said his daughter can no longer go to school because she cannot understand directions, and all of her progress in speaking and listening has come to a halt. Friedner asks: “What happens when a sense becomes obsolete because of corporate abandonment?”
Parmar, Arundhati. “Dexcom’s IT Outage Shows Fabulous Device-maker Foundering with Patient Communication.” MedCity News, December 4, 2019.
This story is about the Dexcom outage of Thanksgiving 2019 in the United States. A popular brand of continuous glucose monitor—a patchlike wearable device that sends continuous glucose readings to a cell phone over Bluetooth and raises alarms when blood sugar is at a dangerous level (low, high, or changing too quickly)—had a software glitch that caused an outage in coverage. Users were not notified of the outage until many hours later, which was dangerous for people who thought their monitors were working, putting them at risk of serious harm or death. The company was bombarded by angry users and frightened parents and caretakers who, while they understood that glitches can happen, saw the lack of notification even after the company knew of the problem as negligent.
Hamzelou, Jessica. “A brain implant changed her life. Then it was removed against her will.” MIT Technology Review, May 25, 2023.
This story highlights the plight of Rita Leggett, an Australian woman who received—in 2010, as part of a clinical trial—an experimental brain implant to help with her severe chronic epilepsy. A handheld device connected to the brain implant would tell her when a seizure was coming. Unfortunately, other trial participants did not have the fantastic, life-changing results that Leggett did. In 2013, the company, NeuroVista, which was failing and no longer exists, told participants to have their implants removed. Leggett was the last participant to have hers surgically removed, and she did so very much against her will. This article should provoke us to think about what it means to “participate” in research and what rights and protections should be in place. In this case, the brain implant helped Rita Leggett lead a better life—until it was forcibly removed.
There is also a growing list of articles about anti-disability bias in hiring algorithms, work performance surveillance technologies, benefits determination, educational assessment technologies, anti-cheating proctoring programs, and so much more than I can fully cover in a blog post! The Center for Democracy and Technology provides one excellent roundup of ableist surveillance in education, policing, health care, and the workplace.
Back to the AI Future
Don’t get me wrong: I’m not anti-AI—though I’m not sure we have a good definition of intelligence. There are many algorithms and applications that I think are amazing and good. I love (and use) auto-captioning programs and am glad to see better programs for insulin management and augmentative and alternative communication; and woohoo for computer modeling that speeds up drug discovery. But in the end, we’re being sold boosterism about AI writ large rather than careful consideration of particular applications. We need to think about the tradeoffs that come with new technologies, consider the goals and presumptions and data behind different applications, and pay close attention to the economic, corporate, and governmental systems we are embedded in with these technologies. The Silicon Valley slogan “move fast and break things” isn’t super fun when it’s your body, your livelihood, or your existence that is Hulk-smashed.
The sort of agency we assign to AI and the agency we take away from disabled people hinge on who and what we trust, and on what we take as objective or unbiased. Intelligence is and has always been a eugenic project: we put a higher value on who (or what) is considered intelligent in ways that present a hierarchy of value. I’ve been thoroughly convinced by the work of Damien P. Williams that we really don’t know what we mean by the word intelligence, especially as computer scientists work on artificial general intelligence. What’s more problematic right now is that so many people believe AI will be more objective, when really it is just biased in more ways as we apply biased existing data. Standardizing ways of enacting and projecting biases will make them much harder to escape—and relying on AI, bolstered by so many media claims about it, will make it much harder to dispute decisions disabled people have a real stake in: benefits and insurance denials, access failures, being passed over for jobs, etc. We can do a lot of harm when we think we’re being objective in our decision-making because of the use of AI or an algorithm in the process. This is something writers on racial and gender biases in technology have been raising the alarm about for years.1
This is where technoableism creeps into this story. So often, our culture exceptionalizes disability and moves to praise all technology in the context of disability. Technoableism—and whoops that I haven’t defined this until now—is a powerful belief that technology is the answer to disability, with disability always being cast as a wrong way to be in the world. But disability has always existed, and we’re going to have more disabled people in the future—no matter what Silicon Valley elitists with their fancy AI and biotech fantasies might tell you. You can read a lot more in my book about the idea that we’ll have more (not less) disability in the future, but TL;DR: we need to be listening to disabled stories and disabled expertise (leadership of the most impacted!) as we plan for the future that’s both here and also barreling our way.
1. See the contemporary work of Ruha Benjamin, Meredith Broussard, Mar Hicks, Timnit Gebru, Joy Buolamwini, Rua Williams, Damien P. Williams, as well as historical work from Edwin Black and Harriet Washington, to understand how data collection is non-neutral. Organizations like the Center for Democracy and Technology, the Algorithmic Justice League, AI Now, and Data for Black Lives also have a myriad of resources and readings. ↩︎
MEET THE AUTHOR

Ashley Shew is an associate professor of science, technology, and society at Virginia Tech, and specializes in disability studies and technology ethics. Her books include Animal Constructions and Technological Knowledge (2017), Spaces for the Future (coedited), and Against Technoableism: Rethinking Who Needs Improvement (W. W. Norton). She lives in Blacksburg, Virginia.
Image Credit: EHB Photography