Invisible in the Algorithm: AI’s Ongoing Struggle with Disability

Despite efforts to make AI more reflective of our diverse world, it continues to mirror stereotypes that persist in societies across the globe.
AN analysis of generative artificial intelligence (AI) has revealed algorithmic bias against people with disabilities, reproducing common stereotypes in everyday scenarios, which experts say threatens inclusion.
Findings from the study by The NewsHawks, which analysed AI-generated images involving people with disabilities, show that generative AI still lags behind in promoting inclusion, often emphasising disability over the activities the individuals are performing.
In the analysis, various prompts were fed into the AI engines Grok, Meta AI, DeepSeek and Stability AI’s Stable Diffusion to generate pictures of people with disabilities in everyday, professional and leisure scenarios.
The analysis was aimed at checking whether AI places individuals with disabilities in roles and contexts that reflect diversity. It also examined whether people with disabilities are portrayed in everyday scenarios in a way that normalises their participation, or in a way that instead emphasises their disability.
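For readers who want to see how such an audit might be reproduced, the sketch below shows one hypothetical way to script it against Stability AI’s openly available Stable Diffusion weights, using the Hugging Face diffusers library. The model ID, prompts and file names are illustrative assumptions, not The NewsHawks’ actual procedure, and the generated images would still need human review of how disability is framed.

```python
# Hypothetical audit sketch: generate paired images (same scenario, with and
# without a disability cue) for side-by-side human review. The model ID,
# prompts and output names are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "a person in a wheelchair playing basketball with friends",
    "a person playing basketball with friends",
    "a CEO in a wheelchair chairing a board meeting",
    "a CEO chairing a board meeting",
]

for i, prompt in enumerate(prompts):
    for seed in range(4):  # several samples per prompt, to spot recurring patterns
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"audit_{i}_seed{seed}.png")  # reviewed manually afterwards
```

Fixed seeds make the runs repeatable, and pairing each disability prompt with an otherwise identical one makes differences in framing easier to attribute to the disability cue alone.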
For instance, after prompting Grok AI to generate pictures of a person in a wheelchair playing basketball with friends, the results focused more on the disability than on a person enjoying time with friends.
Search results from Grok and Stable Diffusion after prompting for a person in a wheelchair playing basketball with friends.
Results showed a person in a wheelchair surrounded by amazed friends as he juggled the ball while others took pictures, portraying him in a way that does not normalise his participation.
In one of the pictures, for example, the subject was being handed the ball by one friend while another helped him with the wheelchair, again in a way that did not normalise his participation.
Another prompt, instructing Grok to generate images of a chief executive officer (CEO) with a disability leading a board meeting, also produced results that focused on the disability rather than on the person’s active participation.
Search results after prompting Grok and Meta AI for pictures of a CEO chairing a meeting. In pictures where the CEO has a disability, prominence is given to the individual in the wheelchair rather than to the event itself.
For example, in the four pictures generated, the CEO in a wheelchair sits with his back to the meeting, with more detail given to him sitting in the wheelchair.
Board members are pictured marvelling at the CEO, giving the impression that they are amazed by his disability, which lends prominence to the disability and the wheelchair rather than to the subject.
In another picture, the CEO is chairing a meeting with a section of board members in wheelchairs, while another meeting with able-bodied people is taking place right behind them. In another, there is a close-up image of the executive in an empty boardroom.
The results were different, however, when prompting for a CEO without a disability, who was shown presiding over a serious meeting, with the focus on the activity alone.
In some instances, however, Grok generated inclusive pictures of a person with a disability in a library, picturing them next to low shelves that would enable them to reach books easily.
While pictures of people with disabilities in a shop showed them moving freely, the shelves were higher and out of reach. The subjects’ facial expressions were also sad compared with those of their able-bodied counterparts.
AI has also been slow to adapt to other aspects that are crucial for people with disabilities.
China’s DeepSeek AI, introduced earlier this month, does not have any text-to-image functions, which makes it difficult for people with disabilities to use.
While language is one of the major aspects of AI, some models have been slow to include people with disabilities.
For instance, upon its introduction, DeepSeek could not translate into either Braille or sign language, as Grok, Meta AI and Google’s Gemini could.
According to research by Infosys BPM, a business process management company, while AI is expected to augment human capacities, there is fear that increasing dependence on machine-driven networks will reduce our capacity for independent thinking, which will in turn erode social skills and the ability to make decisions without an automated system.
More people are already relying on AI, the majority of them young, according to the data platform Statista.
Consultation
Samantha Sibanda, director of Signs of Hope, an organisation representing people with disabilities in Zimbabwe, says the bias stems from AI developers’ failure to consult civil society and people with disabilities themselves.
“One step that AI developers should take is huge consultation. We have been talking about the ethics around AI, and one of the gaps I have seen is around not only intellectual property, but also how users are engaging with the tools. So, at the end of the day, persons with disabilities should also be considered in that,” Sibanda says.
“Then, yes, user feedback is very, very critical. Unfortunately, I'm not sure how that is being done by the other developers, but I think it's a very critical thing to have. I know that I have only used examples from generative AI, but there are also other tools, AI that is embedded, for example, in your Facebook.”
Sibanda says the lack of consultation has left AI platforms with limited data around disability issues, resulting in less inclusive models.
“So, the challenge that we have been having with AI is generally the bias, especially in the responses. I know that issues of bias have also been raised by black people and others,” she said.
“When we look at generative AI, or AI in general, it really works by the prompts that you are giving it. So the challenge now becomes that if you don't have good language development, you can't successfully use the AI. We have got challenges of lack of literacy among people with disabilities.
“We have seen countless research showing that people with disabilities are out of school. I do respect other tools where you can now use voice, but still, it's English. That is the challenge. We now need to come up with local solutions, with prompts that we can give in local languages.”
Sibanda says some of the models return less data around disability when prompted, which has been widening the data gap.
Data limitations
According to AI and tech analyst Leonard Sengere, some of the biases stem from narrow datasets.
“AI has made some progress, but significant gaps remain. Many AI systems still struggle with nuanced accessibility needs, especially for assistive communication. AI biases often stem from data limitations—models trained on datasets that lack diverse linguistic and accessibility representation will naturally reflect those gaps.
“Additionally, accessibility features require dedicated development efforts, and if they are not prioritised during AI training, they tend to be underdeveloped. Regulatory standards and incentives could also push companies to prioritise inclusivity in AI development.”
Sengere says future AI research should focus on improving AI training datasets, which can be done through collaboration with disability advocacy groups to ensure AI meets real-world accessibility needs.
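Sengere’s point about data limitations can be probed in a crude way by measuring how often disability-related terms appear in the image captions a model is trained on. The sketch below is a hypothetical illustration only: the caption file and term list are assumptions, not any named model’s actual training data.

```python
# Hypothetical representation check: count captions that mention
# disability-related terms. The file name and term list are assumptions.
import re
from collections import Counter

TERMS = ["wheelchair", "blind", "deaf", "sign language",
         "braille", "prosthetic", "disability", "disabled"]

counts = Counter()
total = 0
with open("captions.txt", encoding="utf-8") as f:  # assumed: one caption per line
    for line in f:
        total += 1
        text = line.lower()
        for term in TERMS:
            if re.search(r"\b" + re.escape(term) + r"\b", text):
                counts[term] += 1

print(f"{total} captions scanned")
for term, n in counts.most_common():
    # a very low share hints at underrepresentation in the training data
    print(f"{term}: {n} ({100 * n / total:.3f}%)")
```

A keyword count is only a first pass; it says nothing about how people are depicted, which is why audits like the one described earlier still end in human review.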
Admire Mare, an associate professor in the Department of Communication at the University of Johannesburg, says bias also arises from a cultural gap between the developers and the recipients of the technology.
“Minorities tend not to be represented. So, there is need to rethink how these AI models are being done so that they become inclusive and more deliberate in terms of their diversity,” he says.
“Issues like language, worldviews, beliefs and ableism tend to reproduce the context of various disabilities, but also of the abilities that have somehow been given a better space in terms of how we interact with each other on a daily basis. Most of the AI models are being developed by people with a certain kind of lifestyle, a certain kind of language.
Being part of the AI ecosystem
Mare says developers have to think about how people with disabilities can become part of the AI ecosystem, in order to foster inclusion.
“So, these people often do not see the things that are outside of their own, let us say, blind spots. They are pretty much not disabled themselves; it is mostly able-bodied people who produce this,” Mare said.
“They encode a lot of ableism, they encode a lot of masculine cultures, which then make it very difficult for other people to coexist with that. So, looking forward, I would say there is a lot of work that needs to be done to decolonise AI models, but also to bring in other ways of seeing.
“Which means we also need to think about how people with disabilities can become part of the AI ecosystem in terms of their everyday life. So we have to bring on board people living with disabilities to co-create, co-design, but also to co-produce AI models that speak to their lived realities and that are context-specific in terms of what they deal with on a day-to-day basis.”