AI + Disability. Coded Futures: Better opportunities or biased outcomes?
Christine Hemphill
39 slides
Mar 06, 2025
About This Presentation
A summary report on attitudes to, and implications of, AI as it relates to disability. Will AI-enabled solutions create greater opportunities or amplify biases in society and datasets? Informed by primary mixed-methods research conducted in the UK and globally by Open Inclusion on behalf of the Institute of People-Centred AI (University of Surrey) and Royal Holloway University. Initially presented at Google London in January 2025.
If you prefer an audio-visual format, you can access the full video recorded at Google ADC London, where we presented this research in January 2025. It has captioned content and audio-described visuals and is available at https://www.youtube.com/watch?v=p_1cv042U_U. There is also a short fireside chat about the research, held at the Zero Project Conference in March 2025, available at https://www.youtube.com/live/oFCgIg78-mI?si=EoIaEgDw2U7DFXsN&t=11879.
If SlideShare's format is not accessible to you in any way, please contact us at [email protected] and we can provide you with the underlying document.
Slide Content
Research overview: Who we asked. How we asked.
What we did: We conducted a consumer survey, an expert roundtable and expert interviews in Oct-Nov 2024.
Consumer perspectives. Survey: 511 people (UK nationally representative); 311 non-disabled, 200 disabled.
AI + disability expert perspectives. Roundtable: 11 industry experts (working in the UK, EU, USA, Canada, Israel and Africa). Interviews: 6 industry experts (working in the UK, EU, Australia, USA and Canada).
With, for and by: People with lived experience of disability were involved throughout this project.
Researchers: Inclusive research design, facilitation, synthesis and delivery.
Participants: Experts in AI + disability, and consumers across the UK (disabled people over-represented at 39%), spanning sight loss, hearing loss, mobility impairment, dexterity impairment, neurodiversity, and chronic and mental health needs.
Focus topics of the consumer attitudes survey: The research sought to understand attitudes to the following AI-related topics:
- Overall awareness and current usage of AI
- Attitudes to current and future uses of AI
- Attitudes to 3 specific use cases of AI: biometrics, voice technology, adapted media content
Demographic questions included: age, gender, region, ethnicity, sexual orientation, religion, disability, use of assistive tech.
Consumer survey findings (UK only): Understanding the attitudes of consumers to AI, from those with and without personal lived experience of disability.
On average, people feel neutral about the current and future use of AI (scale: 0 = extremely negative, 5 = neither negative nor positive, 10 = extremely positive).
Today: What impact do you think AI is having currently, overall, on society? Mean 5.41. Base: all responses (511).
Future: How do you feel about future use of AI overall and its impact on society? Mean 5.17. Base: all responses (511).
Today: Attitudes about the current impact of AI on society. Disabled respondents were more negative than non-disabled respondents: disabled people are less optimistic about the impact of AI on society today. Mean averages (0-10 scale): disabled 5.11, non-disabled 5.60 (difference 0.49). Question: What impact do you think AI is having currently, overall, on society? Base: disabled (200) vs. non-disabled (311) participants.
Future: Attitudes about the future impact of AI on society. Disabled respondents were more negative than non-disabled respondents, and both groups are more pessimistic about the future relative to AI's current impacts. Mean averages (0-10 scale): disabled 4.89, non-disabled 5.35 (difference 0.46). Question: How do you feel about future use of AI overall and its impact on society? Base: disabled (200) vs. non-disabled (311) participants.
Looking at specific use cases of AI, overall, people think that they will be beneficial: biometrics (including facial recognition), voice technology, adapted media content. Metric: mean average benefit score minus mean average concern score; range is -5 to +5. Questions: To what extent are you concerned about the use of <insert use case>? and To what extent do you think the use of <insert use case> is beneficial to society? Base: all participants (511).
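The net score used for these use cases (mean benefit rating minus mean concern rating) can be sketched in a few lines of Python. The ratings below are hypothetical illustrations, not survey data; only the benefit-minus-concern arithmetic and its -5 to +5 range come from the slide.

```python
# Sketch of the slide's net-score metric: mean benefit rating minus
# mean concern rating for one use case. Ratings here are hypothetical
# (0-5 scale assumed), so the resulting net score ranges from -5 to +5.

def net_score(benefit_ratings, concern_ratings):
    """Mean benefit minus mean concern, rounded to 2 decimal places."""
    mean_benefit = sum(benefit_ratings) / len(benefit_ratings)
    mean_concern = sum(concern_ratings) / len(concern_ratings)
    return round(mean_benefit - mean_concern, 2)

# Hypothetical example: 5 respondents rating "voice technology".
benefits = [4, 3, 5, 4, 3]   # "beneficial to society" ratings
concerns = [2, 3, 1, 2, 2]   # "concerned about its use" ratings
print(net_score(benefits, concerns))  # positive => seen as net beneficial
```

A positive score means respondents rate the use case as more beneficial than concerning on average, matching the slide's finding that all three use cases scored as net beneficial.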
Benefits of biometrics: People without a disability were more likely to think biometrics would allow people to complete tasks more easily. "The technology will allow people to carry out tasks faster and more easily": total 47%, disabled 42%, non-disabled 51% (-9 points).
Concerns about biometrics: Disabled people had more concerns than non-disabled people about how biometrics may impact users. "If the technology makes a mistake, it will be difficult to know who is responsible": total 49%, disabled 59%, non-disabled 43% (+16). "The technology will be less effective for some groups": total 24%, disabled 30%, non-disabled 20% (+10).
Usage of biometrics: Attitudes about the increased usage of biometrics (scale: 0 = extremely pessimistic, 5 = neither pessimistic nor optimistic, 10 = extremely optimistic). Disabled respondents were more pessimistic than non-disabled respondents about the impact on society of future, increased use of biometrics. Mean averages: disabled 5.14, non-disabled 5.69 (difference 0.55). Question: How do you feel about increasing the use of biometrics (including facial recognition) in the future and its impact on society? Base: disabled (200) vs. non-disabled (311) participants.
Making biometrics better: Disabled people wanted significantly more user-empowering mitigations to make them feel comfortable with AI for biometrics. What would make you feel more comfortable with biometrics being used? More or improved… Information about how AI is being used: total 38%, disabled 45%, non-disabled 34% (+11). Laws: total 38%, disabled 46%, non-disabled 32% (+14). Industry or cross-sector regulation: total 33%, disabled 40%, non-disabled 29% (+10). Base: all (511).
Benefits of AI-enabled voice technology: AI voice interfaces were seen as beneficial to both disabled and non-disabled people in similar ways.
Concerns about AI-enabled voice technologies: Disabled people had more concerns than participants without a disability across several areas. Concerns about gathering personal information from voice, which could be shared with third parties: total 55%, disabled 61%, non-disabled 50% (+11). Voice technology won't always work: total 33%, disabled 39%, non-disabled 30% (+9). Voice tech will be less effective for some groups: total 26%, disabled 32%, non-disabled 22% (+10).
Making voice AI better: Disabled people wanted risk mitigations to a higher degree to make them feel comfortable with AI-enabled voice technology use. What would make you feel more comfortable with voice technologies being used? More or improved… Security of personal information: total 56%, disabled 63%, non-disabled 51% (+12). Information about how AI is being used: total 47%, disabled 55%, non-disabled 41% (+14). Laws: total 38%, disabled 44%, non-disabled 34% (+10). Base: all (511).
Making voice AI better (2): What would make you feel more comfortable with voice technologies being used? More or improved… Industry or cross-sector regulation: total 34%, disabled 40%, non-disabled 30% (+10). Explanations of how AI decisions are made: total 32%, disabled 41%, non-disabled 27% (+14). Base: all (511).
Benefits of AI-adapted digital media: Greater access to media content and information was the key benefit for both disabled and non-disabled people.
Concerns about adapted media content: Disabled people had more concerns than participants without a disability. It will lead to job cuts: total 37%, disabled 43%, non-disabled 34% (+9). Some people may find it difficult to use the tech: total 32%, disabled 40%, non-disabled 27% (+13).
Making AI-enabled digital media better: Disabled people want better safety and security regarding personal information, information on usage, and laws. What would make you feel more comfortable with AI-adapted digital media being used? More or improved… Security of personal information: total 40%, disabled 48%, non-disabled 34% (+14). Information about how AI is being used: total 38%, disabled 45%, non-disabled 34% (+11). Laws: total 34%, disabled 40%, non-disabled 30% (+10). Base: all (511).
Making AI-enabled digital media better (2): Disabled people also want better safety when using AI-enabled digital media services through discrimination monitoring. Monitoring to check for discrimination: total 26%, disabled 32%, non-disabled 23% (+9). Base: all (511).
Engaging AI + disability experts: Understanding the opportunities, concerns and considerations of a range of experts based in 5 countries on 4 continents, many of whom work globally and all of whom work at the intersection of disability and AI.
The fabulous group involved in the expert roundtable. Our thanks to you all!
- Eyra Abraham (Canada): Disability-informed AI innovator and entrepreneur. Multiple advisory roles; accessibility standards; AI Steering Committee, American Association of People with Disabilities.
- Sara Basson (USA): Accessibility Evangelist, Google. Accessibility and disability inclusion in emerging markets; speech recognition and alternative interface expert.
- David D'Arcangelo (USA): Consultant in disability inclusion. Former Commissioner for Disability, Massachusetts. National Industries for the Blind board member. Inclusive workplace policy expert.
- Jennie Day (Canada): Academic, Robotics and AI Ethical Design Lab, University of Ottawa.
- Pinar Guvenc (USA): Inclusive design leader, SOUR Studios; lecturer, Parsons School of Design; podcaster. Works at the intersection of diversity, inclusive design and innovation, including in AI.
- Michal Halperin Ben Zvi (Israel): Design strategist and UX researcher specialising in ageing. Works with JDC and Digital Israel, including on AI, inclusion and ageing.
- Christopher Harrison (USA/Africa): Chairman, Next Step Foundation. Working on AI for SDG achievement and for AT innovation in African continent contexts. Writer about and enabler of disability-inclusive tech.
- Lisa Liskovoi (Canada): Senior Inclusive Designer and digital accessibility leader, OCAD University Inclusive Design Research Centre.
- Arielle Silverman (USA): Research Director, American Foundation for the Blind. Conducted workplace tech and other research focussed on sight loss, including exploring AI and disability in depth in 2024.
- Jo Stansfield (UK): Inclusive tech entrepreneur; equality in tech and AI expert. Board of Directors, ForHumanity; Trustee, BCS; Visiting Fellow, Cranfield University.
- Gregg Vanderheiden (USA): Global leader in emerging tech, disability and AT. Founder and Emeritus Professor, Trace Centre, University of Maryland (assistive and accessible tech). Ran Future of Interface 2023 with Vint Cerf.
Screenshot from the roundtable session, 30th Oct 2024.
The fabulous group engaged in 1:1 interviews. Our thanks to you all!
- Moaz Hamid (USA): Disability innovation: entrepreneurship funding and support. CES board member, including Accessibility Committee. Ex-Google.
- Marie Johnson (Australia): Global leader in disability-inclusive AI, especially co-creation practices. Author of NADIA AI; designer and developer of AI solutions for and by disabled people.
- Kate Kalcevich (Canada): Head of Accessibility Innovation, Fable. Committee member, Accessible and Equitable AI, Accessibility Standards Canada.
- Lavonne Roberts (USA): Scott-Morgan Foundation, creating AI-enabled personalised communications avatar solutions for people with MND/ALS.
- Susan Scott-Parker (UK): Leads the Disability Ethical AI initiative. Researcher, connector and catalyst for progress in disability-inclusive ethical AI.
- Yonah Welker (Switzerland): Policy advisor (EU) and consultant on AI and disability. Wrote a MOOC course on AI and disability. Neurodiversity expert.
Materials shared by interview participants, Oct/Nov 2024.
“I’m worried we will find ourselves in a situation when social gaps are just increasing… People that can use this technology can benefit and are aware of the dangers… they have strong digital literacy… The rest will find themselves in a position where they are victims of so many issues, when it comes to privacy, to decision making… And I think that people with disabilities could be a target audience for those issues.” Michal Halperin Ben Zvi
Summary of the expert conversations
Concerns: Privacy and security. Society biases: reality amplified. Data biases: the averages problem; incomplete, inaccurate, uncorrelated data. The power of the creator, and who is / isn't creating. Asymmetric implications: left behind or explicit harm.
Opportunities: AI-empowered assistive technologies: reduce barriers; agency/independence; efficiency, cost, access. AI-enabled accessibility for better environments: AI + robotics, alternative interfaces, smart homes/spaces. Advanced personalization of interfaces, content or style.
Inclusion needs: Awareness and education. Inclusive engagement throughout the AI ecosystem. Funding and focus of AI on valuable gaps and opportunities. Inclusive datasets.
Governance: Laws and regulation. Work in the open. Guidelines and guardrails. Outcome monitoring and consequence assessment.
Privacy and security
Trade-offs are higher for disabled people: efficiency vs privacy. Implications on others of AT (e.g. computer vision). Access needs or marginalised characteristics determined or assumed by AI. Trojan-horse risks of disability use cases.
“We're building large language models replicating human beings. So there's so many pieces to that. How do you capture their humanity? How do you protect it? What happens after they pass away? Who's in charge of that data? And there are just layers and layers and layers of dangers and security around that.” Lavonne Roberts, Scott-Morgan Foundation
“In some countries governmental agencies were accused of using some pieces of this data without consent to confirm the status of a disability. Just tracking your social media, your posts, to identify if you really have a disability or not, and using this information to pay pension and financial support to you or not.” Yonah Welker, EU AI + disability policy expert
Case study (Arielle Silverman, AFB Director of Research): Can images of people be taken in public without people's consent and then be described to blind people? Considerations around what is described: assumptions re gender, age, ethnicity?
Biases in data and reality
Societal biases: AI can amplify the reality of ableism. Data biases: AI may leverage incomplete, poorly correlated or inaccurate datasets. Synthetic data: likely to amplify underlying data flaws yet look more complete.
“I think we've talked about all the potential areas of bias, and so it's the data, it's the model development, it's the evaluation. So just be aware of all of those potential areas of bias, and then also identify the level of risk. What is the risk? Financial? Is it social? Is it life-threatening for somebody with a disability? And so just keeping that in mind: what's the cost of an error here?” Sara Basson, Google
“And unlike human bias, AI bias may be much more difficult to detect and remediate. As a concrete example, the ableist belief that if somebody is making eye contact they're more engaged or more reliable could cause AI to reject job candidates because they're not looking at the camera, disregarding the fact that blind people, and possibly autistic people, can't look at the camera or have difficulty doing so.” Arielle Silverman, American Foundation for the Blind
Case study (Susan Scott-Parker, founder, Disability Ethical AI): Deliveroo facial recognition for a job opportunity limiting a man with a facial difference. “You might get lucky and find a human recruiter with an open mind, but you will never get lucky if Deliveroo insists you send 3 photos in. It is embedded into the system.”
Biases in data and reality (2)
“For instance, when we try and use AI for DNA data clustering for a medical solution, it is predominantly not efficient for smaller groups, and not just disability, but any smaller groups, even gender-wise.” Yonah Welker, AI and disability expert, EU policy and education
“Disability is a non-average characteristic. AI is trained on averages, so when it encounters someone who is not average, if protective measures aren't taken, it's likely to reject or be biased against the person who's not average.” Arielle Silverman, AFB Director of Research
“AI itself is only as good as the data that it's sourcing… It starts with that, garbage in, garbage out, so making sure that the data sets that AI is referencing are good [is important].” David D'Arcangelo, disability consultant, ex Commissioner for the Blind, Massachusetts
AI-enabled assistive tech (AT)
Micro-personalization of content, interfaces and style. Robotics and connected tech (e.g. smart homes) to enable built environments. More adaptive AT, e.g. more languages and atypical language. Health monitoring.
“I myself was in a conference last week, where they used AI to generate the transcripts of all the audio in the room... not just the speakers up front, which traditional captioning by a human will capture... but this was a wide room with people on various tables.” Kate Kalcevich, Fable Tech, Accessible and Equitable AI board member, Canadian Accessibility Standards
“Aaron's avatar, the first one, cost $25,000 to make... We can now give an avatar to someone... basically at almost no cost.” Lavonne Roberts, Scott-Morgan Foundation, using AI to create an avatar with personalized voice technology
Coding with AI: “It [AI] will do so much of what I call first-pass coding and recoding and rewriting… People who don't have 10 flying fingers and who can't compete, well, it now no longer matters how fast your fingers fly, it's how you think.” Gregg Vanderheiden, Professor Emeritus, Trace Centre (Assistive Tech and Accessibility), University of Maryland
Inclusion needs
Increase awareness, intent and capability. Create value: identify and apply AI to major disability-inclusion challenges, targeting inequity. Protect from harm: governance, regulations, monitoring, transparency, guidelines and other protections.
On the importance of governance and controls (Kate Kalcevich and Yonah Welker): “Some countries are doing a good job of at least trying to keep up. Hopefully they will set the standards that others will follow as well. This is important, this is meaningful work. You can't wait.” Kate Kalcevich
On the education implications of AI system use that is not considerate of atypical learning styles: “It has been 100% dreadful.” Jo Stansfield, Board of Directors, ForHumanity
“We can't think about people with disabilities at the start, as the start has already happened.” Lisa Liskovoi, Senior Inclusive Designer, OCAD
Inclusion needs (2)
Disability-informed involvement and co-creation throughout the AI ecosystem. Contextually as well as individually relevant solutions, suited to access needs as well as culture, language and other relevant contexts.
Lavonne Roberts, Scott-Morgan Foundation. On co-design between developers and users: “This is a theme that is going to come out of this research: involving disabled individuals in the design, development and deployment process, not in a tokenistic way, but in a collaborative, co-designed way. So, yeah, not just bolting on at the end, but baking in.” Jennie Day, Postdoc, Robotics and Ethical Design Lab, University of Ottawa
Marie Johnson, AI health and social services support avatar creator and author. On combining standardization and localisation: “Localisation is necessary to take into account people's contexts. This also means by doing it that way you can scale quicker.”
Inclusion needs (3)
Considering consequences. Aligning financial resources to inclusive intent and outcomes. Considering the implications of errors. Addressing data discrimination.
“It was algorithmic application to forcefully raise debts against welfare recipients based on a flawed methodology. These debts were not even real debts. They were forecast debts, debts that hadn't even occurred. … It was horrific because many people knew that they hadn't actually incurred these debts, but had no way of proving that they hadn't. The Government did 2 things: applied this algorithmic assessment to predict debts, and then reversed the onus of proof onto the welfare recipient. Usually the onus is on the agency raising the debt. A number of people suicided over this. People lost their homes. The human suffering was horrific.” Marie Johnson, about Robodebt in Australia
Where is the money? Who is getting it? “I'm in the startup ecosystem. I often do not see the conversation about disability inclusion in AI. If you were to look at any startup events that are happening, and new emerging ideas and investment opportunities associated with AI, there's no attachment to disability inclusion in that. I think that's a bit concerning, given that there's an influx of investment going into AI without having that [disability inclusion] at the forefront in the decision making of the product and the technology. I think it's big. It's concerning.” Eyra Abraham, disability-informed AI innovator and entrepreneur
So what? What can I do? A few practical things we can all do to reduce potential harms and improve the equity, value and positive impact of AI-enabled solutions.
A few things to think about if you are a…
Policy maker:
- Work in the open
- Build on the work of others
- Monitor and evaluate outcomes for equity
- Create safe learning and collaboration environments
Procurer of AI systems:
- Who was involved in designing, making and testing the product?
- What has been done to test for and remove bias risks?
- How inclusive were the underlying training dataset/s?
- What commitments can the seller make about product safety?
Designer or creator of AI systems:
- Who is in the team or informing the solution (and who isn't)?
- What is in the dataset (and what isn't)?
- Who will benefit most (and who may not benefit equally)?
- What are the potential risks and unintended consequences of use or misuse?
- Start at the edges. Learn from and solve for those with differentiated needs.
- Share power. Beyond engagement, co-create solutions.
- Test and assess. Release and monitor. Limit use while knowledge is limited.
And for all of us. If you are human…
- Get curious. Ask and learn.
- Be critical. Fact-check hype and claims. Consider who benefits and who takes risks or costs.
- Be transparent. If you use AI, say so.
- Use your privilege to make space for those with less.
- Speak up when you know it is wrong. Dig deeper if unsure.
- Consider data flows, especially your own data. Consider the impact of value flows on actions.
- Share what you are learning.
“Technology is too important to leave to the technologists.” Jeremy Dunning and Richard Roger