Ethics and sustainability of Artificial Intelligence

Sonja Aits · 27 slides · Oct 04, 2024

About This Presentation

Lecture slides from a workshop on AI ethics and sustainability held at Nano Lund, Lund University, on 2024-10-04. The slides also include group discussion questions.


Slide Content

Ethics and sustainability of AI. Sonja Aits, Lund University, 2024-10-03

Is AI a means or an obstacle in the sustainable transformation?

Can we trust AI models?

“Hidden” biases in the training data affect model performance (Barbu et al., Advances in Neural Information Processing Systems 32, pages 9448–9458, 2019).
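One common way to surface such hidden biases is to split a single aggregate metric into per-subgroup scores, which often reveals performance gaps that the overall number hides. The sketch below is a minimal illustration of this idea, not code from the presentation; the labels, predictions and group assignments are made-up placeholders.

```python
# Minimal sketch: per-subgroup accuracy to surface hidden bias.
# All arrays below are illustrative placeholders, not real data.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # ground-truth labels
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])                  # model predictions
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # e.g. image source or demographic group

print(f"overall accuracy: {(y_true == y_pred).mean():.2f}")

for g in np.unique(groups):
    mask = groups == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy {acc:.2f} (n={mask.sum()})")
```

A model can look acceptable on average while performing much worse on groups that are underrepresented or systematically different in the training data, which is the kind of effect the cited Barbu et al. (2019) study reports for object recognition models evaluated on images collected under less controlled conditions.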

Incorrect medical AI can cause real harm. AI can be prejudiced or simply wrong: “garbage in, garbage out”.

Large-scale data collection raises serious privacy concerns

Who is responsible when things go wrong with AI?

AI tools are vulnerable to malicious attacks (Goodfellow et al., ICLR 2015).
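The cited work introduced the fast gradient sign method (FGSM), which shows that a small, carefully chosen perturbation of an input image can flip a classifier's prediction even though the change is barely visible. The sketch below assumes a trained PyTorch image classifier and illustrates the core idea only; `model`, `images` and `labels` are placeholders, and the epsilon value is an arbitrary example.

```python
# Minimal FGSM sketch (after Goodfellow et al., ICLR 2015), assuming PyTorch.
# `model`, `images` and `labels` are placeholders for a trained classifier,
# an input batch with pixel values in [0, 1], and the true class labels.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.03):
    """Return an adversarially perturbed copy of `images`."""
    x = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Because the perturbation budget `eps` is tiny relative to the pixel range, the adversarial image typically looks unchanged to a human while the model's prediction can change completely.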

Do you support open research?

What could you do with AI?

What could you do with AI if you were the leader of an EU country, the head of the WHO, a primary care doctor in a developing nation, a specialist in the world’s best hospital, or the head of Amnesty International?

What could you do with AI if you were the leader of an EU country, the head of the WHO, a primary care doctor in a developing nation, a specialist in the world’s best hospital, the head of Amnesty International, or the leader of a global tech company?

What could you do with AI if you were the leader of an EU country, the head of the WHO, a primary care doctor in a developing nation, a specialist in the world’s best hospital, the head of Amnesty International, the leader of a global tech company, a terrorist, a dictator, a white supremacist, or a criminal?

Do you think AI applications in your research domain fall under dual use?

Should we have open research in the field of AI?

AI for sustainability / Sustainability of AI

Waste and pollution · Energy · Researcher · Supercomputer · Transport · Production · Raw materials
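The slide sketches the resource chain behind AI research, from raw materials, production and transport through supercomputing and energy use to waste and pollution. As a rough illustration of the energy and emissions part only, the back-of-envelope calculation below estimates the electricity use and CO2 footprint of a single training run; every number in it (GPU count, power draw, runtime, data-centre overhead, grid carbon intensity) is an assumed placeholder, not a figure from the presentation.

```python
# Back-of-envelope estimate of training energy and emissions.
# All values are illustrative assumptions, not measurements.
num_gpus = 8                 # assumed number of GPUs
power_per_gpu_kw = 0.3       # assumed average draw per GPU (kW)
hours = 72.0                 # assumed training time (h)
pue = 1.5                    # assumed data-centre power usage effectiveness
grid_kg_co2_per_kwh = 0.25   # assumed grid carbon intensity (kg CO2e/kWh)

energy_kwh = num_gpus * power_per_gpu_kw * hours * pue
co2_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"estimated electricity use: {energy_kwh:.0f} kWh")
print(f"estimated emissions: {co2_kg:.0f} kg CO2e")
```

Even such a crude estimate shows that the footprint scales directly with hardware, runtime and the carbon intensity of the local grid, which is why energy and computing infrastructure appear in the resource chain on the slide.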

EU Ethics guidelines for trustworthy AI: (1) lawful, respecting all applicable laws and regulations (e.g. the EU AI Act); (2) ethical, respecting ethical principles and values; (3) robust, both from a technical perspective and taking into account its social environment. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

7 requirements for ethical AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; accountability.

Group discussion: ethical, legal, environmental and societal issues related to AI

Discuss the ethical, legal, environmental and societal issues for each case:
- Ownership of the training data, AI tools and AI output
- Sources and risks of bias
- Transparency regarding development and output
- Societal benefits and negative impact (e.g. on the job market)
- Environmental benefits and negative impact
- Accessibility (e.g. for non-English speakers, low-income countries, people with disabilities)
- Possibilities for abuse by malicious actors
- Responsibility for correctness and safety
- Possibilities for oversight by public authorities and by the end-user
- Consequences of forbidding the development and usage of this type of AI in the EU

Discussion case: Large language models and AI chatbots. AI chatbots (e.g. ChatGPT, Gemini) answer questions on a very large variety of topics and generate text, code and images. There are many beneficial applications for both society and science. The underlying models are trained on extremely large collections of websites, scientific literature, books and other inputs. They are then refined by humans who give feedback on the answers, both employees (typically in low-income countries) and end-users. Training data, procedures and model details are mostly not made publicly available. Even if details were available, the development of these tools could not easily be replicated, as it requires extensive resources. Access to the best models is typically provided for a monthly fee, whereas slightly less capable models are available for free, but with users' input data typically being used for further development.

Discussion case: Computer vision models for mammography screening. Most high-income countries have large screening programs for breast cancer detection in which all women of a specific age (40-74 years in Sweden) undergo regular (every two years in Sweden) X-ray exams. X-ray images are then evaluated by specialized radiologists who decide whether the image is normal or potentially shows a tumor, with the latter leading to a follow-up examination. There is large interest in replacing this human evaluation with automated AI systems, because sufficient numbers of highly trained radiologists are not available everywhere and because AI systems could reduce costs and evaluation time and potentially avoid erroneous assessments.
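To make the trade-offs in this case concrete, the sketch below converts assumed sensitivity and specificity values of a screening system into absolute numbers of missed cancers and unnecessary follow-ups per 100,000 screened women. The prevalence and performance figures are illustrative assumptions, not data from the slides or from any real system.

```python
# Illustrative screening arithmetic; all figures are assumed, not real performance data.
screened = 100_000
prevalence = 0.006       # assumed fraction of screened women with cancer in a screening round
sensitivity = 0.90       # assumed fraction of cancers the system flags
specificity = 0.95       # assumed fraction of healthy women correctly cleared

cancers = screened * prevalence
healthy = screened - cancers

missed = cancers * (1 - sensitivity)        # false negatives: missed cancers
false_alarms = healthy * (1 - specificity)  # false positives: unnecessary follow-ups

print(f"missed cancers per {screened:,} screened: {missed:.0f}")
print(f"unnecessary follow-ups per {screened:,} screened: {false_alarms:.0f}")
```

Small changes in either rate translate into hundreds or thousands of affected women at population scale, which is why responsibility, oversight and transparency feature prominently in the discussion points above.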

How can we all contribute to an ethical and sustainable use of AI?