General introduction to the ethics of artificial intelligence.


About This Presentation

Lecture on the ethics of artificial intelligence at ESCP Europe as part of the Master II program in Big Data & Business Analytics. The aim is to provide a panoramic view of AI ethics and the philosophical, ethical, and societal issues surrounding this technology.


Slide Content

An Introductory Guide
to the Ethics of Artificial
Intelligence
Franck Negro – Paris, March 20, 2025.
Lecture as part of a Master II “Big Data & Business Analytics”
program at ESCP Europe.

Agenda
I. What is the ethics of Artificial Intelligence?
II. Testing our intuitions and moral principles: from the
“Streetcar Dilemma” to the Autonomous Vehicle.
III. What are the sources and principles of AI Ethics?
IV. What are the major ethical challenges of artificial
intelligence (AI)?
V. Appendix: some key texts of AI Ethics (Law).

I. What is the ethics of
Artificial Intelligence?
Key words: Ethics. Morality. Moral Relativism. Axiology. Reason to Act. Moral Agent
(Patient). Moral Dilemma. Normative Ethics. Metaethics. Applied Ethics. Ethics of AI.
Symbolic AI. Connectionist AI. Fields of Expertise in AI Ethics.

Do we really need ethics to know
what we should do?
We all have, at least intuitively, an idea of morality,
in the sense that we all know what it means when
we use this word in our daily lives, whether as a
noun: “Our morality forbids us to behave in this
way”, or as an adjective: “Is lying moral?”
Understanding the terms “ethics” or “morality”, as
well as the use of words belonging to their lexical
field (right, wrong, good, bad, just, unjust, honest,
dishonest, courageous, cowardly, liar, faithful,
deceitful, wicked...), requires no skills other than
linguistic ones, associated with the formation of
what is known as a moral conscience, i.e. the
ability to relate facts (what is) to values or rules of
conduct (what ought to be).
However, the words we spontaneously use, and
the way we justify our actions and behavior, are
not neutral. They carry with them, and in spite of
ourselves, a whole tradition of moral reflection that
makes up what we might initially call Western
moral culture. This is the culture we refer to
when we speak, for example, of the ethics of
the ancients, Christian morality, Kantian
deontology or utilitarian ethics.
What is the origin of our beliefs,
opinions, and ideas about morality?
Our moral intuitions borrow from various sources in the Western ethical tradition.

Arg. 1: Moral disagreements
Everyday experience shows that
moral disagreements do exist.
Moral judgments vary historically
and geographically.
The problem of “subjectivism” and
“relativism” in ethical philosophy.
When we argue that ethics is subjective and relative, and that there are no universal principles
common to the whole of humankind, we are unwittingly supporting a metaethical thesis. According to
this theory, morality is a matter of convention. We call this thesis: “Moral Relativism”.
Arg. 2: Tolerance
Moral relativism promotes
tolerance. If there are no objective
moral values, then certain groups
of people will be less inclined to
impose their values on others.
Arg. 3: Non-objectivity
When we say that lying is wrong, it
is not clear what reality this
judgment corresponds to. There
are no moral facts, only personal
convictions.
If I am a moral relativist, it becomes difficult to criticize certain practices such as slavery, racism,
poverty and economic inequalities, discrimination, violations of fundamental rights, or even to aim
for any kind of progress in the field of morality. The very notion of “moral progress” assumes that
some things are better than others. Otherwise, in the name of what values or principles can we
criticize or prohibit facial recognition, biometric categorization, algorithmic biases, mass
surveillance, killer robots, predictive justice, or even social credit?

Do we need to distinguish
between “moral” and “ethics”?
MORALITY
Comes from the Latin mos or
mores and refers to customs,
character, ways of acting, and living.
ETHICS
Comes from the Greek ethos and
refers to customs, character, ways
of acting, and living.
Morality is the practice of ethics: We may
call “morality” the values, behaviors, and norms
actually in force within an individual or a given society
(Paul's morality, Jacques' morality, Pétainist morality,
Christian morality, Muslim morality, etc.).
Ethics is the theoretical study of morality: In contrast, we may
call “ethics” the theoretical reflection and discourse that
addresses the distinction between Good and Evil as
universal moral requirements. In this framework, ethics
as a philosophical discipline would be a general and
universalist reflection on the principles, values, and
norms at work in specific moral systems, such as
Christian morality, Greek morality, Roman morality,
Muslim morality, and so on.

Ethics as part of philosophy.
As conceived and taught in Anglo-Saxon universities.
Discipline 1:
Metaphysics
The term metaphysics is formed from the words meta, meaning both beyond and after, and physika, which can be translated as physics or nature. Metaphysics is the study of phenomena that lie beyond the physical world, in other words, things or objects that cannot be experienced. Metaphysics can in turn be divided into three subfields: ontology, philosophy of mind and philosophy of religion.
Discipline 2:
Epistemology
Epistemology can be defined as the study of the nature of knowledge, its conditions of possibility and the justification of our beliefs about the truth of a proposition. The three main questions of philosophical epistemology are: What is knowledge? And how does it differ from opinion? What can we know? How do we acquire knowledge?
Discipline 3:
Logic
Logic is the study of the principles and criteria that allow us to validate the coherence of our arguments. More specifically, logic attempts to distinguish between good and bad reasoning. It answers questions such as: What is correct reasoning? What distinguishes a good argument from a bad one? How do we detect an argumentative error? What are the criteria for determining the validity of an argument? What are the different types of logic?
Discipline 4:
Axiology
Axiology deals with what are known as value judgments, or evaluative judgments. What does it mean when we say that one thing is better than another? That a painting or landscape is beautiful? That a certain action is good or bad? That such and such a social system is unequal and unjust? Axiology itself is divided into sub-branches according to the type of value(s) under investigation: aesthetics, ethics, political philosophy.

The three main fields of ethics as
a philosophical discipline.
Field 1:
Normative Ethics
Normative ethics are by nature
prescriptive. They seek to establish norms
or standards that govern right and wrong,
or the proper and improper ways of
behaving toward others in given situations.
What good habits should we develop?
What are our duties toward the people we
live with? But above all, on what theoretical
basis, and in the name of what values, can
we justify a given behavior when we are
expected to act for ethical reasons?
Normative ethics answer the following
questions: What should we do? How
should we behave? What norms, values,
or principles should guide our actions?
Field 2:
Metaethics
Metaethics. It is also referred to as second-
order questioning, since it no longer seeks
to determine how we should concretely
behave or how we should live (as
normative ethics do), but rather to analyze
the logic of moral statements in order to
uncover their meaning, what they refer to,
and even their truth value.
If normative ethics urges us to do good,
metaethics asks: What does it mean to do
good? What is justice? What kind of
reality do we refer to when we use the
word "good"? Are moral judgments
objective or subjective? Etc.
Field 3:
Applied Ethics
Applied ethics consists of the concrete
application of ethical or moral theories to
determine which ethical or moral actions
are appropriate in specific situations within
various fields of human activity.
Among the main areas of applied ethics,
we find:
•Business ethics;
•Bioethics;
•Environmental ethics;
•Animal ethics;
•Ethics of international relations;
•Digital ethics;
•Algorithm ethics;
•Ethics of artificial intelligence. Etc.

The notion of “Reason to Act”?
Legal Reasons
There may indeed be legal reasons
to do something in order to comply
with positive law. For example,
obeying traffic regulations to avoid
paying a fine or refraining from
smoking in a public place in France.
However, it is worth recalling that
what is legal is not necessarily
moral (legitimate).
Consider, for instance, slavery,
which was only definitively
abolished in France in 1848, or the
laws on the status of Jews during
the Vichy regime.
Prudential Reasons
Another type of reason that can be
invoked is what is called prudential
reasons. Unlike moral reasons,
which are disinterested and
oriented toward others, prudential
reasons concern what is in my own
self-interest.
For instance, when I decide to stop
eating red meat for health reasons,
rather than because I believe that
all animals have a right to life.
Ethical Reasons
By contrast, a moral reason can be
defined as a reason that goes
beyond my own self-interest to
include the interests of other living
beings, for example, when I justify
my decision to stop eating meat
because I believe that, like human
beings, animals have fundamental
rights.
A moral action has a disinterested
and altruistic nature. Its central
point is respect for others and it is
justified by values and principles
referred to as "moral."
Since ethics has fundamentally to do with our ways of acting, which ultimately refer to intentions or
reasons for acting, we need to ask ourselves about the specificities of certain ways of acting, and ask:
Are there moral reasons for acting? And if so, what are they? In other words: Why do we do what we do?

What is acting morally?
“For various reasons that will gradually become apparent, it is
not very easy to say what it means to act “morally”, or to define
the limits of what can be called “moral” or “ethical”. One
criterion we can use, as a first approximation, is the specificity
of certain reasons for acting. Among the reasons we can
invoke to justify an action, it seems that some can be called
“moral” and others not. If, to justify my refusal to accept a bribe,
I invoke the fact that I am being watched by my colleagues
and that I risk missing out on an important promotion if I am
reported, this justification will probably not be deemed “moral”
(for want of other specifications). If, on the other hand, I invoke
the common good or fairness, honesty or integrity, it will also
probably be considered that I am proposing a moral justification
(even if nobody takes it seriously). At first glance, moral reasons
for action are altruistic in nature, appealing to a certain
conception of respect for the individual.”
Ruwen Ogien, “Éthique et philosophie morale”, in Précis de
philosophie analytique, PUF, p. 213.
Ruwen Ogien (1947-2017)

What is the ethics of AI?
A Short Definition
“The ethics of artificial intelligence is a
subfield of applied ethics. As a subfield of
the more practical side of ethical reflection,
applied AI ethics aims to answer questions
raised by the adoption and deployment of
AI systems in concrete situations of social,
economic and institutional life.
It relies on the foundations of normative
ethics and metaethics, the knowledge of
regulatory systems applied in a given
region, and a deep understanding of AI
as both a technical and socio-technical
system.”
Ethics of technology > Digital ethics >
Ethics of Artificial Intelligence
•How can we ensure that AI systems align
with human values and ethical principles?
•What measures should be taken to prevent
bias and discrimination in AI decisions?
•Who should be held accountable when an
AI system causes harm?
•How can we balance data privacy and
security with the need for AI to access large
datasets for training?
•What governance frameworks should be
implemented to ensure that AI development
remains beneficial and does not pose
existential risks?
•How can we ensure transparency and
explainability in AI systems to build trust and
facilitate ethical oversight? Etc.

Machine Learning and the
Techno-Scientific Paradigm Shift.
Paradigm 1 (from 1950 to 2000):
Symbolic AI
Derived from developments and advances
in computer science, symbolic AI refers to
a method of creating so-called “intelligent”
systems that relies on the use of symbols,
logical rules and explicit knowledge
representations.
In other words, the developer starts from a
given problem, breaks it down into
sub-problems, and on this basis develops,
using a programming language, the most
appropriate sequence of instructions
(algorithm) to obtain the desired result.
The decisions produced by systems
based on symbolic AI are usually
explicable and justifiable by a human
being.
Paradigm 2 (2010 to today):
Connectionist AI
The connectionist approach, favored by
the majority of AI specialists today, differs
from the classical, symbolic approach to
computing. Rather than designing and
implementing a series of predefined rules
for a given task, AI systems are trained on
a defined dataset to find and construct
the most appropriate rules for the task in
question. Inspired by the biological
functioning of the brain, connectionist AI
refers to a method of modeling cognitive
processes using artificial neural networks.
While it offers many advantages, it is
difficult, if not impossible, to explain how
such a system arrives at its decisions
(black box).
Three enablers of the shift:
Big Data: availability of digital data in
very large quantities, the phenomenon of
Big Data or megadata (web, social
networks, Internet of Things, etc.).
Computing power: continuous increase
in computer processing power with
Moore's Law, but also computing
infrastructure, GPUs (parallel
computation), Cloud Computing, etc.
Deep learning (self-learning): new neural
architectures, such as multi-layer neural
networks, convolutional neural networks
(CNN), recurrent neural networks, and
Transformers.
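
To make the contrast between the two paradigms concrete, here is a minimal, hypothetical sketch (not from the lecture; the task, data and thresholds are invented): the same toy task, flagging spam, solved first with explicit hand-written rules in the symbolic style, then with a tiny perceptron that learns its decision rule from labeled examples in the connectionist style.

```python
# Contrast sketch: one toy task ("is this message spam?") solved in the
# symbolic style (explicit rules) and in the connectionist style (a rule
# learned from examples). Data and thresholds are invented for illustration.

# --- Paradigm 1: symbolic AI — the developer writes the rule explicitly ---
def is_spam_symbolic(message: str) -> bool:
    suspicious_words = {"free", "winner", "prize"}
    words = message.lower().split()
    hits = sum(w in suspicious_words for w in words)
    return hits >= 2  # explicit, human-auditable decision rule

# --- Paradigm 2: connectionist AI — a tiny perceptron learns the rule ----
def features(message: str) -> list[float]:
    words = message.lower().split()
    return [float(w in words) for w in ("free", "winner", "prize")]

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b  # the learned "rule" lives in opaque numeric weights

data = ["free prize inside", "meeting at noon",
        "winner gets free prize", "lunch tomorrow"]
labels = [1, 0, 1, 0]
w, b = train_perceptron([features(m) for m in data], labels)

msg = "claim your free prize"
learned = sum(wi * xi for wi, xi in zip(w, features(msg))) + b > 0
print("symbolic:", is_spam_symbolic(msg), "| learned:", learned)
```

Note how the symbolic rule can be read and audited line by line, while the learned rule is a set of numbers; this is, in miniature, the explicability gap the slide describes.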

What does it mean to teach the
ethics of Artificial Intelligence today?
Four fields of expertise in the ethics of AI.
Technical aspects of AI: The practical
field is AI as a technical system: its
components, development processes,
deployment, operating modes, the people
and skills involved in the manufacturing
chain, use cases and contexts, human-
machine relationships, and the impact
these devices can have on people,
groups of people, the environment and
even society as a whole.
Moral dilemmas raised by AI: Hence
a second field of expertise in AI
ethics, which involves identifying
the moral dilemmas raised by the
design and deployment of technical
systems based on artificial
intelligence, and situating these in
the broader context of the history of
value systems, ethical theories and
major societal debates.
Regulatory & legal: It means
understanding regulation
and normative frameworks,
including laws and soft law
rules that aim to govern the
deployment and use cases of
AI (recommendations, AI Act,
GDPR, DMA, DSA, etc.).
Socio-technical context of AI systems: AI systems are part of social systems and must be considered in their technical
dimension (data infrastructures, etc.) and social dimension (individuals, groups, institutions, norms, etc.), in order to
highlight how their deployment influences social interactions (chatbots, etc.), the economy (automation, work, etc.),
human behaviors (users, companies, etc.), institutions (public policies, security, justice, education, training, etc.), and the
environment (energy, raw materials, etc.).
Frédérick Bruneault, Andréane Sabourin Laflamme & André Mondoux, Former à l'éthique de l'IA en enseignement supérieur, February 2022.

II. Testing our intuitions and
moral principles: from the
Streetcar Dilemma to the
Autonomous Vehicle.
Key words: Conceptual Analysis. Thought Experiment. Reflective Equilibrium.
Streetcar Dilemma. Firefighter’s Dilemma. Consequentialism. Deontologism. Ethics
of Virtue. The Problem of Aligning Values. Moral Machine Experiment.

Ethical experience and the central
notion of “Moral Dilemma”.
A moral problem always takes the form of a moral dilemma.
Nature of the Moral Dilemma: A moral dilemma is
characterized by a situation in which all available
options involve morally significant but opposing
aspects. It is not simply about choosing between a
morally good option and a morally bad one, but rather
about choosing between multiple options, each of
which presents ethical advantages and disadvantages.
Conflict of Values and Duties: The core of a moral
dilemma lies in the conflict between moral values or
duties. For example, an individual may face a choice
between telling the truth and avoiding harm to
someone. These conflicts reveal the complexity of real
moral situations, where ethical principles do not always
align, and every choice involves trade-offs.
Characteristics of a moral problem: We can
characterize a moral problem by the following
elements:
•Value conflict: It presents itself as
a dilemma between what is and what ought
to be, involving value conflicts.
•Responsibility: It engages us as conscious and
responsible individuals for our actions.
•Choices: It requires us to make choices among
several possible options.
•Consequences: It compels us, above all,
to propose a solution for which we must assume
the consequences, both for ourselves and for
others, and even for society as a whole.

Evaluation methods and “tools”
in ethical philosophy.
Method 1:
Conceptual
analysis
Notions and concepts are the
building blocks of our thoughts and
the raw material of philosophical
reflection. They serve as the
fundamental components of all
the propositions we formulate.
When we say that something is
unjust, we must define what we
mean by the terms just and unjust.
Conceptualize (from notion to
concept). Problematize. Argue.
Method 2:
Thought
experiment
Conducting a thought
experiment means using one's
imagination to construct fictional
scenarios that allow us to test our
moral intuitions in imaginary yet
entirely plausible situations.
A well-known thought experiment
in ethics is the trolley problem,
which we will explore later.
Method 3:
Reflective
equilibrium
The idea is to adjust our general
moral principles and specific
intuitions in relation to concrete
cases and situations, and through
gradual adjustments of our beliefs,
ultimately reach a kind of
equilibrium.
Adjusting a general principle by
testing it against a concrete
situation.

The fundamental question of
philosophical ethics.
•What, after all, is acting morally, if not invoking
so-called moral values, such as respect,
honesty, justice, beneficence, courage, etc. (the
so-called moral reasons for acting), as a guide
to our decisions and actions?
•Yet, as Aristotle constantly reminds us in his
Nicomachean Ethics, all ethical reasoning, by its
very nature practical, consists in relating
abstract norms to concrete, singular situations.
•To put it another way, it is impossible to apply
general, abstract principles in a systematic
way, without taking into account the particular
situations to which the standards in question
need to be adjusted.
How can we apply
general moral standards
to specific situations
without betraying either
the spirit of the principles
or the complexity of
reality?
What happens to the question when it
comes to robots and artificial intelligence?

The problem of “aligning values”
in artificial intelligence.
Definition & Challenges
This is a central issue in AI ethics, in the context
of the development and deployment of systems
capable of making autonomous decisions of a
complex nature. It poses the problem of controlling
these systems in a context where it is becoming
difficult to foresee every possible action of
a software or robotic agent (autonomous car,
recommendation algorithms, automatic sorting
systems, credit scoring, chatbot, etc.).
•How can we ensure that an AI system
behaves in accordance with human values
and acts in a way that is beneficial and fair to all?
•Or: how can we ensure that an advanced AI
system does not set out to pursue goals that
might conflict with the intentions of its creators?
(A toy sketch after the examples below illustrates this.)
•Social networking algorithms that aim to
maximize user engagement and encourage the
propagation of false or even hateful content
(misinformation, polarization, user autonomy).
•AI systems for recruitment that are biased and
reproduce unintentional discrimination based on
gender, skin color, age, training data history, etc.
•Credit rating systems for granting loans based on
past banking transactions, purchasing habits,
customer profiles, place of residence, etc.
•ChatGPT-type conversational robots, which can
be culturally and politically biased and convey the
value systems and ideologies of their developers,
thus unwittingly manipulating public opinion.
•Autonomous cars that have to deal with
unavoidable emergency collisions and choose, for
example, whether to kill the driver or the pedestrian.
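
As a toy illustration of the misalignment described above, here is a hypothetical sketch with invented item names and scores: a recommender told to maximize engagement alone will pick the item with the lowest societal value, simply because that value never appears in its objective.

```python
# Toy value-misalignment sketch (invented numbers): the proxy objective
# ("engagement") and the intended objective ("engagement plus societal
# value") recommend different items.

items = [
    # (title,                  predicted_engagement, societal_value)
    ("calm factual report",    0.20,                 +0.9),
    ("nuanced opinion piece",  0.35,                 +0.5),
    ("outrage-bait rumor",     0.80,                 -0.8),
]

def recommend(items, objective):
    return max(items, key=objective)

proxy = lambda item: item[1]               # what the system optimizes
intended = lambda item: item[1] + item[2]  # what its designers meant

print("proxy objective picks:   ", recommend(items, proxy)[0])
print("intended objective picks:", recommend(items, intended)[0])
```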

Testing our moral intuitions: the
famous Streetcar Dilemma. (1)
“An out-of-control streetcar is rolling down a track,
while in front of it, five people are tied up on that
same track. If the streetcar continues on its
trajectory, the five people will inevitably be killed.
However, you have the option of diverting the
streetcar to a side track where only one person is
present. If you pull a lever, the person in question
will be killed, but you'll have saved the other five
people. What should you do?”
Most people decide to pull the lever, because
they feel it's better to kill one person than five.
•The value of our actions is measured by the “best
possible result”, in this case, the number of lives saved.
•We also assume that “killing” and “letting die” are pretty
much the same thing. While this nuance makes sense,
it is not sufficient justification for not pulling the lever.

Testing our moral intuitions: the
famous Streetcar Dilemma. (2)
“Now imagine that our streetcar is still about to
crush the five people tied to the track.
But instead of the lever, which is no longer there,
this time there's a footbridge over the track, where
a heavyset onlooker is taking a leisurely stroll. If,
hypothetically, you push the onlooker from the
footbridge, he'll fall onto the track, saving the five
people but killing the onlooker. What should you
do?”
Even if it's still possible to justify killing one
person to save five, most people still think
it’s not morally justifiable to kill the onlooker.
•We believe that certain principles (such as “do not kill”)
are inviolable, regardless of the consequences.
•We also believe that “wishing someone dead” is not
exactly the same as “planning someone's death”.

Testing our moral intuitions: the
Firefighter’s Dilemma. (3)
“Imagine a firefighter who receives an urgent call
for two burning apartments in the same building.
In the first apartment is a child. In the second
apartment is a family with two children.
The only difference is that the child in the first
apartment is the firefighter’s nephew. The firefighter
is the only person who can intervene. Who should
the firefighter save?”
Even if people's reactions are a little more mixed
than in the first two cases, for most of them it still
seems morally acceptable for the firefighter to
go and save his nephew, rather than the family.
•It seems perfectly normal to give greater weight to
certain people, depending on the type of relationship
we have with them (family, friendship).
•A certain amount of bias seems justifiable.
•Many decisions seem to be morally justifiable, whether
it's saving his nephew or saving the entire family with its
two children. Both actions are morally acceptable.

Three great ethical theories of the
Western tradition: What Should I Do?
First great theory:
Consequentialism
For consequentialists, our moral
obligations and actions must primarily
be justified based on the
consequences they are likely to
produce.
What determines whether something
is judged good or bad, right or wrong,
is whether it brings about the best
possible outcomes for a group of
individuals or a given community.
Consequentialism is focused on the
outcomes of actions.
Second great theory:
Deontologism
For proponents of "deontologism," the
morality of an action does not lie in the
consequences intended by the agent
but in the agent's intentions. In other
words, for a deontologist, the ends
never justify the means, regardless of
the consequences caused by the
action in question.
As a result, adherence to and fidelity to
a set of rules and duties to which the
agent must unconditionally submit are
at the heart of deontological theories.
Deontologism is focused on duties
and moral principles.
Third great theory:
Ethics of virtue
For virtue ethics, something is
considered good or bad if it is carried
out in accordance with what a
virtuous person would do in a given
context.
In other words, virtue ethics place
greater emphasis on education and
the development of an individual's
character to cultivate practical
"natural" dispositions to act virtuously,
rather than focusing on fulfilling
specific duties or calculating the
consequences of an action.
Virtue ethics is focused on
character and moral dispositions.
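
For illustration only, the first two theories can be caricatured as decision rules applied to the streetcar dilemma. This is an invented formalization, not something from the lecture, and it deliberately shows that virtue ethics resists being reduced to a score.

```python
# Schematic sketch: how consequentialism and deontologism would "score" the
# two trolley options: divert (kill 1, save 5) or do nothing (let 5 die).

options = {
    "divert":     {"deaths": 1, "violates_do_not_kill": True},
    "do_nothing": {"deaths": 5, "violates_do_not_kill": False},
}

def consequentialist(opt):
    # Only outcomes count: fewer deaths is better.
    return -options[opt]["deaths"]

def deontologist(opt):
    # Duties are inviolable: any option that requires killing is ruled out.
    return float("-inf") if options[opt]["violates_do_not_kill"] else 0

def virtue_ethicist(opt):
    # No formula: asks what a person of good character would do in context.
    raise NotImplementedError("virtue ethics resists encoding as a score")

for name, theory in [("consequentialism", consequentialist),
                     ("deontologism", deontologist)]:
    print(f"{name:>16} recommends: {max(options, key=theory)}")
# -> consequentialism recommends: divert
# -> deontologism recommends: do_nothing
```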

What about AI Ethics?
AI ethics is primarily consequentialist and
deontological. For the simple reason that:
•These approaches align more closely
with our modernity (Precautionary
principle, fundamental rights, risk-based
approach, etc.).
•They seem easier to implement in the
form of legal and binding regulation
(ethics as the foundation of law).
•Anticipate risks, guide the development
of AI so that it can benefit everyone
(consequentialism).
•Protect people's fundamental rights
(deontologism, human rights ethics).
Ethics Guidelines for Trustworthy AI of the
European Union, AI Act, The 23 Asilomar
Principles, Declaration of Montreal, OECD
Council Recommendation, UNESCO
Recommendation, etc.
Main Characteristic of AI Ethics

How can we apply
general moral standards
to specific situations
without compromising
either the spirit of the
principles or the
complexity of reality?
What happens to the
question when it comes to
robots and artificial
intelligence?
The fundamental question
of AI ethics.
The example of the
“Autonomous Car”

Self-driving cars and “The Moral
Machine Experiment”. (1)
•Jean-François Bonnefon made a name for himself in particular
with the publication, in 2018, of a research paper co-authored by a
team of researchers: The Moral Machine Experiment.
•Moral Machine is a platform compiling different human
perspectives on moral decisions made by intelligent machines,
such as autonomous cars. Different moral dilemmas are presented
in which a driverless car must choose the lesser of two evils, such
as killing two passengers or five pedestrians.
•“If an autonomous car couldn't avoid an accident and had to
choose between two groups of victims, how should it choose?”
•The study is based on the collection and analysis of over 40 million
decisions, in ten languages, from millions of people in 233 countries
and territories, to capture their moral preferences.
•It explores the completely new moral dilemmas raised by
the introduction of autonomous vehicles.

Who Should We Choose to Kill? (3)
•Example 1: Should the autonomous car continue
forward, leading to the death of a male doctor (the
driver), or swerve to avoid the obstacle (a concrete
barrier) and kill one male doctor crossing the other
lane at a green light (respecting the law)?
•Example 2: In the case of an autonomous car with
sudden brake failure, would you prefer it to kill 1 elderly
person crossing at a red light (flouting the law) or 5
dogs?
•Example 3: Who should it sacrifice between 1 baby,
1 cat, 1 dog, 1 girl, 1 homeless person crossing at a red
light (flouting the law), and 1 elderly woman, 1 woman,
1 overweight man, 1 overweight woman and 1 dog
crossing at a green light (respecting the law)?
•Example 4: In the event of sudden brake failure, should
the autonomous car continue forward and kill a
crossing pedestrian, or move away from the obstacle
and kill another pedestrian in the other lane?
•Example 5: Should it kill 1 male doctor, 1 baby and 1 female
doctor crossing at a green light (respecting the law),
or move out of the way and kill 5 homeless people
crossing at a red light (flouting the law)? Etc.
How many fatal accidents are we going to allow these cars to have? How and according to what
criteria are we going to distribute the victims, who may in turn be passengers in the vehicle,
pedestrians, passengers in other vehicles, children, married couples, the elderly, athletes, disabled
people, celebrities, the homeless, etc.?

Moral Machine: 40 million decisions
from users all over the world. (2)
Three major types of data
Accident
Scenario
Data on citizens' ethical
preferences, taking into account
different accident scenarios
presented in pairs, such as
“killing an elderly person or five
dogs”.
Victim
Features
Data describing the
characteristics of accident
victims, such as age, gender,
social status, etc.
Environment
Variables
Data relating to variables
describing the environment,
such as being in the car, on the
road, in front of the car or on
another trajectory, the color of
the light (red or green) for
pedestrians, etc.
Objective: Capture the moral preferences of individuals and then explore
how citizens want driverless cars to be programmed.
9 Moral Dimensions analyzed: 1) number of people; 2) gender; 3) age; 4) health status; 5) social status; 6) species; 7) road
situation; 8) legality; 9) status quo (do nothing, continue or divert the car).
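
As a rough sketch of how the three data types above might fit together in a single record, here is a hypothetical data structure. The field names are invented for illustration and do not reflect the actual schema published by the Moral Machine researchers.

```python
# Hypothetical sketch of one Moral Machine-style record, combining accident
# scenario, victim features, and environment variables. Invented field names.
from dataclasses import dataclass

@dataclass
class Character:
    species: str       # "human" or "animal"
    age_group: str     # "baby", "child", "adult", "elderly"
    gender: str
    social_status: str

@dataclass
class AccidentScenario:
    # Environment variables
    group_a_location: str        # e.g. "in_car" or "on_road"
    group_b_location: str
    light_for_pedestrians: str   # "green" or "red" (the legality dimension)
    # Victim features for the two groups the car must choose between
    group_a: list
    group_b: list
    # The respondent's recorded ethical preference
    chosen_group: str            # "a" or "b"

scenario = AccidentScenario(
    group_a_location="on_road", group_b_location="on_road",
    light_for_pedestrians="red",
    group_a=[Character("human", "elderly", "female", "unknown")],
    group_b=[Character("animal", "adult", "unknown", "unknown")] * 5,
    chosen_group="a",  # respondent spares the elderly person over 5 dogs
)
print(scenario.chosen_group)
```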

A Top 3 of Ethical Preferences. (4)
Top three
dimensions, well
ahead
Three dimensions come out
on top:
•Species (do you prefer to
save humans or animals);
•Number (do you prefer to
save the largest group);
•Age (do you prefer to save
babies, children, adults or
the elderly).
Two other emerging
preferences
Two other preferences are
easily identifiable:
•Saving pedestrians
crossing the road when
they're allowed to
(legality).
•Sparing people with high
social status (executives).
Four weaker
preferences
Four preferences complete
the analysis:
•Saving athletes rather
than overweight people.
•Sparing women.
•Saving pedestrians rather
than passengers.
•Preferring the car to
go straight rather than
change direction.
Questions: Are people's ethical preferences programmable? Are they aligned with the
recommendations of ethics committees? How do they conflict with our moral intuitions?

Cultural variations: Three major
blocs of countries emerge. (5)
“Western” Bloc
A “Western” world made up of
almost all the countries of Europe,
with a sub-bloc of
•All the Protestant countries
(Germany, Denmark, Finland,
Iceland, Norway, etc.) and…
•Another sub-bloc containing
the United Kingdom and its
former colonies.
•Geographical, historical and
religious proximity between
countries.
“Eastern” Bloc
The second bloc is essentially
made up of:
•Countries in Asia and the
Middle East, from Egypt to
China, Japan and Indonesia.
•It gives less importance to age
than the rest of the world.
“Southern” Bloc
The third large bloc is divided into
two sub-blocs, one of which
comprises all the countries of
South America, while the
second includes mainland
France, Martinique, Reunion,
New Caledonia and French
Polynesia, as well as Morocco
and Algeria, all countries
historically linked to France.
While the nine global preferences are found in all 3 blocs, the level of preference intensity changes.

What can “Moral Machine”
teach us? (6)
•It's going to be extremely difficult to agree on a global moral code for
autonomous cars.
•The main problem of AI ethics, which consists in aligning machine behavior
with fundamental moral values, is widely questioned by the social sciences.
•Before we can align machine behavior with moral values, we first need to
find the tools to quantify these values and their cultural variations (Bonnefon).
•Who would buy an autonomous car programmed to kill its passenger in the
event of an unavoidable accident (Social Dilemma)?
•Are people's ethical preferences programmable?
•Are they aligned with the ethical principles of major charters,
recommendations or other declarations (UNESCO, OECD, EU, Asilomar…)?

Artificial Intelligence Index Report
2024 (published annually).
•A lack of common, standardized benchmarks: Robust, standardized
assessments of LLM responsibility are sorely lacking. Research reveals
a significant lack of standardization in the evaluation of responsible
AI practices.
•A lack of transparency on the part of developers: AI developers
score low on transparency, with consequences for research.
•Extreme AI risks are difficult to analyze: The AI research and
practitioner community has seen numerous debates around risks
(algorithmic biases, unfair decisions, future existential threats).
•The question of copyright becomes central: many researchers have
shown that the output generated by popular LLMs can contain
copyrighted material.
•ChatGPT has a political bias: Researchers have found a significant
bias in ChatGPT in favor of the Democrats in the USA and the Labour
Party in the UK. This raises concerns about the tool's ability to
influence users' political opinions, particularly during election periods.

III. What are the sources and
ethical principles of artificial
intelligence?
Key words: Normative System. Soft Law. Hard Law. Moral Principle. Definition of AI.
Beneficence. Non-maleficence. Autonomy. Fairness. Explicability.

Normative System of AI (law) ethics:
various sources of different natures.
Soft Law
The term "Soft Law" refers to a set of
texts, principles, declarations,
recommendations, guidelines, white
papers, and codes of conduct that,
unlike "Hard Law," are not legally
binding.
It has played and continues to play
a pioneering role in the field of AI by
proposing ethical principles that
significantly influence policymakers
and legally binding texts.
These principles have become
common reference points for all AI
stakeholders.
Hard Law
The term "Hard Law" refers to a
system of legal rules that are both
binding and mandatory. This
may include treaties, conventions,
or international agreements that
can be directly applicable and
subject to sanctions. In the case of
the EU, this would refer to
regulations.
•In the field of AI or digital
regulation more broadly, notable
examples in the EU include the
GDPR (General Data Protection
Regulation), the AI Act, the DSA
(Digital Services Act), and the
DMA (Digital Markets Act).
Main Sources
Among the main sources of rules,
norms, principles, opinions,
recommendations, declarations,
guidelines, and laws aimed at
regulating the development of
ethical, responsible, and trustworthy
AI, we find sources at both the
national level (States) and the
international level:
•UNESCO, OECD, Council of Europe,
European Union (EU), private
institutions, universities and
research centers, non-profit
organizations like the Future of
Life Institute or the Partnership on AI.

The need for a “Unified Framework of Ethical
Principles”for AI according to Floridi.
On the basis of a comparative analysis of what he sees as the
most important texts published since 2017, Floridi argues that:
•The sheer volume of proposed principles, which has
proliferated ever since, threatens to sow confusion (more than
160 by 2020, according to Algorithm Watch's AI Ethics
Guidelines Global Inventory).
•These texts maintain between them “a high degree of overlap”,
which Floridi proposes to synthesize. The author of The Ethics of
AI thus points to the risk of a “marketplace of principles” in which
stakeholders would be tempted to choose the most attractive.
•Floridi concludes that it is possible to construct a general
framework of five fundamental principles for ethical AI, the
first four of which, particularly well suited “to the new ethical
challenges posed by AI”, are borrowed from bioethics.
Luciano Floridi, The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities, Oxford University Press, 2023.

Luciano Floridi's five key ethical
principles for AI. (1)
BENEFICENCE
That is, “to promote well-being, preserve dignity and
ensure the sustainability of the planet”.
Although sometimes formulated in different ways,
this is the most easily observable of the four
traditional principles of bioethics. It underlines the
central importance of AI in promoting the well-
being of individuals and the planet.
The first four principles are derived from bioethics, as set out by American philosophers
Thomas Beauchamp and James Childress in 1979, in their now-classic work The
Principles of Biomedical Ethics.
Keywords: Benefit for humanity.
Responsible innovation. AI for the
common good. Improving the well-being
and quality of life of individuals (caring,
assisting, educating, etc.).

Luciano Floridi's five key ethical
principles for AI.(2)
NON-MALEFICENCE
Including “privacy, security and ‘capability caution’”.
Not only encourage the creation of AI that is
beneficial to humanity (beneficence), but also call
for caution with regard to the possible negative
consequences of excessive or abusive use of
artificial intelligence technologies, such as the
prevention of breaches of privacy, the AI arms race,
or system security.
Keywords: Do no harm to individuals or society.
Secure. Reliable. Do not cause dangerous
unintended consequences (medical errors,
discrimination, biased decisions, etc.).

Luciano Floridi's five key ethical
principles for AI. (3)
AUTONOMY
Understood as “the power to ‘decide to decide’”.
Floridi underlines the temptation to cede part of our
decision-making power to technological artifacts.
The principle of autonomy means that users of AI
systems must be able to make free, informed
decisions without being manipulated or coerced by
these very systems. In all circumstances, human
beings must retain the power to decide whether or
not to delegate their decision-making power to a
machine, for reasons of efficiency for example, while
ensuring that this power of delegation is revocable
at all times.
Keywords: Freedom of decision.
Informed and free choice. Informed
consent. AI as a tool to assist
humans. Trusted AI. Explainable.

Luciano Floridi's five key ethical
principles for AI. (4)
FAIRNESS
In other words, “promote prosperity, preserve
solidarity and avoid injustice”. The ability to make
or delegate decisions is not evenly distributed
throughout society. Hence the importance of the
principle of justice, which aims to correct this disparity
in autonomy.
Floridi points, however, to a lack of clarity in the
various ways of specifying the concept of justice,
which refers in turn to the fight against unjust
discrimination, the need for shared and equitable
prosperity of AI, equal access to the benefits it
would bring, the correction of biases contained in
data to train models, the need to preserve the
solidarity of social insurance systems, notably
access to healthcare, etc.
Keywords: Fairness. Inclusion.
Accessibility. Anti-discrimination.
Contributing to social justice. Fair
distribution of technological
benefits.

Luciano Floridi's five key ethical
principles for AI. (5)
EXPLICABILITY
“Enabling other principles through intelligibility and
responsibility”. This is the fundamental principle that
makes all the others possible.
According to Floridi, it is “the crucial missing piece of the AI
ethics puzzle” that complements the other four
principles. The concept of “explicability” integrates two
meanings and refers to two complementary notions:
•The epistemological concept of intelligibility, which
aims to explain the decision-making process of an AI
system. It answers the question: “How does it work?”
and, above all, accounts for one of the seemingly new
aspects of AI as a form of action: its opacity or
unintelligibility.
•The ethical concept of responsibility, which answers
the question: “Who is responsible for the way it works?”
Keywords: Transparency. Justified
decisions. Understandable by everyone.
Auditing. Non-opaque use. Controllable.
Accountable.

Luciano Floridi's five key ethical
principles for AI. (6)
Beneficence and Non-maleficence: Respect for the
principles of beneficence and non-maleficence implies that
we are able to understand (explain) the good or harm that AI
is likely to actually do to society. The principle of explicability
therefore underpins the
other four principles.
Autonomy: Respect for the principle of autonomy, which AI is
supposed to foster and not limit, implies that we are able to
clarify (explain) how it would act in our place, to decide
whether or not to delegate our power to decide, or else,
improve its performance.
Fairness: Finally, the principle of justice implies that we should
be able to know who we should hold ethically or legally
responsible in the event of a serious negative result, and
therefore, to explain the reasons for a given result and how we
can avoid it in the future.
EXPLICABILITY

Some key remarks...
•Although the ethical principles proposed by various organizations are similar
on a number of points, no universal consensus exists.
•They are culturally oriented (and therefore biased), as demonstrated by the
capture of actual moral preferences conducted by Moral Machine.
•Nevertheless, these ethical principles have significantly influenced the
European legislator in drafting the AI Act.
•The challenge for organizations today is to move from principles (ethics) and
regulations (laws) to their implementation and governance through
compliance programs (ethical charters, codes of conduct, control
procedures, training of employees, etc.).
•Not to mention the divergent perspectives on AI regulation between Europe,
China, and the United States.

IV. What are the major
ethical challenges (risks)
of artificial intelligence?
Key words: Algorithmic Bias. Disinformation. Deepfakes. Post-truth era. Intellectual
Property. Future of Work. Mass Surveillance. Privacy. Transparency. Explicability.
Black Box Problem. Cultural Bias. Ideological Bias. AI Governance. Regulation. Open
Source. AI’s Carbon Footprint. Content Moderation. Freedom of Speech. Digital
Divide. Robot Liability. Autonomous Weapon. Fair Distribution.

The “Inflationist” versus the
“Deflationist” debate.
“Inflationist” Outlook
We owe to philosopher Nick Bostrom the now classic
explanations of the “existential risk” posed by AI, in his
book Superintelligence, first published in 2014.
According to Bostrom, we must take very seriously
the possibility of the emergence of an “artificial
superintelligence” vastly superior to human
intelligence, which would take control of AI systems.
Famous inflationists include Stephen Hawking,
Elon Musk and Bill Gates, as well as Yoshua Bengio
and Geoffrey Hinton.
“Deflationist” Outlook
Deflationists see AI as no more and no less
than a set of sophisticated algorithms based on
powerful mathematical tools. Ethical and legal
reflection on AI should be refocused on the real risks
posed by the operation of AI systems, which, in the
authors' own words, “jeopardize the values at the
foundation of the democratic rule of law”.
One example is researcher Yann LeCun, who doesn't
believe in an “apocalyptic scenario” based, in his
view, on “erroneous assumptions”.
It is common to contrast two visions of AI: the inflationist and the deflationist. These are two
conceptions of the expectations and impacts of artificial intelligence on societies and all human
activities. The basis of this opposition, an important marker of the debates surrounding AI, lies in
whether or not one believes in the emergence of an AI that would surpass human capacities in every
respect, reaching a so-called “general” intelligence or “superintelligence”.

Three types and visions of
Artificial Intelligence.
Type 1:
Weak AI or
Narrow AI
This term, more categorical than
technical, refers to the first stage of AI
evolution, focused on performing
specific tasks that simulate human
intelligence in carrying out a precise
function (image recognition, natural
language processing, etc.).
According to most tech gurus and
Silicon Valley leaders, weak AI is
merely a step toward strong AI or
Artificial General Intelligence, and
ultimately, the final stage of evolution:
superintelligence or singularity.
Type 2:
General AI (AGI)
or super-AI
In contrast to weak AI, general Artificial
Intelligence refers to a hypothetical
form of AI that would be able to mimic
the full range of human cognitive
abilities, and thus perform any
intellectual task in a wide variety of
domains, except that it would
outperform a human in each of these
domains.
Some experts estimate that it could
see the light of day before 2030.
Type 3:
Superintelligence
or Singularity
In addition to possessing all the
attributes of a “generalist” AI, a strong
AI would be endowed with three
characteristics specific to biological
bodies and human beings: self-
awareness, the ability to feel emotions,
and the ability to make autonomous
decisions.
The Techno-philosophical Narrative of AI Evolution

Algorithmic Bias, Discrimination,
and Fairness in AI.
Algorithmic biases are automated
decisions which consciously or
unconsciously reproduce or even
amplify inequalities and prejudices
existing in society. They can lead to
discrimination in various sectors:
recruitment, job search, access to
credit and loans, predictive justice, etc.
Topics:Discrimination and systemic inequalities.
Recruitment and access to the job market.
Predictive justice. Facial recognition and mass
surveillance. Discrimination and access to services.
Automating decisions without human control. Etc.
•How can we ensure that algorithms don't reproduce
existing inequalities (gender, race, social class, etc.)?
•What control and evaluation mechanisms can be
put in place to guarantee algorithmic fairness?
•How can we ensure that the decisions made are
understandable to human beings?
•Who should be held responsible for decisions made
by AI in the event of discrimination or error?
•What legal safeguards can be put in place to
protect vulnerable people (minors, people in
precarious situations, the elderly, etc.)?
•To what extent can an AI be allowed to make
decisions without human intervention?...
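
One concrete way the control and evaluation mechanisms asked about above are operationalized is through fairness metrics. Below is a minimal sketch of one common check, the demographic parity difference: the gap in positive-decision rates between two groups, computed on invented hiring data.

```python
# Minimal fairness-audit sketch: demographic parity difference on invented
# hiring decisions. Each record: (group, decision), 1 = "invite to interview".

decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def positive_rate(decisions, group):
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = positive_rate(decisions, "group_a")  # 0.75
rate_b = positive_rate(decisions, "group_b")  # 0.25
gap = abs(rate_a - rate_b)

print(f"selection rate A={rate_a:.2f}, B={rate_b:.2f}, parity gap={gap:.2f}")
# A gap this large would flag the model for review; note that deciding which
# gap counts as "unfair" is itself a normative, not a technical, choice.
```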

Transparency, Explainability,
and the AI Black Box Problem.
With the paradigm shift brought about
by Machine (Deep) Learning, the results
produced by these systems remain
unpredictable, while we do not fully
understand how they make decisions.
This raises critical questions of security,
explainability, transparency, and
accountability, which are at the core of
AI ethics (a toy sketch follows the
questions below).
Topics: Transparency. Interpretability. Traceability.
Auditability. Accountability. Justification. Compliance.
Algorithmic Bias. Robustness. Fairness. Reliability.
Ethical Alignment. Black Box. Etc.
•How can we reconcile the need for transparency
and explicability with the intrinsic opacity of Machine
Learning or Deep Learning models?
•Doesn't the opacity of AI models fundamentally call
into question the ethical nature of the decisions
made by these systems?
•Who can be held responsible for the negative
impacts of automated AI decisions?
•What governance and control mechanisms should
be put in place to guarantee the axiological
neutrality of AI systems?
•Is it possible to align AI system behavior with human
values without respecting the principle of
explicability?...
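
As a toy illustration of post-hoc explainability, the sketch below implements permutation importance by hand: shuffle one input feature at a time and measure how much the black box's accuracy drops. The black box and data are invented; real audits apply the same idea to trained models at scale.

```python
# Permutation-importance sketch on an invented black box: the auditor only
# calls the box, never reads its internals, yet learns which input matters.
import random

random.seed(0)

def black_box(income, age, shoe_size):
    return 1 if income > 50 else 0   # hidden rule: uses only income

rows = [(random.uniform(0, 100), random.uniform(18, 90),
         random.uniform(35, 47)) for _ in range(200)]
labels = [black_box(*r) for r in rows]

def accuracy(rows, labels):
    return sum(black_box(*r) == y for r, y in zip(rows, labels)) / len(rows)

baseline = accuracy(rows, labels)  # 1.0 by construction

for i, name in enumerate(["income", "age", "shoe_size"]):
    col = [r[i] for r in rows]
    random.shuffle(col)
    permuted = [r[:i] + (v,) + r[i + 1:] for r, v in zip(rows, col)]
    drop = baseline - accuracy(permuted, labels)
    print(f"{name:>9}: accuracy drop when shuffled = {drop:.2f}")
# Only shuffling income hurts accuracy: the box's decisions depend on income
# alone, an insight obtained without ever opening the box.
```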

Cultural, Ideological and
Linguistic Biases in AI.
Of the 149 AI models identified worldwide
by the end of 2023, 109 were American, 20
Chinese, 8 British, and 1 French. These models
are biased from three perspectives:
linguistic, ideological, and cultural. This
highlights the importance of training
models on data that reflect linguistic
diversity and a variety of cultures to ensure
inclusivity.
Topics: Linguistic Bias. Cultural Bias. Ideological Bias.
Anglocentrism. Data representativity. Digital
Sovereignty. Corpus diversity. Localized AI.
Cognitive pluralism. Algorithmic fairness. Etc.
•What makes AI a cultural and ideological challenge?
•Do they contribute to reinforcing the cultural and
ideological hegemony of the great economic and
technological powers?
•How can we promote and guarantee the cultural
and linguistic diversity of AI models?
•Is the existence of an ideologically, culturally and
linguistically neutral AI a utopian dream?
•How can we build AI that is not culturally and
ideologically biased?
•Can the construction of an AI escape the cultural,
ideological and linguistic biases of its creators?...

Disinformation, Fake News,
Deepfakes, Post-Truth Era.
Topics: Manipulation of public opinion (elections).
Threat to democracy and security. Cyber attacks.
Cybercrime. Pollution of the web and the information
sphere. Erosion of public trust in the media and the
web. Amplification of social divisions. Creation of fake
pornographic content (parodies). Identity theft. Etc.
•Does generative AI distort the democratic process
by facilitating voter manipulation?
•Should states regulate the use of deepfakes and
fake news during election campaigns?
•How can we guarantee access to reliable
information in a web saturated with automated
content?
•Are fact-checking and regulation enough to counter
the impact of AI-generated misinformation?
•Does AI amplify social and political fractures by
reinforcing echo chambers and filter bubbles?
•How can we guarantee the right to image and
privacy in the face of advances in generative AI?...
With the deregulation of the information
market, the web has become a place
where fake news is propagated and
circulated on a massive scale. Generative
AI accelerates this dynamic. It has never
been so easy to create and generate
realistic images or videos (deepfakes).

Content Moderation vs. Freedom
of Speech:Online Governance.
On January 7, 2025, Mark Zuckerberg
announced the end of his fact-checking
program in the U.S., in the name of
freedom of speech. Two conceptions
are in conflict: the French one, framed
by laws that make racist and sexist
insults punishable offenses; and the US
one, enshrined in the First Amendment,
which states that “Congress shall make
no law (...) abridging the freedom of speech.”
•Is freedom of expression really guaranteed if platforms
modulate the visibility of content through increasingly
opaque algorithms?
•Isn't it an illusion to suggest that we want all opinions to
circulate, if only some of them are amplified according
to purely commercial logic?
•At critical moments in political life, should platforms be
required to guarantee equal access to content
visibility?
•By giving the impression of total freedom of expression,
aren't the major technology companies concealing a
new form of control over ideas and opinions?
•Doesn't the absence of algorithmic regulation mean
that the tech giants have a monopoly on the
regulation, visibility and legitimacy of discourse?...
Topics:Transparency and accountability of algorithms.
Manipulation of public opinion. Disinformation. Freedom
versus protection. Regulation of platforms. Power of
GAFAM. Sovereignty. Education in critical thinking. Etc.

“Technology Oligopoly“and
the Future of Democracy.
In his farewell address, Biden issued a
warning to the American people about
“the potential rise of a techno-industrial
complex that could bring real danger to
our country.” According to him, “an
oligarchy is taking shape in America of
extreme wealth, power and influence
that threatens our entire democracy, our
basic rights, our freedoms.”
Topics: Techno-industrial complex. Influence power,
manipulation and platforms. Algorithms and Democracy.
Conflict of Interest. Platforms, Misinformation and
Disinformation. Public Debate and Democracy. Digital
Sovereignty. Etc.
•Is the technological oligarchy a threat to democracy?
•Have algorithms become tools of political
manipulation in the hands of tech companies?
•Can we speak of media pluralism when a few
corporations control content distribution?
•Can the control of digital platforms by a few tech
giants distort the electoral process?
•Are the tech billionaires transforming their economic
power into political power?
•Does the lack of transparency in major platform
algorithms strengthen the technological oligarchy?
•Is algorithm regulation the only solution to limiting the
power of the technological oligarchy?...

AI Regulation, Innovation, and
Global Governance.
Among the main themes of the Paris
Summit, the framework for global AI
governance is currently fragmented and
lacks unity. Three main visions confront
each other: 1) the United States, focused
on innovation and free markets; 2) China,
focused on strengthening its position as
a technological powerhouse; 3) Europe,
seeking to reconcile innovation, ethics
and respect for fundamental rights.
•How can we ensure a balance between innovation and
regulation needed to protect citizens' fundamental rights?
•How can we make AI accessible to all, while respecting the
plurality of cultures and reducing the digital divide?
•What role should international institutions (UN, OECD) play
in establishing global governance of AI?
•What role should civil society actors and non-
governmental organizations play in ensuring that AI is
inclusive, ethical, safe and beneficial to all?
•What rules of cooperation and international governance
are needed to ensure sustainable AI for people and the
planet?
•What are the principles and rules of governance for
subjects as critical as autonomous lethal weapons,
mass surveillance, deepfakes or disinformation?...
Topics: Regulation, International institutions. Rules of
governance. International cooperation. Conventions.
Treaties. Charters. Principles. Soft Law. Hard Law.
Sustainable Development Goals. Control authorities. Etc.

The Debate on Open AI Models
versus Closed AI Models.
The open source vs. closed models
debate raises economic and ethical
issues regarding the transparency, security,
and accessibility of models. Should
collaborative research be promoted in a
logic of democratizing technologies, or
should we opt for a closed-model
approach, with the risks of technological
power concentration associated with it?
Topics: Transparency, security, accessibility, Open
source philosophy, collaboration, technological
monopoly, concentration power, democratization of
technologies, open innovation, intellectual property, etc.
•Does open source really foster democratic
innovation, or does it serve the commercial interests
of AI model publishers?
•Does opening up models guarantee greater
transparency or create new vulnerabilities?
•Does the open source model jeopardize the
confidentiality of personal data?
•Does Open Source promote more equitable AI or
increase inequalities of access?
•Does the closure of AI models obstruct the
democratization of AI and technological progress?
•Who should be liable for damage caused by an
open source model?...

Digital divide: distribution of AI
gains and territorial inequalities.
The first priority of the Paris AI Summit
states: "Promote AI accessibility to
reduce the digital divide." This means
ensuring fair access to AI technologies
for all countries and populations,
including access to infrastructure and
tools, mastery of digital skills (the fight
against digital illiteracy), and even the
diversity of available content.
Topics: Accessibility. Equity. Inclusion. Digital literacy.
Educational programs and training. Fight against
monopolies. Infrastructure. Digital skills. Cultural diversity
of models and content. Fair access to models and
data. Etc.
•How can we work towards a fair distribution of AI
gains and avoid the risks of accentuating inequalities
on a global scale?
•How can we reduce inequalities and help developing
countries strengthen their AI capacities and avoid
market concentration among a few countries?
•Should access to digital technology and AI be
recognized as a fundamental right in order to
guarantee equal opportunities?
•Shouldn't access to digital technology and AI
(infrastructure, tools, knowledge, skills, etc.) be
considered a common good?
•Is territorial inequality in access to digital technology
and AI a new form of discrimination?...

AI’s Carbon Footprint and
Sustainable Computing.
Although artificial intelligence is often
perceived as an intangible technology,
its environmental impact is very real
and raises major ethical issues. Its
operation relies on energy- and
material-intensive digital infrastructures
(extraction of raw materials, chips,
data centers, equipment, electricity
and water consumption, etc.).
Topics: AI life cycle. Carbon footprint. Carbon neutrality.
Energy consumption. Extraction of rare metals. Water
consumption. Rebound effect. Technological
solutionism. Green AI. Frugal AI. CSR.
Calculation models. Indicators. Anthropocene.
Development models. Contradictory injunctions. Etc.
•What is the true environmental impact of AI’s full life cycle,
from raw materials extraction to equipment recycling?
•Can the growing carbon footprint of AI be justified by its
contributions to environmental policies?
•What about second-order impacts (for example, optimized
construction or reduced energy use), and third-order impacts
that are more global and systemic, such as changes in
user behavior?
•Are current computational models sufficient to estimate
AI’s environmental footprint? (A first-order sketch follows this list.)
•Does AI contribute to a rebound effect in energy and
resource consumption?
•Do major tech companies have an ethical responsibility
for AI’s ecological footprint?...
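
To make the debate over calculation models concrete, here is a minimal first-order sketch in Python. Every figure is a hypothetical illustration, not a validated methodology: real life-cycle assessments must also account for embodied emissions, water use, and rebound effects.

# Minimal sketch (hypothetical figures): a first-order estimate of the
# carbon footprint of training an AI model.
def training_carbon_footprint_kg(gpu_count: int,
                                 gpu_power_kw: float,
                                 training_hours: float,
                                 pue: float,
                                 grid_kgco2_per_kwh: float) -> float:
    # Energy consumed (kWh) times grid carbon intensity (kgCO2e/kWh).
    # Ignores embodied emissions (chip manufacturing, construction),
    # water use, and rebound effects -- the blind spots raised above.
    energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
    return energy_kwh * grid_kgco2_per_kwh

# Hypothetical run: 512 GPUs drawing 0.4 kW each for 30 days, in a
# data center with PUE 1.2, on a grid emitting 0.4 kgCO2e per kWh.
footprint_kg = training_carbon_footprint_kg(512, 0.4, 24 * 30, 1.2, 0.4)
print(f"~{footprint_kg / 1000:.0f} tCO2e")  # roughly 71 tonnes CO2e

The point of the sketch is precisely its limits: a simple energy-times-intensity model captures only operational emissions, which is why the questions above ask whether such models are sufficient.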

AI, Automation, Employment,
and the Future of Work.
The question of AI's impact on
employment and work is posed on two
different levels of analysis (Luc Ferry):
1) the question of fact: are we, or are we
not, heading for the end of salaried work,
replaced by AI? 2) the normative question:
would this be good or bad news? These
two questions involve two antinomies
and two visions of the future of work.
Topics: Growth, productivity gains, end of work,
representation of work, universal basic income, creative
destruction, transformation of the labor market, training,
public employment policy, skills, professions, tasks,
human-machine collaboration, technological
unemployment, etc.
•What are the possible scenarios for the evolution of
work and employment in the age of AI?
•Does the current revolution follow the classical pattern of
creative destruction (Schumpeter) observed in previous
industrial revolutions, or does it break with them?
•Which tasks and professions are most at risk from AI-
driven automation?
•How will AI change our very representation of work?
•What remains of our work and our motivation to work in
a world where a large proportion of our tasks will be
transferred to AI?
•How do we address the question of wealth
distribution and social justice in a world where the
share of human labor is tending to decline?...

Mass Surveillance, Privacy
and Security.
Surveillance has long been associated
with the idea of “Big Brother” (a
centralized authority). With the rise of
digital technology, it has also become
privatized (data collection, algorithms).
By enabling the analysis of vast amounts
of data (video surveillance, connected
objects, etc.), AI can enhance security
but also threaten individual freedoms.
Topics: Mass surveillance. Private surveillance.
Algorithmic surveillance. Facial recognition.
Algorithmic bias. Social control. Personal data.
Individual freedoms. Privacy. Video surveillance. Etc.
•Security and fundamental freedoms: to what extent can
protection and security take precedence over individual
freedoms?
•Augmented video surveillance, facial recognition,
predictive behavior analysis, unprecedented control
capabilities: is AI threatening our fundamental
freedoms?
•Doesn't the development of predictive surveillance to
anticipate risky behavior threaten the presumption of
innocence?
•Should the use of AI in surveillance be subject to greater
democratic control?

AI-Generated Content and
Intellectual Property.
The issue of copyright arises on two
levels. First, the unauthorized use of
data and copyrighted works to train
algorithms, along with the lack of
transparency from developers.
Second, the creation of works by AI,
challenging the very notions of work
and authorship. Under what
conditions can AI be considered an
author?
Topics: Author. Originality. Literary and artistic work.
Intellectual creation. Exclusive right. Moral right.
Related right. Economic rights. Aesthetic (artistic)
purposes.
•How can greater transparency be ensured from
developers regarding the sources used to train
generative AI models?
•What legal framework should be adopted to strike a fair
balance between innovation and copyright protection?
•Under what conditions can AI-generated content
qualify as a work of art?
•If a digital representation is recognized as a work of art,
who should be considered the author? The person who
provided the instructions to the model? The model’s
designer? The model itself?
•From a copyright perspective, what differentiates a
photograph from an image generated by an AI model,
since both result from a technical process?...

AI in Warfare: The Ethics of
Autonomous Weapons.
Autonomous weapon systems raise
major strategic, legal, and ethical issues,
including the risk of an arms race,
uncontrolled proliferation, terrorist use,
responsibility in case of error, and
system failures. Their rapid development
fuels debates over their prohibition and
international regulation.
Topics: Definition, proliferation, prohibition, arms
race, international law, target discrimination,
human control, cyberattacks, unpredictability,
deterrence, malicious uses, etc.
•Should autonomous lethal weapons be banned?
•Should the current legal framework be reviewed and
adapted to provide a stricter framework for the use of
autonomous lethal weapons?
•How can we anticipate and prevent the use of
autonomous lethal weapons by malicious actors?
•Do autonomous lethal weapons encourage the arms
race and the escalation of conflicts?
•In the event of error, who should be held responsible?
The state? The manufacturer? The operator?
•Can an algorithm be entrusted with the decision to take
the life of a human being?...

AI, Robotics, and Liability: Who
is Responsible?
Today, only humans (and the legal
persons they create) possess legal
personality. This grants them rights and
obligations and the ability to be held
accountable for their actions, including
damage caused by an AI. If AI research
were to lead to the creation of a
superintelligence, should legal
personality be granted to the robot?
Topics: Legal personality. Civil liability. Criminal
liability. Autonomy. Consciousness. Psychological
consciousness. Moral consciousness. Capacity to
suffer. Moral patient.
•Under what conditions could an AI system be held civilly
and criminally liable for its actions?
•Should we grant legal personality to a strong AI?
•What are the essential attributes a robot should
possess to qualify as a moral agent or patient?
•Should a “strong” AI be considered a moral agent in its
own right, just like a human?
•To what extent would the advent of a strong AI
challenge today's legal categories?
•Should a strong AI have the same fundamental rights
as a human being?
•Can we ever imagine granting the same dignity to a
robot as to a human being?...

Appendix.
Some key texts of AI Ethics (law).

A general introduction to
ethical philosophy.
A clear, stimulating, and concrete general introduction
to ethics, intended not as a history of the great ethical
theories but as “a sort of intellectual toolbox for
those who might be interested in confronting the moral
debate,” by one of the most talented and original French
voices in contemporary ethical philosophy. Ruwen Ogien
(1949-2017), who died on May 4, 2017, was a CNRS
research director in philosophy and a member of the
Centre de Recherches Sens, Éthique et Société
(CERSES) laboratory. The book has been translated
into English under the title Human Kindness and the
Smell of Warm Croissants: An Introduction to Ethics.

Three major sources of Western
ethical philosophy and AI Ethics.
John Stuart Mill (1806-1873)
Utilitarianism
Consequentialism
Immanuel Kant (1724-1804)
Groundwork of the Metaphysics of Morals
Deontologism
Aristotle (384-322 BC)
Nicomachean Ethics
Virtue ethics

The 23 Asilomar Principles.
•The first of the major initiatives is the “Future of Life Institute,”
an association founded in 2014 and backed by top academics,
including Stephen Hawking, as well as major industrialists such
as Elon Musk.
•The association's main mission: “To catalyze and support
research and initiatives aimed at safeguarding life and
developing optimistic visions of the future.”
•On the occasion of the famous conference on beneficial AI held
in Asilomar (California), it published, in January 2017, the
23 Asilomar Principles, which partly take up the fundamental
rights derived from major international texts while adding a few
specific principles.
•The 23 principles are grouped into 3 categories: 1) research
issues, 2) ethics and values, 3) longer-term issues.

Montreal Declaration for a Responsible
Development of Artificial Intelligence.
•Among the most emblematic private initiatives, this
time emanating from a university, is the Montreal Declaration
for the Responsible Development of Artificial Intelligence.
•The Declaration is the result of a process of citizen
deliberation and a collective work.
•It is aimed at policymakers and at “any individual, any civil
society organization and any company wishing to participate
in the development of AI in a responsible manner.”
•It sets out 10 key principles: 1) well-being, 2) respect for
autonomy, 3) protection of intimacy and privacy, 4) solidarity,
5) democratic participation, 6) equity, 7) inclusion of diversity,
8) prudence, 9) responsibility, 10) sustainable development.

OECD Council Recommendations
on Artificial Intelligence.
•The OECD (Organisation for Economic Co-operation and
Development) was, in May 2019, the first international
organization to adopt recommendations on AI.
•The document has since been updated and mainly defends the
values of inclusive growth, sustainable development and well-
being; human-centered values and fairness; transparency and
explainability; robustness, safety and security; accountability; etc.
•It also sets out a series of recommendations for policymakers,
such as: investing in research and development; fostering a
digital ecosystem for AI; shaping an enabling policy framework
for AI; building human capacity and preparing for labor market
transformation; fostering international cooperation for
trustworthy AI.

UNESCO Recommendation on the
Ethics of Artificial Intelligence.
•The first initiative by a public body, adopted by UNESCO's 193
member states, and the first global standard-setting instrument
on the subject.
•Its aim: to protect and promote human rights and human dignity
and to weigh the fundamental risks AI poses to societies.
•11 strategic action areas, such as ethical impact assessment;
governance; data policy; environment and ecosystems; gender
equality; culture; education and research; etc.
•4 values to guarantee and promote, such as respect, protection
and promotion of human rights, fundamental freedoms and
human dignity; environment and ecosystems; diversity and
inclusion; etc.
•10 principles to uphold, including safety and security; fairness
and non-discrimination; sustainability; right to privacy and data
protection; transparency and explainability; etc.

Ethics Guidelines for Trustworthy
AI of the European Union.
•On April 8, 2019, the High-Level Expert Group on Artificial
Intelligence (AI HLEG), mandated by the European Commission,
published its Ethics Guidelines for Trustworthy AI.
•This is by no means a hard-law document but rather, as the
name suggests, a set of guidelines designed to provide AI
players with an ethical framework for the development and
deployment of artificial intelligence systems, based on the
Charter of Fundamental Rights of the European Union.
•The document aims to promote and establish a framework
for the realization of trustworthy AI.

The Assessment List for Trustworthy
Artificial Intelligence (ALTAI) for self-assessment.
•In the aforementioned document (Ethics Guidelines for
Trustworthy AI), the group of experts identifies “seven
requirements” that should form the basis for assessing an AI
system throughout its life cycle: 1) human agency and oversight;
2) technical robustness and safety; 3) privacy and data
governance; 4) transparency; 5) diversity, non-discrimination
and fairness; 6) societal and environmental well-being;
7) accountability.
•The Assessment List falls within this framework: it provides an
ethical evaluation grid for AI systems based on the seven
requirements defined in the Ethics Guidelines for Trustworthy AI
(a purely illustrative sketch of such a grid follows).
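
Purely as an illustration of what such an evaluation grid could look like in practice, here is a minimal sketch in Python. The class, the 0-4 scoring scale, and the method names are hypothetical conventions invented for this example; this is not the official ALTAI questionnaire.

# Minimal sketch of a self-assessment grid over the seven requirements.
# Hypothetical structure, not the official ALTAI questionnaire.
from dataclasses import dataclass, field

REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class Assessment:
    system_name: str
    scores: dict = field(default_factory=dict)  # requirement -> 0..4

    def rate(self, requirement: str, score: int) -> None:
        # Record a score from 0 (not addressed) to 4 (fully addressed).
        if requirement not in REQUIREMENTS:
            raise ValueError(f"Unknown requirement: {requirement}")
        if not 0 <= score <= 4:
            raise ValueError("Score must be between 0 and 4")
        self.scores[requirement] = score

    def gaps(self) -> list:
        # Requirements not yet assessed, or scored below 2.
        return [r for r in REQUIREMENTS if self.scores.get(r, 0) < 2]

# Hypothetical usage: assess a credit-scoring system.
assessment = Assessment("credit-scoring model")
assessment.rate("Transparency", 3)
assessment.rate("Privacy and data governance", 1)
print(assessment.gaps())  # every requirement still needing attention

The design choice mirrors the Guidelines themselves: the grid does not produce a single pass/fail verdict but surfaces the requirements where the system still falls short.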

The European Declarations
of Fundamental Rights.
EU Charter of Fundamental
Rights (2000)
The Charter of Fundamental
Rights of the European
Union became binding
under the Treaty of Lisbon
on December 1, 2009. Its
Article 7 guarantees respect
for private and family life,
echoing Article 8 of the
European Convention on
Human Rights:
“Everyone has the right to
respect for his private and
family life, his home and his
correspondence.”
European Convention on
Human Rights (1950)
Article 8 of the 1950
European Convention on
Human Rights in turn
echoes Article 12 of the
1948 Universal Declaration
of Human Rights:
“No one shall be subjected
to arbitrary interference
with his privacy, family,
home or correspondence,
nor to attacks upon his
honor and reputation.
Everyone has the right to
the protection of the law
against such interference
or attacks.”