The Ethics of Artificial Intelligence in Digital Ecosystems
washikmaryam
May 05, 2024
About This Presentation
The ethics of AI go beyond just the technology itself. When we consider AI within the complex web of digital platforms and services (the digital ecosystem), new ethical concerns arise.
A big focus is on how AI decisions can be biased, reflecting the data it's trained on and potentially leading to discrimination. We also need to be mindful of privacy issues and how AI might be used to manipulate users.
To ensure ethical AI in digital ecosystems, we need to consider these potential pitfalls during development and use frameworks to make responsible choices. This includes reflecting on the decision-making process and how AI can be used for good.
Slide Content
The Ethics of AI in Digital Ecosystems
Maryam Washik
Department of Informatics
University of Oslo
Digital Ecosystems
Networks of diverse entities connected through different levels of collaboration and unique complementarities, without hierarchical controls.
Artificial Intelligence (AI)
John McCarthy (1956): "the science and engineering of making intelligent machines."
The simulation of human intelligence processes by
machines, especially computer systems.
“Systems that extend human capability by sensing,
comprehending, acting and learning.” (Daugherty and
Wilson, 2018, p 3)
Major areas/subfields: Machine Learning, Natural Language Processing, Expert Systems, Robotics, Computer Vision, Speech Recognition, Reinforcement Learning…
From search engines and call-center chatbots to AI-enabled humanoid robots, a whole range of artificial intelligence products and services is used in digital platforms and ecosystems today.
Examples of AI-driven Digital Ecosystems
Image: Ova Digital Ecosystems
How AI Works in Digital Ecosystems
Ecosystem-based business models require and generate
large volumes of data.
Data comes from all participants in the ecosystem, and AI is
often required to make sense of the data for productivity and
better efficiency.
This feedback loop is called the virtuous cycle of AI-driven growth: it begins with a business problem, followed by data collection, training an AI model, serving users, gaining more users, gaining more data, and improving the AI model…
Data drives growth, growth generates more data, more data leads to better AI models, and better AI drives further growth: a positive feedback loop that results in exponential growth.
AI improvement comes from diligent observation and learning; the model must continuously evolve to keep up with the growth.
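To see why this loop compounds, here is a toy simulation; the user, data, and quality numbers are invented for illustration, not taken from any real platform:

```python
# Toy simulation of the virtuous cycle: serving users generates data,
# more data improves the model, and a better model attracts more users.
# All rates and constants are invented for illustration.

users, data = 1_000, 10_000

for year in range(1, 6):
    data += users * 50                                        # each user contributes data
    quality = min(0.95, 0.4 + 0.1 * (data / 100_000) ** 0.5)  # more data -> better model
    users = int(users * (1 + quality))                        # better model -> faster growth
    print(f"year {year}: users={users:,}  data={data:,}  quality={quality:.2f}")
```

Each pass through the loop raises model quality, which raises the growth rate for the next pass; that is the exponential growth described above.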
Roles of AI in an Ecosystem
Network Effects and Growth: drive network effects in digital ecosystems by enhancing user
experiences, increasing engagement, and attracting more users, which in turn amplifies the
value of the ecosystem for stakeholders involved. YouTube employs AI for personalized video
recommendations, which increase user engagement. This, in turn, attracts more users to the
platform. The network effect, driven by AI, amplifies the value of YouTube for both creators
and viewers.
Personalization: personalize digital experiences based on user behaviour and preferences. E.g., Amazon recommends products based on purchase history.
Security Enhancement: strengthen digital security by detecting and preventing cyber threats. E.g., Google's AI algorithms scan incoming emails for signs of phishing, such as suspicious links, sender inconsistencies, or mismatched domain names.
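As a rough illustration of the signals listed above, here is a minimal rule-based sketch. This is not Google's filter, which is a learned model over many more features; the checks and example addresses are invented:

```python
import re

def phishing_signals(sender: str, reply_to: str, body_html: str) -> list[str]:
    """Toy rule-based checks for the signs named on the slide."""
    signals = []
    # Sender inconsistency: From and Reply-To addresses use different domains.
    if sender.split("@")[-1].lower() != reply_to.split("@")[-1].lower():
        signals.append("From/Reply-To domain mismatch")
    # Suspicious link: the displayed URL differs from the actual link target.
    for target, shown in re.findall(
            r'href="https?://([^/"]+)"[^>]*>\s*https?://([^/<\s]+)', body_html):
        if target.lower() != shown.lower():
            signals.append(f"link shows {shown} but points to {target}")
    return signals

print(phishing_signals(
    "alerts@bank.example", "collect@evil.example",
    '<a href="http://evil.example">http://bank.example</a>'))
```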
Roles of AI in an Ecosystem
Decision Optimization: AI uses data analysis to provide insights for better decision-making in digital ecosystems. E.g., Uber employs AI for pricing decisions, adjusting ride fares in real time based on factors like demand, traffic, and weather conditions to balance supply and demand (a toy pricing sketch follows this list).
Task Automation: AI automates tasks in digital ecosystems, freeing up humans for more strategic work. E.g., AI chatbots working round-the-clock in customer support, handling routine inquiries and resolving issues.
Innovation in Products and Services: AI drives innovation by enabling the creation of new digital products and services. E.g., autonomous vehicles that navigate roads without human intervention, redefining the future of transport and mobility services.
Advanced Analytics: AI conducts advanced analytics on large datasets to identify hidden patterns and generate insights. E.g., AI analyses user data for businesses and generates insights on preferences, behaviours, and purchase trends for targeted marketing.
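To make the Decision Optimization example concrete, here is a toy dynamic-pricing sketch. The multiplier formula and the 3x cap are invented for illustration and are not Uber's actual pricing model:

```python
def surge_fare(base_fare: float, riders_waiting: int, drivers_available: int,
               bad_weather: bool = False) -> float:
    """Toy real-time fare adjustment: price rises as demand outstrips supply."""
    demand_ratio = riders_waiting / max(drivers_available, 1)
    multiplier = 1.0 + 0.5 * max(demand_ratio - 1.0, 0.0)  # surge only above parity
    if bad_weather:
        multiplier *= 1.2                                   # weather tightens supply
    return round(base_fare * min(multiplier, 3.0), 2)      # cap the surge at 3x

print(surge_fare(10.0, riders_waiting=80, drivers_available=20))  # 25.0
print(surge_fare(10.0, riders_waiting=10, drivers_available=20))  # 10.0 (no surge)
```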
AI has been a game-changer for many platforms and ecosystems, but not without its challenges.
Even with exciting promises and many possibilities,
there are important ethical problems we need to pay
attention to.
As we give AI more important tasks, we have to
think about what's right, fair, and good in how it
works.
What is ethics?
AI Ethics
Ethics is a set of moral principles which help us discern
between right and wrong.
Since the middle of the 2010s, a visible discourse on
the ethics of artificial intelligence (AI) has developed.
AI ethics is a multidisciplinary field that studies how to
optimize AI's beneficial impact while reducing risks and
adverse outcomes.
It provides guidance for the development of practical interventions.
Most prominent AI ethics issues are also human rights
issues (privacy, equality and non-discrimination).
We study AI ethics to ensure that AI is developed and used in a way that benefits society, reduces unfavourable outcomes, and aligns with human values.
What is it about AI that gives rise to
ethical issues in digital ecosystems?
Information
AI systems are trained on large amounts of
data, which can include sensitive
information about people, such as their
personal data, health data, or financial data.
This information can be misused to invade their privacy and endanger their safety.
E.g., AI systems in navigation or mapping applications collect location data. If not well protected, this data can be used to track people's movements, compromising their privacy and safety.
Investopedia: Ellen Lindner
Extrapolation
Extrapolation means being trained on a certain range of data yet making predictions on a different range of data.
AI systems can extrapolate from the data they are
trained on to make predictions about the future.
Predictions can be inaccurate or biased, which
can lead to ethical problems.
For example, an AI medical system might over-diagnose a patient with a serious disease, subjecting the patient to unnecessary treatment and costs.
Image: towardsdatascience.com
Automation
AI systems can automate tasks and decision-
making that were previously done by humans.
This can lead to job losses and can also create
ethical problems like bias and discrimination.
For example, automated decision-making systems might reflect society's biases and stereotypes. An AI system trained on data about past criminal convictions could be used to predict who is likely to commit a crime in the future. This could lead to inaccurate judgments and to people being discriminated against or denied opportunities.
AI makes decisions based on what it is fed.
Garbage in, garbage out.
Images: propublica.com
Imitation
AI systems can be designed to
imitate human behaviour. This can be
used for good, such as in the
development of virtual assistants that
can provide companionship or
emotional support. However, it can
also be used inappropriately.
For example, deepfakes can be used
to spread misinformation or to
damage someone's reputation.
Deepfakes are AI-altered media that
convincingly impersonate people in
photos, videos, or audio.
Ethical Challenges of AI
Prominent cases of algorithmic biases: Amazon's recommendation algorithm has been
criticized for its bias towards recommending products that are more expensive or popular,
rather than products that are more relevant to the user's needs.
Backlashes against privacy breaches: China's social credit system is a system that uses AI to
track and monitor citizens' behaviour. The system uses this data to assign each citizen a
score, which can then be used to determine their access to jobs, housing, and other services.
The social credit system has been criticized for its potential to violate people's privacy rights
and to create a dystopian society.
Use of data to manipulate: Deepfakes are photos, videos, or audio recordings that have been manipulated using AI to make it look or sound like someone is saying or doing something they never actually said or did. Deepfakes can be used to spread misinformation, damage someone's reputation, or even blackmail them.
Harm/offenses caused: Job losses, loss of life, discrimination.
Ethical Challenges of AI
Control: AI is increasingly making split-second decisions. What are the chances of humans being involved in the decision-making process? E.g., autonomous cars and high-frequency trading.
Power balance: Monopoly by huge platforms like Facebook, Amazon and Google.
Ownership: Who owns it? Who has the intellectual property rights? Who should be held
responsible? Who should get paid?
Environmental Impact: Power-hungry infrastructure is used for training AI. The computing power required to train AI has increased dramatically (doubling every 3.4 months since 2012). Training a single large AI model can produce about 626,000 pounds of carbon dioxide, equivalent to the emissions from 300 round-trip flights between New York and San Francisco (a quick arithmetic check follows at the end of this list).
Humanity: How does AI impact our feeling of being human? What will our contribution be?
How will it affect human dignity and flourishing?
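The two Environmental Impact figures above can be sanity-checked with simple arithmetic; the only inputs are the numbers cited on the slide (the 626,000-pound figure is commonly attributed to Strubell et al., 2019):

```python
# Growth implied by "doubles every 3.4 months", and the CO2 comparison.

doubling_months = 3.4
growth_per_year = 2 ** (12 / doubling_months)
print(f"compute grows ~{growth_per_year:.1f}x per year")          # ~11.5x

model_co2_lbs = 626_000   # CO2 for training one large model
flights = 300             # round-trip NY-SF flights
print(f"~{model_co2_lbs / flights:,.0f} lbs CO2 per round-trip")  # ~2,087 lbs
```

The ~2,087 lbs per round trip is consistent with commonly cited per-passenger estimates for a New York to San Francisco round trip, so the comparison holds up.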
Are there any standardised steps?
How do we think about or approach such ethical issues / Dilemmas?
Markkula Centre for Applied Ethics: Identify the ethical issues; Get the facts; Evaluate the alternative actions; Choose an option for action and test it; Implement your decision and reflect on the outcome.
Bard: Identify the ethical dilemma; Gather information; Identify the ethical principles; Weigh the options; Make a decision; Reflect on the decision.
ChatGPT: Define the problem; Gather information; Identify options; Evaluate consequences; Apply ethical frameworks; Make a decision; Implement the decision; Monitor, review, learn and improve.
There is no absolute or standard ethical framework, or set of steps, that works for every ethical issue.
On what basis can we decide between right
and wrong, good and bad, when working
with AI?
There are theories and lenses that can be
used to guide ethical decision-making.
How Should We Decide?
Ethical Theories/Lenses/Perspectives
What makes an action ethically better or worse than an
alternative action?
Ethical theories provide us with a framework for thinking
about moral problems, and ethical lenses help us to see
ethical issues from different perspectives.
Ethical theories are more general, while ethical lenses are
more specific.
They help us to:
• Identify the relevant moral values and principles that are at stake in a situation.
• Consider the potential consequences of our actions for different people and groups.
• Develop a rationale for our decisions that is consistent with our moral values.
Consequentialism
Greatest good
The right action is the one that produces the
greatest good for the greatest number of people.
(An Introduction to the Principles of Morals and Legislation, Jeremy Bentham, 1789)
Whether or not an action is right or wrong
depends solely on the consequences of that
action.
Example: Imagine a social media platform whose AI algorithms use personal data to curate users' newsfeeds. Consequentialism would evaluate the rightness or wrongness of this algorithm based on its consequences for the greatest number of users.
Consequentialism
Maximizing value and well-being for the greatest number of people (the common good).
Consequentialism defines value/utility (e.g., happiness, justice) and asserts that an action is morally
right if it maximizes this value. The specifics of consequentialist theories vary based on their
understanding of value.
Who benefits from these consequences?
Is it just me? Is it everyone but me? Is it all of humanity?
The standard answer is that an action is morally right if its consequences are more favourable than
unfavourable for everyone who can benefit from what is valuable, such as pleasure and well-being,
including the person taking the action.
Utilitarianism (Jeremy Bentham, John Stuart Mill): a form of consequentialism with a specific account of value. Pleasure is good, pain is bad; thus, we should maximize pleasure and minimize pain.
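The utilitarian decision rule can be stated as a computation: among the available actions, choose the one with the highest total utility across everyone affected. A minimal sketch, with invented utilities for the newsfeed example above:

```python
# Utilitarian choice: maximize total utility across all affected parties.
# The two options and their utilities are invented for illustration.

options = {
    "engagement-maximizing feed": {"platform": 9, "advertisers": 6, "users": -4},
    "well-being-aware feed":      {"platform": 4, "advertisers": 3, "users": 6},
}

best = max(options, key=lambda name: sum(options[name].values()))
print(best)   # well-being-aware feed (total utility 13 vs 11)
```

Notice that the answer depends entirely on how utility is defined and measured, which is exactly where consequentialist theories differ.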
Deontology
Immanuel Kant - 18th century
Non-consequentialism: some kinds of actions are wrong in themselves, not just because they produce bad consequences (e.g., killing, torture). Whether an action is right or wrong does not depend, fully or even partially, on its consequences.
Moral rules and duties: The right action is the
one that conforms to moral rules or duties,
regardless of the consequences.
Example: When a person chooses to tell the
truth because they believe that it is their duty to
be honest, even if they know that it will hurt
someone's feelings.
Deontology
Imagine a situation where a large technology company is developing an AI chatbot for
customer support. In deontology, the chatbot should provide honest information, even if it
could potentially harm the company's interests or reputation.
An action can be considered right even if it produces bad consequences.
Rights and duties: Individuals have rights, and these rights correspond to duties.
Deontologists typically take our negative duties (not to harm) to be stricter than our positive duties (to help).
The intention of a person can help determine whether or not an act is permissible: if two actions have the same consequences, one may be permissible and the other not, depending on the intention behind each.
Virtue Ethics
Virtuous character.
Virtue ethics: the right action is the one that a virtuous
person would perform. (Nicomachean Ethics by Aristotle)
Emphasis on Character: Unlike the ethical theories that
prioritize consequences or actions, virtue ethics places a
strong emphasis on the moral character of an individual. It
suggests that a morally good person will naturally make
morally good choices.
E.g.: a virtuous person might choose to be brave in the face
of danger, even if it puts them at risk, because they believe
that courage is an important moral virtue.
Similarly, when faced with a decision to report a security breach that could harm her company's reputation, Sarah chooses to disclose the breach to affected customers, even though it may have negative consequences, because the decision aligns with her virtues of honesty and transparency.
Feminist Ethics
Gender-based approach (e.g., Carol Gilligan, Nel Noddings, and Virginia Held)
A branch of ethics that centers on the experiences and
perspectives of women.
It is concerned with addressing and challenging gender-based
injustices and inequalities.
While feminist ethics may incorporate virtue ethics elements, it
primarily focuses on issues related to gender, power, and
social justice.
Feminist ethics calls for a shift in societal norms and
organizational practices to reduce gender-based injustices in
the workplace, recognizing the importance of addressing
power imbalances and promoting social justice.
Example: removing gender-related biases from AI-driven hiring systems contributes to a fairer and more equitable job market, promoting gender equality and social justice.
Care Ethics
Caring relationships (Noddings, 1984)
Emphasizes the importance of caring
relationships and responsibilities in moral
decision-making.
It highlights the role of empathy, compassion,
and care in addressing moral dilemmas.
Care ethicists stress the significance of
considering the well-being of individuals in one's
care.
For example, a nurse facing a moral dilemma
involving a terminally ill patient might consider
the patient's emotional and psychological well-
being alongside medical treatment.
Viewing AI from Ethical Perspectives/Lenses
The Utilitarian Lens (The consequences of actions) would ask whether AI systems produce the greatest
good for the greatest number of people. E.g., evaluating whether an AI-powered healthcare platform
optimizes treatment options for best overall health outcomes.
The Common Good Lens (The well-being of the community as a whole) would ask whether AI systems are
used in a way that benefits society as a whole, not just individuals or specific groups. E.g., assessing whether an
AI-driven educational platform provides accessible and equitable learning opportunities to all students,
thereby enhancing the educational well-being of the entire community, not just select groups.
The Rights Lens (The protection of individual rights) such as the right to privacy, the right to freedom of
speech, and the right to non-discrimination. E.g., examining whether an AI-powered surveillance system
respects individuals' right to privacy by securely handling personal data and limiting surveillance to lawful
and necessary purposes.
The Justice Lens (Fairness and equality) would ask whether AI systems are used in a way that is fair to all
people, regardless of their race, gender, religion, or other factors. E.g., whether an AI-based hiring platform
provides equal opportunities and eliminates biases, ensuring that all job applicants are treated fairly,
regardless of their demographic backgrounds, such as race, gender, or religion.
Markkula Center for Applied Ethics
Viewing AI from Ethical Perspectives/Lenses
The Virtue Lens (The character of the people who develop and use AI systems) would ask
whether these people are acting in a virtuous way, such as being honest, fair, and transparent.
E.g., ensuring that the team responsible for developing AI-powered financial tools acts with
integrity and transparency, embodying virtues such as honesty and responsibility in their work.
Feminist Lens (Gender perspective) emphasizing the recognition and correction of gender-
based injustices and inequalities, as well as the promotion of women's autonomy and well-
being. E.g., whether AI technologies in reproductive healthcare respect women's autonomy
and reproductive rights, and whether they address and rectify historical biases and gender
disparities in healthcare access and treatment.
The Care Lens (The relationships between people and AI systems) would ask whether AI
systems are designed and used in a way that respects and protects these relationships. E.g.,
whether AI-powered care robots for the elderly are designed and operated in a manner to
provide companionship, emotional support, and maintain a sense of dignity for the elderly.
Markkula Center for Applied Ethics
Thinking Exercise
During a healthcare crisis with limited ventilators, AI is employed to allocate ventilators to
patients with breathing difficulties. One approach prioritizes patients with the highest likelihood
of recovery and another approach follows a "first come, first served" method.
Which approach might be favoured by which ethical theory, and why?
Do you see any other approaches?
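To make the two policies concrete before discussing them, here is a toy comparison; the patients and recovery probabilities are invented:

```python
# The exercise's two allocation policies, made concrete. Patients are
# (name, arrival_order, recovery_probability); all values are invented.
patients = [("A", 1, 0.30), ("B", 2, 0.85), ("C", 3, 0.60), ("D", 4, 0.90)]
ventilators = 2

fcfs = sorted(patients, key=lambda p: p[1])[:ventilators]          # first come, first served
by_recovery = sorted(patients, key=lambda p: -p[2])[:ventilators]  # likeliest to recover

print("FCFS:            ", [p[0] for p in fcfs])         # ['A', 'B']
print("Highest recovery:", [p[0] for p in by_recovery])  # ['D', 'B']
```

On these invented numbers, the recovery-based policy maximizes expected recoveries (0.90 + 0.85 = 1.75, versus 0.30 + 0.85 = 1.15 under first come, first served), while a rights or justice reading may still favour treating every patient's claim equally.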
Next: Group Discussions
Group Discussions
YouTube uses AI to recommend videos to its
users. The recommendation algorithm is
designed to keep users engaged on the
platform by suggesting videos that they
might be interested in. The algorithm works
by tracking the videos that users watch, the
videos that they like, and the videos that
they engage with. It also takes into account the videos that are popular among other users who have similar interests.
Consider potential ethical implications of this
scenario for different stakeholders involved.
Develop arguments for the identified ethical
implications from different ethical lenses.
Lens/Perspective | Main Focus of Approach
Common Good Lens | Well-being of the community as a whole
Utilitarian Lens | Pleasure is good, pain is bad
Rights Lens | Protection of individual rights
Justice Lens | Fairness and equality
Virtue Lens | Character of the individual
Feminist Lens | Gender, power, and social justice
Care Ethics Lens | Relationships between parties