Robots should be slaves: Perceptions of Bulgarians towards potential robot rights and obligations


About This Presentation

Presentation at the Robophilosophy conference "Social robots with AI: Prospects, risks, and responsible methods", 20-23 August 2024, Aarhus University, Aarhus, Denmark.

The paper empirically tests Gunkel’s robot rights matrix using a sample of 215 respondents in Bulgaria. It evaluates ...


Slide Content

Robots should be slaves:
Perceptions of Bulgarians towards potential
robot rights and obligations
Stanislav Ivanov
Varna University of Management, Bulgaria
Zangador Research Institute, Bulgaria
David Gunkel
Northern Illinois University, USA

Authors
2
Stanislav Ivanov
www.stanislavivanov.com
David Gunkel
www.gunkelweb.com

Rationale and Background
3

Rationale
4

Rationale
5

Rationale
6

Empirical studies
7

Robot rights matrix
8
Based on Gunkel (2018)

Robot rights matrix
Arguments:
•Robots (and artificial autonomous agents) cannot and should not have
rights because they are just tools;
•Robots (and artificial autonomous agents) can and should have rights
because they have some intelligence and autonomy and these will
increase in the future;
•Robots (and artificial autonomous agents) can but should not have
rights in order to protect humans and social institutions;
•Robots (and artificial autonomous agents) cannot but should have
rights to avoid being mistreated by humans.
9

Robot rights matrix
10

Our contribution
•Research is overwhelmingly conceptual – there are only a handful of previous empirical studies
•Research focuses on rights and disregards obligations
•Research focuses on rights as a whole, but not all rights are made equal
•Research focuses on robots and largely disregards other artificial
autonomous agents
Our study:
•Empirical research on the public support for the rights and obligations of robots and other artificial autonomous agents at the granular level of separate rights and obligations.
11

Methodology
12

Methodology
•Data collected during the period February-March 2024 in Bulgaria through
an online questionnaire developed on Google Forms.
•The research population included every Bulgarian above 18 years of age, with a special focus on two groups – people with education in IT, AI, or Robotics and people with education in Law.
•The link was shared on social media and by email.
•Support in distribution was provided by admins of FB groups and pages.
•An FB ad was run to reach diverse respondents (over 40,000 impressions).
•A link to a free PDF of an e-book was provided to stimulate participation.
•The sample includes 215 respondents (including 18 with education in Law
and 61 with education in AI, Robotics, IT).
13

Methodology
•26 rights and obligations were derived from the Universal Declaration of
Human Rights, the International Covenant on Civil and Political Rights, the
International Covenant on Economic, Social and Cultural Rights and two
previous studies (de Graaf et al., 2022; Lima et al., 2020).
•The list of rights and obligations and their Bulgarian translations was reviewed with two lawyers.
14

Methodology
•Block 1: Demographic information about the respondents.
•Block 2: Level of knowledge about AI; Education in Law, AI/Robotics/IT.
•Block 3: Attitudes towards AI. It included 5 positive and 5 negative societal impacts of AI. A 5-point agreement scale was used (reverse coding was applied for the statements about the negative impacts of AI; see the coding sketch after this slide).
•Block 4: Opinions on whether robots and other agents can have rights/obligations (coded from 1 = Definitely cannot have the right/obligation to 5 = Definitely can have the right/obligation).

15
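The coding of Blocks 3 and 4 can be illustrated with a minimal sketch in Python (pandas). The file name and column names below are hypothetical stand-ins for the Google Forms export; they are not the study's actual variables.

```python
import pandas as pd

# Hypothetical column names -- the study's actual questionnaire variables are not reproduced here.
positive_items = ["ai_productivity", "ai_quality_of_life"]   # positive societal impacts of AI
negative_items = ["ai_job_losses", "ai_privacy_risks"]       # negative societal impacts of AI

# Assumed CSV export of the Google Forms responses.
df = pd.read_csv("responses.csv")

# Reverse-code the negative-impact statements on the 5-point agreement scale,
# so that higher scores always indicate a more positive attitude towards AI.
df[negative_items] = 6 - df[negative_items]

# A simple overall attitude index as the mean of all (re)coded items.
df["ai_attitude"] = df[positive_items + negative_items].mean(axis=1)
```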

Methodology
•Block 5: Opinions on whether robots and other agents should have rights/obligations (coded from 1 = Definitely should not have the right/obligation to 5 = Definitely should have the right/obligation).
•The order of statements in blocks 3, 4 and 5 was randomized to avoid order effects (illustrated below).
16
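The order-randomization idea can be sketched as follows; in the study itself the shuffling was presumably handled by the questionnaire platform rather than by code, and the item identifiers below are invented.

```python
import random

# Hypothetical item identifiers for the Block 3 statements.
statements = ["ai_productivity", "ai_job_losses", "ai_privacy_risks", "ai_quality_of_life"]

# An independent random presentation order per respondent helps avoid order effects.
order = list(statements)
random.shuffle(order)
print(order)
```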

Key results
17

Key results
•As a whole, respondents did not think that robots and AI were capable of having
rights per se or that they should be given rights.
•Respondents were most sceptical towards political rights, namely Vote for public officials (M_can = 1.45, M_should = 1.50) and Be a candidate for public office (M_can = 1.58, M_should = 1.60). Some economic rights, such as Strike (M_can = 1.65, M_should = 1.58), Own property (M_can = 1.60, M_should = 1.63) and Receive wage/salary (M_can = 1.90, M_should = 1.84), were not supported either. At the other extreme, respondents supported rights related to the existence of robots/AI, such as Not to be abused (M_can = 3.77, M_should = 3.72) and Receive software/hardware updates/maintenance (M_can = 3.70, M_should = 3.68), but not the right Not to be terminated/deleted (M_can = 2.34, M_should = 2.44).
•These results were consistent among respondents with education in Law,
education in AI/Robotics/IT, and the overall sample.
18

Key results
19
•Full sample

Key results
20
•Education in AI,
Robotics, IT

Key results
21
•Education in Law

Key results
The findings drew a picture where robots were
perceived as slaves without the rights to
reproduce, own property, strike, receive a salary,
vote or be elected but with the obligations to
adhere to regulations and respect humans.
22

Key results
•Respondents’ answers about whether robots can and should have specific rights and obligations were positively, strongly and statistically significantly correlated (ρ_min = 0.579, ρ_max = 0.871, all p < 0.001), while there were no statistically significant differences between them.
•Moreover, the correlation between the mean responses to the can and should questions was nearly perfect (ρ = 0.996, p < 0.001), including for respondents with education in Law (ρ = 0.980, p < 0.001) and with education in IT, AI, and Robotics (ρ = 0.988, p < 0.001). A sketch of these tests appears after the next slide.
•Therefore: 
23

Key results
Respondents, regardless of their background, did not distinguish between the two options (can have and should have the right/obligation).
24
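A minimal sketch of these tests, using synthetic paired 1-5 ratings for a single right in place of the study's raw data; Spearman's ρ measures the can/should association, and a Wilcoxon signed-rank test is used here purely as one example of a paired nonparametric comparison (the slide does not state which difference test the study applied).

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

rng = np.random.default_rng(0)

# Synthetic paired 1-5 ratings for one right (e.g. "Own property") from 215 respondents;
# illustrative stand-ins only, not the study's data.
can_scores = rng.integers(1, 6, size=215)
should_scores = np.clip(can_scores + rng.integers(-1, 2, size=215), 1, 5)

# Correlation between the "can" and "should" ratings.
rho, p_corr = spearmanr(can_scores, should_scores)

# Paired nonparametric test for a systematic can/should difference.
stat, p_diff = wilcoxon(can_scores, should_scores, zero_method="zsplit")

print(f"rho = {rho:.3f} (p = {p_corr:.3g}); Wilcoxon p = {p_diff:.3g}")
```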

Key results
•The two-step and K-means cluster analyses revealed the existence of two clusters. Cluster 1 respondents had positive perceptions towards AI and were more supportive of robot rights, while Cluster 2 respondents were reserved towards AI and robot rights (a clustering sketch appears after the next slide). Hence:
25

Key results
General perceptions towards AI and robots shape
support of AI/robot rights/obligations
26
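A minimal K-means sketch of the clustering step, under stated assumptions: synthetic 5-point ratings stand in for the respondents' AI-attitude and rights/obligations scores, the items are standardized, and k = 2 mirrors the two-cluster solution reported above (the two-step procedure and the study's actual variable set are not reproduced).

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic 5-point ratings: 215 respondents x 30 items (AI attitudes + rights/obligations).
# Illustrative stand-ins only, not the study's data.
X = rng.integers(1, 6, size=(215, 30)).astype(float)

# Standardize the items and partition the respondents into two clusters.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)

# Compare the clusters on their mean ratings (e.g. overall support for robot rights).
for k in (0, 1):
    print(f"Cluster {k}: n = {(labels == k).sum()}, mean rating = {X[labels == k].mean():.2f}")
```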

Key results
•The Mann-Whitney U and Kruskal-Wallis tests revealed that respondents were quite uniform in their responses: biological sex, education in Law, education in IT, educational level, age, place of residence, economic wellbeing and political orientation had no or only marginal impact on respondents’ perceptions of robot rights and obligations (a test sketch follows this slide).
27
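A minimal sketch of these group comparisons with synthetic data and invented grouping variables: Mann-Whitney U for a two-level factor (e.g. education in Law) and Kruskal-Wallis H for a factor with three or more levels (e.g. age bands).

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(2)

# Synthetic per-respondent support scores and grouping variables (illustrative only).
support = rng.normal(2.2, 0.6, size=215)            # e.g. mean support for robot rights
law_education = rng.integers(0, 2, size=215) == 1   # two-level factor: education in Law
age_group = rng.integers(0, 3, size=215)            # three-level factor: age bands

# Two groups -> Mann-Whitney U test.
u_stat, p_u = mannwhitneyu(support[law_education], support[~law_education])

# Three or more groups -> Kruskal-Wallis H test.
h_stat, p_h = kruskal(*(support[age_group == g] for g in np.unique(age_group)))

print(f"Mann-Whitney p = {p_u:.3g}; Kruskal-Wallis p = {p_h:.3g}")
```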

Conclusion
28

Conclusion
29
•Humans want robots and artificial
autonomous agents to be slaves
•Robots and artificial autonomous
agents do not want to be slaves

ROBONOMICS:
The Journal of the Automated Economy
•Published by Zangador Research Institute
•The journal addresses the economic, social, political, legal, ethical,
technological, and environmental aspects of automation technologies
and robonomics as an economic system
•First issue in 2021
•1 continuous volume per year
•Diamond open access (CC BY 4.0) – no fees for authors or readers
•AI-co-created articles are welcome!
•https://journal.robonomics.science
30

References
•de Graaf, M., Hindriks, F. A., & Hindriks, K. V. (2022). Who Wants to Grant Robots Rights? Frontiers in Robotics and AI, 8, 781985. https://doi.org/10.3389/frobt.2021.781985
•Gunkel, D. J. (2018). Robot rights. Cambridge, MA: MIT Press.
•Lima, G., Kim, C., Ryu, S., Jeon, C., & Cha, M. (2020). Collecting the public perception of AI and robot rights. Proceedings of the ACM on Human-Computer
Interaction, 4(CSCW2), Article 135, pp.1-24. https://doi.org/10.1145/3415206
31

THANK YOU FOR YOUR ATTENTION!
QUESTIONS?
32