ARTIFICIAL INTELLIGENCE CLASS 10 AI ETHICS


Slide Content

AI Ethics

Ethics
Ethics is defined as: the moral principles governing the behavior or actions
of an individual or a group.
In other words, the “rules” or “decision paths” that help determine what is
good or right.
The ethics of a technology is simply the set of “rules” or “decision paths” used to
determine its “behavior”.

AI Ethics
Software products, when designed and tested well, do
arrive at predictable outputs for predictable inputs via
such a set of rules or decision paths (see the sketch below).
But how does the team determine what is a good or
right outcome, and for whom?
Is it universally good, or good only for some?
Is it good in certain contexts and not in others?
Is it good against some yardsticks but not so good against others?
Therein lies ethics.
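To make the idea of “rules or decision paths” concrete, here is a minimal sketch in Python. The loan_decision function and its thresholds are invented for illustration and are not from the slides; the point is only that fixed rules give predictable outputs, while whether those rules are fair is a separate, ethical question.

```python
# A hypothetical "decision path": fixed rules mapping predictable inputs
# to predictable outputs. The function name and thresholds are invented
# for illustration only.

def loan_decision(income, credit_score):
    """Return 'approve' or 'reject' based on fixed, hand-written rules."""
    if credit_score >= 700 and income >= 30000:
        return "approve"
    if credit_score >= 650 and income >= 50000:
        return "approve"
    return "reject"

# The same input always gives the same output:
print(loan_decision(income=60000, credit_score=660))  # approve
print(loan_decision(income=25000, credit_score=720))  # reject

# The ethical question lies elsewhere: who chose these thresholds,
# and are they good or fair for every group of applicants?
```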

The ethics of AI lies in the ethical quality of its predictions, the ethical quality of
the end outcomes drawn from those predictions, and the ethical quality of the impact it has
on humans.
We can divide the ethical concerns into two sub-problems:
Ethical concerns related to data management
Ethical concerns related to the adoption of AI technology

Concerns related to data management
Privacy:
The collection, storage and usage of data raise serious privacy concerns. For example, in
Android apps and Google apps we need to grant permissions giving them
access to our contact list, location, e-mail, photos, Hangouts, etc.
All our details regarding search history, internet browsing habits, phone usage habits,
etc. are with Google.
Added to this, the amount of data held by other companies like Microsoft, Facebook, Twitter,
Amazon, etc. is also very large.

Bias in real-world data:
AI systems learn from the real-world data fed into them.
A computer system trained on data from the last 200 years might find that more
females were involved in specific jobs, or that a higher percentage of successful businesses
were established by men,
and conclude that specific genders are better equipped for handling certain jobs
(gender bias).
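A minimal sketch of how this happens, using invented numbers (the history list and predicted_success_rate function are hypothetical): a system that only counts patterns in historical data simply reproduces the imbalance already present in that data.

```python
# Hypothetical historical records: most recorded business founders are men,
# reflecting who was allowed to participate, not actual ability.
from collections import Counter

history = [("male", "founder")] * 90 + [("female", "founder")] * 10

counts = Counter(gender for gender, _role in history)
total = sum(counts.values())

def predicted_success_rate(gender):
    """Naive 'learning': estimate success purely from historical frequency."""
    return counts[gender] / total

print(predicted_success_rate("male"))    # 0.9
print(predicted_success_rate("female"))  # 0.1
# The system "concludes" that one gender is better equipped, when it has
# only learned the bias baked into its training data.
```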

The problem of inclusion
AI systems trained on biased real-world data create the problem of inclusion, i.e.
some people are left out of the AI decision-making system.
Example: the AI system used by Amazon for recruitment.
Issue: many eligible female candidates were left out of consideration.
Amazon literally wanted it to be an engine where “I’m going to give you 100
resumes, it will spit out the top five, and we’ll hire those.”
But by 2015, the company realized its new system was not rating candidates for
software developer jobs and other technical posts in a gender-neutral way.
That is because Amazon’s computer models were trained to vet applicants by
observing patterns in resumes submitted to the company over a 10-year period.
Most came from men, a reflection of male dominance across the tech industry.
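The mechanism can be illustrated with a deliberately simplified, hypothetical sketch (the word set and score_resume function are invented and are not Amazon's actual model): a screener that scores resumes by their overlap with words seen in past, mostly male, hires will rank down equally qualified candidates whose resumes simply look different from that history.

```python
# Hypothetical screener: score a resume by how many of its words appeared
# in past successful (mostly male) resumes. Purely illustrative.

past_hire_words = {"java", "robotics", "chess", "football", "cricket"}

def score_resume(words):
    """Score = overlap with words from past hires (pattern matching only)."""
    return len(set(words) & past_hire_words)

resume_a = ["java", "robotics", "chess"]                      # matches old pattern
resume_b = ["java", "robotics", "women's", "coding", "club"]  # equally skilled

print(score_resume(resume_a))  # 3
print(score_resume(resume_b))  # 2 -- ranked lower only for not matching history
```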

Problem of facts and their interpretation
Root cause of bias: the gap between facts and the interpretation of facts. AI
systems can scan data and draw conclusions/learnings from it, but they are not
equipped to understand the reason behind a particular conclusion or learning.
Example: Tay was an artificial intelligence chatterbot originally released
by Microsoft Corporation via Twitter on March 23, 2016. It caused
controversy when the bot began to post inflammatory and offensive tweets
through its Twitter account, leading Microsoft to shut down the service only 16
hours after its launch.
Tay's misbehavior was understandable because it was mimicking the
deliberately offensive behavior of other Twitter users, and Microsoft had not
given the bot an understanding of inappropriate behavior.

Some controversial tweets by Tay
These are just examples; you are not expected to act or comment in the same way.

Ethical concerns related to adoption of AI systems
Job Loss
Increasing inequalities
Negative adoptions
Black box problem