Investigating Fairness of Decision Making

Steffen Staab (steffenstaab) · 41 slides · Aug 30, 2024

About This Presentation

Presentation given at the 5th International Conference on Social Computing, Guangzhou, August 30, 2024.


Slide Content

KI – Institute for Artificial Intelligence
Investigating Fairness of Decision Making
Steffen Staab · @ststaab
https://www.ki.uni-stuttgart.de · https://semanux.com · https://southampton.ac.uk/research/institutes-centres/web-internet-science

My background: Simulation and ML in Sciences and Engineering; Semantics of Data; Human-Machine Interaction; Stuttgart Research Focus Reflecting on Intelligent Systems (Democracy, Diversity, Demography); Semantics of Interaction; Assistive Technologies; Knowledge Graphs; Semantics of Text; Semantics in Programs; Machine Learning; Web Science; Social Network (De-)Growth; Semantics by Imitation; Social Group Construction; Fairness; Circular Factory; Architecture, Construction.

“Code is law” (Lawrence Lessig, 1999)

“Code is law” (Lawrence Lessig, 1999). Distributive justice? A fair political system? What is the underlying political philosophy? How do algorithmic decisions affect humans?

Where to consider Fairness in Software Development? (1.9.2024)

Where to consider Fairness in Software Development? Yes, I will talk about this. Yes, I have arguments. I believe "yes".

Where to consider fairness in AI Software Development? (Amershi et al. 2019)

Where to consider fairness in AI Software Development? (Amershi et al. 2019; Mougan et al. 2023) Yes, I will talk about this. Yes, I have arguments. I believe "yes".

Considering Analysis and Design (with Q. Ramadan, M. Konersmann, A. S. Ahmadian, J. Jürjens)

Where to consider Fairness in Software Development? (Ramadan et al. 2020; Ramadan et al. 2024)

Individual fairness: A decision-making software preserves individual fairness if it produces the same decision for every two individuals whose data are identical except for the protected characteristics or their proxy attributes. (Illustration: Bob and Alice, whose data are otherwise identical, must receive the same outcome from the decision-making software: both rejected, or both granted.)
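The definition above can be made operational as a property check. A minimal sketch in Python, where `decide`, the record contents, and the candidate value lists are all hypothetical illustrations, not part of the presented approach:

```python
def violates_individual_fairness(decide, record, protected, values):
    """Check whether flipping only protected attributes changes the decision.

    decide:    decision function mapping a record (dict) to a decision
    record:    one individual's data
    protected: names of protected characteristics (or proxy attributes)
    values:    candidate values to try for each protected attribute
    """
    baseline = decide(record)
    for attr in protected:
        for v in values.get(attr, []):
            twin = dict(record, **{attr: v})  # identical except one attribute
            if decide(twin) != baseline:
                return True  # Bob and Alice are treated differently
    return False

# Toy decision rule that (unfairly) looks at gender.
decide = lambda r: "Granted" if r["income"] >= 2000 and r["gender"] == "m" else "Rejected"
alice = {"income": 2500, "gender": "f"}
print(violates_individual_fairness(decide, alice, ["gender"], {"gender": ["m", "f"]}))
# True: Alice and her otherwise-identical twin get different decisions
```

Exhaustively trying all protected-value combinations is only feasible for small domains; the MBFair approach described later verifies the property symbolically instead.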

Violations from Data Dependencies: data depending on protected characteristics may provoke indirect discrimination. In the illustrated setting, decision-making software 1 reads the protected characteristics and produces a dependent intermediate outcome; decision-making software 2 then operates on Join(protected characteristic, else), so its decision depends on the protected characteristics even without reading them directly. (Dwork & Ilvento 2018)

Software model example, inspired by real banking terms and conditions. The class CustomerProfile has the attributes income:int, healthy:boolean, goodCreditHistory; the derived data attributes /hasLifeInsurance:boolean=false, /hasCreditCard:boolean=false, /ownPoints:int=0, /hasFreeAccount:boolean=false; and the operations verifyLifeInsApp(), verifyCreditCardApp(), exit(). Its behavior comprises three state machines (states, transitions, guards, actions): VerifyingLifeInsuranceApplication, with guard [healthy==true] / hasLifeInsurance=true; ownPoints+=25, and [else] / hasLifeInsurance=false; VerifyingCreditCardApplication, with guard [income>=2500] / hasCreditCard=true; ownPoints+=25, and [else] / hasCreditCard=false; and DecisionMaking, with guard [hasFreeAccount==false && goodCreditHistory==true && ownPoints>=50] / hasFreeAccount=true. We define: proxy for age: income. Decision data attribute: hasFreeAccount.
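The state-machine behavior can be sketched in plain code. The attribute names follow the model, but the linear control flow below is my simplified re-encoding for illustration, not the actual UML/MBFair input:

```python
def run_customer(income, healthy, good_credit_history):
    """Simplified re-encoding of the banking example's state machines."""
    own_points = 0
    # VerifyingLifeInsuranceApplication
    has_life_insurance = healthy
    if healthy:
        own_points += 25
    # VerifyingCreditCardApplication: income acts as a proxy for age
    has_credit_card = income >= 2500
    if has_credit_card:
        own_points += 25
    # DecisionMaking: sets the decision data attribute
    has_free_account = good_credit_history and own_points >= 50
    return has_free_account

# Two customers identical except for the proxy attribute income:
print(run_customer(2600, True, True))  # True  -> free account granted
print(run_customer(2400, True, True))  # False -> individual fairness violated
```

The second call shows the indirect dependency: hasFreeAccount never reads age, yet it differs between two customers who are identical except for the age proxy income.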

The challenge: given a UML model and a specification of protected characteristics and proxy attributes, the formal problem is: does the behavior of a specific “DecisionMaking” state machine in the UML model violate individual fairness with respect to the protected characteristics? Empirical evaluation indicates that analyzing the UML models manually produces unreliable results: analysts overlook actual (true-positive) discrimination with a chance as high as 54%.

Verifying Individual Fairness in Labeled Transition Systems. Prove or disprove the verification objective: for any initialization of the attributes, the final setting of the decision data attribute in any feasible path of the system is independent of the setting of all protected characteristics and proxy attributes.

MBFair: Verify Labeled Transition Systems with respect to Individual Fairness. Steps: Verify Preconditions; Verify Decision Independence Constraint; Generate Initializations. Objective: for any initialization of the attributes, the final setting of the decision data attribute in any feasible path of the system is independent of the setting of all protected characteristics and proxy attributes.

Verifying Individual Fairness in Labeled Transition Systems. Prove or disprove the verification objective: for any initialization of the attributes, the final setting of the decision data attribute in any feasible path of the system is independent of the setting of all protected characteristics and proxy attributes. In CustomerProfile, income, healthy, and goodCreditHistory are attributes with guard(s) defined over them; /hasLifeInsurance, /hasCreditCard, /ownPoints, and /hasFreeAccount are derived data attributes.

Generate Initializations: initializations are generated for the attributes with guard(s) defined over them; the derived data attributes start from their default values and are set by the state machines.

MBFair: Verify Labeled Transition Systems with respect to Individual Fairness. Steps: Verify Preconditions; Verify Decision Independence Constraint; Generate Initializations. See (Ramadan et al. 2024) for the technicalities.

Decision Independence Constraint. Given a transition system, a decision data attribute d is independent of all protected characteristics iff for any two initializations ι1 and ι2 that differ only in the protected characteristics and proxy attributes, and any value v in the domain of d, the following holds:

ι1 ⊨ ◇□(d = v)   iff   ι2 ⊨ ◇□(d = v)

Linear temporal logic: ◇ "at some point" (eventually), □ "always from there on" (globally).
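For small finite models the constraint can be checked by brute force: enumerate all initializations, group those that agree on everything except the protected and proxy attributes, and compare the final decisions. A minimal sketch over boolean attributes, with an illustrative toy decision function (this is not the MBFair procedure, which works symbolically on the transition system):

```python
from itertools import product

def decision_independent(decide, attrs, protected):
    """Brute-force check of the decision independence constraint.

    decide:    maps an initialization (dict of attr -> bool) to the
               final value of the decision data attribute
    attrs:     all boolean attribute names
    protected: protected characteristics and proxy attributes
    """
    free = [a for a in attrs if a not in protected]
    for free_vals in product([False, True], repeat=len(free)):
        decisions = set()
        for prot_vals in product([False, True], repeat=len(protected)):
            init = {**dict(zip(free, free_vals)), **dict(zip(protected, prot_vals))}
            decisions.add(decide(init))
        if len(decisions) > 1:  # same non-protected data, different outcomes
            return False
    return True

# Toy model: the decision reads a proxy attribute -> constraint violated
decide = lambda i: i["good_history"] and i["high_income"]
print(decision_independent(decide, ["good_history", "high_income"], ["high_income"]))
# False
```

The enumeration grows exponentially in the number of attributes, which is why the symbolic verification of (Ramadan et al. 2024) is needed in practice.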

Results: (1) the decision independence constraint does not hold for the example model.

2 Explanation Disparity. Carlos Mougan et al.: Demographic Parity Inspector: Fairness Audits via the Explanation Space. Draft, arXiv, 2023.

Limits of Individual Fairness: when two people are never the same (similarity); when decisions can only pick one, e.g. hiring for a job; when randomness is involved (probabilistic verification).

Fairness in AI Software Development: what I will talk about (Amershi et al. 2019). To my knowledge, statistical notions of fairness have only been applied after the fact.

Unsupervised Model Monitoring

Quality Assessment of Unsupervised Model Monitoring: often impossible! You do not know how good a person would have been if you had hired them, when you did not.

Statistical notions of fairness with respect to a protected attribute A. Example: demographic parity requires that the rate of positive decisions is equal across protected groups, e.g. for a binary classifier Ŷ and binary A: P(Ŷ = 1 | A = 0) = P(Ŷ = 1 | A = 1).
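Estimated from data, demographic parity compares positive-decision rates across groups. A minimal sketch with made-up decisions and group labels:

```python
def positive_rate(y_hat, group, g):
    """Fraction of positive decisions within group g."""
    sel = [p for p, a in zip(y_hat, group) if a == g]
    return sum(sel) / len(sel)

def demographic_parity_gap(y_hat, group):
    """|P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1)|, estimated from samples."""
    return abs(positive_rate(y_hat, group, 0) - positive_rate(y_hat, group, 1))

y_hat = [1, 0, 1, 1, 0, 1, 0, 0]   # model decisions
group = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute A
print(demographic_parity_gap(y_hat, group))  # 0.5 -> parity clearly violated
```

Note that the criterion is unsupervised: it uses only the model's decisions and the group labels, never the true outcomes.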

Issues with demographic parity. Assume artificial data with a feature for good/bad grades, a feature for low/high beauty, a decision accepted/rejected, and a protected attribute (gender). Demographic parity can hold, but one group is accepted for beauty while the other is accepted for good grades.

Statistical notions of fairness with respect to a protected attribute A. Unsupervised criteria: demographic parity; fairness by unawareness (do not use A in the model; issue: correlations of other features to A). Supervised criterion: equal opportunity, P(Ŷ = 1 | Y = 1, A = 0) = P(Ŷ = 1 | Y = 1, A = 1).
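Unlike the unsupervised criteria, equal opportunity conditions on the true label Y: only among qualified individuals (Y = 1) must acceptance rates match. A minimal sketch, again with made-up data:

```python
def true_positive_rate(y_true, y_hat, group, g):
    """P(Y_hat=1 | Y=1, A=g), estimated from samples."""
    sel = [p for t, p, a in zip(y_true, y_hat, group) if t == 1 and a == g]
    return sum(sel) / len(sel)

def equal_opportunity_gap(y_true, y_hat, group):
    return abs(true_positive_rate(y_true, y_hat, group, 0)
               - true_positive_rate(y_true, y_hat, group, 1))

y_true = [1, 1, 0, 1, 1, 0, 1, 1]   # ground truth (needs labeled outcomes!)
y_hat  = [1, 1, 0, 0, 1, 1, 0, 0]   # model decisions
group  = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute A
print(equal_opportunity_gap(y_true, y_hat, group))
# 1/3: qualified applicants in group 1 are accepted less often
```

The need for y_true is exactly the limitation raised above: in unsupervised model monitoring the ground truth is often unavailable, e.g. for the applicants who were never hired.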

Statistical notions of fairness with respect to a protected attribute A. Unsupervised criteria: demographic parity; fairness by unawareness (do not use A in the model; issue: correlations of other features to A). Key idea: find a common denominator. Use SHAP values to judge outcome and (un)awareness.

Feature Attribution Explanations: SHAP values. For a model f and an input x, each feature i is assigned a contribution (SHAP value) φ_i(f, x) to the prediction f(x).

Local explanations: SHAP values for one item. Distributions of explanations. Global explanations: averaging SHAP values over sets of items. Not yet studied in related work!

Properties of SHAP values. Efficiency: the sum of all contributions equals the prediction, Σ_i φ_i(f, x) = f(x) - E[f(X)]; this allows for studying equal outcome. Uninformativeness: φ_i = 0 if feature i does not contribute to the prediction; this allows for studying unawareness of protected attributes. Linear + IID: a simplification that is easy to understand intuitively; for a linear model with independent features, φ_i = w_i (x_i - E[X_i]).
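For a linear model with independent features, SHAP values have the closed form φ_i = w_i (x_i - E[X_i]), and efficiency says they sum to f(x) - E[f(X)]. A minimal sketch verifying both properties on toy weights and data (this is the linear special case only, not the general SHAP algorithm):

```python
def linear_shap(w, x, x_mean):
    """Exact SHAP values for f(x) = w·x + b with independent features."""
    return [wi * (xi - mi) for wi, xi, mi in zip(w, x, x_mean)]

w, b = [2.0, -1.0, 0.0], 0.5
x = [3.0, 1.0, 7.0]
x_mean = [1.0, 1.0, 4.0]            # feature means E[X_i]

phi = linear_shap(w, x, x_mean)
f_x = sum(wi * xi for wi, xi in zip(w, x)) + b
f_mean = sum(wi * mi for wi, mi in zip(w, x_mean)) + b

print(phi)                                    # [4.0, -0.0, 0.0]
print(abs(sum(phi) - (f_x - f_mean)) < 1e-9)  # True: efficiency holds
```

The third feature has weight 0 and thus attribution 0, illustrating uninformativeness: an irrelevant feature contributes nothing, which is what makes SHAP usable for checking unawareness of protected attributes.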

Methodology: compute SHAP explanations and compare their distributions across the protected groups; the audit outcome is either no explanation disparity or explanation disparity.
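One simple way to operationalize the audit (a simplification of the Mougan et al. inspector, which trains a classifier on the explanation space): compare per-group mean SHAP vectors and flag features whose average attribution differs. The data and threshold below are illustrative:

```python
def group_mean_explanations(shap_rows, group, g):
    """Column-wise mean of the SHAP vectors belonging to group g."""
    rows = [r for r, a in zip(shap_rows, group) if a == g]
    return [sum(col) / len(rows) for col in zip(*rows)]

def explanation_disparity(shap_rows, group, tol=0.1):
    """Indices of features whose average attribution differs across groups."""
    m0 = group_mean_explanations(shap_rows, group, 0)
    m1 = group_mean_explanations(shap_rows, group, 1)
    return [j for j, (a, b) in enumerate(zip(m0, m1)) if abs(a - b) > tol]

# Per-item SHAP vectors (features: grades, beauty) for two groups
shap_rows = [[0.8, 0.0], [0.7, 0.1],   # group 0: accepted for grades
             [0.1, 0.9], [0.0, 0.8]]   # group 1: accepted for beauty
group = [0, 0, 1, 1]
print(explanation_disparity(shap_rows, group))  # [0, 1]: both features differ
```

This catches exactly the grades-vs-beauty scenario above, which demographic parity misses: acceptance rates can be equal while the reasons for acceptance, visible in the explanation space, differ between groups.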

Example results

Example results (continued)

3 Conclusion

How to craft political structures for a fair society?
Polity, a group of people formally/informally organized for governance, corresponds to the structures and institutions of software development: stakeholders, analysts, programmers, coders, social scientists, lawyers, …
Politics, the set of activities that are associated with making decisions in groups, corresponds to the processes of software development: co-creation?
Policy/Rule/Law corresponds to code.

Open issues with respect to fairness in software development.
Technical: how to involve fairness in all steps of the software development process? How to support the developer? How to guarantee fairness?
Socio-technical: which notion of fairness to use when? How to collect, aggregate, and consider societal opinion?