Responsible AI ML Pipeline: Integrating OpenShift and IBM AI Fairness 360

Tosin Akinosho · 21 slides · Aug 20, 2024

About This Presentation

The Responsible AI ML Pipeline: Integrating OpenShift and IBM AI Fairness 360 repository demonstrates how to build a responsible AI pipeline by integrating Red Hat OpenShift with IBM AI Fairness 360 (AIF360) to detect and mitigate bias in machine learning models. It provides examples and tools for d...


Slide Content

Introduction: Building Ethical AI
Integrating OpenShift and IBM AI Fairness 360

Introduction: Building Ethical AI

What is the Responsible AI ML Pipeline? A framework for developing machine learning models with a focus on fairness and transparency.
Why does it matter? Mitigating bias in AI is essential for ethical decision-making and preventing discriminatory outcomes.
A Solution? Integration of Red Hat’s OpenShift (container and AI platform) with IBM AI Fairness 360 (bias detection toolkit).

The Problem: Bias in Machine Learning
Machine Learning is Everywhere: Healthcare, finance, criminal justice, hiring decisions, and many others.
… But It’s Not Always Fair: Models can inherit biases from training data, leading to unfair outcomes for certain groups.
The Consequences Are Serious: Discrimination, inaccurate predictions, loss of trust in AI systems.

Key features of the Pipeline
Comprehensive Bias Detection: Utilizes IBM AI Fairness 360’s extensive set of metrics and algorithms to identify various types of bias.
Scalable Deployment: Leverages Red Hat OpenShift’s Kubernetes-based architecture for efficient scaling and management of ML workloads.
Robust Security: OpenShift security features ensure the protection of sensitive data throughout the pipeline.

IBM AI Fairness 360 (AIF360)
What is AIF360? An open-source toolkit designed to help detect and mitigate bias in machine learning models.
How It Works: Provides metrics to quantify bias, algorithms to mitigate it, and explanations to understand its origins.
Wide Range of Functionality: Supports different types of data, models, and fairness definitions.

Red Hat OpenShift
What is OpenShift? A Kubernetes platform that simplifies the deployment, management, and scaling of containerized applications.
Key Benefits for AI: Efficient resource utilization, high availability, automated scaling, and simplified model deployment.
Ideal for the Responsible AI ML Pipeline: Provides a secure and reliable infrastructure for running AI workloads at scale.

Getting Started: Prerequisites
Red Hat OpenShift AI: A developer sandbox environment for experimenting with the pipeline.
IBM Cloud Account (Optional): For accessing additional resources and tutorials.
Python and Jupyter Notebooks: For using the provided notebooks and interacting with the pipeline.

Creating a Developer Sandbox

Configuring a Data Science Project

Detecting Bias with AIF360
Load and Preprocess Data: Use pandas to load the dataset and perform necessary preprocessing steps.
Define Protected Attributes: Specify the sensitive attributes (e.g., race, gender) that you want to analyze for bias.
Calculate Bias Metrics: Use AIF360’s metrics to quantify bias in your dataset and model predictions.
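The three steps can be sketched in plain pandas before reaching for the toolkit. The toy dataset and the column names `gender` and `hired` are assumptions for illustration; in a real pipeline, AIF360's `BinaryLabelDataset` and `BinaryLabelDatasetMetric` classes would compute the same quantity.

```python
import pandas as pd

# Step 1: load/prepare data. Hypothetical toy dataset: 'gender' is the
# protected attribute (1 = privileged group), 'hired' is the binary
# outcome (1 = favorable).
df = pd.DataFrame({
    "gender": [1, 1, 1, 1, 0, 0, 0, 0],
    "hired":  [1, 1, 1, 0, 1, 0, 0, 0],
})

# Step 2: define privileged / unprivileged groups on the protected attribute.
privileged = df[df["gender"] == 1]
unprivileged = df[df["gender"] == 0]

# Step 3: statistical parity difference =
#   P(favorable | unprivileged) - P(favorable | privileged).
# A value of 0 indicates parity; negative values favor the privileged group.
spd = unprivileged["hired"].mean() - privileged["hired"].mean()
print(f"Statistical parity difference: {spd:.2f}")  # 0.25 - 0.75 = -0.50
```

The hand-rolled metric here exists only to make the definition concrete; AIF360 provides the same metric (and many others) with consistent handling of instance weights and favorable-label conventions.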

Understanding Bias Metrics
Disparate Impact: The ratio of favorable-outcome rates between unprivileged and privileged groups.
Equal Opportunity: Checks whether different groups have equal chances of being correctly classified.
Other Metrics: Statistical parity difference, average odds difference, etc.
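As a worked example of the two headline metrics (all rates below are hypothetical numbers, not results from any real model):

```python
# Hypothetical favorable-outcome rates for two groups.
rate_unprivileged = 0.40   # P(favorable | unprivileged)
rate_privileged = 0.80     # P(favorable | privileged)

# Disparate impact: a ratio, where 1.0 means parity. Values below 0.8 are
# commonly flagged (the "80% rule").
disparate_impact = rate_unprivileged / rate_privileged
print(disparate_impact)  # 0.5

# Equal opportunity difference: the gap in true-positive rates, i.e. whether
# qualified members of each group are equally likely to be classified correctly.
tpr_unprivileged = 0.55   # hypothetical TPR for the unprivileged group
tpr_privileged = 0.70     # hypothetical TPR for the privileged group
eod = tpr_unprivileged - tpr_privileged
print(round(eod, 2))  # -0.15
```

Note the difference in form: disparate impact is a ratio (ideal value 1.0), while the difference-based metrics such as statistical parity difference and equal opportunity difference have an ideal value of 0.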

Mitigating Bias with AIF360
Preprocessing: Adjust the training data to reduce bias before model training.
In-Processing: Modify the learning algorithm to directly address bias during training.
Post-Processing: Adjust the model's predictions after training to make them fairer.
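A minimal sketch of the preprocessing approach, mirroring the idea behind AIF360's Reweighing algorithm in plain pandas: each (group, label) cell is weighted by P(group) · P(label) / P(group, label), so that under the new weights the protected attribute and the label become statistically independent. The toy data and column names are assumptions; AIF360's `Reweighing` class implements this properly on its dataset objects.

```python
import pandas as pd

# Hypothetical training data: protected attribute 'gender', label 'hired'.
df = pd.DataFrame({
    "gender": [1, 1, 1, 1, 0, 0, 0, 0],
    "hired":  [1, 1, 1, 0, 1, 0, 0, 0],
})

n = len(df)
p_group = df["gender"].value_counts(normalize=True)   # P(group)
p_label = df["hired"].value_counts(normalize=True)    # P(label)
p_joint = df.groupby(["gender", "hired"]).size() / n  # P(group, label)

# Weight = expected joint probability under independence / observed joint
# probability: over-represented (group, label) cells are down-weighted.
df["weight"] = df.apply(
    lambda r: p_group[r["gender"]] * p_label[r["hired"]]
              / p_joint[(r["gender"], r["hired"])],
    axis=1,
)

# After reweighing, the weighted favorable-outcome rate is equal across groups.
for g in (0, 1):
    grp = df[df["gender"] == g]
    rate = (grp["hired"] * grp["weight"]).sum() / grp["weight"].sum()
    print(f"group {g}: weighted favorable rate = {rate:.2f}")  # 0.50 for both
```

Passing these weights as `sample_weight` to a training algorithm is what makes this a preprocessing technique: the model itself is unchanged, only the data it sees is rebalanced.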

Mitigating Bias in Advertising
Use Case: Examine a real-world scenario of mitigating bias in an advertising model.
The Problem: The model may show ads disproportionately to certain demographic groups.
The Solution: Apply AIF360's mitigation techniques to create a fairer advertising model.

Deploying Bias-Aware Models on OpenShift
Containerize Your Model: Create a Docker image that includes your model, dependencies, and AIF360.
Push the Image to a Registry: Make the image accessible from OpenShift.
Create a Deployment Configuration: Specify how the model should be deployed and scaled on OpenShift.
Expose the Model as a Service: Make the model accessible to other applications and users.
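The deployment and service steps might look like the following sketch; the resource names, image reference, and port are illustrative assumptions, not taken from the project repository.

```yaml
# Hypothetical OpenShift Deployment for a containerized bias-aware model.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bias-aware-model
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bias-aware-model
  template:
    metadata:
      labels:
        app: bias-aware-model
    spec:
      containers:
        - name: model
          image: quay.io/example/bias-aware-model:latest  # assumed image name
          ports:
            - containerPort: 8080
---
# Expose the model as a Service so other applications can reach it.
apiVersion: v1
kind: Service
metadata:
  name: bias-aware-model
spec:
  selector:
    app: bias-aware-model
  ports:
    - port: 80
      targetPort: 8080
```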

OpenShift Empowering Automated Fairness
Simplify Bias Management: Streamline bias detection and mitigation with the OpenShift Python client library.
Automate for Efficiency: Reduce manual effort and ensure consistent bias checks throughout your AI lifecycle.
Code Example:
oc create -k applications/tutorial_bias_advertising/deployment

OpenShift: Accelerating Responsible AI
Scale with Confidence: Easily handle growing AI workloads as your demands increase.
CI/CD for Faster Innovation: Accelerate iterations, quickly identify issues, and deliver fairer AI solutions faster.
Code Example:
curl --location "https://${OPENSHIFT_URL}/measure_mitigate_bias" \
--header 'Content-Type: application/json' \
--data '{
"target_label_name": "predicted_conversion",
"scores_name": "predicted_probability",
"random_seed": 150
}'

Contributing to the Project
Fork the Repository: Create your copy of the project on GitHub.
Make Changes: Implement new features, fix bugs, or improve documentation.
Submit a Pull Request: Propose your changes to the project maintainers for review.

Acknowledgments
IBM: For developing AI Fairness 360 and providing resources for responsible AI.
Red Hat: For creating OpenShift and supporting the development of this pipeline.

Conclusion: A Fairer Future for AI
Key Takeaways
● Bias in machine learning is a real problem with significant consequences.
● The Responsible AI ML Pipeline offers a practical solution for addressing bias.
● OpenShift and AIF360 are powerful tools for building and deploying fair AI models.

Resources
GitHub Repository:
● https://github.com/tosin2013/responsible-ai-ml-pipeline

Project Website:
● https://tosin2013.github.io/responsible-ai-ml-pipeline/

IBM AI Fairness 360:
● https://aif360.res.ibm.com/

Red Hat OpenShift:
● https://www.redhat.com/en/technologies/cloud-computing/openshift

CREDITS: This presentation template was created by Slidesgo, and
includes icons by Flaticon, and infographics & images by Freepik
Thanks!
Do you have any questions?

Please keep this slide for attribution