Explainable Artificial Intelligence in Engineering

SwarnaMugi2 · 15 slides · Nov 01, 2025

About This Presentation

An overview of explainable AI (XAI): its motivation, key techniques, challenges, and applications.


Slide Content

Explainable AI (XAI) and Interpretability Challenges
Seminar Presentation
Your Name | Roll Number | Institution

Introduction
- AI models are often "black boxes": difficult to understand
- Explainable AI (XAI): techniques that make AI decisions understandable
- Goal: improve trust, transparency, and adoption

Why Do We Need XAI?
- Trust in AI predictions and decisions
- Accountability (e.g., GDPR compliance)
- Debugging and bias detection in models
- Encourages adoption of AI systems

XAI Approaches
- Model-specific: decision trees, attention mechanisms
- Model-agnostic: LIME, SHAP, counterfactual explanations

LIME (Local Interpretable Model-Agnostic Explanations)
- Explains individual predictions locally
- Approximates the complex model with a simple, interpretable one
- Useful for understanding black-box models
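The core idea of LIME can be sketched in a few lines: perturb the instance, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. This is a minimal toy sketch, not the real lime library; the `black_box` model and all parameter values are illustrative assumptions.

```python
import math
import random

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(x1, x2):
    return 1.0 / (1.0 + math.exp(-(3.0 * x1 - 2.0 * x2 + x1 * x2)))

def lime_explain(f, x0, n_samples=500, width=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 and return
    its coefficients as local feature attributions (LIME-style)."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        # Perturb the instance with Gaussian noise.
        x = [x0[0] + rng.gauss(0, width), x0[1] + rng.gauss(0, width)]
        dist2 = (x[0] - x0[0]) ** 2 + (x[1] - x0[1]) ** 2
        X.append([1.0, x[0], x[1]])               # intercept + two features
        y.append(f(x[0], x[1]))
        w.append(math.exp(-dist2 / width ** 2))   # proximity kernel weight
    # Weighted least squares via normal equations: (X'WX) b = X'Wy
    A = [[sum(w[k] * X[k][i] * X[k][j] for k in range(n_samples))
          for j in range(3)] for i in range(3)]
    b = [sum(w[k] * X[k][i] * y[k] for k in range(n_samples)) for i in range(3)]
    # Solve the 3x3 system with Gauss-Jordan elimination.
    for i in range(3):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        for r in range(3):
            if r != i:
                factor = A[r][i]
                A[r] = [A[r][j] - factor * A[i][j] for j in range(3)]
                b[r] -= factor * b[i]
    return b[1:]  # drop the intercept: local weight of each feature

coefs = lime_explain(black_box, [0.5, -0.5])
print(coefs)  # feature 1 pushes the score up, feature 2 pushes it down
```

Near the point (0.5, -0.5) the surrogate recovers the local behaviour of the sigmoid: a positive coefficient for the first feature and a negative one for the second, matching the signs of the model's local gradient.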

SHAP (SHapley Additive exPlanations)
- Based on game theory (Shapley values)
- Shows the contribution of each feature to a prediction
- Provides both global and local interpretability
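The game-theoretic idea can be made concrete by computing exact Shapley values for a tiny model, enumerating every order in which features join the "coalition". This is an illustrative sketch, not the shap library: the `model` function is a made-up example, and replacing absent features with a single baseline is a simplification (real SHAP averages over a background dataset).

```python
import itertools

# Hypothetical model with one linear term and one interaction term.
def model(x):
    return 2.0 * x[0] + x[1] * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature permutations.
    Absent features are held at their baseline values."""
    n = len(x)
    phi = [0.0] * n
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        present = list(baseline)       # start from the all-baseline input
        prev = f(present)
        for i in perm:
            present[i] = x[i]          # feature i joins the coalition
            cur = f(present)
            phi[i] += (cur - prev) / len(perms)  # its marginal contribution
            prev = cur
    return phi

x = [1.0, 2.0, 3.0]
base = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)
# Efficiency property: attributions sum to f(x) - f(baseline)
print(phi, sum(phi), model(x) - model(base))
```

The example also shows two Shapley properties the slide alludes to: the attributions sum exactly to the prediction difference (efficiency), and the two symmetric interaction features receive equal credit. Exhaustive enumeration costs O(n!), which is why practical SHAP implementations use sampling or model-specific shortcuts.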

Counterfactual Explanations
- Answer "what if" questions
- Show how small input changes can alter the output
- Helpful for fairness and decision justification
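A counterfactual search can be sketched as a simple gradient walk: nudge a rejected input along the score gradient until it crosses the decision threshold, yielding a nearby "what would have to change" example. The loan-scoring model, feature names, and step size below are all hypothetical; real counterfactual methods also penalize distance from the original input and enforce plausibility constraints.

```python
import math

# Hypothetical loan-scoring model: approve when score >= 0.5.
def score(income, debt):
    return 1.0 / (1.0 + math.exp(-(0.8 * income - 1.2 * debt - 0.5)))

def counterfactual(x0, target=0.5, lr=0.5, steps=2000):
    """Walk from x0 along the (finite-difference) score gradient until
    the decision flips, giving a nearby counterfactual input."""
    x = list(x0)
    eps = 1e-4
    for _ in range(steps):
        s = score(*x)
        if s >= target:          # decision has flipped: stop
            break
        # Finite-difference gradient of the score w.r.t. each feature.
        g = [(score(x[0] + eps, x[1]) - s) / eps,
             (score(x[0], x[1] + eps) - s) / eps]
        x[0] += lr * g[0]        # raise income a little
        x[1] += lr * g[1]        # lower debt a little
    return x

x0 = [0.2, 0.9]                  # a rejected applicant
cf = counterfactual(x0)
print(cf, score(*cf))
```

Reading the result as an explanation: "had your income been this much higher and your debt this much lower, the loan would have been approved" — exactly the "what if" framing the slide describes.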

Interpretability Challenges
- High complexity of deep learning models
- Trade-off: accuracy vs. interpretability
- Bias and fairness issues in datasets
- Different stakeholders need different explanations
- Real-time scalability issues

Applications of XAI
- Healthcare: diagnosis explanations
- Finance: loan approval, fraud detection
- Autonomous vehicles: explaining driving actions
- Cybersecurity: intrusion-detection explanations

XAI in Healthcare
- AI predicts disease from X-rays/CT scans
- XAI highlights the image regions responsible for a prediction
- Helps doctors trust AI decisions

XAI in Finance
- Loan-approval explanations
- Fraud-detection reasoning
- Improves fairness and accountability

Future of XAI
- Human-centered explanations
- Compliance with AI regulations (EU AI Act, GDPR)
- Interactive XAI (asking "why/why not" questions)
- Combining ethics with explainability

Benefits of XAI
- Builds user trust in AI systems
- Encourages adoption in sensitive domains
- Improves debugging and model reliability
- Supports legal and ethical AI usage

Summary
- XAI bridges AI accuracy and human trust
- Techniques: LIME, SHAP, counterfactuals
- Challenges: complexity, bias, scalability
- Future: interactive, ethical, and human-centered AI

Q&A
Thank you! Questions are welcome.