Confusion Matrix for Multiclass Classification.pptx


About This Presentation

Confusion matrix for multiclass classification in machine learning.


Slide Content

Multiclass Classification - Ishwarya C

Agenda:
- Binary vs. Multiclass
- Confusion Matrix
- Performance Metrics
- ROC Curve
- OvR and OvO Approaches
- Hands-on Code

Binary vs. Multiclass Classification

Binary classification assigns each input to one of two classes; multiclass classification chooses among three or more classes.

Confusion Matrix

A confusion matrix is a table used to evaluate the performance of a classification model. It compares the predicted labels with the actual labels, making it easy to see whether the model is classifying data correctly or where it makes errors. For binary classification, the matrix consists of four key elements:

- True Positives (TP): correctly predicted positive cases.
- True Negatives (TN): correctly predicted negative cases.
- False Positives (FP): incorrectly predicted positive cases (Type I error).
- False Negatives (FN): incorrectly predicted negative cases (Type II error).

From these counts, metrics such as Accuracy, Precision, Recall, and F1-Score can be calculated. In the multiclass case, the matrix grows to one row and one column per class, and TP, FP, FN, and TN are counted per class.
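The deck's agenda promises hands-on code; as a minimal sketch (not the presentation's own example), the following computes a multiclass confusion matrix with scikit-learn on made-up labels:

from sklearn.metrics import confusion_matrix, classification_report

# Illustrative three-class labels and predictions (assumed, not from the slides)
y_true = ["cat", "dog", "bird", "cat", "dog", "bird", "cat", "dog"]
y_pred = ["cat", "dog", "cat",  "cat", "bird", "bird", "cat", "dog"]
labels = ["bird", "cat", "dog"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)   # rows = actual class, columns = predicted class

# Per-class precision, recall, and F1 derived from the same counts
print(classification_report(y_true, y_pred, labels=labels))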

Multi-Class ROC Curve

A Receiver Operating Characteristic (ROC) curve is a graphical representation of a classifier's performance across different decision thresholds. ROC analysis is inherently binary, so multiclass problems require a decomposition strategy:

- One-vs-Rest (OvR): compute a ROC curve for each class versus all other classes.
- One-vs-One (OvO): compute a ROC curve for each pair of classes.

Aggregate metrics such as the macro-average or micro-average are used to summarize performance across the resulting curves.
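As an illustrative sketch of the OvR decomposition, assuming the Iris dataset and a logistic-regression model (neither is specified by the slides), this traces one ROC curve per class:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = clf.predict_proba(X_test)                  # shape: (n_samples, n_classes)

y_bin = label_binarize(y_test, classes=[0, 1, 2])  # one 0/1 column per class
for i in range(3):
    # Treat class i as positive, all other classes as negative
    fpr, tpr, _ = roc_curve(y_bin[:, i], probs[:, i])
    print(f"class {i}: OvR AUC = {auc(fpr, tpr):.3f}")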

Key Differences in Binary and Multiclass ROC

Feature            | Binary ROC                             | Multiclass ROC
-------------------|----------------------------------------|-------------------------------------------
Number of curves   | Single ROC curve                       | One ROC curve per class (OvR)
Thresholds         | Applied to probabilities of one class  | Applied to probabilities for each class
Performance metric | AUC (Area Under Curve)                 | AUC per class (macro/micro averages)
Interpretation     | Simpler                                | More complex (requires per-class analysis)

One-vs-All (One-vs-Rest)

One-vs-One (OvO)

Key differences between One-vs-Rest (OvR) and One-vs-One (OvO): for K classes, OvR trains K binary classifiers (each class versus all others), while OvO trains K(K-1)/2 classifiers (one per pair of classes) and combines them by voting. OvR needs fewer models; each OvO model trains on only the samples of its two classes.
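A minimal sketch of both decompositions using scikit-learn's meta-estimators; the base model and dataset are illustrative assumptions:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = load_iris(return_X_y=True)
base = LogisticRegression(max_iter=1000)

ovr = OneVsRestClassifier(base)   # K binary models: class k vs. the rest
ovo = OneVsOneClassifier(base)    # K*(K-1)/2 models: one per class pair

print("OvR accuracy:", cross_val_score(ovr, X, y, cv=5).mean())
print("OvO accuracy:", cross_val_score(ovo, X, y, cv=5).mean())

With only three well-separated classes the two approaches score similarly; the practical difference is training cost, which shifts in OvR's favor as the number of classes grows.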

Approaches to Construct ROC for Multiclass

One-vs-Rest (OvR): for each class, treat it as the positive class and all other classes as the negative class.

Steps:
1. Calculate TPR and FPR for one class versus the rest.
2. Repeat for all classes to generate one ROC curve per class.

Metrics: compute the AUC for each class individually, then aggregate:
- Macro-averaged AUC: the average of the per-class AUCs.
- Micro-averaged AUC: compute TPR and FPR globally by pooling decisions across all classes.
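A minimal sketch of the aggregate AUCs under the same assumed Iris setup; the micro average is computed by pooling every (sample, class) decision into one binary problem:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
probs = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)

# Macro-average: mean of the per-class OvR AUCs (built into scikit-learn)
print("macro OvR AUC:", roc_auc_score(y_test, probs, multi_class="ovr", average="macro"))

# Micro-average: flatten labels and scores so all classes share one threshold sweep
y_bin = label_binarize(y_test, classes=[0, 1, 2])
print("micro OvR AUC:", roc_auc_score(y_bin.ravel(), probs.ravel()))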

THANK YOU