Basic Process of Classification with Example


About This Presentation

Basic Process of Classification with Example


Slide Content

Modul 6: Classification

‹#› Classification: Definition. Given a collection of records (the training set), where each record contains a set of attributes and one of the attributes is the class, find a model for the class attribute as a function of the values of the other attributes. Goal: previously unseen records should be assigned a class as accurately as possible. A test set is used to determine the accuracy of the model; usually the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
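
The train/test workflow described above can be made concrete with a short Python sketch; scikit-learn and its bundled breast-cancer dataset are illustrative assumptions, not part of the presentation.

    # Minimal sketch: split labelled records into training and test sets,
    # build a decision tree on the training set, validate on the test set.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_breast_cancer(return_X_y=True)      # attributes X, class attribute y

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)        # hold out 30% as the test set

    model = DecisionTreeClassifier().fit(X_train, y_train)

    # Accuracy on previously unseen records estimates how well the model generalizes.
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))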

‹#› Illustrating Classification Task

‹#› Examples of Classification Task: predicting tumor cells as benign or malignant; classifying credit card transactions as legitimate or fraudulent; classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil; categorizing news stories as finance, weather, entertainment, sports, etc.

‹#› Classification Techniques: decision tree based methods, rule-based methods, memory-based reasoning, neural networks, Naïve Bayes and Bayesian belief networks, support vector machines.

‹#› Example of a Decision Tree. The training data have two categorical attributes (Refund, MarSt), one continuous attribute (TaxInc), and a class label (Cheat). Model: a decision tree whose splitting attributes are Refund (Yes / No), then MarSt (Single, Divorced / Married), then TaxInc (< 80K / ≥ 80K), with YES or NO class labels at the leaves.

‹#› Another Example of Decision Tree. The same training data fit a different tree: split first on MarSt (Married / Single, Divorced), then on Refund (Yes / No) and TaxInc (< 80K / ≥ 80K). There could be more than one tree that fits the same data!

‹#› Decision Tree Classification Task Decision Tree

‹#› Apply Model to Test Data. Start from the root of the tree and, at each node, follow the branch that matches the test record's attribute value: Refund (Yes / No), then MarSt (Single, Divorced / Married), then TaxInc (< 80K / ≥ 80K).

‹#› Apply Model to Test Data. When a leaf is reached, assign its label to the test record: here, assign Cheat to “No”.

‹#› Decision Tree Classification Task Decision Tree

‹#› Decision Tree Induction. Many algorithms: Hunt’s Algorithm (one of the earliest), CART, ID3, C4.5, SLIQ, SPRINT.

‹#› General Structure of Hunt’s Algorithm. Let Dt be the set of training records that reach a node t. General procedure: if Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt; if Dt is an empty set, then t is a leaf node labeled by the default class yd; if Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, then recursively apply the procedure to each subset.
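
A compact Python sketch of this procedure for categorical attributes follows; the Gini-based choice of the splitting attribute and the dict-of-records representation are assumptions made for illustration, not something specified on the slide.

    from collections import Counter

    def gini(records):
        # impurity of a set of records, using the class stored under the "class" key
        n = len(records)
        counts = Counter(r["class"] for r in records)
        return 1.0 - sum((c / n) ** 2 for c in counts.values())

    def hunt(records, attributes, default):
        if not records:                              # D_t is empty: leaf with default class y_d
            return ("leaf", default)
        classes = {r["class"] for r in records}
        majority = Counter(r["class"] for r in records).most_common(1)[0][0]
        if len(classes) == 1 or not attributes:      # pure node, or nothing left to split on
            return ("leaf", majority)

        def weighted_gini(a):                        # impurity of the multi-way split on attribute a
            groups = Counter(r[a] for r in records)
            return sum(c / len(records) * gini([r for r in records if r[a] == v])
                       for v, c in groups.items())

        attr = min(attributes, key=weighted_gini)    # attribute test giving the purest subsets
        children = {v: hunt([r for r in records if r[attr] == v],
                            [a for a in attributes if a != attr], majority)
                    for v in {r[attr] for r in records}}
        return ("split", attr, children)

    # Usage (hypothetical records): hunt(data, ["Refund", "MarSt"], default="No")
    # where each record looks like {"Refund": "Yes", "MarSt": "Single", "class": "No"}.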

‹#› Hunt’s Algorithm applied to the Cheat data: start with a single leaf labeled Don’t Cheat; split on Refund (Yes → Don’t Cheat, No → Don’t Cheat); under Refund = No, split on Marital Status (Married → Don’t Cheat; Single, Divorced → Cheat); finally, under Single, Divorced, split on Taxable Income (< 80K → Don’t Cheat, ≥ 80K → Cheat).

‹#› Tree Induction. Greedy strategy: split the records based on an attribute test that optimizes a certain criterion. Issues: how to split the records (how to specify the attribute test condition? how to determine the best split?) and when to stop splitting.

‹#› How to Specify Test Condition? Depends on attribute types Nominal Ordinal Continuous Depends on number of ways to split 2-way split Multi-way split

‹#› Splitting Based on Nominal Attributes. Multi-way split: use as many partitions as distinct values, e.g. CarType → Family / Sports / Luxury. Binary split: divide the values into two subsets and find the optimal partitioning, e.g. CarType → {Family, Luxury} vs. {Sports}, or {Sports, Luxury} vs. {Family}.
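
As an illustrative aside (not from the slides): the candidate binary groupings of a k-valued nominal attribute such as CarType can be enumerated directly; there are 2^(k−1) − 1 of them.

    from itertools import combinations

    values = ["Family", "Sports", "Luxury"]          # distinct values of the CarType attribute
    splits = []
    for k in range(1, len(values)):
        for left in combinations(values, k):
            right = tuple(v for v in values if v not in left)
            if (right, left) not in splits:          # skip the mirror image of an earlier split
                splits.append((left, right))

    for left, right in splits:
        print(set(left), "vs", set(right))           # {Family} vs {Sports, Luxury}, etc.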

‹#› Splitting Based on Ordinal Attributes. Multi-way split: use as many partitions as distinct values, e.g. Size → Small / Medium / Large. Binary split: divide the values into two subsets that respect the ordering, e.g. Size → {Small} vs. {Medium, Large}, or {Small, Medium} vs. {Large}. What about the split {Small, Large} vs. {Medium}? It does not respect the ordering of the values.

‹#› Splitting Based on Continuous Attributes. Different ways of handling: (1) Discretization to form an ordinal categorical attribute: static (discretize once at the beginning) or dynamic (ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering). (2) Binary decision: (A < v) or (A ≥ v); consider all possible splits and find the best cut, which can be more compute intensive.
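
The binary-decision approach can be sketched as a scan over candidate cut points; the Gini criterion and the income/label values below are illustrative assumptions, not data from the slides.

    from collections import Counter

    def gini(labels):
        n = len(labels)
        return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

    def best_cut(values, labels):
        """Return the threshold v minimizing the weighted impurity of A < v vs A >= v."""
        pairs = sorted(zip(values, labels))
        best_v, best_score = None, float("inf")
        for i in range(1, len(pairs)):
            if pairs[i - 1][0] == pairs[i][0]:       # no cut between identical values
                continue
            v = (pairs[i - 1][0] + pairs[i][0]) / 2  # candidate cut midway between neighbours
            left = [lab for val, lab in pairs if val < v]
            right = [lab for val, lab in pairs if val >= v]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
            if score < best_score:
                best_v, best_score = v, score
        return best_v, best_score

    income = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]   # hypothetical taxable income (K)
    cheat = ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]
    print(best_cut(income, cheat))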

‹#› Splitting Based on Continuous Attributes

‹#› How to determine the Best Split Before Splitting: 10 records of class 0, 10 records of class 1 Which test condition is the best?

‹#› How to determine the Best Split. Greedy approach: nodes with a homogeneous class distribution are preferred, so we need a measure of node impurity (non-homogeneous → high degree of impurity; homogeneous → low degree of impurity).

‹#› Measures of Node Impurity: Gini index, entropy, misclassification error.

‹#› Measure of Impurity: GINI. Gini index for a given node t: GINI(t) = 1 − Σj [p(j | t)]², where p(j | t) is the relative frequency of class j at node t. Maximum (1 − 1/nc, where nc is the number of classes) when records are equally distributed among all classes, implying the least interesting information; minimum (0.0) when all records belong to one class, implying the most interesting information.

‹#› Examples for computing GINI. Node with 0 records of C1 and 6 of C2: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1, Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0. Node with 1 of C1 and 5 of C2: P(C1) = 1/6, P(C2) = 5/6, Gini = 1 − (1/6)² − (5/6)² = 0.278. Node with 2 of C1 and 4 of C2: P(C1) = 2/6, P(C2) = 4/6, Gini = 1 − (2/6)² − (4/6)² = 0.444.
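
The three calculations above can be checked with a few lines of Python (class counts are passed directly):

    def gini(counts):
        n = sum(counts)
        return 1.0 - sum((c / n) ** 2 for c in counts)

    print(round(gini([0, 6]), 3))   # 0.0
    print(round(gini([1, 5]), 3))   # 0.278
    print(round(gini([2, 4]), 3))   # 0.444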

‹#› Alternative Splitting Criteria based on INFO. Entropy at a given node t: Entropy(t) = − Σj p(j | t) log₂ p(j | t), where p(j | t) is the relative frequency of class j at node t. Measures the homogeneity of a node: maximum (log₂ nc) when records are equally distributed among all classes, implying the least information; minimum (0.0) when all records belong to one class, implying the most information. Entropy-based computations are similar to the GINI index computations.

‹#› Examples for computing Entropy. Node with 0 records of C1 and 6 of C2: P(C1) = 0, P(C2) = 1, Entropy = −0 log 0 − 1 log 1 = −0 − 0 = 0. Node with 1 of C1 and 5 of C2: Entropy = −(1/6) log₂(1/6) − (5/6) log₂(5/6) = 0.65. Node with 2 of C1 and 4 of C2: Entropy = −(2/6) log₂(2/6) − (4/6) log₂(4/6) = 0.92.

‹#› Splitting Criteria based on Classification Error. Classification error at a node t: Error(t) = 1 − maxj p(j | t). Measures the misclassification error made by a node: maximum (1 − 1/nc) when records are equally distributed among all classes, implying the least interesting information; minimum (0.0) when all records belong to one class, implying the most interesting information.

‹#› Examples for Computing Error. Node with 0 records of C1 and 6 of C2: P(C1) = 0, P(C2) = 1, Error = 1 − max(0, 1) = 1 − 1 = 0. Node with 1 of C1 and 5 of C2: Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6. Node with 2 of C1 and 4 of C2: Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3.
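
The entropy and classification-error examples can be checked the same way, using the same class counts (0, 6), (1, 5) and (2, 4):

    from math import log2

    def entropy(counts):
        n = sum(counts)
        return sum(-(c / n) * log2(c / n) for c in counts if c > 0)

    def error(counts):
        return 1.0 - max(counts) / sum(counts)

    for counts in ([0, 6], [1, 5], [2, 4]):
        print(counts, round(entropy(counts), 2), round(error(counts), 3))
    # entropy 0.0, 0.65, 0.92 and error 0.0, 0.167, 0.333, matching the slides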

‹#› Example: Splitting Based on ENTROPY. Information Gain: when a parent node p is split into k partitions, with ni the number of records in partition i out of n records in total, GAIN_split = Entropy(p) − Σi (ni / n) Entropy(i). Measures the reduction in entropy achieved by the split; the goal is to choose the split that achieves the most reduction (maximizes GAIN). Used in ID3 and C4.5. Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

‹#› Computing GAIN. Before splitting: Entropy(Parent) = −0.3 log₂(0.3) − 0.7 log₂(0.7) = 0.8813. Split on Refund: Entropy(Refund = Yes) = 0; Entropy(Refund = No) = −(2/6) log₂(2/6) − (4/6) log₂(4/6) = 0.9183; Entropy(Children) = 0.3 (0) + 0.6 (0.9183) = 0.551. One training record has a missing Refund value, so the gain is weighted by the fraction of records with known values: Gain = 0.9 × (0.8813 − 0.551) ≈ 0.297.
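
These numbers can be reproduced as follows; the reading that the parent node holds 10 records (3 of one class, 7 of the other) and that one record has a missing Refund value, which explains the 0.3, 0.6 and 0.9 weights, is an interpretation of the slide rather than something it states explicitly.

    from math import log2

    def entropy(counts):
        n = sum(counts)
        return sum(-(c / n) * log2(c / n) for c in counts if c > 0)

    parent = entropy([3, 7])                                  # 0.8813
    children = 0.3 * entropy([3, 0]) + 0.6 * entropy([2, 4])  # 0.3*0 + 0.6*0.9183 = 0.551
    gain = 0.9 * (parent - children)                          # discounted for the missing value
    print(round(parent, 4), round(children, 3), round(gain, 3))   # 0.8813 0.551 0.297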

‹#› Splitting Based on ENTROPY. Gain Ratio: when a parent node p is split into k partitions, with ni the number of records in partition i out of n in total, GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO = − Σi (ni / n) log₂(ni / n). This adjusts Information Gain by the entropy of the partitioning (SplitINFO), so higher-entropy partitioning (a large number of small partitions) is penalized. Used in C4.5; designed to overcome the disadvantage of Information Gain.
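
A hypothetical illustration of the penalty (the partition sizes are invented, not from the slides): splitting 10 records into partitions of sizes 3, 3 and 4 gives SplitINFO = −0.3 log₂ 0.3 − 0.3 log₂ 0.3 − 0.4 log₂ 0.4 ≈ 1.57, so a split with GAIN_split = 0.3 would have GainRATIO ≈ 0.3 / 1.57 ≈ 0.19, whereas splitting the same 10 records into ten singleton partitions gives SplitINFO = log₂ 10 ≈ 3.32 and roughly halves the ratio for the same gain.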

‹#› Stopping Criteria for Tree Induction. Stop expanding a node when all the records belong to the same class; stop expanding a node when all the records have similar attribute values; or apply early termination.

‹#› Decision Tree Based Classification. Advantages: inexpensive to construct; extremely fast at classifying unknown records; easy to interpret for small-sized trees; accuracy comparable to other classification techniques for many simple data sets.

‹#› Example: C4.5. Simple depth-first construction. Uses Information Gain. Sorts continuous attributes at each node. Needs the entire data set to fit in memory, so it is unsuitable for large datasets (would need out-of-core sorting). You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz

Demo: Using C5 (www.rulequest.com)

‹#› What is See5? Data mining software for classification. Technique: decision tree. Decision tree algorithm: C5.0.

‹#› Sample applications using See5. Predicting magnetic properties of crystals: developing rules that predict whether a substance is magnetic or not (National Research Council Canada); 24,641 cases, 120 attributes per case. Profiling high-income earners from census data: predicting whether an individual's income is above or below $50,000 (US Census Bureau database); 200,000 cases, 40 attributes per case (7 numeric, 33 nominal).

‹#› Classification

‹#› Preparing data for See5

‹#› Preparing data for See5. Files used in See5 have names of the form filestem.extension.

‹#› Preparing data for See5 Content of a hypothyroid.names file

‹#› Preparing data for See5. Types of attributes: Continuous: numeric values. Date: dates in the form YYYY/MM/DD or YYYY-MM-DD. Time: times in the form HH:MM:SS. Timestamp: times in the form YYYY/MM/DD HH:MM:SS. Discrete N: discrete, unordered values, where N is the maximum number of values. A comma-separated list of names: discrete values; [ordered] can be added to indicate that the ordering is significant, e.g. grade: [ordered] low, medium, high. Label. Ignore.
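
A purely hypothetical sketch of what a names file using these types might look like (this is not the actual hypothyroid.names file; consult the See5 documentation for the authoritative syntax):

    | comments start with "|" and each entry ends with a full stop
    diagnosis.                         | first line names the target (class) attribute

    age:        continuous.
    sex:        M, F.
    grade:      [ordered] low, medium, high.
    admitted:   timestamp.
    case id:    label.
    notes:      ignore.
    diagnosis:  negative, primary, compensated.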

‹#› Preparing data for See5. Contents of the hypothyroid.data file. The contents of the hypothyroid.test file have the same format as hypothyroid.data.

‹#› See5 GUI

‹#› Constructing Classifier Classifier construction options:

‹#› Decision Tree

‹#› Decision Tree

‹#› Decision Tree Evaluation. Number of leaves: 14. Misclassified cases: 7. Confusion matrix.

‹#› Tree Construction Options. Discrete value subsets: group attribute values into subsets so that each subtree is associated with a subset rather than with a single value. Example: referral source in {WEST,STMW,SVHC,SVI,SVHD}: primary (4.9/0.8)

‹#› Tree Construction Options. Rulesets: generate classifiers called rulesets that consist of unordered collections of (relatively) simple if-then rules. Example:

‹#› Each rule consists of: a rule number; statistics (n, lift x) or (n/m, lift x); one or more conditions; a class predicted by the rule; and a confidence, estimated by the Laplace ratio (n − m + 1)/(n + 2).
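
For example (hypothetical figures, not from the slides): a rule covering n = 31 training cases of which m = 1 is misclassified would be reported with confidence (31 − 1 + 1)/(31 + 2) ≈ 0.939.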

‹#› Tree Construction Options. Boosting: generates several classifiers (either decision trees or rulesets) rather than just one. When a new case is to be classified, each classifier votes for its predicted class and the votes are counted to determine the final class. Example: Boost: 10 trials

‹#› Tree Construction Options. Winnow: choose a subset of the attributes that will be used to construct the decision tree or ruleset. Example:

‹#› Using the Classifier. Used to predict the classes to which new cases belong. Since the values of all attributes may not be needed, the attribute values requested will depend on the case itself. Example: