CS583-association-rules presentation.ppt


Slide Content

Chapter 2:
Mining Association Rules

CS583, Bing Liu, UIC 2
Road map
Basic concepts
Apriori algorithm
Different data formats for mining
Mining with multiple minimum supports
Mining class association rules
Summary

CS583, Bing Liu, UIC 3
Association rule mining
Proposed by Agrawal et al. in 1993.
It is an important data mining model studied
extensively by the database and data mining
community.
Assume all data are categorical.
No good algorithm for numeric data.
Initially used for Market Basket Analysis to find
how items purchased by customers are related.
Bread → Milk [sup = 5%, conf = 100%]

CS583, Bing Liu, UIC 4
The model: data
I = {i1, i2, …, im}: a set of items.
Transaction t:
t: a set of items, and t ⊆ I.
Transaction Database T: a set of transactions
T = {t1, t2, …, tn}.

CS583, Bing Liu, UIC 5
Transaction data: supermarket data
Market basket transactions:
t1: {bread, cheese, milk}
t2: {apple, eggs, salt, yogurt}
… …
tn: {biscuit, eggs, milk}
Concepts:
An item: an item/article in a basket
I: the set of all items sold in the store
A transaction: items purchased in a basket; it may
have a TID (transaction ID)
A transactional dataset: a set of transactions

CS583, Bing Liu, UIC 6
Transaction data: a set of documents
A text document data set. Each document
is treated as a “bag” of keywords
doc1: Student, Teach, School
doc2: Student, School
doc3: Teach, School, City, Game
doc4: Baseball, Basketball
doc5: Basketball, Player, Spectator
doc6: Baseball, Coach, Game, Team
doc7: Basketball, Team, City, Game

CS583, Bing Liu, UIC 7
The model: rules
A transaction t contains X, a set of items
(itemset) in I, if X ⊆ t.
An association rule is an implication of the
form:
X → Y, where X, Y ⊂ I, and X ∩ Y = ∅
An itemset is a set of items.
E.g., X = {milk, bread, cereal} is an itemset.
A k-itemset is an itemset with k items.
E.g., {milk, bread, cereal} is a 3-itemset.

CS583, Bing Liu, UIC 8
Rule strength measures
Support: The rule holds with support sup in T
(the transaction data set) if sup% of
transactions contain X ∪ Y.
sup = Pr(X ∪ Y).
Confidence: The rule holds in T with
confidence conf if conf% of transactions that
contain X also contain Y.
conf = Pr(Y | X)
An association rule is a pattern that states that
when X occurs, Y occurs with a certain
probability.

CS583, Bing Liu, UIC 9
Support and Confidence
Support count: The support count of an
itemset X, denoted by X.count, in a data set
T is the number of transactions in T that
contain X. Assume T has n transactions.
Then, for a rule X → Y:
support = (X ∪ Y).count / n
confidence = (X ∪ Y).count / X.count
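To make the two measures concrete, here is a minimal, runnable Python sketch (illustrative code, not part of the original slides; the transactions and item names are made up):

# Support and confidence of a rule X -> Y over a list of transactions.
def support_count(itemset, transactions):
    # number of transactions that contain every item in `itemset`
    s = set(itemset)
    return sum(1 for t in transactions if s <= set(t))

def rule_strength(X, Y, transactions):
    # returns (support, confidence) of the rule X -> Y
    n = len(transactions)
    xy = support_count(set(X) | set(Y), transactions)  # (X u Y).count
    x = support_count(X, transactions)                 # X.count
    return xy / n, (xy / x if x else 0.0)

T = [{"bread", "cheese", "milk"},
     {"apple", "eggs", "salt", "yogurt"},
     {"biscuit", "eggs", "milk"}]
print(rule_strength({"bread"}, {"milk"}, T))  # (0.333..., 1.0): sup 1/3, conf 100%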

CS583, Bing Liu, UIC 10
Goal and key features
Goal: Find all rules that satisfy the user-
specified minimum support (minsup) and
minimum confidence (minconf).
Key Features
Completeness: find all rules.
No target item(s) on the right-hand side.
Mining with data on hard disk (not in memory).

CS583, Bing Liu, UIC 11
An example
Transaction data:
t1: Beef, Chicken, Milk
t2: Beef, Cheese
t3: Cheese, Boots
t4: Beef, Chicken, Cheese
t5: Beef, Chicken, Clothes, Cheese, Milk
t6: Chicken, Clothes, Milk
t7: Chicken, Milk, Clothes
Assume:
minsup = 30%
minconf = 80%
An example frequent itemset:
{Chicken, Clothes, Milk} [sup = 3/7]
Association rules from the itemset:
Clothes → Milk, Chicken [sup = 3/7, conf = 3/3]
… …
Clothes, Chicken → Milk [sup = 3/7, conf = 3/3]

CS583, Bing Liu, UIC 12
Transaction data representation
A simplistic view of shopping baskets.
Some important information is not considered, e.g.,
the quantity of each item purchased, and
the price paid.

CS583, Bing Liu, UIC 13
Many mining algorithms
There are a large number of them!!
They use different strategies and data structures.
Their resulting sets of rules are all the same.
Given a transaction data set T, a minimum support and
a minimum confidence, the set of association rules existing in
T is uniquely determined.
Any algorithm should find the same set of rules,
although their computational efficiencies and
memory requirements may be different.
We study only one: the Apriori Algorithm.

CS583, Bing Liu, UIC 14
Road map
Basic concepts
Apriori algorithm
Different data formats for mining
Mining with multiple minimum supports
Mining class association rules
Summary

CS583, Bing Liu, UIC 15
The Apriori algorithm
Probably the best known algorithm.
Two steps:
Find all itemsets that have minimum support
(frequent itemsets, also called large itemsets).
Use frequent itemsets to generate rules.
E.g., a frequent itemset
{Chicken, Clothes, Milk} [sup = 3/7]
and one rule from the frequent itemset
Clothes → Milk, Chicken [sup = 3/7, conf = 3/3]

CS583, Bing Liu, UIC 16
Step 1: Mining all frequent itemsets
A frequent itemset is an itemset whose support
is ≥ minsup.
Key idea: the apriori property (downward
closure property): any subset of a frequent
itemset is also a frequent itemset.
[Figure: itemset lattice over items A, B, C, D; level 1: A B C D; level 2: AB AC AD BC BD CD; level 3: ABC ABD ACD BCD]

CS583, Bing Liu, UIC 17
The Algorithm
Iterative algo. (also called level-wise search):
Find all 1-item frequent itemsets; then all 2-item
frequent itemsets, and so on.
In each iteration k, only consider itemsets that
contain some frequent (k-1)-itemset.
Find frequent itemsets of size 1: F1
From k = 2:
Ck = candidates of size k: those itemsets of size k
that could be frequent, given Fk-1
Fk = those itemsets that are actually frequent, Fk ⊆ Ck
(needs one scan of the database).

CS583, Bing Liu, UIC 18
Example – Finding frequent itemsets
Dataset T (minsup = 0.5); entries below use itemset:count notation.
TID    Items
T100   1, 3, 4
T200   2, 3, 5
T300   1, 2, 3, 5
T400   2, 5
1. scan T → C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3
   F1: {1}:2, {2}:3, {3}:3, {5}:3
   C2: {1,2}, {1,3}, {1,5}, {2,3}, {2,5}, {3,5}
2. scan T → C2: {1,2}:1, {1,3}:2, {1,5}:1, {2,3}:2, {2,5}:3, {3,5}:2
   F2: {1,3}:2, {2,3}:2, {2,5}:3, {3,5}:2
   C3: {2,3,5}
3. scan T → C3: {2,3,5}:2 → F3: {2,3,5}
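The trace above can be reproduced with a short, self-contained Python sketch (illustrative code, not from the slides; the join and prune steps of candidate-gen, described on the next slides, are folded into the loop):

from itertools import combinations

def apriori(transactions, minsup):
    n = len(transactions)
    # F1: frequent 1-itemsets, kept as sorted tuples
    freq, level = {}, []
    for i in sorted({i for t in transactions for i in t}):
        c = sum(1 for t in transactions if i in t)
        if c / n >= minsup:
            freq[(i,)] = c
            level.append((i,))
    k = 2
    while level:
        # join: combine two (k-1)-itemsets sharing a (k-2)-prefix
        cands = set()
        for a in level:
            for b in level:
                if a[:-1] == b[:-1] and a[-1] < b[-1]:
                    c = a + (b[-1],)
                    # prune: every (k-1)-subset must already be frequent
                    if all(s in freq for s in combinations(c, k - 1)):
                        cands.add(c)
        # one scan of T to count the surviving candidates
        level = []
        for c in sorted(cands):
            cnt = sum(1 for t in transactions if set(c) <= t)
            if cnt / n >= minsup:
                freq[c] = cnt
                level.append(c)
        k += 1
    return freq

T = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
print(apriori(T, 0.5))  # includes (2, 3, 5): 2, matching F3 above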

CS583, Bing Liu, UIC 19
Details: ordering of items
The items in I are sorted in lexicographic
order (which is a total order).
The order is used throughout the algorithm in
each itemset.
{w[1], w[2], …, w[k]} represents a k-itemset w
consisting of items w[1], w[2], …, w[k], where
w[1] < w[2] < … < w[k] according to the total
order.

CS583, Bing Liu, UIC 20
Details: the algorithm
Algorithm Apriori(T)
  C1 ← init-pass(T);
  F1 ← {f | f ∈ C1, f.count/n ≥ minsup};   // n: no. of transactions in T
  for (k = 2; Fk-1 ≠ ∅; k++) do
    Ck ← candidate-gen(Fk-1);
    for each transaction t ∈ T do
      for each candidate c ∈ Ck do
        if c is contained in t then
          c.count++;
      end
    end
    Fk ← {c ∈ Ck | c.count/n ≥ minsup}
  end
  return F ← ∪k Fk;

CS583, Bing Liu, UIC 21
Apriori candidate generation
The candidate-gen function takes Fk-1 and
returns a superset (called the candidates)
of the set of all frequent k-itemsets. It has
two steps:
join step: generate all possible candidate
itemsets Ck of length k
prune step: remove those candidates in Ck
that cannot be frequent.

CS583, Bing Liu, UIC 22
Candidate-gen function
Function candidate-gen(Fk-1)
  Ck ← ∅;
  forall f1, f2 ∈ Fk-1
      with f1 = {i1, …, ik-2, ik-1}
      and  f2 = {i1, …, ik-2, i'k-1}
      and  ik-1 < i'k-1 do
    c ← {i1, …, ik-1, i'k-1};   // join f1 and f2
    Ck ← Ck ∪ {c};
    for each (k-1)-subset s of c do
      if (s ∉ Fk-1) then
        delete c from Ck;        // prune
    end
  end
  return Ck;

CS583, Bing Liu, UIC 23
An example
F3 = {{1, 2, 3}, {1, 2, 4}, {1, 3, 4},
{1, 3, 5}, {2, 3, 4}}
After join:
C4 = {{1, 2, 3, 4}, {1, 3, 4, 5}}
After pruning:
C4 = {{1, 2, 3, 4}}
because {1, 4, 5} is not in F3
({1, 3, 4, 5} is removed).
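A Python rendering of candidate-gen that reproduces this example (an illustrative sketch, not the book's code; itemsets are represented as sorted tuples):

from itertools import combinations

def candidate_gen(F_prev):
    # join then prune, following the pseudocode on the previous slide
    if not F_prev:
        return set()
    k = len(next(iter(F_prev))) + 1
    C = set()
    for f1 in F_prev:
        for f2 in F_prev:
            # join: identical first k-2 items, last items ordered
            if f1[:-1] == f2[:-1] and f1[-1] < f2[-1]:
                c = f1 + (f2[-1],)
                # prune: every (k-1)-subset of c must be in F_{k-1}
                if all(s in F_prev for s in combinations(c, k - 1)):
                    C.add(c)
    return C

F3 = {(1, 2, 3), (1, 2, 4), (1, 3, 4), (1, 3, 5), (2, 3, 4)}
print(candidate_gen(F3))  # {(1, 2, 3, 4)}; (1, 3, 4, 5) is pruned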

CS583, Bing Liu, UIC 24
Step 2: Generating rules from frequent
itemsets
Frequent itemsets ≠ association rules
One more step is needed to generate
association rules.
For each frequent itemset X,
for each proper nonempty subset A of X:
let B = X - A;
A → B is an association rule if
confidence(A → B) ≥ minconf.
support(A → B) = support(A ∪ B) = support(X)
confidence(A → B) = support(A ∪ B) / support(A)

CS583, Bing Liu, UIC 25
Generating rules: an example
Suppose {2,3,4} is frequent, with sup = 50%.
Proper nonempty subsets: {2,3}, {2,4}, {3,4}, {2}, {3}, {4}, with
sup = 50%, 50%, 75%, 75%, 75%, 75% respectively.
These generate the following association rules:
2,3 → 4, confidence = 100%
2,4 → 3, confidence = 100%
3,4 → 2, confidence = 67%
2 → 3,4, confidence = 67%
3 → 2,4, confidence = 67%
4 → 2,3, confidence = 67%
All rules have support = 50%.

CS583, Bing Liu, UIC 26
Generating rules: summary
To recap, in order to obtain A → B, we need
to have support(A ∪ B) and support(A).
All the required information for confidence
computation has already been recorded during
itemset generation. There is no need to read the
data T again.
This step is not as time-consuming as
frequent itemset generation.

CS583, Bing Liu, UIC 27
On Apriori Algorithm
Seems to be very expensive.
Level-wise search
K = the size of the largest itemset
It makes at most K passes over data.
In practice, K is bounded (≈ 10).
The algorithm is very fast. Under some conditions,
all rules can be found in linear time.
Scales up to large data sets.

CS583, Bing Liu, UIC 28
More on association rule mining
Clearly the space of all association rules is
exponential, O(2^m), where m is the number of
items in I.
The mining exploits sparseness of data, and
high minimum support and high minimum
confidence values.
Still, it always produces a huge number of
rules: thousands, tens of thousands, millions,
...

CS583, Bing Liu, UIC 29
Road map
Basic concepts
Apriori algorithm
Different data formats for mining
Mining with multiple minimum supports
Mining class association rules
Summary

CS583, Bing Liu, UIC 30
Different data formats for mining
The data can be in transaction form or table
form.
Transaction form:
a, b
a, c, d, e
a, d, f
Table form:
Attr1   Attr2   Attr3
a       b       d
b       c       e
Table data need to be converted to
transaction form for association mining.

CS583, Bing Liu, UIC 31
From a table to a set of transactions
Table form:
Attr1   Attr2   Attr3
a       b       d
b       c       e
Transaction form:
(Attr1, a), (Attr2, b), (Attr3, d)
(Attr1, b), (Attr2, c), (Attr3, e)
candidate-gen can be slightly improved. Why?
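One way to do the conversion in Python (an illustrative sketch): each table cell becomes an (attribute, value) item. This also hints at the answer to the question: since no transaction can contain two items with the same attribute, candidate-gen can skip joining itemsets that would place two values of one attribute together.

rows = [{"Attr1": "a", "Attr2": "b", "Attr3": "d"},
        {"Attr1": "b", "Attr2": "c", "Attr3": "e"}]

# each row becomes a set of (attribute, value) items
transactions = [{(attr, val) for attr, val in row.items()} for row in rows]
print(transactions)
# [{('Attr1', 'a'), ('Attr2', 'b'), ('Attr3', 'd')},
#  {('Attr1', 'b'), ('Attr2', 'c'), ('Attr3', 'e')}]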

CS583, Bing Liu, UIC 32
Road map
Basic concepts
Apriori algorithm
Different data formats for mining
Mining with multiple minimum supports
Mining class association rules
Summary

CS583, Bing Liu, UIC 33
Problems with the association mining
Single minsup: it assumes that all items in
the data are of the same nature and/or
have similar frequencies.
Not true: in many applications, some items
appear very frequently in the data, while
others rarely appear.
E.g., in a supermarket, people buy food processors
and cooking pans much less frequently than they
buy bread and milk.

CS583, Bing Liu, UIC 34
Rare Item Problem
If the frequencies of items vary a great deal,
we will encounter two problems
If minsup is set too high, those rules that involve
rare items will not be found.
To find rules that involve both frequent and rare
items, minsup has to be set very low. This may
cause combinatorial explosion because those
frequent items will be associated with one another
in all possible ways.

CS583, Bing Liu, UIC 35
Multiple minsups model
The minimum support of a rule is expressed in
terms of minimum item supports (MIS) of the items
that appear in the rule.
Each item can have a minimum item support.
By providing different MIS values for different
items, the user effectively expresses different
support requirements for different rules.

CS583, Bing Liu, UIC 36
Minsup of a rule
Let MIS(i) be the MIS value of item i. The
minsup of a rule R is the lowest MIS value of
the items in the rule.
I.e., a rule R: a1, a2, …, ak → ak+1, …, ar
satisfies its minimum support if its actual
support is ≥ min(MIS(a1), MIS(a2), …, MIS(ar)).
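In code, a rule's minsup is simply the minimum MIS over all items in the rule (an illustrative sketch, using the MIS values from the example on the next slide):

MIS = {"bread": 0.02, "shoes": 0.001, "clothes": 0.002}

def rule_minsup(X, Y, mis):
    # lowest MIS value among the items of X and Y together
    return min(mis[i] for i in set(X) | set(Y))

print(rule_minsup({"clothes"}, {"bread"}, MIS))  # 0.002: needs sup >= 0.2%
print(rule_minsup({"clothes"}, {"shoes"}, MIS))  # 0.001: needs sup >= 0.1%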

CS583, Bing Liu, UIC 37
An Example
Consider the following items:
bread, shoes, clothes
The user-specified MIS values are as follows:
MIS(bread) = 2%   MIS(shoes) = 0.1%
MIS(clothes) = 0.2%
The following rule doesn't satisfy its minsup:
clothes → bread [sup = 0.15%, conf = 70%]
The following rule satisfies its minsup:
clothes → shoes [sup = 0.15%, conf = 70%]

CS583, Bing Liu, UIC 38
Downward closure property
In the new model, the property no longer
holds (?)
E.g., consider four items 1, 2, 3 and 4 in a
database. Their minimum item supports are:
MIS(1) = 10%   MIS(2) = 20%
MIS(3) = 5%    MIS(4) = 6%
{1, 2} with support 9% is infrequent, but {1, 2, 3}
and {1, 2, 4} could be frequent.

CS583, Bing Liu, UIC 39
To deal with the problem
We sort all items in I according to their MIS
values (making it a total order).
The order is used throughout the algorithm in
each itemset.
Each itemset w is of the following form:
{w[1], w[2], …, w[k]}, consisting of items
w[1], w[2], …, w[k],
where MIS(w[1]) ≤ MIS(w[2]) ≤ … ≤ MIS(w[k]).

CS583, Bing Liu, UIC 40
The MSapriori algorithm
Algorithm MSapriori(T, MS)
  M ← sort(I, MS);
  L ← init-pass(M, T);
  F1 ← {{i} | i ∈ L, i.count/n ≥ MIS(i)};
  for (k = 2; Fk-1 ≠ ∅; k++) do
    if k = 2 then
      Ck ← level2-candidate-gen(L)
    else Ck ← MScandidate-gen(Fk-1)
    end;
    for each transaction t ∈ T do
      for each candidate c ∈ Ck do
        if c is contained in t then
          c.count++;
        if c - {c[1]} is contained in t then
          c.tailCount++
      end
    end
    Fk ← {c ∈ Ck | c.count/n ≥ MIS(c[1])}
  end
  return F ← ∪k Fk;

CS583, Bing Liu, UIC 41
Candidate itemset generation
Special treatments needed:
Sorting the items according to their MIS values
First pass over data (the first three lines)
Let us look at this in detail.
Candidate generation at level-2
Read it in the handout.
Pruning step in level-k (k > 2) candidate
generation.
Read it in the handout.

CS583, Bing Liu, UIC 42
First pass over data
It makes a pass over the data to record the
support count of each item.
It then follows the sorted order to find the
first item i in M that meets MIS(i).
i is inserted into L.
For each subsequent item j in M after i, if
j.count/n ≥ MIS(i), then j is also inserted into L,
where j.count is the support count of j and n is
the total number of transactions in T. Why?
L is used by function level2-candidate-gen.

CS583, Bing Liu, UIC 43
First pass over data: an example
Consider the four items 1, 2, 3 and 4 in a data set.
Their minimum item supports are:
MIS(1) = 10%   MIS(2) = 20%
MIS(3) = 5%    MIS(4) = 6%
Assume our data set has 100 transactions. The first
pass gives us the following support counts:
{3}.count = 6, {4}.count = 3,
{1}.count = 9, {2}.count = 25.
Then L = {3, 1, 2}, and F1 = {{3}, {2}}.
Item 4 is not in L because 4.count/n < MIS(3) (= 5%).
{1} is not in F1 because 1.count/n < MIS(1) (= 10%).
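The same first pass in Python (an illustrative sketch that reproduces the numbers above):

MIS = {1: 0.10, 2: 0.20, 3: 0.05, 4: 0.06}
count = {1: 9, 2: 25, 3: 6, 4: 3}  # support counts from the first pass
n = 100

M = sorted(MIS, key=MIS.get)  # items sorted by MIS value: [3, 4, 1, 2]
# the first item i in M that meets its own MIS starts L
first = next(i for i in M if count[i] / n >= MIS[i])
# every later item j joins L if j.count/n >= MIS(first)
L = [j for j in M[M.index(first):] if count[j] / n >= MIS[first]]
F1 = [j for j in L if count[j] / n >= MIS[j]]
print(L, F1)  # [3, 1, 2] [3, 2]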

CS583, Bing Liu, UIC 44
Rule generation
The following two lines in the MSapriori algorithm
are important for rule generation; they are
not needed for the Apriori algorithm:
if c - {c[1]} is contained in t then
c.tailCount++
Many rules cannot be generated without
them.
Why?

CS583, Bing Liu, UIC 45
On multiple minsup rule mining
The multiple minsup model subsumes the single
support model.
It is a more realistic model for practical
applications.
The model enables us to find rare item rules
without producing a huge number of
meaningless rules involving frequent items.
By setting the MIS values of some items to 100% (or
more), we effectively instruct the algorithms not
to generate rules involving only these items.

CS583, Bing Liu, UIC 46
Road map
Basic concepts
Apriori algorithm
Different data formats for mining
Mining with multiple minimum supports
Mining class association rules
Summary

CS583, Bing Liu, UIC 47
Mining class association rules (CAR)
Normal association rule mining does not have
any target.
It finds all possible rules that exist in the data, i.e.,
any item can appear as a consequent or a
condition of a rule.
However, in some applications, the user is
interested in specific targets.
E.g., the user has a set of text documents from
some known topics. He/she wants to find out what
words are associated or correlated with each topic.

CS583, Bing Liu, UIC 48
Problem definition
Let T be a transaction data set consisting of n
transactions.
Each transaction is also labeled with a class y.
Let I be the set of all items in T, Y be the set of all
class labels, and I ∩ Y = ∅.
A class association rule (CAR) is an implication of
the form
X → y, where X ⊆ I, and y ∈ Y.
The definitions of support and confidence are the
same as those for normal association rules.

CS583, Bing Liu, UIC 49
An example
A text document data set
doc 1: Student, Teach, School : Education
doc 2: Student, School : Education
doc 3: Teach, School, City, Game : Education
doc 4: Baseball, Basketball : Sport
doc 5: Basketball, Player, Spectator : Sport
doc 6: Baseball, Coach, Game, Team : Sport
doc 7: Basketball, Team, City, Game : Sport
Let minsup = 20% and minconf = 60%. The following are two
examples of class association rules:
Student, School → Education [sup = 2/7, conf = 2/2]
Game → Sport [sup = 2/7, conf = 2/3]

CS583, Bing Liu, UIC 50
Mining algorithm
Unlike normal association rules, CARs can be mined
directly in one step.
The key operation is to find all ruleitems that have
support above minsup. A ruleitem is of the form:
(condset, y)
where condset is a set of items from I (i.e., condset
⊆ I), and y ∈ Y is a class label.
Each ruleitem basically represents a rule:
condset → y
The Apriori algorithm can be modified to generate
CARs.
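As an illustration, the first step of counting level-1 ruleitems can be sketched in Python (not from the slides; it uses the labeled document data from the previous slide):

from collections import Counter

docs = [({"Student", "Teach", "School"}, "Education"),
        ({"Student", "School"}, "Education"),
        ({"Teach", "School", "City", "Game"}, "Education"),
        ({"Baseball", "Basketball"}, "Sport"),
        ({"Basketball", "Player", "Spectator"}, "Sport"),
        ({"Baseball", "Coach", "Game", "Team"}, "Sport"),
        ({"Basketball", "Team", "City", "Game"}, "Sport")]

n, minsup = len(docs), 0.20
# count each (condset, y) ruleitem with a single-item condset
ruleitems = Counter((item, y) for items, y in docs for item in items)
frequent = {ri: c for ri, c in ruleitems.items() if c / n >= minsup}
print(frequent[("Game", "Sport")])  # 2, i.e. the rule Game -> Sport, sup = 2/7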

CS583, Bing Liu, UIC 51
Multiple minimum class supports
The multiple minimum support idea can also be
applied here.
The user can specify different minimum supports for
different classes, which effectively assigns a different
minimum support to rules of each class.
For example, we have a data set with two classes,
Yes and No. We may want
rules of class Yes to have a minimum support of 5% and
rules of class No to have a minimum support of 10%.
By setting the minimum class supports of some classes
to 100% (or more), we tell the algorithm not to
generate rules of those classes.
This is a very useful trick in applications.

CS583, Bing Liu, UIC 52
Road map
Basic concepts
Apriori algorithm
Different data formats for mining
Mining with multiple minimum supports
Mining class association rules
Summary

CS583, Bing Liu, UIC 53
Summary
Association rule mining has been extensively studied
in the data mining community.
There are many efficient algorithms and model
variations.
Other related work includes
Multi-level or generalized rule mining
Constrained rule mining
Incremental rule mining
Maximal frequent itemset mining
Numeric association rule mining
Rule interestingness and visualization
Parallel algorithms
…