What Is Frequent Pattern Analysis?
 Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set
 First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining
 Motivation: Finding inherent regularities in data
 What products were often purchased together? Beer and diapers?!
 What are the subsequent purchases after buying a PC?
 What kinds of DNA are sensitive to this new drug?
 Can we automatically classify web documents?
 Applications
 Basket data analysis, cross-marketing, catalog design, sales campaign analysis, Web log (click stream) analysis, and DNA sequence analysis
Why Is Freq. Pattern Mining Important?
 Freq. pattern: An intrinsic and important property of datasets
 Foundation for many essential data mining tasks
 Association, correlation, and causality analysis
 Sequential, structural (e.g., subgraph) patterns
 Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
 Classification: discriminative frequent pattern analysis
 Cluster analysis: frequent pattern-based clustering
 Data warehousing: iceberg cube and cube-gradient
 Semantic data compression: fascicles
 Broad applications
Basic Concepts: Frequent Patterns
 Itemset: A set of one or more items
 k-itemset: X = {x1, …, xk}
 (Absolute) support, or support count, of X: the frequency or number of occurrences of an itemset X
 (Relative) support, s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
 An itemset X is frequent if X’s support is no less than a minsup threshold

Basic Concepts: Association Rules
 Find all the rules X → Y with minimum support and confidence
 support, s: probability that a transaction contains X ∪ Y
 confidence, c: conditional probability that a transaction having X also contains Y
 Let minsup = 50%, minconf = 50%
 Freq. Pat.: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
 Association rules: (many more!)
 Beer → Diaper (60%, 100%)
 Diaper → Beer (60%, 75%)
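The support and confidence numbers above can be reproduced with a minimal sketch. The five-transaction table below is an assumption (the usual running example with Beer, Nuts, Diaper, Eggs, Coffee, Milk), chosen so that its counts match the slide's.

```python
# Support/confidence sketch; the transaction table is an assumption.
transactions = [
    {"Beer", "Nuts", "Diaper"},
    {"Beer", "Coffee", "Diaper"},
    {"Beer", "Diaper", "Eggs"},
    {"Nuts", "Eggs", "Milk"},
    {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
]

def sup_count(X):
    """Absolute support: number of transactions containing itemset X."""
    return sum(X <= t for t in transactions)

def support(X):
    """Relative support: fraction of transactions containing X."""
    return sup_count(X) / len(transactions)

def confidence(X, Y):
    """Conditional probability that a transaction having X also has Y."""
    return sup_count(X | Y) / sup_count(X)

print(support({"Beer", "Diaper"}))       # 0.6  -> 3 of 5 transactions
print(confidence({"Beer"}, {"Diaper"}))  # 1.0  -> Beer → Diaper (60%, 100%)
print(confidence({"Diaper"}, {"Beer"}))  # 0.75 -> Diaper → Beer (60%, 75%)
```

Using raw counts in `confidence` keeps the arithmetic exact (3/4 = 0.75) instead of dividing two rounded fractions.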

Closed Patterns and Max-Patterns

A long pattern contains a combinatorial number of sub-patterns. For example, {a1, …, a100} contains
\[ \binom{100}{1} + \binom{100}{2} + \cdots + \binom{100}{100} = 2^{100} - 1 \approx 1.27 \times 10^{30} \]
sub-patterns!
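The count can be checked directly with the standard library:

```python
from math import comb

# Verify the sub-pattern count for {a1, ..., a100}: summing C(100, k)
# over all non-empty sizes k gives 2^100 - 1.
total = sum(comb(100, k) for k in range(1, 101))
print(total == 2**100 - 1)   # True
print(f"{total:.2e}")        # 1.27e+30
```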

Solution:
Mine closed patterns and max-patterns instead

An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ ICDT’99)

An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD’98)

A closed pattern is a lossless compression of frequent patterns

Reduces the number of patterns and rules
Closed Patterns and Max-Patterns
 Exercise. DB = {< a1, …, a100>, < a1, …, a50>}
 What is the set of closed itemsets?
 < a1, …, a100>: 1
 < a1, …, a50>: 2
 What is the set of max-patterns?
 What is the set of all patterns?
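A brute-force check of the two definitions on a scaled-down analogue of the exercise DB (4 and 2 items instead of 100 and 50, minsup = 1; the full DB is too large to enumerate):

```python
from itertools import combinations

# Scaled-down analogue of the exercise DB: <a1..a4> and <a1,a2>, minsup = 1.
db = [frozenset(f"a{i}" for i in range(1, 5)),
      frozenset(f"a{i}" for i in range(1, 3))]
minsup = 1
items = sorted(set().union(*db))

def sup(X):
    return sum(X <= t for t in db)

frequent = [frozenset(c) for k in range(1, len(items) + 1)
            for c in combinations(items, k) if sup(frozenset(c)) >= minsup]

# Closed: frequent, and no proper superset has the same support.
closed = [X for X in frequent
          if not any(X < Y and sup(Y) == sup(X) for Y in frequent)]
# Max: frequent, and no proper superset is frequent at all.
maximal = [X for X in frequent if not any(X < Y for Y in frequent)]

print(len(frequent))                 # 15 = 2^4 - 1: every subset is frequent
print([sorted(X) for X in closed])   # the two "transactions" themselves
print([sorted(X) for X in maximal])  # only the full 4-itemset
```

The pattern carries over to the exercise: the closed itemsets are the two transactions with supports 1 and 2, the only max-pattern is the long transaction, and "all patterns" is the full exponential set.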
Computational Complexity of Frequent Itemset Mining
 How many itemsets may potentially be generated in the worst case?
 The number of frequent itemsets to be generated is sensitive to the minsup threshold
 When minsup is low, there exist potentially an exponential number of frequent itemsets
 The worst case: M^N, where M: # distinct items and N: max length of transactions
 The worst-case complexity vs. the expected probability
 Ex. Suppose Walmart has 10^4 kinds of products
 The chance to pick up one product: 10^(-4)
 The chance to pick up a particular set of 10 products: ~10^(-40)
 What is the chance for this particular set of 10 products to be frequent 10^3 times in 10^9 transactions?
Scalable Frequent Itemset Mining Methods
 Apriori: A Candidate Generation-and-Test Approach
 Improving the Efficiency of Apriori
 FP-Growth: A Frequent Pattern-Growth Approach
 ECLAT: Frequent Pattern Mining with Vertical Data Format
The Downward Closure Property and Scalable Mining Methods
 The downward closure property of frequent patterns
 Any subset of a frequent itemset must be frequent
 If {beer, diaper, nuts} is frequent, so is {beer, diaper}
 i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
 Scalable mining methods: Three major approaches
 Apriori (Agrawal & Srikant@VLDB’94)
 Freq. pattern growth (FP-growth—Han, Pei & Yin @SIGMOD’00)
 Vertical data format approach (Charm—Zaki & Hsiao @SDM’02)
Apriori: A Candidate Generation & Test Approach
 Apriori pruning principle: If there is any itemset that is infrequent, its supersets should not be generated/tested! (Agrawal & Srikant @VLDB’94; Mannila, et al. @KDD’94)
 Method:
 Initially, scan DB once to get the frequent 1-itemsets
 Generate length-(k+1) candidate itemsets from length-k frequent itemsets
 Test the candidates against DB
 Terminate when no frequent or candidate set can be generated
The Apriori Algorithm—An Example
The Apriori Algorithm (Pseudo-Code)
Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent items};
for (k = 1; Lk != Ø; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
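A runnable sketch of the pseudo-code. The self-join is written as a plain set union for brevity (it over-generates slightly compared with the prefix join, but the prune step removes the extras), and the example DB is the assumed five-transaction table from the running example:

```python
from itertools import combinations

def apriori(db, minsup):
    """Level-wise candidate generation and test; minsup is an absolute count."""
    db = [frozenset(t) for t in db]
    # L1: frequent 1-itemsets
    items = {i for t in db for i in t}
    Lk = {frozenset([i]) for i in items
          if sum(i in t for t in db) >= minsup}
    result, k = set(Lk), 1
    while Lk:
        # Self-join Lk (set-union form), then prune candidates that have
        # an infrequent k-subset (the Apriori pruning principle)
        Ck1 = {X | Y for X in Lk for Y in Lk if len(X | Y) == k + 1}
        Ck1 = {c for c in Ck1
               if all(frozenset(s) in Lk for s in combinations(c, k))}
        # Scan the DB once to count the surviving candidates
        Lk = {c for c in Ck1 if sum(c <= t for t in db) >= minsup}
        result |= Lk
        k += 1
    return result

db = [{"Beer", "Nuts", "Diaper"}, {"Beer", "Coffee", "Diaper"},
      {"Beer", "Diaper", "Eggs"}, {"Nuts", "Eggs", "Milk"},
      {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"}]
patterns = apriori(db, minsup=3)
print(sorted(sorted(X) for X in patterns))
# [['Beer'], ['Beer', 'Diaper'], ['Diaper'], ['Eggs'], ['Nuts']]
```

With minsup = 3 this reproduces the frequent patterns of the earlier slide: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3.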
Implementation of Apriori
 How to generate candidates?
 Step 1: self-joining Lk
 Step 2: pruning
 Example of candidate generation
 L3={abc, abd, acd, ace, bcd}
 Self-joining: L3*L3
 abcd from abc and abd
 acde from acd and ace
 Pruning:
 acde is removed because ade is not in L3
 C4 = {abcd}
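The join-and-prune steps above can be sketched in a few lines. The set-union join used here over-generates slightly relative to the prefix-based self-join (e.g., it also produces abce), but pruning removes every candidate with a 3-subset outside L3:

```python
from itertools import combinations

# Candidate generation for the example: self-join L3, then prune.
L3 = {frozenset(s) for s in ("abc", "abd", "acd", "ace", "bcd")}
k = 3

joined = {X | Y for X in L3 for Y in L3 if len(X | Y) == k + 1}
# Keep only candidates all of whose k-subsets are in L3.
C4 = {c for c in joined
      if all(frozenset(s) in L3 for s in combinations(sorted(c), k))}
print(["".join(sorted(c)) for c in C4])  # ['abcd']; acde pruned: ade not in L3
```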
How to Count Supports of Candidates?
 Why is counting supports of candidates a problem?
 The total number of candidates can be very huge
 One transaction may contain many candidates
 Method:
 Candidate itemsets are stored in a hash tree
 A leaf node of the hash tree contains a list of itemsets and counts
 Interior node contains a hash table
 Subset function: finds all the candidates contained in a transaction
Counting Supports of Candidates Using Hash Tree
Candidate Generation: An SQL Implementation
 SQL Implementation of candidate generation
 Suppose the items in L(k-1) are listed in an order
 Step 1: self-joining L(k-1)
insert into Ck
select p.item1, p.item2, …, p.item(k-1), q.item(k-1)
from L(k-1) p, L(k-1) q
where p.item1 = q.item1 and … and p.item(k-2) = q.item(k-2)
and p.item(k-1) < q.item(k-1)
 Step 2: pruning
forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
        if (s is not in L(k-1)) then delete c from Ck
 Use object-relational extensions like UDFs, BLOBs, and table functions for efficient implementation [S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD’98]
Further Improvement of the Apriori Method
 Major computational challenges
 Multiple scans of transaction database
 Huge number of candidates
 Tedious workload of support counting for candidates
 Improving Apriori: general ideas
 Reduce passes of transaction database scans
 Shrink number of candidates
 Facilitate support counting of candidates
Partition: Scan Database Only Twice
 Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB
 Scan 1: partition database and find local frequent patterns
 Scan 2: consolidate global frequent patterns
 A. Savasere, E. Omiecinski and S. Navathe, VLDB’95
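The two-scan partition idea can be sketched as follows; the toy DB and the brute-force stand-in for the per-partition Apriori run are both assumptions:

```python
from itertools import combinations

def frequent_itemsets(db, minsup):
    """Brute-force miner used as a stand-in for Apriori on one partition."""
    items = sorted({i for t in db for i in t})
    return {frozenset(c) for k in range(1, len(items) + 1)
            for c in combinations(items, k)
            if sum(set(c) <= t for t in db) >= minsup}

db = [{"a", "b"}, {"a", "c"}, {"a", "b"},
      {"b", "c"}, {"a", "b", "c"}, {"a", "b"}]
parts = [db[:3], db[3:]]
minsup_rel = 0.5  # relative threshold, applied per partition and globally

# Scan 1: local frequent itemsets per partition (local minsup scales
# with the partition size, so local frequency is a superset guarantee).
cands = set()
for p in parts:
    cands |= frequent_itemsets(p, minsup_rel * len(p))

# Scan 2: verify the union of local results against the full database.
globally_frequent = {c for c in cands
                     if sum(c <= t for t in db) >= minsup_rel * len(db)}
```

Because any globally frequent itemset must be locally frequent in at least one partition, the scan-2 candidate set is guaranteed complete; false positives (here {b, c}) are filtered out by the second scan.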
DHP: Reduce the Number of Candidates
 A k-itemset whose corresponding hash bucket count is below the threshold cannot be frequent
 Candidates: a, b, c, d, e
 Hash entries
 {ab, ad, ae}
 {bd, be, de}
 …
 Frequent 1-itemsets: a, b, d, e
 ab is not a candidate 2-itemset if the sum of the counts of {ab, ad, ae} is below the support threshold
 J. Park, M. Chen, and P. Yu. An effective hashbased algorithm for mining association rules. SIGMOD’95
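A sketch of the DHP hashing idea; the bucket count, hash function, and toy DB are assumptions. Note the bucket test is necessary but not sufficient: unrelated pairs may share a bucket, so survivors still need exact counting later.

```python
from itertools import combinations

NBUCKETS = 7            # assumption: a real DHP table is much larger
buckets = [0] * NBUCKETS

def bucket(pair):
    """Hash a 2-itemset to a bucket (hash choice is an assumption)."""
    return hash(frozenset(pair)) % NBUCKETS

db = [{"a", "b", "d"}, {"b", "d", "e"}, {"a", "d", "e"}, {"b", "e"}]
# While scanning for frequent 1-itemsets, DHP also hashes every
# 2-itemset of each transaction and bumps its bucket count.
for t in db:
    for pair in combinations(sorted(t), 2):
        buckets[bucket(pair)] += 1

minsup = 2
# A pair can be a candidate 2-itemset only if its bucket total
# reaches minsup; truly frequent pairs always survive this filter.
items = sorted({i for t in db for i in t})
candidates = [p for p in combinations(items, 2)
              if buckets[bucket(p)] >= minsup]
print(("b", "e") in candidates)  # True: {b, e} occurs twice, so its bucket >= 2
```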
Sampling for Frequent Patterns
 Select a sample of the original database; mine frequent patterns within the sample using Apriori
 Scan the database once to verify the frequent itemsets found in the sample; only the borders of the closure of the frequent patterns are checked
 Example: check abcd instead of ab, ac, …, etc.
 Scan database again to find missed frequent patterns
 H. Toivonen. Sampling large databases for association rules. In VLDB’96
DIC: Reduce Number of Scans
Pattern-Growth Approach: Mining Frequent Patterns Without Candidate Generation
 Bottlenecks of the Apriori approach
 Breadth-first (i.e., level-wise) search
 Candidate generation and test
 Often generates a huge number of candidates
 The FP-Growth Approach (J. Han, J. Pei, and Y. Yin, SIGMOD’00)
 Depthfirst search
 Avoid explicit candidate generation
 Major philosophy: Grow long patterns from short ones using local frequent items only
 If “abc” is a frequent pattern, get all transactions having “abc”, i.e., project the DB on abc: DB|abc
 If “d” is a local frequent item in DB|abc, then abcd is a frequent pattern
Construct FP-tree from a Transaction Database
Partition Patterns and Databases
 Frequent patterns can be partitioned into subsets according to the f-list
 F-list = f-c-a-b-m-p
 Patterns containing p
 Patterns having m but no p
 …
 Patterns having c but no a, b, m, or p
 Pattern f
 Completeness and non-redundancy
Find Patterns Having P From P-conditional Database
 Starting at the frequent-item header table in the FP-tree
 Traverse the FP-tree by following the link of each frequent item p
 Accumulate all transformed prefix paths of item p to form p’s conditional pattern base
From Conditional Pattern-Bases to Conditional FP-trees
 For each pattern-base
 Accumulate the count for each item in the base
 Construct the FP-tree for the frequent items of the pattern base
Recursion: Mining Each Conditional FP-tree
A Special Case: Single Prefix Path in FP-tree
 Suppose a (conditional) FP-tree T has a shared single prefix path P
 Mining can be decomposed into two parts
 Reduction of the single prefix path into one node
 Concatenation of the mining results of the two parts
Benefits of the FP-tree Structure
 Completeness
 Preserve complete information for frequent pattern mining
 Never break a long pattern of any transaction
 Compactness
 Reduce irrelevant info—infrequent items are gone
 Items in frequency-descending order: the more frequently occurring, the more likely to be shared
 Never larger than the original database (not counting node-links and the count fields)
The Frequent Pattern Growth Mining Method
 Idea: Frequent pattern growth
 Recursively grow frequent patterns by pattern and database partition
 Method
 For each frequent item, construct its conditional pattern-base, and then its conditional FP-tree
 Repeat the process on each newly created conditional FP-tree
 Until the resulting FP-tree is empty, or it contains only one path—a single path will generate all the combinations of its sub-paths, each of which is a frequent pattern
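The method above can be sketched compactly. This is a simplified FP-growth: the tree keeps parent links and a per-item node list instead of a full header table with node-links, and minsup is an absolute count. The five transactions are the f-c-a-b-m-p example already filtered to frequent items:

```python
from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_tree(transactions, minsup):
    """Build an FP-tree; transactions are (itemset, count) pairs."""
    freq = defaultdict(int)
    for t, c in transactions:
        for i in t:
            freq[i] += c
    freq = {i: n for i, n in freq.items() if n >= minsup}
    root, nodes = Node(None, None), defaultdict(list)
    for t, c in transactions:
        # Order items by descending frequency (the f-list), ties by name
        path = sorted((i for i in t if i in freq), key=lambda i: (-freq[i], i))
        cur = root
        for i in path:
            if i not in cur.children:
                cur.children[i] = Node(i, cur)
                nodes[i].append(cur.children[i])
            cur = cur.children[i]
            cur.count += c
    return freq, nodes

def fpgrowth(transactions, minsup, suffix=()):
    freq, nodes = build_tree(transactions, minsup)
    for item in sorted(freq, key=lambda i: freq[i]):  # least frequent first
        pattern = (item,) + suffix
        yield pattern, freq[item]
        # Conditional pattern base: the prefix path of every `item` node
        cond = []
        for node in nodes[item]:
            path, p = [], node.parent
            while p.item is not None:
                path.append(p.item)
                p = p.parent
            cond.append((path, node.count))
        yield from fpgrowth(cond, minsup, pattern)

db = [(t, 1) for t in [{"f", "a", "c", "m", "p"}, {"f", "a", "c", "b", "m"},
                       {"f", "b"}, {"c", "b", "p"}, {"f", "a", "c", "m", "p"}]]
res = {frozenset(p): s for p, s in fpgrowth(db, minsup=3)}
print(res[frozenset({"c", "p"})])  # 3: found via p's conditional pattern base
```

Each recursive call mines one conditional DB, so long patterns are grown from short ones using only locally frequent items, exactly as the slides describe.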
Scaling FP-growth by Database Projection
 What if the FP-tree cannot fit in memory?
 First partition the database into a set of projected DBs
 Then construct and mine an FP-tree for each projected DB
 Parallel projection vs. partition projection techniques
 Parallel projection
 Project the DB in parallel for each frequent item
 Parallel projection is space costly
 All the partitions can be processed in parallel
 Partition projection
 Partition the DB based on the ordered frequent items
 Passing the unprocessed parts to the subsequent partitions
Partition-Based Projection
FP-Growth vs. Apriori: Scalability With the Support Threshold
FP-Growth vs. Tree-Projection: Scalability With the Support Threshold
Advantages of the Pattern Growth Approach
 Divide-and-conquer:
 Decompose both the mining task and DB according to the frequent patterns obtained so far
 Lead to focused search of smaller databases
 Other factors
 No candidate generation, no candidate test
 Compressed database: FP-tree structure
 No repeated scan of the entire database
 Basic ops: counting local frequent items and building sub-FP-trees; no pattern search and matching
 A good open-source implementation and refinement of FP-Growth
 FPgrowth+ (Grahne and Zhu, FIMI’03)
Further Improvements of Mining Methods
 AFOPT (Liu, et al. @ KDD’03)
 A “push-right” method for mining condensed frequent pattern (CFP) trees
 Carpenter (Pan, et al. @ KDD’03)
 Mines data sets with few rows but numerous columns
 Constructs a row-enumeration tree for efficient mining
 FPgrowth+ (Grahne and Zhu, FIMI’03)
 Efficiently Using Prefix-Trees in Mining Frequent Itemsets, Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003
 TD-Close (Liu, et al. @ SDM’06)
Extension of Pattern Growth Mining Methodology
 Mining closed frequent itemsets and max-patterns
 CLOSET (DMKD’00), FPclose, and FPMax (Grahne & Zhu @FIMI’03)
 Mining sequential patterns
 PrefixSpan (ICDE’01), CloSpan (SDM’03), BIDE (ICDE’04)
 Mining graph patterns
 gSpan (ICDM’02), CloseGraph (KDD’03)
 Constraint-based mining of frequent patterns
 Convertible constraints (ICDE’01), gPrune (PAKDD’03)
 Computing iceberg data cubes with complex measures
 H-tree, H-cubing, and Star-cubing (SIGMOD’01, VLDB’03)
 Pattern-growth-based clustering
 MaPle (Pei, et al. @ICDM’03)
 Pattern-growth-based classification
 Mining frequent and discriminative patterns (Cheng, et al. @ICDE’07)
ECLAT: Mining by Exploring Vertical Data Format
 Vertical format: t(AB) = {T11, T25, …}
 tid-list: a list of transaction ids containing an itemset
 Deriving frequent patterns based on vertical intersections
 t(X) = t(Y): X and Y always happen together
 t(X) ⊂ t(Y): transaction having X always has Y
 Using diffset to accelerate mining
 Only keep track of differences of tids
 t(X) = {T1, T2, T3}, t(XY) = {T1, T3}
 Diffset (XY, X) = {T2}
 Eclat (Zaki et al. @KDD’97)
 Mining closed patterns using vertical format: CHARM (Zaki & Hsiao @SDM’02)
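The tid-list and diffset manipulations above are plain set operations; t(Y) below is an assumed tid-list chosen to be consistent with the slide's t(X) and t(XY):

```python
# tid-lists for single items; t(Y) is an assumption consistent with the slide.
tid = {
    "X": {"T1", "T2", "T3"},
    "Y": {"T1", "T3", "T4"},
}

# Support of an itemset = size of the intersection of its members' tid-lists.
t_XY = tid["X"] & tid["Y"]
print(sorted(t_XY))                  # ['T1', 'T3'] -> sup(XY) = 2

# Diffset(XY, X) = t(X) - t(XY); often far smaller than t(XY) itself.
diffset = tid["X"] - t_XY
print(sorted(diffset))               # ['T2'] -> sup(XY) = sup(X) - |diffset|
print(len(tid["X"]) - len(diffset))  # 2
```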
Mining Frequent Closed Patterns: CLOSET
 F-list: list of all frequent items in support-ascending order
 Divide search space
 Patterns having d
 Patterns having d but no a, etc.
 Find frequent closed patterns recursively
 Every transaction having d also has cfa → cfad is a frequent closed pattern
 J. Pei, J. Han & R. Mao. “CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets”, DMKD'00.
CLOSET+: Mining Closed Itemsets by Pattern-Growth
 Itemset merging: if Y appears in every occurrence of X, then Y is merged with X
 Sub-itemset pruning: if Y ⊃ X and sup(X) = sup(Y), then X and all of X’s descendants in the set-enumeration tree can be pruned
 Hybrid tree projection
 Bottom-up physical tree-projection
 Top-down pseudo tree-projection
 Item skipping: if a local frequent item has the same support in several header tables at different levels, one can prune it from the header tables at higher levels
 Efficient subset checking
MaxMiner: Mining Max-Patterns
CHARM: Mining by Exploring Vertical Data Format
 Vertical format: t(AB) = {T11, T25, …}
 tid-list: a list of transaction ids containing an itemset
 Deriving closed patterns based on vertical intersections
 t(X) = t(Y): X and Y always happen together
 t(X) ⊂ t(Y): transaction having X always has Y
 Using diffset to accelerate mining
 Only keep track of differences of tids
 t(X) = {T1, T2, T3}, t(XY) = {T1, T3}
 Diffset (XY, X) = {T2}
 Eclat/MaxEclat (Zaki et al. @KDD’97), VIPER (P. Shenoy et al. @SIGMOD’00), CHARM (Zaki & Hsiao @SDM’02)
Visualization of Association Rules: Plane Graph
Visualization of Association Rules: Rule Graph
Visualization of Association Rules (SGI/MineSet 3.0)
Interestingness Measure: Correlations (Lift)
 play basketball ⇒ eat cereal [40%, 66.7%] is misleading
 The overall % of students eating cereal is 75% > 66.7%
 play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence
 Measure of dependent/correlated events: lift
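A sketch of the lift computation for this example; the 2×2 contingency counts (5000 students total) are assumed from the standard version of this slide:

```python
# Assumed contingency counts for the basketball/cereal example.
n = 5000
basketball = 3000   # students who play basketball
cereal = 3750       # students who eat cereal
both = 2000         # play basketball AND eat cereal

def lift(n_ab, n_a, n_b, n):
    """lift(A,B) = P(A and B) / (P(A) * P(B)); 1 means independence."""
    return (n_ab / n) / ((n_a / n) * (n_b / n))

# basketball and cereal are negatively correlated despite 66.7% confidence
print(round(lift(both, basketball, cereal, n), 3))                   # 0.889
# basketball and NOT-cereal are positively correlated
print(round(lift(basketball - both, basketball, n - cereal, n), 2))  # 1.33
```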
Are Lift and χ² Good Measures of Correlation?
 “Buy walnuts ⇒ buy milk [1%, 80%]” is misleading if 85% of customers buy milk
 Support and confidence are not good to indicate correlations
 Over 20 interestingness measures have been proposed (see Tan, Kumar, Srivastava @KDD’02)
 Which are good ones?
Comparison of Interestingness Measures
Analysis of DBLP Coauthor Relationships
Which Null-Invariant Measure Is Better?
 IR (Imbalance Ratio): measures the imbalance of two itemsets A and B in rule implications
\[ IR(A,B) = \frac{|sup(A) - sup(B)|}{sup(A) + sup(B) - sup(A \cup B)} \]
 Kulczynski and Imbalance Ratio (IR) together present a clear picture for all three datasets D4 through D6
 D4 is balanced & neutral
 D5 is imbalanced & neutral
 D6 is very imbalanced & neutral
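Both measures are a few arithmetic operations; the support counts below are hypothetical:

```python
def IR(sup_a, sup_b, sup_ab):
    """|sup(A) - sup(B)| / (sup(A) + sup(B) - sup(A∪B)); 0 means balanced."""
    return abs(sup_a - sup_b) / (sup_a + sup_b - sup_ab)

def kulczynski(sup_a, sup_b, sup_ab):
    """Average of the two rule confidences, P(B|A) and P(A|B)."""
    return 0.5 * (sup_ab / sup_a + sup_ab / sup_b)

# Balanced and positively correlated: A and B nearly always co-occur.
print(IR(1000, 1000, 900), kulczynski(1000, 1000, 900))  # 0.0 0.9
# Very imbalanced: IR close to 1, while Kulczynski alone looks neutral.
print(IR(1000, 100, 100), kulczynski(1000, 100, 100))
```

Reading the two together resolves the ambiguity the slide describes: Kulczynski near 0.5 with low IR is genuinely neutral, while Kulczynski near 0.5 with high IR signals an imbalanced relationship.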
Summary
 Basic concepts: association rules, support-confidence framework, closed and max-patterns
 Scalable frequent pattern mining methods
 Apriori (Candidate generation & test)
 Projection-based (FP-growth, CLOSET+, ...)
 Vertical-format approach (ECLAT, CHARM, ...)
 Which patterns are interesting?
 Pattern evaluation methods
References
Basic Concepts of Frequent Pattern Mining
 (Association rules) R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD'93.
 (Max-pattern) R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98.
 (Closed-pattern) N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent closed itemsets for association rules. ICDT'99.
 (Sequential pattern) R. Agrawal and R. Srikant. Mining sequential patterns. ICDE'95.
 Apriori and Its Improvements
 R. Agrawal and R. Srikant. Fast algorithms for mining association rules. VLDB'94.
 H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for discovering association rules. KDD'94.
 A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining association rules in large databases. VLDB'95.
 J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95.
 H. Toivonen. Sampling large databases for association rules. VLDB'96.
 S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket analysis. SIGMOD'97.
 S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98.
References (cont’d)

Depth-First, Projection-Based FP Mining
 R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for generation of frequent itemsets. J. Parallel and Distributed Computing, 2002.
 J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD'00.
 J. Liu, Y. Pan, K. Wang, and J. Han. Mining Frequent Item Sets by Opportunistic Projection. KDD'02.
 J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining Top-K Frequent Closed Patterns without Minimum Support. ICDM'02.
 J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the Best Strategies for Mining Frequent Closed Itemsets. KDD'03.
 G. Liu, H. Lu, W. Lou, and J. X. Yu. On Computing, Storing and Querying Frequent Patterns. KDD'03.
 G. Grahne and J. Zhu. Efficiently Using Prefix-Trees in Mining Frequent Itemsets. Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003.
References (cont’d)

Vertical Format and Row Enumeration Methods
 M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithms for discovery of association rules. DAMI, 1997.
 M. J. Zaki and C.-J. Hsiao. CHARM: An Efficient Algorithm for Closed Itemset Mining. SDM'02.
 C. Bucila, J. Gehrke, D. Kifer, and W. White. DualMiner: A Dual-Pruning Algorithm for Itemsets with Constraints. KDD'02.
 F. Pan, G. Cong, A. K. H. Tung, J. Yang, and M. Zaki. CARPENTER: Finding Closed Patterns in Long Biological Datasets. KDD'03.
 H. Liu, J. Han, D. Xin, and Z. Shao. Mining Interesting Patterns from Very High Dimensional Data: A Top-Down Row Enumeration Approach. SDM'06.