
Data Mining - Rule-based Classification, Study notes of Data Mining

Detailed summary of Classification and Prediction, other classification methods, classification by decision tree induction, Bayesian classification, rule-based classification, and classification by back propagation.


Chapter 6. Classification and Prediction

  • What is classification? What is prediction?
  • Issues regarding classification and prediction
  • Classification by decision tree induction
  • Bayesian classification
  • Rule-based classification
  • Classification by back propagation
  • Support Vector Machines (SVM)
  • Associative classification
  • Lazy learners (or learning from your neighbors)
  • Other classification methods
  • Prediction
  • Accuracy and error measures
  • Ensemble methods
  • Model selection
  • Summary


Using IF-THEN Rules for Classification

  • Represent the knowledge in the form of IF-THEN rules
    R: IF age = youth AND student = yes THEN buys_computer = yes
  • Rule antecedent/precondition vs. rule consequent
  • Assessment of a rule: coverage and accuracy (a code sketch follows this list)
    • ncovers = # of tuples covered by R
    • ncorrect = # of tuples correctly classified by R
    • coverage(R) = ncovers / |D| /* D: training data set */
    • accuracy(R) = ncorrect / ncovers
  • If more than one rule is triggered, conflict resolution is needed
    • Size ordering: assign the highest priority to the triggering rule that has the “toughest” requirement (i.e., the most attribute tests)
    • Class-based ordering: decreasing order of prevalence or misclassification cost per class
    • Rule-based ordering (decision list): rules are organized into one long priority list, according to some measure of rule quality or by experts
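A minimal sketch of the coverage and accuracy measures above, assuming each tuple is a dict of attribute values with its class label stored under "class"; the Rule class and its field names are illustrative, not part of the original text:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    # Antecedent as {attribute: required value}, consequent as the predicted class.
    conditions: dict
    predicted_class: str

    def covers(self, t):
        # A tuple is covered when every attribute test in the antecedent holds.
        return all(t.get(attr) == val for attr, val in self.conditions.items())

def coverage(rule, data):
    # coverage(R) = ncovers / |D|
    ncovers = sum(rule.covers(t) for t in data)
    return ncovers / len(data)

def accuracy(rule, data):
    # accuracy(R) = ncorrect / ncovers
    covered = [t for t in data if rule.covers(t)]
    if not covered:
        return 0.0
    ncorrect = sum(t["class"] == rule.predicted_class for t in covered)
    return ncorrect / len(covered)

# R: IF age = youth AND student = yes THEN buys_computer = yes
R = Rule({"age": "youth", "student": "yes"}, "yes")
D = [
    {"age": "youth", "student": "yes", "class": "yes"},
    {"age": "youth", "student": "yes", "class": "no"},
    {"age": "senior", "student": "no", "class": "no"},
]
print(coverage(R, D), accuracy(R, D))  # about 0.67 and 0.5
```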


Rule Extraction from the Training Data

  • Sequential covering algorithm: extracts rules directly from the training data (a sketch follows this list)
  • Typical sequential covering algorithms: FOIL, AQ, CN2, RIPPER
  • Rules are learned sequentially; each rule for a given class Ci should cover many tuples of Ci but none (or few) of the tuples of other classes
  • Steps:
    • Rules are learned one at a time
    • Each time a rule is learned, the tuples covered by that rule are removed
    • The process repeats on the remaining tuples until a termination condition holds, e.g., there are no more training examples or the quality of the rule returned falls below a user-specified threshold
  • Contrast with decision-tree induction, which learns a set of rules simultaneously
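A minimal sketch of the sequential-covering loop described above, reusing the hypothetical Rule, coverage, and accuracy helpers from the earlier sketch; learn_one_rule is assumed to return the best single rule for the given class (its greedy construction is outlined on the next slide), and the quality threshold is illustrative:

```python
def sequential_covering(data, class_label, learn_one_rule, min_accuracy=0.6):
    """Learn a rule set for one class, removing covered tuples after each rule."""
    rules = []
    remaining = list(data)
    while remaining:
        rule = learn_one_rule(remaining, class_label)
        # Terminate when no rule is found or its quality falls below the threshold.
        if rule is None or accuracy(rule, remaining) < min_accuracy:
            break
        rules.append(rule)
        # Remove the tuples covered by the newly learned rule.
        remaining = [t for t in remaining if not rule.covers(t)]
    return rules
```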


How to Learn-One-Rule?

  • Start with the most general rule possible: condition = empty
  • Add new attribute tests by adopting a greedy depth-first strategy
    • Pick the test that most improves the rule quality
  • Rule-quality measures: consider both coverage and accuracy (a sketch of both measures follows this list)
    • FOIL-gain (in FOIL & RIPPER): assesses the information gained by extending the condition
      FOIL_Gain = pos' × (log2(pos' / (pos' + neg')) - log2(pos / (pos + neg)))
      It favors rules that have high accuracy and cover many positive tuples
  • Rule pruning based on an independent set of test tuples
      FOIL_Prune(R) = (pos - neg) / (pos + neg)
      pos/neg are the numbers of positive/negative tuples covered by R. If FOIL_Prune is higher for the pruned version of R, prune R
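A minimal sketch of the two rule-quality measures just given, assuming pos/neg are counted over the tuples a rule covers; the count_pos_neg helper and its use of the earlier hypothetical Rule class are illustrative:

```python
import math

def count_pos_neg(rule, data):
    # Covered tuples of the rule's predicted class vs. all other classes.
    covered = [t for t in data if rule.covers(t)]
    pos = sum(t["class"] == rule.predicted_class for t in covered)
    return pos, len(covered) - pos

def foil_gain(pos, neg, pos_new, neg_new):
    # FOIL_Gain = pos' × (log2(pos'/(pos'+neg')) - log2(pos/(pos+neg)))
    # pos/neg: counts before extending the condition; pos'/neg': counts after.
    if pos == 0 or pos_new == 0:
        return 0.0  # no positive coverage before or after: treat as no gain
    return pos_new * (math.log2(pos_new / (pos_new + neg_new))
                      - math.log2(pos / (pos + neg)))

def foil_prune(pos, neg):
    # FOIL_Prune(R) = (pos - neg) / (pos + neg);
    # if this is higher for the pruned version of R, prune R.
    return (pos - neg) / (pos + neg)
```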