SVM: Support Vector Machines in Machine Learning
Note to other teachers and users of these slides: Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew's tutorials: http://www.cs.cmu.edu/~awm/tutorials. Comments and corrections gratefully received. Thanks to Andrew Moore (CMU) and Martin Law (Michigan State University).
See Section 5.11 in [2] or the discussion in [3] for details
[1] B. E. Boser et al. A Training Algorithm for Optimal Margin Classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144-152, Pittsburgh, 1992.
[2] L. Bottou et al. Comparison of classifier methods: a case study in handwritten digit recognition. Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 2, pp. 77-82.
[3] V. Vapnik. The Nature of Statistical Learning Theory. 2nd edition, Springer, 1999.
Linear classifiers: f(x, w, b) = sign(w · x + b), where "+" denotes positive examples (class +1) and "-" denotes negative examples (class -1).
How would you classify this data? Any of these separating lines would be fine... but which is best?
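As a minimal sketch of the decision rule above (the weight vector, bias, and test points here are invented for illustration, not taken from the slides):

```python
import numpy as np

def linear_classifier(x, w, b):
    """Return +1 or -1 according to f(x, w, b) = sign(w . x + b)."""
    return 1 if np.dot(w, x) + b >= 0 else -1

# Hypothetical weight vector and bias defining one candidate separating line.
w = np.array([1.0, -1.0])
b = 0.5

print(linear_classifier(np.array([2.0, 0.0]), w, b))  # lands on the +1 side
print(linear_classifier(np.array([0.0, 3.0]), w, b))  # lands on the -1 side
```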
Linear SVM: f(x, w, b) = sign(w · x + b). Support vectors are those data points that the margin pushes up against.
The decision boundary is the hyperplane w · x + b = 0; the plus-plane is w · x + b = +1 and the minus-plane is w · x + b = -1. The margin width is m = 2 / ||w||, so maximizing the margin means minimizing ||w||.
Notation: x – input vector; w – normal vector to the decision boundary; b – offset (scale) value.
[Figure: training points from Class 1 and Class 2 on either side of the margin]
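A hedged sketch of how these quantities can be inspected in practice with scikit-learn (the toy data is invented for illustration): fit a linear SVM, read off w and b, and compute the margin width 2 / ||w||.

```python
import numpy as np
from sklearn.svm import SVC

# Invented, linearly separable toy data.
X = np.array([[1.0, 1.0], [2.0, 2.5], [3.0, 3.0],
              [6.0, 1.0], [7.0, 2.0], [8.0, 0.5]])
y = np.array([1, 1, 1, -1, -1, -1])

# A large C approximates the hard-margin SVM described above.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]        # normal vector of the separating hyperplane
b = clf.intercept_[0]   # offset value
margin = 2.0 / np.linalg.norm(w)

print("w =", w, "b =", b)
print("margin width =", margin)
print("support vectors:\n", clf.support_vectors_)
```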
The Dual Problem (we ignore the derivation)
Properties of the Lagrange multipliers α_i introduced in the dual: α_i ≥ 0 for all i, and differentiating the original Lagrangian w.r.t. b yields the constraint Σ_i α_i y_i = 0.
w is a linear combination of a small number of data points: w = Σ_i α_i y_i x_i, with α_i > 0 only for the support vectors. This "sparse" representation can be viewed as data compression, as in the construction of the kNN classifier.
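A quick numerical check of these two facts, as a sketch with scikit-learn's fitted multipliers (sklearn's dual_coef_ stores the products α_i y_i for the support vectors; the toy data is invented):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 2.5], [3.0, 3.0],
              [6.0, 1.0], [7.0, 2.0], [8.0, 0.5]])
y = np.array([1, 1, 1, -1, -1, -1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)

coef = clf.dual_coef_[0]                       # alpha_i * y_i for each support vector
print(coef.sum())                              # sum_i alpha_i y_i, approximately 0
w_reconstructed = coef @ clf.support_vectors_  # w = sum_i alpha_i y_i x_i
print(w_reconstructed, clf.coef_[0])           # the two vectors should match
```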
The decision boundary is determined only by the support vectors. Let t_j (j = 1, ..., s) be the indices of the s support vectors. We can write w = Σ_{j=1}^{s} α_{t_j} y_{t_j} x_{t_j}.
For a test point z, compute f(z) = Σ_{j=1}^{s} α_{t_j} y_{t_j} (x_{t_j} · z) + b, and classify z as class 1 if the sum is positive, and class 2 otherwise. Note: w need not be formed explicitly.
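To make the last point concrete, here is a sketch mirroring the formula above (names follow scikit-learn's API; the toy data is invented): the prediction for z uses only dot products with the s support vectors, so w is never formed explicitly.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 2.5], [3.0, 3.0],
              [6.0, 1.0], [7.0, 2.0], [8.0, 0.5]])
y = np.array([1, 1, 1, -1, -1, -1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)

def decision(z, clf):
    """f(z) = sum_j alpha_tj * y_tj * (x_tj . z) + b, over support vectors only.
    sklearn stores the products alpha_tj * y_tj together in dual_coef_."""
    sv = clf.support_vectors_   # the s support vectors x_tj
    coef = clf.dual_coef_[0]    # alpha_tj * y_tj for each of them
    return np.sum(coef * (sv @ z)) + clf.intercept_[0]

z = np.array([4.0, 1.5])
print("f(z) =", decision(z, clf))
print("class:", 1 if decision(z, clf) > 0 else 2)  # class 1 if positive, else class 2
```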
[Figure: a worked example with ten numbered data points (1-10) from Class 1 and Class 2]