Tuesday, October 30, 2012

Motivation for the SVM via the Perceptron

Many algorithms in Machine Learning are related to each other. One algorithm in particular, the Perceptron, is related to many of them. You can show how it evolves into numerous different algorithms, and it's also very closely related to the Support Vector Machine.

What makes SVMs really useful for low dimensional problems is that they can solve the problem in a higher dimensional space without explicitly forming that space, which even allows them to work in infinite dimensional spaces! People often find this confusing, and if they aren't on the same page about what that means mathematically, they don't see what it gains us. So I've taken a simple 2D data set and run the Perceptron algorithm on it 8 times. Starting with a polynomial of degree 1, I explicitly formed the higher dimensional space and computed the results in that space. You can watch as the Perceptron slowly gains the ability to model more and more complex boundaries, until it reaches a point where it's able to get most of the data right. Granted, the decision boundary isn't great - that's why we don't usually use the Perceptron - but hopefully it provides a bit of insight. The One-vs-All method was used since this is a multi-class problem.
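Here is a minimal sketch of that experiment. The exact data set and library aren't given above, so this assumes scikit-learn and a synthetic 3-class 2D data set as stand-ins; the idea is just to explicitly form the polynomial feature space and train a One-vs-All Perceptron in it for each degree.

```python
from sklearn.datasets import make_blobs
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Perceptron
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in for the simple 2D multi-class data set
X, y = make_blobs(n_samples=300, centers=3, cluster_std=1.5, random_state=0)

for degree in range(1, 9):
    # Explicitly form the higher dimensional polynomial feature space
    phi = PolynomialFeatures(degree=degree)
    X_poly = phi.fit_transform(X)

    # Train a One-vs-All Perceptron in the expanded space
    clf = OneVsRestClassifier(Perceptron())
    clf.fit(X_poly, y)

    # Training accuracy gives a rough sense of how flexible each degree is
    print(degree, X_poly.shape[1], clf.score(X_poly, y))
```

Note how quickly the number of features grows with the degree; that blow-up is exactly what forming the space explicitly costs you.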

Degree 1 Polynomial

Degree 2 Polynomial

Degree 3 Polynomial

Degree 4 Polynomial

Degree 5 Polynomial

Degree 6 Polynomial

Degree 7 Polynomial


Degree 8 Polynomial

So why is the SVM so great if the Perceptron can do this as well? Part of the answer is, again, that the SVM can do this without explicitly forming the space like I did for the Perceptron. However, the Perceptron can be modified to work the same way. The real advantage is that the SVM finds a boundary that stays halfway between the classes - the maximum-margin boundary. You can see the Perceptron having this problem in the figures above, where its boundary often overlaps onto another class's space.
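For completeness, here is a minimal sketch of that modification: a kernelized (binary) Perceptron that works in the same expanded space through a polynomial kernel, without ever building the features. The kernel, degree, and epoch count here are illustrative choices, not anything taken from the experiment above.

```python
import numpy as np

def poly_kernel(a, b, degree=8, c=1.0):
    # (a . b + c)^degree equals the inner product of an explicit polynomial
    # feature map, so the expanded space never has to be formed
    return (np.dot(a, b) + c) ** degree

def kernel_perceptron(X, y, epochs=10, degree=8):
    # X: (n, d) NumPy array, y: NumPy array of labels in {-1, +1}
    n = len(X)
    alpha = np.zeros(n)  # mistake counts, one per training point
    K = np.array([[poly_kernel(X[i], X[j], degree) for j in range(n)]
                  for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            # decision value computed entirely through the kernel matrix
            f = np.sum(alpha * y * K[:, i])
            if y[i] * f <= 0:  # mistake: remember this point
                alpha[i] += 1
    return alpha
```

The SVM's dual form looks very similar, but instead of just counting mistakes it solves for the coefficients that maximize the margin, which is where the better-placed boundary comes from.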
