By now you've learned a few techniques for classification. You touched upon it when talking about Naive Bayes, and again when you saw some supervised learning techniques such as logistic regression and decision trees. Now it's time for another popular classification technique -- Support Vector Machines.
You will be able to:
- Describe what is meant by margin classifiers
- Describe the mathematical components underlying soft-margin and max-margin classifiers
- Compare and contrast max-margin classifiers and soft-margin classifiers
The idea behind Support Vector Machines (also referred to as SVMs) is that you perform classification by finding the separation line or (in higher dimensions) "hyperplane" that maximizes the distance between two classes. Taking a look at the concept visually helps make sense of the process.
Imagine you have a dataset containing two classes:
In SVM, you want to find a hyperplane or "decision boundary" that divides one class from the other. Which one works best?
This would be a good line:
While this seems intuitive, there are other decision boundaries which also separate the classes. Which one is best? Rather than solely focus on the final accuracy of the model, Support Vector Machines aim to maximize the margin between the decision boundary and the various data points.
The margin is defined as the distance between the separating line (hyperplane) and the training set cases that are closest to this hyperplane. These cases define "support vectors". The support vectors in this particular case are highlighted in the image below. As you can see, the max-margin hyperplane is the midpoint between the two lines defined by the support vectors.
Why would you bother maximizing the margins? Don't these other hyperplanes discriminate just as well? Remember that you are fitting the hyperplane on your training data. Imagine you start looking at your test data, which will slightly differ from your training data.
Assuming your test set is big enough and randomly drawn from your entire dataset, you might end up with a test case as shown in the image below. This test case deviates a little from the training set cases observed earlier. While the max-margin classifier would classify this test case correctly, the hyperplane closer to the right would have classified it incorrectly. Of course, this is just one example and other test cases will end up in different spots. Nonetheless, the purpose of choosing the max-margin classifier is to minimize the generalization error when applying the model to future unseen data points.
Before diving into the underlying mathematics, take a look at the image again:
Now you can start exploring the mathematics behind the image. First, define some numeric labels for the two classes. Set the circles to be -1 and the diamonds to be 1. Normally, 0 and 1 are used for class labels but in this particular case using -1 and 1 simplifies the mathematics.
Now some terminology: The lines defined by the support vectors are the negative (to the left) and the positive (to the right) hyperplanes, respectively. These hyperplanes are defined by two terms:
- The bias $b$ (also called the intercept)
- The weight vector $w$
The equation describing the positive hyperplane is: $$ b + w^Tx_{pos} = 1$$
and the equation describing the negative hyperplane is: $$ b + w^Tx_{neg} = -1$$
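To make these equations concrete, here is a minimal sketch (the weights, bias, and points below are made up for illustration and are not taken from the figures) that evaluates $b + w^Tx$ for a few points and reports which side of the hyperplanes each one falls on:

```python
import numpy as np

# Hypothetical parameters for a 2D example (not from the figures above)
w = np.array([1.0, 1.0])   # weight vector w
b = -3.0                   # bias / intercept b

# Hypothetical sample points: first two "circles" (-1), last two "diamonds" (+1)
X = np.array([[0.5, 1.0],
              [1.0, 0.5],
              [2.5, 2.0],
              [3.0, 2.5]])

scores = b + X @ w   # b + w^T x for every point

for x, score in zip(X, scores):
    if score >= 1:
        side = "on or beyond the positive hyperplane (class  1)"
    elif score <= -1:
        side = "on or beyond the negative hyperplane (class -1)"
    else:
        side = "inside the margin"
    print(f"x = {x}, b + w^T x = {score:.2f} -> {side}")
```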
Remember, your goal is to maximize the separation between the two hyperplanes. To do this, first subtract the negative hyperplane's equation from the positive hyperplane's equation: $$ w^T(x_{pos} - x_{neg}) = 2$$
Next, normalize this expression by the length of the vector $w$, which is defined as: $$ \lVert w \rVert = \sqrt{\sum_{j=1}^{m} w_j^2}$$
Dividing the former expression by $\lVert w \rVert$ gives: $$ \frac{w^T(x_{pos} - x_{neg})}{\lVert w \rVert} = \frac{2}{\lVert w \rVert}$$
The left-hand side of this equation is the distance between the positive and the negative hyperplane: exactly the margin you want to maximize.
The objective of the SVM is then to maximize $\frac{2}{\lVert w \rVert}$ under the constraint that all samples are classified correctly:
$ b + w^Tx^{(i)} \geq 1$ if $y^{(i)} = 1$
$ b + w^Tx^{(i)} \leq -1$ if $y^{(i)} = -1$
For $i = 1, \ldots, N$
These equations basically say that all negative samples should fall on the left side of the negative hyperplane, whereas all the positive samples should fall on the right of the positive hyperplane. This can also be written in one line as follows:
$$ y^{(i)}\left(b + w^Tx^{(i)}\right) \geq 1 \quad \text{for } i = 1, \ldots, N$$
Note that maximizing $\frac{2}{\lVert w \rVert}$ is equivalent to minimizing $\frac{1}{2}\lVert w \rVert^2$, which makes it possible to solve the problem with quadratic programming.
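You will solve this optimization yourself in the upcoming lab, but as a hedged illustration of the quantities involved, scikit-learn's `SVC` with a linear kernel and a very large `C` behaves approximately like a max-margin classifier on linearly separable data. The fitted `coef_` and `intercept_` correspond to $w$ and $b$, so the margin width can be read off as $2 / \lVert w \rVert$ (the toy data below is made up):

```python
import numpy as np
from sklearn.svm import SVC

# Small, linearly separable toy dataset (made up for illustration)
X = np.array([[1.0, 1.0], [1.5, 0.5], [0.5, 1.5],   # class -1
              [3.0, 3.0], [3.5, 2.5], [2.5, 3.5]])  # class  1
y = np.array([-1, -1, -1, 1, 1, 1])

# A very large C approximates the hard-margin (max-margin) classifier
clf = SVC(kernel="linear", C=1e10)
clf.fit(X, y)

w = clf.coef_[0]          # weight vector w
b = clf.intercept_[0]     # bias b
margin = 2 / np.linalg.norm(w)

print("w =", w, " b =", b)
print("margin width 2/||w|| =", margin)
print("support vectors:\n", clf.support_vectors_)
```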
Introducing slack variables $\xi^{(i)}$ leads to the so-called soft-margin classifier. The motivation is that the linear constraints above need to be relaxed for data that are not linearly separable, so that the optimization still converges when some cases are misclassified. The constraints become:
$ b + w^Tx^{(i)} \geq 1-\xi^{(i)}$ if $y^{(i)} = 1$
$ b + w^Tx^{(i)} \leq -1+\xi^{(i)}$ if $y^{(i)} = -1$
For $i = 1, \ldots, N$
The objective function (AKA the function you want to minimize) is: $$ \frac{1}{2}\lVert w \rVert^2 + C\left(\sum_i \xi^{(i)}\right)$$
You're basically adding these slack variables to your objective function, making clear that you want to minimize the amount of slack you allow for. You can tune this with the hyperparameter $C$:
- A big value for $C$ will lead to the picture on the left: misclassifications are heavily punished, so the optimization prioritizes classifying correctly over having a big margin.
- A small value for $C$ will lead to the picture on the right: it is OK to have some misclassifications, in order to gain a bigger margin overall. (This can help avoid overfitting to the training data.)
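To see this trade-off numerically (again using scikit-learn rather than the from-scratch approach of the next lab, and with made-up overlapping data), you can fit the same dataset with a small and a large value of $C$ and compare the resulting margin widths and training accuracy:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two slightly overlapping blobs (made up for illustration)
X = np.vstack([rng.normal(loc=[1, 1], scale=0.8, size=(50, 2)),
               rng.normal(loc=[3, 3], scale=0.8, size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

for C in [0.01, 100]:
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    margin = 2 / np.linalg.norm(w)
    acc = clf.score(X, y)
    print(f"C={C:>6}: margin width = {margin:.2f}, "
          f"training accuracy = {acc:.2f}, "
          f"#support vectors = {len(clf.support_vectors_)}")
```

With the small $C$ you should generally see a wider margin (and possibly a few more training misclassifications), while the large $C$ shrinks the margin as the optimization chases individual training points.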
Great! You now understand both max-margin classifiers as well as soft-margin classifiers. In the next lab, you'll try to code these fairly straightforward linear classifiers from scratch!