### Bayesian Decision Boundaries

#### by landonlehman

I started reading *The Elements of Statistical Learning* by Hastie, Tibshirani, and Friedman and was curious about how to reproduce Figure 2.5. (The book is available as a free and legal PDF on the authors' website.)

So I figured out how to produce similar figures using Mathematica. I assume this is also fairly straightforward to do in R, but I don’t yet know enough R for this.

The authors explain the sampling method on pages 16 and 17. First, draw 10 mean vectors from a bivariate Gaussian centered on the unit *x*-vector (for one class), then draw another 10 from a Gaussian centered on the unit *y*-vector (for the other class).

Here is the Mathematica code to do this:
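The original Mathematica snippets don't appear in this copy of the post, so, as an illustration only, here is a Python/NumPy sketch of this step (the identity covariance for the mean-generating Gaussians follows the book's description; the variable names are my own):

```python
import numpy as np

rng = np.random.default_rng()

# 10 mean vectors per class, drawn from bivariate Gaussians
# centered at (1, 0) and (0, 1), each with identity covariance,
# as described on pages 16-17 of the book.
blue_means = rng.multivariate_normal([1, 0], np.eye(2), size=10)
orange_means = rng.multivariate_normal([0, 1], np.eye(2), size=10)
```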

Now, for each class (blue and orange), pick 100 points by first picking one of the 10 vectors in that class at random, then using that vector as the center of another Gaussian from which a point is drawn. I call the two resulting sets of 100 points set 1 and set 2:
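The Mathematica code for this step is likewise not shown here; a hedged Python/NumPy sketch of the same mixture sampling (I assume the book's covariance of $I/5$ for the second draw; `sample_class` is my own helper, not from the post):

```python
import numpy as np

rng = np.random.default_rng()

def sample_class(means, n=100, cov_scale=1 / 5):
    """Draw n points: for each one, pick one of the class means
    uniformly at random, then sample from a Gaussian centered there."""
    picks = rng.integers(len(means), size=n)
    cov = cov_scale * np.eye(2)
    return np.array([rng.multivariate_normal(means[k], cov) for k in picks])

# 10 class means per class, generated as in the previous step
blue_means = rng.multivariate_normal([1, 0], np.eye(2), size=10)
orange_means = rng.multivariate_normal([0, 1], np.eye(2), size=10)

set1 = sample_class(blue_means)    # 100 "blue" points
set2 = sample_class(orange_means)  # 100 "orange" points
```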

We can now plot the resulting 200 points:
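The plotting code in the post is Mathematica; for illustration, an equivalent scatter plot in Python with Matplotlib might look like this (the data generation repeats the earlier steps, with assumed covariances of $I$ for the means and $I/5$ for the clouds):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen so the sketch runs headless
import matplotlib.pyplot as plt

rng = np.random.default_rng()

def sample_class(center, n=100):
    # 10 means around the given center, then n points from
    # Gaussians (covariance I/5) centered on randomly chosen means
    means = rng.multivariate_normal(center, np.eye(2), size=10)
    picks = rng.integers(10, size=n)
    return np.array([rng.multivariate_normal(means[k], np.eye(2) / 5) for k in picks])

set1 = sample_class([1, 0])
set2 = sample_class([0, 1])

fig, ax = plt.subplots()
ax.scatter(set1[:, 0], set1[:, 1], facecolors="none", edgecolors="tab:blue")
ax.scatter(set2[:, 0], set2[:, 1], facecolors="none", edgecolors="tab:orange")
fig.savefig("scatter.png")
```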

Result:

Now for the Bayesian decision boundary: just set the probability that a point comes from set 1 equal to the probability that a point comes from set 2. This sets a sum over 10 Gaussians equal to another sum over 10 Gaussians, so it takes some time to evaluate a grid of points for a contour plot. You can choose a lower number of `PlotPoints` to make this faster, but this makes the curve more jagged.
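In symbols (my notation, not the book's): writing the 10 means of one class as $m_k^{(1)}$, those of the other as $m_k^{(2)}$, and the common covariance as $\sigma^2 I$ with $\sigma^2 = 1/5$, equal priors make the normalization constants cancel, and the boundary is the set of points $x$ where

$$\sum_{k=1}^{10} \exp\!\left(-\frac{\lVert x - m_k^{(1)}\rVert^2}{2\sigma^2}\right) = \sum_{k=1}^{10} \exp\!\left(-\frac{\lVert x - m_k^{(2)}\rVert^2}{2\sigma^2}\right).$$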

Here is the code, followed by the result of overlaying both plots:
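The post's contour code is Mathematica; as a rough Python sketch of the same idea, one can evaluate each class's mixture density on a grid and trace the zero contour of their difference (the grid bounds, resolution, and helper names below are my own choices):

```python
import numpy as np

rng = np.random.default_rng()

# 10 class means per class, generated as before
blue_means = rng.multivariate_normal([1, 0], np.eye(2), size=10)
orange_means = rng.multivariate_normal([0, 1], np.eye(2), size=10)

def mixture_density(points, means, cov_scale=1 / 5):
    """Density of an equal-weight mixture of isotropic Gaussians
    (covariance cov_scale * I) centered on the given means."""
    d2 = ((points[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * cov_scale)).mean(axis=1) / (2 * np.pi * cov_scale)

# evaluate both densities on a grid; the Bayes boundary is the
# zero level set of their difference
xs, ys = np.meshgrid(np.linspace(-3, 4, 200), np.linspace(-3, 4, 200))
grid = np.column_stack([xs.ravel(), ys.ravel()])
diff = (mixture_density(grid, blue_means)
        - mixture_density(grid, orange_means)).reshape(xs.shape)
# plt.contour(xs, ys, diff, levels=[0]) would then draw the boundary
```

A coarser grid plays the same role as a lower `PlotPoints` setting in Mathematica: faster, but a more jagged curve.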

OK, so the cluster around *y* = 1.5 is not filled out, but we only sampled 200 points in total, so that's all right.

Now, this doesn’t look exactly like Figure 2.5 in Elements of Statistical Learning, but of course the specific form of the plot depends heavily on the initial 20 random vectors. Here are some sample results after rerunning the code:

I put the code on my GitHub in the form of a loop that generates and saves 10 of the above plots.