In my previous post, I introduced a class of algorithms for solving classification problems.  I also mentioned that Naive Bayes is based on Bayes’ theorem.  In this post, I will derive Naive Bayes using Bayes’ theorem.

Deriving

According to Bayes’ theorem, we can find the probability of some event $A$ given some feature $B$ as follows:

$$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}$$

where the denominator can be expanded using the law of total probability: $P(B) = P(B|A)P(A) + P(B|\neg A)P(\neg A)$.

Now, what if you’re dealing with multiple features?  Let’s start with the probability $P(A|BC)$.  Here, we have an event that depends on the features $B$ and $C$.  How would we break this down?
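Before moving to multiple features, here is a quick numeric check of the single-feature formula. The probabilities below are made-up illustrative values, not numbers from the post:

```python
# Hypothetical probabilities for a single event A and a single feature B.
p_a = 0.01              # P(A): prior probability of the event
p_b_given_a = 0.90      # P(B|A): probability of the feature given the event
p_b_given_not_a = 0.05  # P(B|¬A)

# Law of total probability: P(B) = P(B|A)P(A) + P(B|¬A)P(¬A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))  # → 0.1538
```

Even with a 90% likelihood, the posterior stays small because the prior $P(A)$ is small, which is exactly the kind of weighting Bayes’ theorem captures.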

We know that:

$$P(A|BC) = \frac{P(ABC)}{P(BC)}$$

Breaking down $P(ABC)$, we get:

$$P(ABC) = P(B|AC)\,P(AC)$$

We can also break down $P(AC)$ as:

$$P(AC) = P(C|A)\,P(A)$$

So, for $P(ABC)$, we now have:

$$P(ABC) = P(B|AC)\,P(C|A)\,P(A)$$

In fact, what we just demonstrated is the chain (or multiplicative) rule for probability.  The property can be generalized as follows:

$$P\left(\bigcap_{i=1}^{n} A_i\right) = \prod_{i=1}^{n} P\left(A_i \,\middle|\, \bigcap_{j=1}^{i-1} A_j\right)$$

Note that $\bigcap_{i=1}^{n} A_i$ is the intersection of the variables from $1$ to $n$.  Now, we could have instead broken down $P(ABC)$ to:

$$P(ABC) = P(C|AB)\,P(B|A)\,P(A)$$

The two forms are equivalent, but the usefulness of each depends on how the features depend on each other.
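The chain rule can be checked numerically. Below is a small sketch with a hypothetical joint distribution over three binary variables (the numbers are arbitrary, chosen only to sum to 1):

```python
# Hypothetical joint distribution P(A, B, C) over binary variables,
# used only to verify the chain rule decomposition numerically.
joint = {
    (0, 0, 0): 0.10, (0, 0, 1): 0.15,
    (0, 1, 0): 0.05, (0, 1, 1): 0.20,
    (1, 0, 0): 0.05, (1, 0, 1): 0.10,
    (1, 1, 0): 0.20, (1, 1, 1): 0.15,
}

def marg(**fixed):
    """Marginal probability of the fixed assignments, e.g. marg(a=1, c=0)."""
    keys = ("a", "b", "c")
    return sum(p for assign, p in joint.items()
               if all(assign[keys.index(k)] == v for k, v in fixed.items()))

a, b, c = 1, 1, 0
lhs = joint[(a, b, c)]                               # P(ABC)
p_b_given_ac = joint[(a, b, c)] / marg(a=a, c=c)     # P(B|AC)
p_c_given_a = marg(a=a, c=c) / marg(a=a)             # P(C|A)
rhs = p_b_given_ac * p_c_given_a * marg(a=a)         # P(B|AC) P(C|A) P(A)
print(abs(lhs - rhs) < 1e-12)  # → True
```

The same check passes for the alternative factorization $P(C|AB)\,P(B|A)\,P(A)$; both are just the chain rule applied in a different order.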

Plugging back into the formula, we get:

$$P(A|BC) = \frac{P(B|AC)\,P(C|A)\,P(A)}{P(BC)}$$

However, there is a problem with this formula: the features and the event are all dependent on one another, so we would have to estimate every one of these conditional probabilities.  That would make the approach too tedious to use as a model.  So now what?

Recall that Naive Bayes assumes independent features.  When the features are conditionally independent, we can drop variables from the given (conditioning) portion of the probability, since each feature no longer depends on the others:

$$P(B|AC) = P(B|A)$$

We don’t, however, drop $A$, since $B$ and $C$ each depend on $A$.
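To see this drop in action, here is a sketch with hypothetical conditionals where the joint is built as $P(A)\,P(B|A)\,P(C|A)$, so conditional independence holds by construction:

```python
# Hypothetical distributions; the joint is constructed so that B and C
# are conditionally independent given A.
p_a = {1: 0.3, 0: 0.7}           # P(A = a)
p_b_given_a = {1: 0.9, 0: 0.2}   # P(B = 1 | A = a)
p_c_given_a = {1: 0.6, 0: 0.1}   # P(C = 1 | A = a)

def joint(a, b, c):
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_a[a] if c else 1 - p_c_given_a[a]
    return p_a[a] * pb * pc

# P(B=1 | A=1, C=1) computed the long way, from the joint:
num = joint(1, 1, 1)
den = joint(1, 1, 1) + joint(1, 0, 1)
print(abs(num / den - p_b_given_a[1]) < 1e-12)  # → True
```

Conditioning on $C$ changed nothing: $P(B|AC)$ came out equal to $P(B|A)$, which is exactly the simplification Naive Bayes exploits.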

So, we can now rewrite $P(A|BC)$ as:

$$P(A|BC) = \frac{P(B|A)\,P(C|A)\,P(A)}{P(BC)}$$

Now, generalizing from the equation above, we get the following:

$$P(A \mid B_1 \cdots B_n) = \frac{P(A)\,\prod_{i=1}^{n} P(B_i \mid A)}{P(B_1 \cdots B_n)}$$
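The generalized formula is all a Naive Bayes classifier needs. Below is a minimal sketch using it for a toy spam/ham decision; the class names, priors, and likelihoods are all hypothetical. Since the denominator $P(B_1 \cdots B_n)$ is the same for every class, we can compare unnormalized scores:

```python
# Hypothetical priors P(A) and per-feature likelihoods P(Bi|A).
priors = {"spam": 0.4, "ham": 0.6}
likelihoods = {
    "spam": {"free": 0.8, "meeting": 0.1},
    "ham":  {"free": 0.2, "meeting": 0.7},
}

def predict(features):
    """Score each class as P(A) * prod P(Bi|A) and pick the largest."""
    scores = {}
    for cls, prior in priors.items():
        score = prior
        for f in features:
            score *= likelihoods[cls][f]  # independence: multiply P(Bi|A)
        scores[cls] = score
    return max(scores, key=scores.get)

print(predict(["free"]))     # spam: 0.4*0.8=0.32 vs ham: 0.6*0.2=0.12 → "spam"
print(predict(["meeting"]))  # spam: 0.4*0.1=0.04 vs ham: 0.6*0.7=0.42 → "ham"
```

In practice these probabilities would be estimated from training-data counts, and products of many small likelihoods are usually computed as sums of logs to avoid underflow, but the formula being applied is exactly the one derived above.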