
Cassandra: How to Build a Naive Bayes Classifier of Users Based on Behavior

11.18.2013

In our last post, we found out how simple it is to use Cassandra to estimate ad conversion. It’s easy because, effectively, all you have to do is accumulate counts, and Cassandra is quite good at counting. As we demonstrated in that post, Cassandra can be used as a giant, distributed, redundant, “infinitely” scalable counting framework. In this post we’ll take the online ad company example just a bit further by creating a Cassandra-backed Naive Bayes Classifier. Again, we’ll see that the “secret sauce” is simply keeping track of the appropriate counts.
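
As a quick reminder of that pattern, here is a minimal sketch (mine, not code from the original post) of the kind of counter accumulation involved, using the DataStax Python driver. The keyspace and table names are hypothetical stand-ins.

from cassandra.cluster import Cluster

# Hypothetical counter table (in a counter table, every non-key
# column must be a counter):
#   CREATE TABLE ad_counts (
#       ad_id text PRIMARY KEY,
#       views counter, clicks counter);
session = Cluster(['127.0.0.1']).connect('ads')

def record_view(ad_id):
    # Counters are incremented in place; no read-before-write needed.
    session.execute(
        "UPDATE ad_counts SET views = views + 1 WHERE ad_id = %s", (ad_id,))

def record_click(ad_id):
    session.execute(
        "UPDATE ad_counts SET clicks = clicks + 1 WHERE ad_id = %s", (ad_id,))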

In the previous post, we helped equip your online ad company with the ability to track ad conversion rates. But competition is stiff, and we’ll need to do a little better than ad conversion rates if your company is to stay on top. Recently, suspicions have arisen that ads are often being shown to unlikely customers. A quick look at the logs confirms this concern. For instance, there was a case of one internet user who clicked almost every single ad he was shown, so long as it related to camping gear. Several times, he went on to make purchases: a tent, a lantern, and a sleeping bag. But despite this user’s obvious interest in outdoor sporting goods, your logs indicated that fully 90% of the ads he was shown were for women’s apparel. Of these ads, he clicked none.

Let’s attack this problem by creating a classifier. Fortunately for us, your company specializes in two main genres: fashion and outdoor sporting goods. If we can determine which type of user we’re dealing with, then we can improve our conversion rates considerably by simply showing each user the appropriate ads.

Naive Bayes Classifiers

With this goal in mind, let’s look at the theory behind Naive Bayes Classifiers so that we can build our own. The purpose of a classifier is to identify which group a sample belongs to based upon the given evidence. In this case, our “sample” is an individual user, and based upon the evidence of which ads she clicks, we wish to identify which group she belongs to: fashion or outdoors. To put some math to the problem, consider the following question:

What is the probability that a user is from group G, given that this user has clicked on ads A, B, and C?

To put this into equation form, we can write:

P(G|A,B,C)

This function returns a probability, a number from 0 to 1, representing how likely it is that this user is from a particular group given that they have clicked on these ads. The goal, then, is to evaluate this expression for each group and find which group yields the larger result. But how do you evaluate it? Fortunately for us, Thomas Bayes, an 18th-century clergyman, provided an answer in the form of Bayes’s equation:

P(G|A,B,C,\cdots) = \frac{P(G)\times P(A,B,C,\cdots|G)}{P(A,B,C,\cdots)}

Here we’ve turned one probability into a function of three separate probabilities:

  • P(G) – the probability, in the absence of any evidence, that a user is from a particular group – this is called the prior
  • P(A,B,C,\cdots|G) – the probability that a user from group G will have clicked ads A, B, C, etc.
  • P(A,B,C,\cdots) – the probability that a user from any group will click ads A, B, C, etc.
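
To make these pieces concrete, here is a small worked example with made-up numbers (mine, not the article’s): if P(\textrm{outdoors}) = 0.5, P(A,B,C|\textrm{outdoors}) = 0.008, and P(A,B,C) = 0.01, then

P(\textrm{outdoors}|A,B,C) = \frac{0.5\times 0.008}{0.01} = 0.4

meaning there is a 40% chance this user belongs to the outdoors group given those three clicks.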

This looks a little confusing, but bear with me a moment and we’ll see how this allows us to solve our classification problem. Let’s first look at the prior, P(G). We happen to know that both the fashion group and the outdoors group are about equally strong, so for simplicity’s sake we assume that P(\textrm{fashion}) = P(\textrm{outdoors}) = 0.5. But remember, we ultimately intend to identify the group which maximizes this equation. Since P(G) = 0.5 in both cases, it does not affect the outcome and can safely be disregarded. Next up, P(A,B,C,\cdots). We could estimate the probability that users click on particular groups of ads, but here again we’re looking for the group that maximizes the above equation, and since P(A,B,C,\cdots) is not a function of the group under consideration, it remains constant across all groups and can also be safely disregarded.
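
Restated in symbols (my summary of the argument, not the article’s), the reduction so far is:

\arg\max_G P(G|A,B,C,\cdots) = \arg\max_G \frac{P(G)\times P(A,B,C,\cdots|G)}{P(A,B,C,\cdots)} = \arg\max_G P(A,B,C,\cdots|G)

The second equality holds because P(G) is the same (0.5) for both groups and the denominator does not depend on G at all.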

The only piece left is P(A,B,C,\cdots|G), the probability that a user from group G will click on ads A, B, C, etc. Since this piece is a function of G, we cannot disregard it, so we must somehow compute it. And our goal, again, is to find the group G which maximizes this probability. But we have a problem: this particular probability, as stated, cannot be computed. It’s intractable. It’s mathematically infeasible to gather enough information to estimate the probability that a member of group G will click on any particular set of ads. So we do what any good applied mathematician does when hitting a wall like this: we make a simplifying assumption. If we assume that ad clicks are completely independent of one another, then we can deal with each of them separately. Thus:

P(A,B,C,\cdots|G) \approx P(A|G)\times P(B|G)\times P(C|G)\times \cdots

Here, each piece, P(A|G), P(B|G), P(C|G), etc., is actually quite simple to estimate. And though this assumption might be a bit naive (this is, after all, the reason this classifier is called the Naive Bayes Classifier), the resulting classifier has empirically been shown to work quite well across a wide range of applications, even in certain cases where the assumption is not only naive but actually quite wrong.
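
Concretely, one natural count-based estimate (my sketch; the article does not commit to an exact estimator at this point) is

P(A|G) \approx \frac{\textrm{clicks on ad }A\textrm{ by users in group }G}{\textrm{total ad clicks by users in group }G}

which is exactly the kind of ratio that falls out of Cassandra counters: one per (group, ad) pair and one per group.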

Finally, we have arrived at something we can deal with. Let’s take a moment to recap: In order to find the most likely group G that a user belongs to based upon their ad clicks A, B, C, etc., we must find which group maximizes the equation:

P(G|A,B,C,\cdots)

But after applying Bayes’s equation and discarding the pieces that don’t depend on the group, we recognize that we can determine the most likely group by finding the group which maximizes this function:

P(A,B,C,\cdots|G)

And finally, after making the simplifying assumption regarding the independence of clicks, we see that determining the most likely group for the user based upon ad clicks is as simple as finding the group which maximizes:

P(A|G)\times P(B|G)\times P(C|G)\times \cdots
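
To tie the derivation together, here is a minimal, self-contained sketch (mine, not the author’s code) of this final classification rule in Python. The counts are hypothetical stand-ins for values you would read back from Cassandra counter tables; summing log-probabilities rather than multiplying raw probabilities avoids floating-point underflow, and add-one smoothing keeps a single unseen ad from zeroing out a group’s score.

import math

# Hypothetical click counts: clicks[group][ad] = number of clicks on `ad`
# by users known to belong to `group` (in practice, read from Cassandra).
clicks = {
    'outdoors': {'tent_ad': 80, 'lantern_ad': 40, 'dress_ad': 5},
    'fashion':  {'tent_ad': 3,  'lantern_ad': 2,  'dress_ad': 90},
}

def p_ad_given_group(ad, group):
    # Estimate P(ad|group) as a share of the group's total clicks,
    # with add-one smoothing so unseen ads never yield probability zero.
    group_clicks = clicks[group]
    total = sum(group_clicks.values())
    return (group_clicks.get(ad, 0) + 1.0) / (total + len(group_clicks) + 1)

def classify(ads_clicked):
    # Return the group G maximizing the sum of log P(ad|G), which is
    # equivalent to maximizing P(A|G) x P(B|G) x P(C|G) x ...
    def score(group):
        return sum(math.log(p_ad_given_group(ad, group))
                   for ad in ads_clicked)
    return max(clicks, key=score)

print(classify(['tent_ad', 'lantern_ad']))  # -> 'outdoors'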
