In my previous post, I mentioned that there are many different algorithms for clustering a dataset. One of the most popular is the k-means clustering algorithm.
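As a rough preview of the idea (a hand-rolled sketch, not this post's code), k-means repeats two steps: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. The toy 2-D points below are made up for illustration.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: alternate nearest-centroid assignment
    and centroid recomputation for a fixed number of iterations."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the closest centroid (squared Euclidean distance)
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        # move each centroid to the mean of its cluster
        # (keep the old centroid if the cluster ended up empty)
        centroids = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

points = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (8.2, 7.9)]
centroids, clusters = kmeans(points, k=2)
```

On this toy data the two centroids settle at the means of the two obvious groups, (1.1, 0.9) and (8.1, 7.95).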
by Joseph Woolf
If you were to go online and start shopping, chances are you'd be flooded with suggestions from online sites. These suggestions aren't random, though; they're based on what you recently browsed and purchased. How did the sites determine what to recommend and what to ignore?
The system described above is called a recommendation system. One common way to implement it is through a technique called clustering, which is itself part of the broader field of cluster analysis.
In my post on Naive Bayes, I mentioned that there are multiple variants suited to different problems. In this post, I'll introduce another variant of Naive Bayes, one that uses the Bernoulli distribution.
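To make the Bernoulli idea concrete, here is a minimal hand-rolled sketch (the spam/ham toy data and names are my own, not from the post): each class gets a prior plus, per binary feature, a Laplace-smoothed probability of that feature being present, and a test case is scored with log prior plus Bernoulli log-likelihoods.

```python
import math

def fit_bernoulli_nb(X, y, alpha=1.0):
    """Per class: prior and smoothed P(feature = 1 | class)."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        # Laplace smoothing so no probability is exactly 0 or 1
        probs = [(sum(col) + alpha) / (len(rows) + 2 * alpha)
                 for col in zip(*rows)]
        model[c] = (prior, probs)
    return model

def predict(model, x):
    """Pick the class with the highest log posterior score."""
    def score(c):
        prior, probs = model[c]
        s = math.log(prior)
        for xi, p in zip(x, probs):
            # Bernoulli likelihood: p if the feature is 1, (1 - p) if 0
            s += math.log(p) if xi else math.log(1 - p)
        return s
    return max(model, key=score)

# toy spam filter: each feature = a particular word present (1) / absent (0)
X = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]]
y = ["spam", "spam", "ham", "ham"]
model = fit_bernoulli_nb(X, y)
```

Note that, unlike variants built on counts or continuous values, the absent features (the zeros) contribute to the score too.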
So far, I have mainly discussed classification algorithms that use probabilities to make decisions. However, some algorithms don't require computing probabilities at all. One such algorithm is the support vector machine.
So far, I've talked about regression and classification algorithms that can be used to solve problems. Sometimes, though, we just want to discover associations within our data. These associations can, in turn, be used by a business to optimize profits.
One of the fundamental algorithms for solving these kinds of problems is the Apriori algorithm.
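As a rough sketch of the idea (not the post's implementation), Apriori grows frequent itemsets level by level, discarding any candidate whose support falls below a threshold. The shopping baskets below are made up for illustration.

```python
def apriori(transactions, min_support):
    """Minimal Apriori sketch: frequent 1-itemsets first, then join
    frequent (k-1)-itemsets into k-item candidates and keep those
    meeting min_support. The textbook version also prunes candidates
    with any infrequent subset; this sketch just recounts support."""
    n = len(transactions)
    frequent = {}
    # level 1: frequent single items
    items = {item for t in transactions for item in t}
    current = []
    for item in sorted(items):
        support = sum(1 for t in transactions if item in t) / n
        if support >= min_support:
            frequent[frozenset([item])] = support
            current.append(frozenset([item]))
    k = 2
    while current:
        # candidate generation: unions of frequent (k-1)-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        current = []
        for cand in candidates:
            support = sum(1 for t in transactions if cand <= set(t)) / n
            if support >= min_support:
                frequent[cand] = support
                current.append(cand)
        k += 1
    return frequent

baskets = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
freq = apriori(baskets, min_support=0.5)
```

With a 50% support threshold, {bread, milk} and {bread, butter} survive (each appears in 2 of 4 baskets) while {milk, butter} does not, so no 3-item candidate can be frequent either.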
In my decision tree post, I mentioned several different algorithms that can be used to create a decision tree. Today, I'll be talking about one of them: the Iterative Dichotomiser 3 (ID3) algorithm.
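The heart of ID3 is choosing, at each node, the feature with the highest information gain, i.e. the biggest drop in entropy after the split. A minimal sketch of that computation (the toy weather data is my own, not from the post):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Entropy reduction from splitting rows on one categorical feature."""
    total = entropy(labels)
    n = len(rows)
    remainder = 0.0
    for value in {r[feature] for r in rows}:
        # entropy of each branch, weighted by its share of the rows
        subset = [lab for r, lab in zip(rows, labels) if r[feature] == value]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

# toy data: the outlook feature perfectly predicts the label,
# so splitting on it removes all uncertainty (gain = 1 bit)
rows = [{"outlook": "sunny"}, {"outlook": "sunny"},
        {"outlook": "rain"}, {"outlook": "rain"}]
labels = ["yes", "yes", "no", "no"]
gain = information_gain(rows, labels, "outlook")
```

ID3 computes this gain for every candidate feature, splits on the winner, and recurses on each branch.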
Recall from my Naive Bayes post that there are several variants. The variant I'll be talking about today is Gaussian Naive Bayes.
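As a quick sketch of the Gaussian variant (my own toy example, not code from the post): estimate a per-class, per-feature mean and variance from the training data, then score a test case with the log prior plus the Gaussian log-density of each feature.

```python
import math

def fit(X, y):
    """Per class: prior, plus (mean, variance) for each feature."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        stats = []
        for feature in zip(*rows):
            mean = sum(feature) / len(feature)
            var = sum((v - mean) ** 2 for v in feature) / len(feature)
            stats.append((mean, var + 1e-9))  # tiny floor avoids zero variance
        model[c] = (prior, stats)
    return model

def predict(model, x):
    """Pick the class maximizing log prior + sum of Gaussian log-densities."""
    def log_posterior(c):
        prior, stats = model[c]
        ll = math.log(prior)
        for v, (mean, var) in zip(x, stats):
            ll += -0.5 * math.log(2 * math.pi * var) - (v - mean) ** 2 / (2 * var)
        return ll
    return max(model, key=log_posterior)

# toy 1-feature data: two well-separated classes
X = [[1.0], [1.2], [0.9], [5.0], [5.2], [4.8]]
y = ["low", "low", "low", "high", "high", "high"]
model = fit(X, y)
```

Working in log space keeps the products of small densities from underflowing to zero.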
In my previous algorithm post, I talked about a family of algorithms called Naive Bayes. These algorithms use Bayes' theorem, an independence assumption, and probabilities to decide which category a test case belongs to. However, they don't take the relationships between features into account. Additionally, it would be nice to visualize how the model actually makes its decisions. Fortunately, decision trees let us visualize how each feature contributes to a classification.
So far, the algorithms I've covered model the data in a linear manner. While these algorithms can be effective for simple problems, they aren't well suited to problems where there is a non-linear relationship between the features and the output. Such problems include voice, text, and image recognition, anomaly detection, game-playing bots, and any problem where there is no straightforward relationship between the features and the output.
Some non-linear algorithm classes that can solve these kinds of problems include neural networks, decision trees, and clustering. These classes often have variants that suit different purposes. In this post, I'll be talking about a classification algorithm called Naive Bayes.
As I previously mentioned in my Linear Regression post, linear regression works best on regression-type problems. It wouldn't work well on a classification problem, since it only returns numeric values on a continuous interval. You could map ranges of that interval to classes, but using the algorithm in this manner isn't good practice.
So if you can't use linear regression for classification, what kind of algorithm can you use to classify test cases? While there are many different classification algorithms, one of the most basic is logistic regression.
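As a rough preview (a hand-rolled sketch, not this post's code), logistic regression squashes a linear score through the sigmoid function to get a probability between 0 and 1, and fits the weights by gradient descent on the log-loss. The study-hours data below is made up for illustration.

```python
import math

def sigmoid(z):
    """Map any real score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Batch gradient descent on the log-loss for one feature plus a bias."""
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, t in zip(X, y):
            p = sigmoid(w * x + b)
            # gradient of the log-loss has the simple form (p - t)
            grad_w += (p - t) * x
            grad_b += (p - t)
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# hours studied -> pass (1) / fail (0)
X = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
```

Unlike linear regression's unbounded output, `sigmoid(w * x + b)` is always a probability, so thresholding it at 0.5 gives a principled class decision.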