Deep learning, built on deep neural networks, is a hot topic today, and unsurprisingly so: much of the AI technology around us is driven by deep learning. Over and over, AI and deep learning are thrown around interchangeably as if they were the same thing, which causes confusion about what is what. Deep learning is a subfield of machine learning, as shown in the image below. It is inspired by the function of the brain, even though its capabilities are closer to those of a four-year-old child than an adult.
Many of the tech giants use deep learning in their products, for tasks such as driving autonomous cars, powering chatbots, translating text between languages or detecting cancer in an MRI scan, to name a few. These are typical examples of where deep learning shines.
When detecting an object, for example a face in a photo, a deep neural network learns many different types of patterns. Some parts of the network might detect small edges in the photo, e.g. something around the eye, while other parts might detect the whole eye. When all those patterns are combined, the algorithm can detect a face with very high accuracy.
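The "small edges" idea can be illustrated with a hand-written convolution. The sketch below is an assumption for illustration only (it is not the network described above): it slides a hand-crafted vertical-edge kernel over a tiny image, which is roughly the kind of operation the early layers of a convolutional network learn to perform on their own.

```python
import numpy as np

# A tiny 5x5 grayscale "image": dark left half, bright right half.
image = np.array([
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
], dtype=float)

# A hand-crafted vertical-edge kernel; a deep network learns
# filters like this automatically instead of being given them.
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

def convolve2d(img, k):
    """Valid 2D convolution (no padding), as in a conv layer."""
    kh, kw = k.shape
    out_h = img.shape[0] - kh + 1
    out_w = img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

response = convolve2d(image, kernel)
# The response is zero in the flat region and large in magnitude
# near the dark/bright boundary: the filter "fires" on the edge.
print(response)
```

In a real network, many such filters are learned per layer, and deeper layers combine their responses into larger patterns such as a whole eye.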
When translating text, a special type of neural network, the LSTM (long short-term memory), can learn long-term dependencies. When the text is about a female, the LSTM will remember that and use it later in the text, predicting "she" instead of "he". However, in some of the examples mentioned above, the difference in accuracy between deep learning and traditional machine learning is often quite small, around one percentage point.
In other types of problems, such as predicting customer churn, predicting whether a customer will buy another product, or predicting the price of real estate from a set of features, traditional machine learning algorithms will often outperform deep neural networks.
There has been much research comparing deep learning with traditional machine learning algorithms such as XGBoost, Support Vector Machines, Random Forests and Cubist, to name a few. The result is that there is no free lunch: no single algorithm is always the best, and their performance differs from problem to problem. A few algorithms, however, tend to do well across many different problems, and XGBoost in particular is often hard to beat.
Since their performance differs from problem to problem, there is a little trick to get the best from all of them. XGBoost might learn different patterns than a Support Vector Machine or Logistic Regression. Those three algorithms, or any other group of algorithms, can be combined using what we call stacking. By stacking models you take the best from each of them and combine them into a final model, called a super learner, which should be at least as accurate as the best of the single algorithms. That means that if XGBoost achieves an accuracy of 86%, the Support Vector Machine 83% and Logistic Regression 79%, the super learner should achieve at least 86% accuracy, and possibly higher.
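The stacking idea above can be sketched with scikit-learn's StackingClassifier. This is an assumed, minimal setup, not a production recipe: GradientBoostingClassifier again stands in for XGBoost, the dataset is a built-in toy dataset, and the meta-model is a plain logistic regression.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Base learners: each may pick up different patterns in the data.
base_learners = [
    ("boosting", GradientBoostingClassifier(random_state=0)),
    ("svm", make_pipeline(StandardScaler(), SVC())),
    ("logreg", make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=1000))),
]

# The super learner: a meta-model learns how to weigh the base
# learners' out-of-fold predictions into one final prediction.
super_learner = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)

score = cross_val_score(super_learner, X, y, cv=5).mean()
print(f"stacked accuracy: {score:.3f}")
```

The `cv=5` inside StackingClassifier matters: the meta-model is trained on out-of-fold predictions, so it learns to correct the base learners rather than just copy their training-set fit.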
So, deep learning is nothing more than a subfield of machine learning and is powerful and popular for certain types of problems. Traditional machine learning algorithms, especially the tree-based ones, like XGBoost and Random Forest, are often the more powerful choices in other circumstances. At Sumo Analytics we understand the strengths and weaknesses of different algorithms, and therefore we analyse multiple algorithms for every problem we solve because we know there is no free lunch.