The evolution of machine learning


Machine learning engineering happens in three stages: data processing, model building, and deployment and monitoring. In the middle sits the meat of the pipeline, the model, which is the machine learning algorithm that learns to make predictions from input data.

That model is where “deep learning” would live. Deep learning is a subcategory of machine learning algorithms that use multi-layered neural networks to learn complex relationships between inputs and outputs. The more layers in the neural network, the more complexity it can capture.
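
To make "multi-layered" concrete, here is a minimal sketch of a small deep network in Keras (the API that ships with Google's TensorFlow, mentioned later in this piece). The layer sizes, input width and binary output are illustrative assumptions, not details from the original pipeline.

```python
# A minimal sketch, assuming a 20-feature binary classification task.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20,)),                     # 20 input features (hypothetical)
    keras.layers.Dense(64, activation="relu"),    # hidden layer 1
    keras.layers.Dense(64, activation="relu"),    # hidden layer 2: each added layer lets the
                                                  # network capture more complex relationships
    keras.layers.Dense(1, activation="sigmoid"),  # output: probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```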

Traditional statistical machine learning algorithms (i.e., ones that do not use deep neural nets) have a more limited capacity to capture information about training data. But these simpler algorithms work well enough for many applications, making the additional complexity of deep learning models often superfluous. So software engineers still use these traditional models extensively in machine learning engineering, even in the midst of the deep learning craze.

But the bread of the sandwich, the part that holds everything together, is what happens before and after training the machine learning model.

The first stage involves cleaning and formatting vast amounts of data to be fed into the model. The last stage involves careful deployment and monitoring of the model. We found that most of the engineering time in AI is not actually spent on building machine learning models; it is spent preparing and monitoring those models.
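
As a small, entirely hypothetical sketch of that first stage, here is what cleaning and reshaping a raw data dump with pandas might look like before it reaches a model. The file names and column names are invented for illustration.

```python
# A hypothetical sketch of the first stage: cleaning and formatting raw
# data before it is fed to a model. File names and columns are invented.
import pandas as pd

df = pd.read_csv("raw_events.csv")                # raw, messy export
df = df.drop_duplicates()                         # drop repeated records
df["age"] = df["age"].fillna(df["age"].median())  # impute missing values
df = pd.get_dummies(df, columns=["country"])      # one-hot encode a categorical column
df.to_parquet("training_data.parquet")            # formatted output the model stage consumes
```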

Despite the focus on deep learning at big tech companies' AI research labs, most applications of machine learning at those same companies do not rely on neural networks; they use traditional machine learning models instead. The most common models include linear/logistic regression, random forests and boosted decision trees. Among other services, these models power friend suggestions, ad targeting, user interest prediction, supply/demand simulation and search result ranking.

And some of the tools engineers use to train these models are similarly well-worn. One of the most commonly used machine learning libraries is scikit-learn, which was released a decade ago (although Google’s TensorFlow is on the rise).
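
For a sense of how little ceremony these traditional models demand, here is a minimal scikit-learn sketch: a random forest trained and evaluated end to end. The dataset is synthetic and the hyperparameters are illustrative, not drawn from any system described above.

```python
# A minimal scikit-learn sketch: train and evaluate a random forest, one of
# the traditional models named above. Data is synthetic; hyperparameters
# are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)  # trains in seconds on a laptop, no GPU needed
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The whole loop, from data to evaluated model, fits in a dozen lines and runs in seconds, which is much of the appeal.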

There are good reasons to use simpler models over deep learning. Deep neural networks are hard to train: they take more time and computational power, and they usually call for specialized hardware, namely GPUs. Getting deep learning to work still involves extensive manual fiddling, a combination of intuition and trial and error.

With traditional machine learning models, the time engineers spend on model training and tuning is relatively short, usually just a few hours. Ultimately, if the accuracy improvements that deep learning can achieve are modest, the need for scalability and development speed outweighs the value of those gains.
