
Artificial Education

A decade has passed since a deep learning solution won first place in the ImageNet classification challenge. Since then, it seems that the entire algorithm world has abandoned classical machine learning to fully embrace deep learning. But what is the difference between the two? And why is deep learning so successful?

The short answer is: features. When you aim to solve a problem using classical machine learning, the common workflow is to extract features from the raw data and use them to address the task at hand. For example, let’s take the problem that I worked on for three years: segmenting a zebra (see Figure 1). Well, I did it back in the early 2000s, when AI and deep learning were still in deep sleep.

How? By using classical approaches. I extracted texture features from the zebra image using Gabor filters, which are sensitive to scale and orientation in much the same way as our own visual system. I then fed the extracted features into a mechanism called active contours (or snakes), which aims to determine which parts of the image are zebra and which are not. Thus, in classical machine learning, we handcraft the features we need according to the problem we want to solve – and then we classify the information according to what is of interest to us.
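Here is a minimal sketch of that pipeline in Python, using scikit-image’s gabor and active_contour functions. The image file name, filter frequencies, and snake parameters are illustrative assumptions, not the exact values from my original work.

```python
import numpy as np
from skimage import color, filters, io, segmentation

# Load the zebra image and convert it to grayscale ("zebra.jpg" is a
# hypothetical file name).
image = color.rgb2gray(io.imread("zebra.jpg"))

# Handcrafted texture features: Gabor energy at several assumed scales
# (frequencies) and orientations, mimicking the scale and orientation
# sensitivity of the visual system.
responses = []
for frequency in (0.1, 0.2, 0.4):
    for theta in np.linspace(0, np.pi, 4, endpoint=False):
        real, imag = filters.gabor(image, frequency=frequency, theta=theta)
        responses.append(np.sqrt(real**2 + imag**2))
texture = np.mean(responses, axis=0)  # crude per-pixel texture map

# Initialise a circular snake around the centre of the image and let the
# active contour settle on the zebra's boundary in the texture map.
s = np.linspace(0, 2 * np.pi, 400)
rows, cols = texture.shape
init = np.column_stack([rows / 2 + rows / 3 * np.sin(s),
                        cols / 2 + cols / 3 * np.cos(s)])
snake = segmentation.active_contour(
    filters.gaussian(texture, sigma=3),  # smooth the map for stable forces
    init, alpha=0.015, beta=10, gamma=0.001)
```

Everything here is chosen by hand – the filter bank, the feature combination, the contour parameters – which is exactly what “handcrafting features” means in practice.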


Figure 1. Segmenting a zebra using classical machine learning. Top: extraction of texture features using Gabor filters; bottom: the resultant segmented zebra. Credit: Vincent van Zalinge from Unsplash.com

If you want to solve the zebra segmentation problem using deep learning, your approach will be completely different. First, you will collect several images of zebras. Then, you will feed these examples to a deep neural network – basically, layers of nodes that are connected to each other. Each node computes a weighted sum of its inputs and, if that sum exceeds a specified threshold, the node activates and passes its output on to the next layer (and so on). The output of such a network would be whether there is a zebra in the image or not. The network’s errors are then used to correct the weights of each node in a procedure called backpropagation. This process repeats again and again until the neural network provides accurate results for segmenting zebras. No features are handcrafted; the neural network itself generates the features during training, and they are manifested in the weights of the nodes.
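To make this loop concrete, here is a minimal sketch in PyTorch. The tiny two-layer network, the random stand-in data, and the hyperparameters are all illustrative assumptions; a real zebra classifier would be trained on labelled photographs.

```python
import torch
import torch.nn as nn

# Layers of connected nodes: each Linear layer computes weighted sums of
# its inputs, and ReLU plays the role of the activation threshold.
model = nn.Sequential(
    nn.Linear(64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 1),  # single output: zebra or not
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Stand-in batch: flattened 64x64 "images" with random binary labels.
images = torch.randn(32, 64 * 64)
labels = torch.randint(0, 2, (32, 1)).float()

for epoch in range(100):            # repeat again and again...
    logits = model(images)          # forward pass through the nodes
    loss = loss_fn(logits, labels)  # measure the network's errors
    optimizer.zero_grad()
    loss.backward()                 # backpropagation: errors become gradients
    optimizer.step()                # ...which correct the weights
```

Note that no feature extraction appears anywhere in this code – the weights that the loop adjusts are the features.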

This brings us to the key question – what has made deep learning take over the world of algorithms to such an extent? Why is it so successful?

Well, the proof is in the pudding.

Deep learning methods have solved problems that were previously considered unsolvable. Although it’s not too difficult to handcraft features for some problems, it can be extremely difficult to do so for many real-life scenarios. This is because the features need to capture the complexity and variety of what we are looking for – a challenge often beyond our capabilities.

Let’s take our zebra. In classical machine learning, we could handle a lot of zebras, but generalizing to images of different kinds of zebras taken from different viewpoints was practically impossible. With deep learning, the process is much easier. Select an architecture that fits the problem you are trying to solve, then provide examples of all the cases you want your system to handle. Sufficient good data and an iterative learning process can create a solution that addresses the data’s inherent variety and complexity. It could even allow you to take your problem to the next level – what about two overlapping zebras in a single image (1)?
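As a sketch of that workflow, here is a toy segmentation setup in PyTorch. The miniature fully convolutional network and the random image/mask pairs are assumptions for illustration; in practice, you would select a proven architecture (a U-Net variant, for example) and train it on a curated zebra dataset.

```python
import torch
import torch.nn as nn

# A tiny fully convolutional network producing a per-pixel
# zebra/background score at the input resolution.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),  # one channel: a zebra logit per pixel
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in training pairs: RGB images and binary zebra masks.
images = torch.randn(8, 3, 128, 128)
masks = torch.randint(0, 2, (8, 1, 128, 128)).float()

for step in range(200):
    loss = loss_fn(model(images), masks)  # per-pixel errors
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inference time, threshold the sigmoid output to obtain the mask.
pred_mask = torch.sigmoid(model(images)) > 0.5
```

Covering more cases – new breeds, new viewpoints, even overlapping zebras – then becomes a matter of adding examples rather than redesigning features.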

So are classical machine learning algorithms a thing of the past? I don’t think so. Deep learning solves “unsolvable” problems, but there are still many problems that can be perfectly solved using classical approaches – and still others that require a combination of classical machine learning and deep learning.


  1. H Wang, L Chen, “MaX-DeepLab: Dual-Path Transformers for End-to-End Panoptic Segmentation” (2021). Available at: https://bit.ly/3AVpMNa.
About the Author
Chen Sagiv

Co-Founder and Co-CEO of DeePathology.ai, Raanana, Israel.
