Decision Trees in Machine Learning: Two Types (+ Examples)

Written by Coursera Staff • Updated on

Decision trees are a supervised learning algorithm often used in machine learning. Here’s what you need to know.

[Featured image] A man sits in his home office thinking about how to use decision trees in machine learning.

Trees are a common analogy in everyday life. Shaped by a combination of roots, trunk, branches, and leaves, trees often symbolize growth. In machine learning, a decision tree is an algorithm that can create both classification and regression models. 

The decision tree is so named because it starts at the root, like an upside-down tree, and branches off to demonstrate various outcomes. Because machine learning is based on the notion of solving problems, decision trees help us to visualize these models and adjust how we train them.

Here’s what you need to know about decision trees in machine learning.

What is a decision tree? 

A decision tree is a supervised learning algorithm that is used for classification and regression modeling. Regression is a method used for predictive modeling, so these trees are used to either classify data or predict what will come next. 

Decision trees look like flowcharts. They start at a root node posing a specific question about the data, which leads to branches holding the potential answers. The branches lead to decision (or internal) nodes, which ask further questions that lead to more outcomes. This continues until the data reaches a terminal (or “leaf”) node and ends.
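That flowchart structure can be sketched in plain Python as a nested dictionary: internal nodes hold a question, keys hold the possible answers, and plain strings are leaf nodes. The questions and answers below are illustrative, not from a trained model:

```python
# A minimal sketch of a decision tree as a nested dictionary.
# Internal nodes are dicts with a "question"; leaves are plain strings.
tree = {
    "question": "Is it raining?",
    "yes": {"question": "Do you have an umbrella?",
            "yes": "go out",
            "no": "stay in"},      # leaf (terminal) node
    "no": "go out",
}

def decide(node, answers):
    """Follow branches from the root until a leaf node is reached."""
    while isinstance(node, dict):
        node = node[answers[node["question"]]]
    return node

print(decide(tree, {"Is it raining?": "yes",
                    "Do you have an umbrella?": "no"}))
# prints "stay in"
```

Each call walks one path from the root to a single leaf, which is exactly how a trained tree classifies a new record.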

In machine learning, there are four main methods of training algorithms: supervised, unsupervised, reinforcement learning, and semi-supervised learning. A decision tree helps us visualize how a supervised learning algorithm leads to specific outcomes.

For a more detailed look at decision trees, watch this video:

Introduction to supervised learning

If you want to deepen your knowledge of supervised learning, consider the course Introduction to Supervised Learning: Regression and Classification from DeepLearning.AI and Stanford University. In 33 hours or less, you’ll get an introduction to modern machine learning, including supervised learning and algorithms such as decision trees, multiple linear regression, neural networks, and logistic regression.


Why is a decision tree important in machine learning?

Decision trees in machine learning provide an effective method for making decisions because they lay out the problem and all the possible outcomes. They enable developers to analyze the possible consequences of a decision, and as the algorithm accesses more data, it can predict outcomes for future data.

In this simple decision tree, the question of whether or not to go to the supermarket to buy toilet paper is analyzed:

[Image] A decision tree describes the process of buying toilet paper.

In machine learning, decision trees offer simplicity and a visual representation of the possibilities when formulating outcomes. Below, we will explain how the two types of decision trees work. 

Types of decision trees in machine learning

Decision trees in machine learning can be either classification trees or regression trees. Together, the two types are known as “classification and regression trees,” sometimes referred to as CART. Their respective roles are to “classify” and to “predict.”

1. Classification trees

Classification trees determine whether an event happened or didn’t happen. Usually, this involves a “yes” or “no” outcome. 

We often use this type of decision-making in the real world. Here are a few examples to help contextualize how decision trees work for classification:

Example 1: How to spend your free time after work

What you do after work in your free time can be dependent on the weather. If it is sunny, you might choose between having a picnic with a friend, grabbing a drink with a colleague, or running errands. If it is raining, you might opt to stay home and watch a movie instead. There is a clear outcome. In this case, that is classified as whether to “go out” or “stay in.”

Example 2: Homeownership based on age and income

In a classification tree, the data set splits according to its variables. Suppose two variables, age and income, determine whether or not someone buys a house. If the training data shows that 70 percent of people over age 30 bought a house, the data splits there, with age becoming the first node in the tree. This split makes the data 80 percent “pure,” meaning most of the records on each side of the split now share the same outcome. The second node then addresses income from there.
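The “purity” of a node is commonly measured with Gini impurity (the metric CART uses for classification splits), where 0.0 means every record in the node has the same outcome. A minimal sketch, using the 70/30 house-buying proportions above as illustrative data:

```python
def gini_impurity(labels):
    """Gini impurity: 1 - sum of squared class proportions.
    0.0 means a perfectly 'pure' node (one outcome only)."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

# A node where 70% of records bought a house ("yes") and 30% did not:
node = ["yes"] * 7 + ["no"] * 3
print(round(gini_impurity(node), 2))  # prints 0.42
```

When choosing where to split, the algorithm compares the impurity of the candidate child nodes and picks the split that reduces impurity the most.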

Gain hands-on experience with classification trees

If you want to get started on understanding how decision trees work in machine learning, consider registering for these guided projects to apply your skills to real-world projects. You can complete them in two hours or less:

Decision Tree and Random Forest Classification using Julia

Predicting Salaries with Decision Trees


2. Regression trees

Regression trees, on the other hand, predict continuous values based on previous data or information sources. For example, they can predict the price of gasoline or whether a customer will purchase eggs (including which type of eggs and at which store).

This type of decision-making is more about programming algorithms to predict what is likely to happen, given previous behavior or trends. 

Example 1: Housing prices in Colorado

Regression analysis could be used to predict the price of a house in Colorado. Plotting past prices on a graph, the regression model can predict housing prices in the coming years using data points of what prices have been in previous years. Because housing prices tend to rise steadily, the relationship is roughly linear. Machine learning helps us predict specific prices based on a series of variables that have held true in the past.

Example 2: Bachelor’s degree graduates in 2025

A regression tree can help a university predict how many bachelor’s degree students there will be in 2025. On a graph, one can plot the number of degree-holding students between 2010 and 2022. If the number of university graduates increases linearly each year, then regression analysis can be used to build an algorithm that predicts the number of students in 2025. 
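Under the hood, a regression tree predicts the mean of the training targets that fall into each leaf, and it chooses splits that minimize the squared error of those leaf means. The sketch below builds a one-split regression “stump” on hypothetical graduate counts (the numbers are made up purely for illustration):

```python
def best_split(xs, ys):
    """Find the threshold on x that minimizes total squared error
    when each side of the split predicts its own mean."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = None
    for t in sorted(set(xs))[1:]:  # candidate thresholds between points
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        err = sse(left) + sse(right)
        if best is None or err < best[0]:
            best = (err, t, sum(left) / len(left), sum(right) / len(right))
    return best[1:]  # (threshold, left-leaf mean, right-leaf mean)

# Hypothetical graduate counts per year (illustrative, not real data):
years = [2010, 2012, 2014, 2016, 2018, 2020, 2022]
grads = [4000, 4300, 4600, 5200, 5500, 5800, 6100]
threshold, left_mean, right_mean = best_split(years, grads)

def predict(year):
    return left_mean if year < threshold else right_mean

print(predict(2025))  # prints 5650.0
```

Note that, unlike linear regression, a single tree cannot extrapolate a trend: a query for 2025 simply lands in the rightmost leaf and receives that leaf’s mean, which is one reason trees are usually combined into ensembles for forecasting tasks.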

Classification and Regression Tree (CART) is a predictive algorithm used in machine learning that generates future predictions based on previous values. These decision trees sit at the core of machine learning and serve as a basis for other algorithms such as random forest, bagged decision trees, and boosted decision trees.

Gain hands-on experience with regression trees

To get started on how decision tree algorithms work in predictive machine learning models, take a look at these guided projects. Each project takes less than two hours, and they are based on real-world examples so you can elevate your skills:

XG-Boost 101: Used Cars Price Prediction

Decision Tree Classifier for Beginners in R


Decision tree terminology

These terms come up frequently in machine learning and are helpful to know as you embark on your machine learning journey:

  • Root node: The topmost node of a decision tree, representing the entire data set or decision

  • Decision (or internal) node: A node within a decision tree where the prior node branches into two or more variables

  • Leaf (or terminal) node: Also called an external node, a node with no children. It is the last node in a branch and the furthest from the root node

  • Splitting: The process of dividing a node into two or more nodes. It’s the part at which the decision branches off into variables

  • Pruning: The opposite of splitting, the process of going through and reducing the tree to only the most important nodes or outcomes
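Pruning can be sketched with the nested-dictionary representation used earlier: if both branches of a question lead to the same outcome, the question adds nothing, and the node can be collapsed into a leaf. This is a simplified illustration, not a full cost-complexity pruning algorithm:

```python
def prune(node):
    """Collapse an internal node whose children are all leaves with
    the same outcome -- a simplified form of pruning."""
    if not isinstance(node, dict):
        return node  # already a leaf
    # Prune the branches first (bottom-up).
    node = {k: (v if k == "question" else prune(v)) for k, v in node.items()}
    children = [v for k, v in node.items() if k != "question"]
    if all(not isinstance(c, dict) for c in children) and len(set(children)) == 1:
        return children[0]  # the question no longer changes the outcome
    return node

# The "Is it raining?" node is redundant: both answers lead to "stay in".
tree = {
    "question": "Is it a weekday?",
    "yes": {"question": "Is it raining?", "yes": "stay in", "no": "stay in"},
    "no": "go out",
}
print(prune(tree))  # the redundant node collapses to the leaf "stay in"
```

Real implementations prune by comparing a node’s error against the subtree’s error on held-out data, but the goal is the same: a smaller tree that keeps only the splits that matter.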

Learn machine learning with Coursera

Start your machine learning journey with Coursera’s top-rated specialization Supervised Machine Learning: Regression and Classification, offered by DeepLearning.AI. Taught by AI visionary Andrew Ng, you will build machine learning models in Python using popular libraries NumPy and scikit-learn, and train supervised machine learning models for prediction (including decision trees!).

Keep reading

Written by:

Editorial Team

Coursera’s editorial team comprises highly experienced professional editors, writers, and fact...

This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.