What Is Machine Learning Fairness? What You Need to Know

Written by Coursera Staff

Here’s what you need to know about machine learning fairness and incorporating ethics into an increasingly automated world.

[Featured image] A machine learning engineer is looking at a program to ensure it practices machine learning fairness.

Our world is becoming increasingly automated. Machine learning is woven into everyday life, automating tasks such as job application screening and university admissions. That makes it ever more important to ensure this practice, including the tools and systems behind it, is ethical and fair.

When machine learning isn’t fair, the outcome can be detrimental to users and the community. For example, algorithms on social media sites may have sparked political tensions due to skewed or siloed news feeds (and fake news) when the intention was to deliver personalized recommendations to users.

What is machine learning fairness?

Machine learning fairness is the process of identifying and eliminating algorithmic bias (around race and ethnicity, gender, sexual orientation, disability, and class) in machine learning models. Machine learning is a branch of artificial intelligence (AI) built on the idea that computers can learn from collected data to identify patterns and make human-like decisions with minimal human intervention.
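
To make the idea concrete, here is a minimal sketch (with made-up data) of one widely used fairness check, demographic parity, which compares how often a model makes a favorable decision for each group:

```python
# A minimal sketch of a demographic parity check. The arrays below are
# hypothetical; in practice y_pred would come from a trained model and
# `group` from a protected attribute in your data set.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                 # 1 = favorable decision
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # protected group

for g in np.unique(group):
    rate = y_pred[group == g].mean()  # selection rate for this group
    print(f"group {g}: selection rate = {rate:.2f}")

# Demographic parity asks these selection rates to be roughly equal;
# a large gap is one signal of algorithmic bias.
```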

Why is it important to address fairness and ethics in machine learning?

Unintentional discrimination in machine learning algorithms is just one of the reasons why it’s important to address fairness and AI ethics.

Machine learning is enmeshed in the systems and applications we use to help us buy furniture, find jobs, recruit new hires, apply for universities, listen to music, get loans, find news, search on Google, target ads, and so much more. It enhances our ability to streamline information and provide recommendations, but it can have serious consequences if it is trained on the wrong information and fails to promote fair and equal practices.

To remove these potential biases, data scientists and machine learning experts must look out for them in algorithmic models and correct them. Because machine learning by definition learns by example, it can also "learn" to avoid bias as long as it is given the right data [1]. And because machine learning is used in fields as varied as criminal justice, corporate human resources, and credit lending, it's important that it follows fair and ethical processes.
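
As a concrete illustration of "learning from the right data," here is a minimal sketch of reweighing, a common pre-processing technique that rebalances training examples so no (group, outcome) combination dominates. The column names are hypothetical:

```python
# A minimal reweighing sketch: weight = expected frequency / observed
# frequency for each (group, label) cell, the same idea used by
# reweighing algorithms in fairness toolkits. Data is invented.
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "a", "b", "a"],
    "label": [1, 1, 0, 0, 0, 1, 1, 0],
})

p_group = df["group"].value_counts(normalize=True)   # marginal P(group)
p_label = df["label"].value_counts(normalize=True)   # marginal P(label)
p_joint = df.groupby(["group", "label"]).size() / len(df)  # observed P(group, label)

weights = df.apply(
    lambda r: p_group[r["group"]] * p_label[r["label"]]
              / p_joint[(r["group"], r["label"])],
    axis=1,
)
print(weights)  # can be passed as sample_weight when fitting a scikit-learn model
```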

The COMPAS controversy

COMPAS is a decision-support tool developed by Northpointe and used by US courts to assess the likelihood that a defendant will re-offend (recidivism). Its quantitative approach to fairness sparked controversy: a 2016 ProPublica investigation found that the tool incorrectly flagged Black defendants as future re-offenders at nearly twice the rate of white defendants, while Northpointe countered that its risk scores were equally well calibrated across groups. The dispute highlighted that intuitive definitions of fairness, such as equal error rates and equal calibration, can be impossible to satisfy simultaneously.
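
To see why the quantitative approach was contested, here is a hypothetical sketch of the kind of audit ProPublica performed: comparing false positive rates across groups. All data below is invented for illustration:

```python
# Do false positive rates differ across groups? (All data here is made up.)
import numpy as np

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])  # 1 = actually re-offended
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 1, 0, 0])  # 1 = flagged high risk
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in np.unique(group):
    mask = (group == g) & (y_true == 0)   # people who did not re-offend...
    fpr = y_pred[mask].mean()             # ...but were flagged high risk anyway
    print(f"group {g}: false positive rate = {fpr:.2f}")

# Similar overall accuracy can still hide very different error rates per
# group, which is why a single quantitative definition of fairness is contested.
```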


3 ways to make machine learning fair and ethical

If you work with algorithms in data science or artificial intelligence, there are a few ways to make sure that your machine learning is fair and ethical. You can:

  • Examine an algorithm’s ability to influence human behavior, decide whether it is biased, and create algorithmic methods that avoid predictive bias

  • Identify vulnerabilities or inconsistencies in public data sets and assess whether there is a privacy violation (a minimal audit sketch follows this list)

  • Use tools that can help prevent and eliminate bias in machine learning
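
For the second point, a quick audit of a data set's representation, outcomes, and missing values is a practical first step. A minimal sketch, assuming a pandas DataFrame with hypothetical group, income, and label columns:

```python
# A minimal data-set audit sketch; the DataFrame is a hypothetical
# stand-in for a public data set you plan to train on.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "group":  ["a", "a", "a", "a", "b", "b"],
    "income": [40, 52, 38, np.nan, np.nan, np.nan],
    "label":  [1, 1, 0, 1, 0, 0],
})

# 1. Is each protected group well represented?
print(df["group"].value_counts(normalize=True))

# 2. Are outcomes distributed very differently across groups?
print(df.groupby("group")["label"].mean())

# 3. Missing values concentrated in one group can silently skew a model.
print(df["income"].isna().groupby(df["group"]).mean())
```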

You can learn these technical skills with the Ethics in the Age of AI Specialization from LearnQuest.

Tools for machine learning fairness

There are plenty of courses, tools, and processes available to help you integrate machine learning fairness into your organization’s workflow and prevent machine learning malpractice. Here are a few you can check out:

  • IBM’s AI Fairness 360: An open-source Python toolkit of fairness metrics and bias-mitigation algorithms that helps users and researchers detect and reduce discrimination and bias in machine learning models (see the sketch after this list).

  • Google’s What-If Tool: A visualization tool that probes a model’s performance on a data set and assesses it against preset fairness constraints. It supports binary classification, multi-class classification, and regression tasks.

  • Google’s Model Cards and Toolkit: This framework documents a model’s intended use cases and helps users understand the conditions under which the model is safe and appropriate to deploy.

  • Microsoft’s Fairlearn: An open-source Python toolkit that assesses and improves fairness in machine learning. With an interactive visualization dashboard and unfairness-mitigation algorithms, it helps users analyze the trade-offs between fairness and model performance (see the sketch after this list).

  • Deon: An ethics checklist that facilitates responsible data science by evaluating and systematically reviewing applications for potential ethical implications, from the early stages of data collection to implementation. 
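
As an example of the first toolkit, here is a minimal sketch of how AI Fairness 360 computes two of its dataset-level metrics. The column names and the tiny DataFrame are placeholders for your own data:

```python
# A minimal sketch of IBM's AI Fairness 360 (pip install aif360).
# "group" and "label" are hypothetical column names.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "feature": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7],
    "group":   [0, 0, 0, 1, 1, 1],   # protected attribute (0 = unprivileged)
    "label":   [0, 1, 0, 1, 1, 1],   # 1 = favorable outcome
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["group"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"group": 0}],
    privileged_groups=[{"group": 1}],
)

# A difference near 0 and a ratio near 1 suggest parity between groups.
print(metric.statistical_parity_difference())  # P(fav | unpriv) - P(fav | priv)
print(metric.disparate_impact())               # P(fav | unpriv) / P(fav | priv)
```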
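
And a similarly minimal sketch of Fairlearn's MetricFrame, which breaks a model's scores down by group (all data invented for illustration):

```python
# A minimal sketch of Microsoft's Fairlearn (pip install fairlearn).
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    selection_rate,
)
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]          # hypothetical model predictions
group  = ["a", "a", "a", "b", "b", "b"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(mf.by_group)  # per-group accuracy and selection rate

# A single summary number: the gap in selection rates between groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```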

Start learning today

Contribute to tackling machine learning ethics with the Ethics in the Age of AI Specialization from LearnQuest, an award-winning provider of business and IT training for corporations and government agencies. You’ll learn job-ready skills in four months or less.

To learn more about machine learning, consider enrolling in our most popular course, Machine Learning, taught by Stanford University professor and Coursera founder Andrew Ng.

Article sources

1. Fair ML Book. “Fairness and Machine Learning: Limitations and Opportunities,” https://fairmlbook.org/pdf/fairmlbook.pdf. Accessed June 12, 2024.


This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.