Duke University

Interpretable Machine Learning

Taught in English

Course

Gain insight into a topic and learn the fundamentals

Instructor: Brinnae Bent, PhD

Intermediate level

13 hours to complete (3 weeks at 4 hours a week)
Flexible schedule: learn at your own pace

What you'll learn

  • Describe and implement regression and generalized interpretable models

  • Demonstrate knowledge of decision trees, rules, and interpretable neural networks

  • Explain foundational Mechanistic Interpretability concepts, hypotheses, and experiments

Details to know

  • Shareable certificate: add to your LinkedIn profile

  • Recently updated: September 2024

  • Assessments: 3 assignments

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV

Share it on social media and in your performance review

There are 3 modules in this course

Module 1

In this module, you will be introduced to the concepts of regression and generalized models for interpretability. You will learn how to describe interpretable machine learning and differentiate between interpretability and explainability, explain and implement regression models in Python, and demonstrate knowledge of generalized models in Python. You will apply these learnings through discussions, guided programming labs, and a quiz assessment.
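As a taste of this module's theme, here is a minimal sketch (illustrative, not course material) of fitting a linear regression with scikit-learn and reading its coefficients directly, which is what makes the model interpretable. The synthetic data and feature names are assumptions for the example.

```python
# Minimal sketch: a linear regression is interpretable because each learned
# coefficient is the expected change in y per unit change in that feature.
# The data and feature names below are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # two synthetic features
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

for name, coef in zip(["feature_0", "feature_1"], model.coef_):
    print(f"{name}: {coef:+.2f}")                 # recovers ~ +3.00 and -2.00
print(f"intercept: {model.intercept_:+.2f}")
```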

What's included

5 videos, 6 readings, 1 assignment, 3 discussion prompts, 3 ungraded labs

Module 2

In this module, you will be introduced to the concepts of decision trees, decision rules, and interpretability in neural networks. You will learn how to explain and implement decision trees and decision rules in Python and define and explain neural network interpretable model approaches, including prototype-based networks, monotonic networks, and Kolmogorov-Arnold networks. You will apply these learnings through discussions, guided programming labs, and a quiz assessment.
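As an illustrative sketch of one idea here (again, not course code), a shallow scikit-learn decision tree can be printed as human-readable if/else rules; the dataset choice is an assumption for the example.

```python
# Minimal sketch: small decision trees are interpretable because the fitted
# model can be read directly as nested if/else rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the fitted tree as a rule list, one branch per line.
print(export_text(tree, feature_names=list(data.feature_names)))
```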

What's included

8 videos, 1 reading, 1 assignment, 2 discussion prompts, 3 ungraded labs

Module 3

In this module, you will be introduced to the concept of Mechanistic Interpretability. You will learn how to explain foundational Mechanistic Interpretability concepts, including features and circuits; describe the Superposition Hypothesis; and define Representation Learning so that you can analyze current research on scaling it to LLMs. You will apply these learnings through discussions, guided programming labs, and a quiz assessment.
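To ground the Superposition Hypothesis in something concrete, here is a small illustrative sketch (assumptions mine, not course material): a layer can store more "features" than it has dimensions by assigning them nearly orthogonal directions, at the cost of small interference between them.

```python
# Illustrative sketch of superposition: pack more feature directions than
# dimensions and measure how much they interfere with one another.
import numpy as np

rng = np.random.default_rng(0)
n_features, dim = 50, 20                          # more features than dims
W = rng.normal(size=(n_features, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)     # unit-norm feature directions

# Off-diagonal dot products measure interference between stored features;
# random high-dimensional directions are nearly orthogonal, so it stays small.
overlaps = W @ W.T
off_diag = np.abs(overlaps[~np.eye(n_features, dtype=bool)])
print(f"mean interference: {off_diag.mean():.3f}")
print(f"max interference:  {off_diag.max():.3f}")
```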

What's included

6 videos, 4 readings, 1 assignment, 3 discussion prompts, 1 ungraded lab

Instructor

Brinnae Bent, PhD
Duke University

Offered by

Duke University



