Essential Artificial Intelligence Skills
February 4, 2025
Article
Build Ethical and Transparent AI Systems. Master skills in explainability techniques and ethical AI development to create trustworthy and transparent machine learning solutions.
Instructor: Brinnae Bent, PhD
(20 reviews)
Recommended experience
Intermediate level
Ideal for professionals with a basic to intermediate understanding of machine learning concepts like supervised learning and neural networks.
Implement XAI approaches to enhance transparency, trust, robustness, and ethics in decision-making processes.
Build interpretable models in Python, including decision trees, regression models, and neural networks.
Apply advanced techniques like LIME, SHAP, and explore explainability for LLMs and computer vision models.
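To make the first outcome above concrete, here is a minimal sketch (our own illustration, not course material) of the kind of interpretable model the series covers: a shallow scikit-learn decision tree whose learned decision logic can be printed and audited directly.

```python
# Illustrative sketch, not taken from the course labs: a shallow decision
# tree is interpretable because its full decision logic is human-readable.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# A small max_depth keeps the tree simple enough to inspect by hand.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned splits as nested if/else rules.
print(export_text(tree, feature_names=load_iris().feature_names))
```

Capping the depth trades some accuracy for a model whose every prediction can be traced through at most two rules, which is the core transparency/performance trade-off the course outcomes refer to.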
Shareable certificate - September 2024
Add this credential to your LinkedIn profile, resume, or CV
Share it on social media and in your performance review
In an era where Artificial Intelligence (AI) is rapidly transforming high-risk domains like healthcare, finance, and criminal justice, the ability to develop AI systems that are not only accurate but also transparent and trustworthy is critical. The Explainable AI (XAI) Specialization is designed to empower AI professionals, data scientists, machine learning engineers, and product managers with the knowledge and skills needed to create AI solutions that meet the highest standards of ethical and responsible AI.
Taught by Dr. Brinnae Bent, an expert in bridging the gap between research and industry in machine learning, this course series leverages her extensive experience leading projects and developing impactful algorithms for some of the largest companies in the world. Dr. Bent's work, ranging from helping people walk to noninvasively monitoring glucose, underscores the meaningful applications of AI in real-world scenarios.
Throughout this series, learners will explore key topics including Explainable AI (XAI) concepts, interpretable machine learning, and advanced explainability techniques for large language models (LLMs) and generative computer vision models. Hands-on Python programming labs, in which learners implement local and global explainability techniques, and real-world case studies provide practical experience. This series is ideal for professionals with a basic to intermediate understanding of machine learning concepts like supervised learning and neural networks.
Applied Learning Project
The Explainable AI (XAI) Specialization offers hands-on projects that deepen understanding of XAI and Interpretable Machine Learning through coding activities and real-world case studies.
Course 1 Projects: Explore ethical and bias considerations through moral machine reflections, case studies, and research analysis. Projects include visualizing embedding spaces using TensorFlow’s Embedding Projector and evaluating XAI in healthcare for diagnostics and security in autonomous driving.
Course 2 Projects: Python-based lab activities with Jupyter notebooks focus on implementing models like GLMs, GAMs, decision trees, and RuleFit.
Course 3 Projects: Advanced labs focus on local explanations using LIME, SHAP, and Anchors, along with visualizing saliency maps and Concept Activation Vectors using free platforms like Google Colab for GPU resources. The projects provided in this Specialization prepare learners to create transparent and ethical AI solutions for real-world challenges.
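The local-explanation labs above use established libraries; as a hedged sketch of the underlying idea (our own simplified code, not the course's labs and not the official `lime` package), LIME-style local explanation fits a weighted linear surrogate to a black-box model's predictions on perturbed copies of one instance.

```python
# Simplified, illustrative LIME-style local surrogate (assumption: names and
# defaults here are ours, not the course's or the lime library's).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def lime_style_explanation(model, x, n_samples=500, kernel_width=None):
    """Fit a weighted linear surrogate around x; return per-feature weights."""
    rng = np.random.default_rng(0)
    scale = X.std(axis=0) + 1e-12
    if kernel_width is None:
        kernel_width = np.sqrt(x.size)  # a common default scale
    # Perturb the instance with Gaussian noise scaled to feature std devs.
    noise = rng.normal(0.0, scale, size=(n_samples, x.size))
    neighbors = x + noise
    preds = model.predict_proba(neighbors)[:, 1]
    # Weight neighbors by proximity to x (RBF kernel on normalized distance).
    dist = np.linalg.norm(noise / scale, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))
    surrogate = Ridge(alpha=1.0).fit(neighbors, preds, sample_weight=weights)
    return surrogate.coef_

coefs = lime_style_explanation(black_box, X[0])
top = np.argsort(np.abs(coefs))[::-1][:3]
print("Most influential features locally:", top)
```

The surrogate's coefficients approximate how each feature moves the black-box prediction near this one instance, which is what "local explanation" means in the labs described above.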
Define key Explainable AI terminology and their relationships to each other
Describe commonly used interpretable and explainable approaches and their trade-offs
Evaluate considerations for developing XAI systems, including XAI evaluation approach, robustness, privacy, and integration with decision-making
Describe and implement regression and generalized interpretable models
Demonstrate knowledge of decision trees, rules, and interpretable neural networks
Explain foundational Mechanistic Interpretability concepts, hypotheses, and experiments
Explain and implement model-agnostic explainability methods
Visualize and explain neural network models using SOTA techniques
Describe emerging approaches to explainability in large language models (LLMs) and generative computer vision
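One widely used model-agnostic explainability method of the kind listed above is permutation importance; a brief sketch under our own choice of dataset and model (not the course's lab code):

```python
# Illustrative sketch of a model-agnostic method: permutation importance.
# Dataset and model choices here are our own assumptions for demonstration.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("Feature ranking by importance:", ranking[:5])
```

Because it only needs predictions, the same procedure works for any fitted model, which is exactly what "model-agnostic" means in the skills list above.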
Duke University has about 13,000 undergraduate and graduate students and a world-class faculty helping to expand the frontiers of knowledge. The university has a strong commitment to applying knowledge in service to society, both near its North Carolina campus and around the world.
The Specialization includes 3 courses with 3 modules each, with each module taking approximately 2-4 hours. On average, learners can complete the entire series in 3-4 months at the recommended pace.
A basic to intermediate understanding of machine learning concepts, such as supervised learning and neural networks, is recommended for success in this Specialization.
Yes, the courses are designed to be taken in sequential order to complete the Specialization.
University credit will not be offered for completing the Specialization.
Upon completion, you will be able to implement interpretable machine learning models, apply advanced explainability techniques, and create ethical, transparent AI systems that build trust with users and stakeholders.
This course is completely online, so there’s no need to show up to a classroom in person. You can access your lectures, readings and assignments anytime and anywhere via the web or your mobile device.
If you subscribed, you get a 7-day free trial during which you can cancel at no penalty. After that, we don’t give refunds, but you can cancel your subscription at any time. See our full refund policy.
Yes! To get started, click the course card that interests you and enroll. You can enroll and complete the course to earn a shareable certificate, or you can audit it to view the course materials for free. When you subscribe to a course that is part of a Specialization, you’re automatically subscribed to the full Specialization. Visit your learner dashboard to track your progress.
Yes. In select learning programs, you can apply for financial aid or a scholarship if you can’t afford the enrollment fee. If financial aid or a scholarship is available for your learning program selection, you’ll find a link to apply on the description page.
When you enroll in the course, you get access to all of the courses in the Specialization, and you earn a certificate when you complete the work. If you only want to read and view the course content, you can audit the course for free. If you cannot afford the fee, you can apply for financial aid.