Fine-tuning a large language model (LLM) is crucial for aligning it with specific business needs, enhancing accuracy, and optimizing its performance. In turn, this gives businesses precise, actionable insights that drive efficiency and innovation. This course gives aspiring gen AI engineers valuable fine-tuning skills employers are actively seeking.
Generative AI Advance Fine-Tuning for LLMs
This course is part of multiple programs.
Instructors: Joseph Santarcangelo +3 more
What you'll learn
In-demand gen AI engineering skills in fine-tuning LLMs that employers are actively looking for, in just 2 weeks
Instruction-tuning and reward modeling with Hugging Face, plus LLMs as policies and RLHF
Direct preference optimization (DPO) with the partition function using Hugging Face, and how to create an optimal solution to a DPO problem
How to use proximal policy optimization (PPO) with Hugging Face to create a scoring function and perform dataset tokenization
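As a preview of that last point, here is a minimal sketch of a sentiment-based scoring function of the kind used to reward generated responses during PPO training. The sentiment model named below is an illustrative assumption, not necessarily the one used in the course labs.

```python
# A minimal sentiment-based scoring function for rewarding PPO rollouts.
# The model choice is an illustrative assumption, not the course's exact lab code.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def score(texts):
    """Return one scalar reward per generated response: higher = more positive sentiment."""
    results = sentiment(texts, truncation=True)
    return [r["score"] if r["label"] == "POSITIVE" else -r["score"] for r in results]

print(score(["I love this course!", "This answer is useless."]))
```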
Skills you'll gain
- Reinforcement learning
- Proximal policy optimization (PPO)
- Direct preference optimization (DPO)
- Hugging Face
- Instruction-tuning
Details to know
Add to your LinkedIn profile
October 2024
5 assignments
Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV
Share it on social media and in your performance review
There are 2 modules in this course
In this module, you’ll begin by defining instruction-tuning and its process. You’ll gain insights into loading a dataset, creating text-generation pipelines, and setting training arguments. You’ll then delve into reward modeling, where you’ll preprocess the dataset and apply a low-rank adaptation (LoRA) configuration. You’ll learn to quantify response quality, guide model optimization, and incorporate reward preferences. You’ll also describe the reward trainer, an advanced technique for training a model, and the reward model loss using Hugging Face. The labs in this module let you practice instruction-tuning and reward modeling.
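To give you a flavor of those labs, here is a minimal sketch of reward modeling with LoRA using the Hugging Face TRL library. The base model, dataset, and hyperparameters are illustrative assumptions rather than the course's exact lab code, and argument names differ slightly across TRL versions.

```python
# A minimal reward-modeling sketch with Hugging Face TRL and LoRA.
# Model, dataset, and hyperparameters are illustrative assumptions only.
from datasets import load_dataset
from peft import LoraConfig, TaskType
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

model_name = "distilbert-base-uncased"  # assumption: any small encoder works for a demo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

# Preference data: each row contains a "chosen" and a "rejected" response.
raw = load_dataset("Anthropic/hh-rlhf", split="train[:1000]")

def tokenize_pair(row):
    """Preprocess one preference pair into the fields the reward trainer expects."""
    chosen = tokenizer(row["chosen"], truncation=True, max_length=512)
    rejected = tokenizer(row["rejected"], truncation=True, max_length=512)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

dataset = raw.map(tokenize_pair)

# LoRA: train a small set of low-rank adapter weights instead of the full model.
peft_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16,
                         lora_dropout=0.05, target_modules=["q_lin", "v_lin"])

args = RewardConfig(output_dir="reward_model", per_device_train_batch_size=4,
                    num_train_epochs=1, max_length=512)

trainer = RewardTrainer(model=model, args=args, train_dataset=dataset,
                        processing_class=tokenizer,  # older TRL versions use tokenizer=
                        peft_config=peft_config)
trainer.train()
```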
What's included
6 videos · 3 readings · 2 assignments · 2 app items · 1 plugin
In this module, you’ll describe how large language models (LLMs) act as policies that assign probabilities to possible responses for a given input text. You’ll gain insights into the relationship between the policy and the language model, expressed as a function of the parameters omega, for generating possible responses. The module will then demonstrate how to calculate rewards from human feedback using a reward function, train on response samples, and evaluate the agent’s performance. You’ll define a scoring function for sentiment analysis and use it with PPO in Hugging Face. You’ll also explain the PPO configuration class, including the learning rate for PPO training, and how the PPO trainer processes query samples to optimize the chatbot’s policy for high-quality responses. Finally, the module delves into direct preference optimization (DPO), which uses human preference data more directly and efficiently to find optimal responses with Hugging Face. The labs in this module provide hands-on practice with human feedback and DPO.
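As a rough preview of the DPO portion, here is a minimal sketch using the Hugging Face TRL library with a tiny, made-up preference dataset. The base model, the toy data, and the hyperparameters are illustrative assumptions, not the course's lab code, and argument names vary between TRL versions.

```python
# A minimal direct preference optimization (DPO) sketch with Hugging Face TRL.
# Base model, toy preference data, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "gpt2"  # assumption: any small causal LM works for a demo
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy preference data in the standard "prompt"/"chosen"/"rejected" format.
train_dataset = Dataset.from_dict({
    "prompt":   ["Summarize: The cat sat on the mat.", "Translate to French: Good morning."],
    "chosen":   ["A cat sat on a mat.",                "Bonjour."],
    "rejected": ["Cats are animals.",                  "Guten Morgen."],
})

# beta controls how far the tuned policy may drift from the reference model.
# DPO optimizes the preference objective directly, so no separate reward model
# or PPO sampling loop is required.
args = DPOConfig(output_dir="dpo_model", beta=0.1,
                 per_device_train_batch_size=2, num_train_epochs=1)

trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     processing_class=tokenizer)  # older TRL versions use tokenizer=
trainer.train()
```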
What's included
10 videos · 5 readings · 3 assignments · 2 app items · 3 plugins
Instructors
Offered by
Frequently asked questions
It takes about 3–5 hours to complete this course, so you can have the job-ready skills you need to impress an employer within just two weeks!
This course is intermediate level, so to get the most out of your learning, you must have basic knowledge of Python, large language models (LLMs), reinforcement learning, and instruction-tuning. You should also be familiar with machine learning and neural network concepts.
This course is part of the Generative AI Engineering with LLMs specialization. When you complete the specialization, you will have the skills and confidence to take on job roles such as AI engineer, data scientist, machine learning engineer, or deep learning engineer, as well as developer roles that work with LLMs.