Generative AI Advance Fine-Tuning for LLMs

Fine-tuning a large language model (LLM) is crucial for aligning it with specific business needs, enhancing accuracy, and optimizing its performance. In turn, this gives businesses precise, actionable insights that drive efficiency and innovation. This course gives aspiring gen AI engineers valuable fine-tuning skills employers are actively seeking.
This course is part of multiple programs.
Instructor: Joseph Santarcangelo
What you'll learn
In-demand gen AI engineering skills in fine-tuning LLMs that employers are actively looking for, in just 2 weeks
Instruction-tuning and reward modeling with Hugging Face, plus LLMs as policies and RLHF
Direct preference optimization (DPO) with the partition function using Hugging Face, and how to derive the optimal solution to a DPO problem
How to use proximal policy optimization (PPO) with Hugging Face to create a scoring function and perform dataset tokenization
Details to know
- Shareable certificate to add to your LinkedIn profile
- 5 assignments
- Last updated October 2024
Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV
Share it on social media and in your performance review
There are 2 modules in this course
In this module, you’ll begin by defining instruction-tuning and its process, and gain insights into loading a dataset, generating text pipelines, and setting training arguments. You’ll then delve into reward modeling, where you’ll preprocess the dataset and apply a low-rank adaptation (LoRA) configuration. You’ll learn to quantify quality responses, guide model optimization, and incorporate reward preferences. You’ll also explore the reward trainer, an advanced technique for training a reward model, and reward model loss using Hugging Face. The labs in this module provide hands-on practice with instruction-tuning and reward models.
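As a rough illustration of the reward-modeling workflow described above, here is a minimal sketch using Hugging Face’s TRL and PEFT libraries. It is not the course’s lab code: the base model, the toy preference data, and all hyperparameters are placeholder assumptions, and exact argument names differ slightly between TRL versions.

```python
# A minimal reward-modeling sketch with Hugging Face TRL and PEFT (LoRA).
# Illustrative only: the base model, toy data, and hyperparameters are
# placeholders, and argument names vary slightly across TRL versions.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

model_name = "distilroberta-base"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The reward model outputs a single scalar score per response.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

# LoRA: train a small set of adapter weights instead of the full model.
peft_config = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, lora_dropout=0.05)

# Toy pairwise preference data: each row has a preferred ("chosen") and a
# dispreferred ("rejected") response to the same prompt.
raw = Dataset.from_dict({
    "chosen": ["Q: What is 2+2? A: 4.", "Q: Capital of France? A: Paris."],
    "rejected": ["Q: What is 2+2? A: 22.", "Q: Capital of France? A: London."],
})

def preprocess(example):
    # Tokenize both responses; the trainer learns to score "chosen" higher.
    chosen = tokenizer(example["chosen"], truncation=True, max_length=512)
    rejected = tokenizer(example["rejected"], truncation=True, max_length=512)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

train_dataset = raw.map(preprocess)

trainer = RewardTrainer(
    model=model,
    args=RewardConfig(output_dir="reward_model", per_device_train_batch_size=2,
                      num_train_epochs=1, max_length=512),
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # called processing_class in newer TRL releases
    peft_config=peft_config,
)
trainer.train()
```

The trained reward model can then score candidate responses, which is what later RLHF steps (such as PPO) use to guide the policy.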
What's included
6 videos, 3 readings, 2 assignments, 2 app items, 1 plugin
In this module, you’ll describe how large language models (LLMs) act as policies, producing probabilities for generating responses to input text, and how the policy relates to the language model as a function of omega. You’ll see how to calculate rewards using human feedback, incorporate a reward function, train on response samples, and evaluate the agent’s performance. You’ll define a scoring function for sentiment analysis and apply PPO with Hugging Face, covering the PPO configuration class for specific models, the learning rate for PPO training, and how the PPO trainer processes query samples to optimize the chatbot’s policy toward high-quality responses. The module then delves into direct preference optimization (DPO), which uses human preferences more directly and efficiently to find optimal responses to generated queries, again with Hugging Face. The labs in this module provide hands-on practice with human feedback and DPO.
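As a rough illustration of the DPO workflow described above, here is a minimal sketch using Hugging Face’s TRL library. Again, this is not the course’s lab code: the base model, the toy preference data, and the hyperparameters are placeholder assumptions, and exact argument names differ slightly between TRL versions.

```python
# A minimal direct preference optimization (DPO) sketch with Hugging Face TRL.
# Illustrative only: the model, toy data, and hyperparameters are placeholders,
# and argument names vary slightly across TRL versions.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "gpt2"  # placeholder policy model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

policy = AutoModelForCausalLM.from_pretrained(model_name)     # model being tuned
reference = AutoModelForCausalLM.from_pretrained(model_name)  # frozen reference policy

# Toy preference data in the "prompt" / "chosen" / "rejected" format:
# DPO optimizes the policy to prefer "chosen" over "rejected" directly,
# without training a separate reward model.
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?", "What is 2 + 2?"],
    "chosen": ["The capital of France is Paris.", "2 + 2 equals 4."],
    "rejected": ["France is a large country.", "2 + 2 equals 22."],
})

config = DPOConfig(
    output_dir="dpo_model",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    beta=0.1,  # strength of the implicit KL constraint toward the reference model
)

trainer = DPOTrainer(
    model=policy,
    ref_model=reference,
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # called processing_class in newer TRL releases
)
trainer.train()
```

Unlike PPO, DPO needs no separate reward model or sampling loop: the preference pairs and the frozen reference model define the training objective directly.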
What's included
10 videos, 5 readings, 3 assignments, 2 app items, 3 plugins