Machine learning and AI projects involve managing diverse data sources and large data volumes, developing models and tuning parameters, and running numerous test and evaluation experiments. Overseeing and tracking all of these moving parts can quickly become overwhelming.
Evaluating and Debugging Generative AI
Instructor: Carey Phelps
Sponsored by Louisiana Workforce Commission
What you'll learn
- Evaluate programs that use LLMs, as well as generative image models, with platform-independent tools
- Instrument a training notebook, adding tracking, versioning, and logging (a minimal sketch follows this list)
- Implement monitoring and tracing of LLM behavior over time in complex, multi-step interactions (a second sketch below illustrates one approach)
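
To make the second objective concrete, here is a minimal sketch of instrumenting a training loop with experiment tracking. It assumes the Weights & Biases library (wandb) purely as an illustration; the course text promises only platform-independent tools, so the project name, config values, and metrics below are all hypothetical stand-ins.

```python
# Minimal experiment-tracking sketch. Assumes wandb (pip install wandb);
# any tracker with init/log semantics would work similarly.
import random
import wandb

# Start a tracked run; the config is versioned alongside the run.
# Offline mode lets the sketch execute without an account.
run = wandb.init(
    project="generative-ai-debugging",  # hypothetical project name
    config={"lr": 3e-4, "epochs": 5, "batch_size": 32},
    mode="offline",
)

for epoch in range(run.config.epochs):
    # Stand-in for a real training step; replace with your model code.
    train_loss = 1.0 / (epoch + 1) + random.uniform(0.0, 0.05)
    val_loss = train_loss + random.uniform(0.0, 0.1)

    # Log per-epoch metrics; the tracker stores them with the run history.
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_loss": val_loss})

run.finish()
```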
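The tracing objective can be approximated with nothing more than structured logs. The stdlib-only sketch below is a hypothetical illustration, not the course's actual tooling: call_llm, traced_call, and traces.jsonl are invented names, and call_llm is a stand-in for a real model API.

```python
# Minimal sketch of tracing LLM calls across a multi-step interaction.
import json
import time
import uuid

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with your client of choice."""
    return f"echo: {prompt}"

def traced_call(prompt: str, trace_id: str, step: int,
                log_path: str = "traces.jsonl") -> str:
    start = time.time()
    response = call_llm(prompt)
    record = {
        "trace_id": trace_id,  # groups the steps of one interaction
        "step": step,
        "prompt": prompt,
        "response": response,
        "latency_s": round(time.time() - start, 4),
        "timestamp": time.time(),
    }
    # Append one JSON record per call so traces can be replayed later.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response

# One multi-step interaction shares a trace_id, so its steps can be
# reassembled and inspected over time.
trace_id = str(uuid.uuid4())
first = traced_call("Summarize the report.", trace_id, step=0)
second = traced_call(f"Refine this summary: {first}", trace_id, step=1)
```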
Details to know
- Learn, practice, and apply job-ready skills in less than 2 hours
- Receive training from industry experts
- Gain hands-on experience solving real-world job tasks
How you'll learn
Hands-on, project-based learning
Practice new skills by completing job-related tasks with step-by-step instructions.
No downloads or installation required
Access the tools and resources you need in a cloud environment.
Available only on desktop
This project is designed for laptops or desktop computers with a reliable Internet connection, not mobile devices.