Gen AI Foundational Models for NLP & Language Understanding
This IBM course teaches you how to implement, train, and evaluate generative AI models for natural language processing (NLP). You will explore NLP applications including document classification, language modeling, and language translation, along with the fundamentals of building small and large language models.
This course is part of multiple programs.
Instructor: Joseph Santarcangelo
Sponsored by University of Texas at Austin
2,446 already enrolled
(29 reviews)
What you'll learn
- Explain how one-hot encoding, bag-of-words, embeddings, and embedding bags convert words to features.
- Build and use word2vec models for contextual embedding.
- Build and train a simple language model with a neural network.
- Apply N-gram and sequence-to-sequence models to document classification, text analysis, and sequence transformation.
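To make the first objective concrete, here is a minimal sketch (plain Python with a made-up toy vocabulary; the course labs build these features with PyTorch utilities) of how one-hot encoding and bag-of-words turn words into numeric features:

```python
# Toy vocabulary -- an illustrative assumption, not course data.
vocab = ["the", "cat", "sat", "on", "mat"]
word_to_index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    """One-hot encoding: a vector with a single 1 at the word's index."""
    vec = [0] * len(vocab)
    vec[word_to_index[word]] = 1
    return vec

def bag_of_words(tokens):
    """Bag-of-words: the sum of one-hot vectors, i.e. per-word counts."""
    vec = [0] * len(vocab)
    for token in tokens:
        vec[word_to_index[token]] += 1
    return vec

print(one_hot("cat"))                                       # [0, 1, 0, 0, 0]
print(bag_of_words(["the", "cat", "sat", "on", "the", "mat"]))  # [2, 1, 1, 1, 1]
```

An embedding replaces these sparse vectors with learned dense ones, and an embedding bag averages (or sums) the embeddings of all tokens in a document in one step.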
Details to know
Add to your LinkedIn profile
5 assignments
Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV
Share it on social media and in your performance review
There are 2 modules in this course
Module 1
In this module, you will learn about one-hot encoding, bag-of-words, embeddings, and embedding bags. You will also cover neural networks and their hyperparameters, cross-entropy loss, and optimization, and then delve into language modeling with n-grams. The module includes hands-on labs on document classification with PyTorch and on building a simple language model with a neural network.
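As a rough illustration of the n-gram idea in this module, here is a minimal sketch (plain Python on a toy corpus, not the module's PyTorch labs) of a bigram language model estimated from counts:

```python
from collections import Counter

# Toy corpus -- an illustrative assumption, not course data.
corpus = "the cat sat on the mat the cat ate".split()

# Count bigrams and the contexts (all tokens that have a successor).
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def bigram_prob(prev, word):
    """P(word | prev), estimated by maximum likelihood from counts."""
    if contexts[prev] == 0:
        return 0.0
    return bigrams[(prev, word)] / contexts[prev]

# In this corpus, "the" is followed by "cat" in 2 of its 3 occurrences.
print(bigram_prob("the", "cat"))
```

A neural language model learns the same conditional distribution, but with dense embeddings and a trained network instead of raw counts, which lets it generalize to word sequences never seen in training.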
What's included
7 videos, 4 readings, 3 assignments, 2 app items, 1 plugin
Module 2
In this module, you will learn about the word2vec embedding model and its variants. You will also be introduced to sequence-to-sequence models and how they employ recurrent neural networks (RNNs) to process variable-length input sequences and generate variable-length output sequences. You will gain insight into encoder-decoder RNN models, their architecture, and how to build them using PyTorch. You will also learn to evaluate the quality of generated text using perplexity, precision, and recall. In hands-on labs, you will integrate pre-trained embedding models for text analysis and classification and develop a sequence-to-sequence model for sequence transformation tasks.
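To give a feel for the evaluation metric used in this module, here is a minimal sketch (plain Python; the probabilities shown are a made-up example) of how perplexity is computed from per-token probabilities:

```python
import math

def perplexity(token_probs):
    """Perplexity: the exponential of the average negative log-probability
    per token. Lower is better -- the model is less 'surprised' by the text."""
    n = len(token_probs)
    avg_nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to every token has perplexity ~4,
# equivalent to guessing uniformly among 4 words at each step.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Precision and recall, by contrast, compare the generated tokens against a reference text, measuring how many generated tokens are correct and how many reference tokens were recovered.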
What's included
6 videos, 4 readings, 2 assignments, 2 app items, 3 plugins
Offered by IBM