Gen AI Foundational Models for NLP & Language Understanding

This IBM course teaches you how to implement, train, and evaluate generative AI models for natural language processing (NLP). You will learn about NLP applications including document classification, language modeling, and language translation, along with the fundamentals of building small and large language models.
This course is part of multiple programs.
Instructors: Joseph Santarcangelo +1 more
2,371 already enrolled
(29 reviews)
Recommended experience: basic knowledge of Python and familiarity with machine learning and neural network concepts.
What you'll learn
Explain how to use one-hot encoding, bag-of-words, embedding, and embedding bags to convert words to features.
Build and use word2vec models for contextual embedding.
Build and train a simple language model with a neural network.
Utilize N-gram and sequence-to-sequence models for document classification, text analysis, and sequence transformation.
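As an illustration of the first objective, here is a minimal sketch (not course material, with a made-up toy vocabulary) of converting words to features with one-hot encoding and bag-of-words:

```python
import torch

# Toy vocabulary for illustration (hypothetical example data)
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}

def one_hot(word):
    # One-hot: a vector of zeros with a single 1 at the word's index
    vec = torch.zeros(len(vocab))
    vec[vocab[word]] = 1.0
    return vec

def bag_of_words(tokens):
    # Bag-of-words: the sum of one-hot vectors, i.e. per-word counts
    vec = torch.zeros(len(vocab))
    for t in tokens:
        vec[vocab[t]] += 1.0
    return vec

print(one_hot("cat"))                       # tensor([0., 1., 0., 0.])
print(bag_of_words(["the", "cat", "the"]))  # tensor([2., 1., 0., 0.])
```

Embeddings and embedding bags, covered in the course, replace these sparse vectors with dense, learned representations.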
Skills you'll gain
- N-Gram
- PyTorch torchtext
- Generative AI for NLP
- Word2Vec Model
- Sequence-to-Sequence Model
Details to know
Add to your LinkedIn profile
5 assignments
Build your subject-matter expertise
- Learn new concepts from industry experts
- Gain a foundational understanding of a subject or tool
- Develop job-relevant skills with hands-on projects
- Earn a shareable career certificate
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV
Share it on social media and in your performance review
There are 2 modules in this course
In this module, you will learn about one-hot encoding, bag-of-words, embeddings, and embedding bags. You will also gain knowledge of neural networks and their hyperparameters, cross-entropy loss, and optimization. You will then delve into the concept of language modeling with n-grams. The module also includes hands-on labs on document classification with PyTorch and building a simple language model with a neural network.
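To give a flavor of what the hands-on labs involve, here is a minimal sketch of a PyTorch document classifier built on an embedding bag and trained with cross-entropy loss. This is an illustrative example, not the course's lab code; the vocabulary size, dimensions, and data are made up.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 100, 16, 4  # hypothetical sizes

class DocClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # EmbeddingBag pools the embeddings of all tokens in a document
        self.embedding = nn.EmbeddingBag(VOCAB_SIZE, EMBED_DIM, mode="mean")
        self.fc = nn.Linear(EMBED_DIM, NUM_CLASSES)

    def forward(self, token_ids, offsets):
        return self.fc(self.embedding(token_ids, offsets))

model = DocClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Two documents flattened into one tensor; offsets mark where each begins
tokens = torch.tensor([1, 5, 9, 2, 4])   # doc 1: [1, 5, 9], doc 2: [2, 4]
offsets = torch.tensor([0, 3])
labels = torch.tensor([0, 2])

logits = model(tokens, offsets)          # shape: (2, NUM_CLASSES)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(logits.shape)  # torch.Size([2, 4])
```

The offsets-based input is how `nn.EmbeddingBag` handles variable-length documents without padding.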
What's included
7 videos · 4 readings · 3 assignments · 2 app items · 1 plugin
In this module, you will learn about the word2vec embedding model and its types. You will also be introduced to sequence-to-sequence models and how they employ recurrent neural networks (RNNs) to process variable-length input sequences and generate variable-length output sequences. You will gain insights into encoder-decoder RNN models, their architecture, and how to build them using PyTorch. The module also covers evaluating the quality of generated text using perplexity, precision, and recall. In hands-on labs, you will integrate pre-trained embedding models for text analysis or classification and develop a sequence-to-sequence model for sequence transformation tasks.
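The encoder-decoder idea described above can be sketched as follows. This is a simplified illustration (not the course's lab code) using GRU layers, with made-up vocabulary and hidden sizes:

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID = 50, 8, 16  # hypothetical sizes for illustration

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, src):
        _, hidden = self.rnn(self.emb(src))
        return hidden  # final hidden state summarizes the input sequence

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tgt, hidden):
        # The decoder is conditioned on the encoder's final hidden state
        output, hidden = self.rnn(self.emb(tgt), hidden)
        return self.out(output), hidden

enc, dec = Encoder(), Decoder()
src = torch.tensor([[3, 7, 1, 4]])   # variable-length input sequence
tgt = torch.tensor([[2, 9, 5]])      # output sequence of a different length
logits, _ = dec(tgt, enc(src))
print(logits.shape)                  # torch.Size([1, 3, 50]) — one logit vector per target token
```

Note how the input and output sequences have different lengths; the encoder's fixed-size hidden state is what bridges them.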
What's included
6 videos · 4 readings · 2 assignments · 2 app items · 3 plugins
Instructors
Offered by
Why people choose Coursera for their career
Frequently asked questions
You can complete this course in two weeks by spending four hours of study time per week.
Basic knowledge of Python and familiarity with machine learning and neural network concepts are recommended. Note that data set preprocessing and cleaning are not covered in this course.
This course is part of a specialization. Completing the specialization will give you the skills and confidence to pursue roles such as AI Engineer, NLP Engineer, Machine Learning Engineer, Deep Learning Engineer, and Data Scientist.