Gen AI Foundational Models for NLP & Language Understanding
Completed by Meredith V. Pollock
February 12, 2025
9 hours (approximately)
Meredith V. Pollock's account is verified. Coursera certifies their successful completion of Gen AI Foundational Models for NLP & Language Understanding.
What you will learn
Explain how one-hot encoding, bag-of-words, embeddings, and embedding bags transform text into numerical features for NLP models
Implement Word2Vec models using CBOW and Skip-gram architectures to generate dense word embeddings that capture the contexts in which words appear
Develop and train language models, progressing from statistical N-grams to feedforward neural network architectures
Build sequence-to-sequence models with encoder–decoder RNNs for tasks such as machine translation and sequence transformation
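As a minimal illustration of the first outcome above, the sketch below shows how one-hot vectors, a bag-of-words sum, and PyTorch's nn.EmbeddingBag each turn a tokenized sentence into numerical features. The toy vocabulary and sentence are assumptions for the example, not course material.

```python
import torch
import torch.nn as nn

# Toy vocabulary; a real pipeline would build this from a tokenized corpus.
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}
V = len(vocab)

def encode(tokens):
    return [vocab.get(t, vocab["<unk>"]) for t in tokens]

sentence = "the cat sat on the mat".split()
ids = torch.tensor(encode(sentence))

# One-hot: each token becomes a sparse V-dimensional indicator vector.
one_hot = nn.functional.one_hot(ids, num_classes=V).float()

# Bag-of-words: sum the one-hot vectors, discarding word order.
bow = one_hot.sum(dim=0)

# EmbeddingBag: looks up dense embeddings and pools them (mean by default),
# giving one fixed-size dense feature vector per sentence.
embedding_bag = nn.EmbeddingBag(num_embeddings=V, embedding_dim=8, mode="mean")
sentence_vec = embedding_bag(ids.unsqueeze(0))  # shape: (1, 8)

print(one_hot.shape, bow.shape, sentence_vec.shape)
```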
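For the Word2Vec outcome, a minimal sketch of the CBOW side, assuming a toy vocabulary and an illustrative (context, center) training pair; Skip-gram reverses the direction and predicts the context words from the center word.

```python
import torch
import torch.nn as nn

class CBOW(nn.Module):
    def __init__(self, vocab_size, embed_dim):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, embed_dim)
        self.linear = nn.Linear(embed_dim, vocab_size)

    def forward(self, context_ids):
        # Average the context word embeddings, then score every vocabulary
        # word as the candidate center word.
        avg = self.embeddings(context_ids).mean(dim=1)
        return self.linear(avg)

model = CBOW(vocab_size=6, embed_dim=16)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

# One toy training step: context ["the", "sat"] -> center "cat"
context = torch.tensor([[1, 3]])   # shape (batch=1, context window=2)
center = torch.tensor([2])

optimizer.zero_grad()
loss = loss_fn(model(context), center)
loss.backward()
optimizer.step()
```

Training on real text would repeat this step over (context, center) pairs extracted with a sliding window; the learned embedding matrix is the artifact of interest.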
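For the language modeling outcome, a sketch of a feedforward next-word predictor over a fixed-size N-gram context window; the dimensions and token ids are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NGramLM(nn.Module):
    def __init__(self, vocab_size, embed_dim, context_size, hidden_dim):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, embed_dim)
        self.fc1 = nn.Linear(context_size * embed_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context_ids):
        # Concatenate the embeddings of the (N-1) context words and
        # predict a distribution over the next word.
        flat = self.embeddings(context_ids).flatten(start_dim=1)
        return self.fc2(torch.tanh(self.fc1(flat)))

model = NGramLM(vocab_size=6, embed_dim=16, context_size=2, hidden_dim=32)
context = torch.tensor([[1, 2]])           # "the cat" -> predict next word
next_word_logits = model(context)          # shape: (1, vocab_size)
loss = nn.CrossEntropyLoss()(next_word_logits, torch.tensor([3]))
```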
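For the sequence-to-sequence outcome, a sketch of an encoder-decoder pair built from GRU layers with teacher forcing; the cell choice, sizes, and token ids are assumptions for illustration, not the course's implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src_ids):
        _, hidden = self.rnn(self.embed(src_ids))
        return hidden                      # summary of the source sequence

class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt_ids, hidden):
        # Teacher forcing: feed the gold target tokens, conditioned on the
        # encoder's final hidden state, and predict each next token.
        output, hidden = self.rnn(self.embed(tgt_ids), hidden)
        return self.out(output), hidden

encoder = Encoder(vocab_size=6, embed_dim=16, hidden_dim=32)
decoder = Decoder(vocab_size=6, embed_dim=16, hidden_dim=32)

src = torch.tensor([[1, 2, 3]])            # source token ids
tgt = torch.tensor([[4, 5]])               # target token ids
logits, _ = decoder(tgt, encoder(src))     # shape: (1, 2, vocab_size)
```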
Skills you will gain
- Recurrent Neural Networks (RNNs)
- Feature Engineering
- Generative AI
- Model Evaluation
- Data Preprocessing
- Embeddings
- Classification Algorithms
- PyTorch (Machine Learning Library)
- Transfer Learning
- Artificial Neural Networks
- Data Ethics
- Natural Language Processing