
Learner Reviews & Feedback for Generative AI with Large Language Models by DeepLearning.AI

4.8 stars · 2,684 ratings

About the Course

In Generative AI with Large Language Models (LLMs), you'll learn the fundamentals of how generative AI works and how to deploy it in real-world applications. By taking this course, you'll learn to:

- Deeply understand generative AI, describing the key steps in a typical LLM-based generative AI lifecycle, from data gathering and model selection to performance evaluation and deployment
- Describe in detail the transformer architecture that powers LLMs, how they're trained, and how fine-tuning enables LLMs to be adapted to a variety of specific use cases
- Use empirical scaling laws to optimize the model's objective function across dataset size, compute budget, and inference requirements
- Apply state-of-the-art training, tuning, inference, tools, and deployment methods to maximize the performance of models within the specific constraints of your project
- Discuss the challenges and opportunities that generative AI creates for businesses after hearing stories from industry researchers and practitioners

Developers who have a good foundational understanding of how LLMs work, as well as the best practices behind training and deploying them, will be able to make good decisions for their companies and more quickly build working prototypes. This course will support learners in building practical intuition about how to best utilize this exciting new technology.

This is an intermediate course, so you should have some experience coding in Python to get the most out of it. You should also be familiar with the basics of machine learning, such as supervised and unsupervised learning, loss functions, and splitting data into training, validation, and test sets. If you have taken the Machine Learning Specialization or Deep Learning Specialization from DeepLearning.AI, you'll be ready to take this course and dive deeper into the fundamentals of generative AI.
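
For readers unfamiliar with the workflow the description refers to, here is a minimal sketch (not taken from the course materials) of loading a pretrained LLM with the Hugging Face transformers library and running inference; the checkpoint name, prompt, and generation settings are illustrative assumptions.

```python
# Minimal sketch (not from the course labs): load a pretrained seq2seq LLM
# and generate text. The checkpoint name is an illustrative assumption.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "Summarize: Generative AI models learn statistical patterns from large text corpora."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```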

Top reviews

AB


Excellent; a lot of things covered. There are no words to describe how the complex topics are explained in such a simple manner. One suggestion is to include more hands-on labs with different kinds of tasks.

NF


Excellent course with engaging content and instructors. I have a much better understanding of what is going on under the hood of transformers and of generative AI as a field.


651 - 675 of 687 Reviews for Generative AI with Large Language Models

By ARJUN K

•

Jul 6, 2023

good one!

By Wayne

•

Dec 28, 2023

great

By Sandhya G

•

Feb 27, 2024

Good

By xingnan z

•

Jan 8, 2024

good

By Kamal M

•

Oct 19, 2023

good

By Lakshmi G

•

Sep 10, 2023

Good

By Olga C

•

Feb 12, 2024

I took this course because I like the Machine Learning Specialization by Andrew Ng very much. For me, that was really the gold standard for MOOCs. This LLM course was, unfortunately, a bit disappointing. The labs were underwhelming. To me, not really grading the labs looks like a lack of engagement on the part of the authors. I also had technical problems with the third lab -- I think there were some issues with hard-coded package versions -- and did not manage to run it at all! I also missed a forum to interact with fellow students and the instructors. I am still very thankful for this course; it is really hard to make one at this stage. I especially liked that the authors went into technical details and provided links to the original papers. I see this course as the germ of a future cool GenAI specialization. I am confident that the authors are capable of developing it further.

By Freddie K

•

Jan 31, 2024

"Intermediate" level in the sense that you perhaps need some basic understanding of machine learning, but this is definitely not a course that challenges you. You get a very high-level conceptual explanation of basic concepts (including things like LoRA and RLHF), but definitely no specifics at the implementation level. The assignment "Labs" consist of executing pre-written code in notebooks and seeing the resulting output. No coding of your own, and typically just making function calls to Hugging Face libraries, without actually seeing how the algorithms are implemented.

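For context on the LoRA calls the reviewer mentions, here is a hedged sketch of what a parameter-efficient fine-tuning setup with the Hugging Face peft library typically looks like; the checkpoint, hyperparameters, and target modules are illustrative assumptions, not the course's actual lab code.

```python
# Hedged sketch of a typical LoRA setup with the peft library; values are
# illustrative and not taken from the course's lab notebooks.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")  # assumed checkpoint

lora_config = LoraConfig(
    r=16,                        # rank of the low-rank update matrices
    lora_alpha=32,               # scaling factor applied to the update
    target_modules=["q", "v"],   # attention projections to adapt (T5 module names)
    lora_dropout=0.05,
    task_type=TaskType.SEQ_2_SEQ_LM,
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # shows the small fraction of weights that will be trained
```
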
By Abraham Y

•

Dec 26, 2023

Lots of theory with very little practice. You will not walk away from this course feeling confident that you know how to code any of it. The labs that are offered do not teach you much either. The instructors just tell you not to worry about how it works, and that it just works. The instructors need to add a whole lot more practice code to get you practicing the theory they teach. At this point, I am looking for where I can find that information, because theory without practice is pointless.

By Daniel E

•

Jul 28, 2023

The course material was all pretty superficial; the lectures never really delve into the nitty-gritty details. The labs require no coding, which is disappointing. That being said, it's a good overview of the current landscape. If you want to learn implementation and the way things work (the how rather than the what), you will probably be disappointed.

By Sai S

•

Sep 10, 2023

While the course content and organization were great, I had issues accessing the AWS labs (Week 2 and Week 3): I couldn't execute the Python notebook steps past the first few and got stuck. When I tried to resume by restarting the terminal, it reported invalid authentication, so I could not complete the labs and had to force a submission.

By Marty P

•

Aug 17, 2023

The videos in the course were helpful, with the exception of the lab videos.

I found those simply regurgitated what was already in the lab notes.

The labs themselves were only partially helpful due to the high-level code being used.

I would have actually preferred to go a bit lower-level in implementing a few pieces.

By Yuchen P

•

Jan 23, 2024

I think the course needs a better balance among its contents. For example, it spends a ton of effort talking about the different parameters in model inference, which are as simple as 1+1, but it touches only very lightly on how the transformer works.
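
To illustrate the kind of inference parameters the reviewer is referring to, here is a short hedged sketch of common sampling settings passed to Hugging Face's generate(); the checkpoint, prompt, and values are arbitrary examples, not the course's configuration.

```python
# Hedged sketch of common inference/sampling parameters; values are arbitrary.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")  # assumed checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

inputs = tokenizer("Write a one-sentence summary of what a transformer is.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # flatten or sharpen the next-token distribution
    top_k=50,            # keep only the 50 most likely tokens at each step
    top_p=0.9,           # nucleus sampling: smallest token set with cumulative probability 0.9
    max_new_tokens=50,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```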

By Deepak K G S

•

Aug 24, 2024

The concepts taught were at a very high level, and the instructors were good at covering them all. However, the downside is that none of it is covered in depth, so one may lose track of what exactly is being taught.

By Attyuttam S

•

Jul 26, 2024

The labs were just "run the modules"; there should have been assignments related to building and fine-tuning models. The labs could have asked us to write code rather than just run the blocks.

By 孙佳垚

•

Jul 31, 2024

This course helped me learn the basics of fine-tuning and aligning LLMs. However, the labs are simply demos rather than practice, and a lot of technical detail is omitted.

By Shay L

•

Oct 6, 2023

The lab parts do not make the student work or present a challenge; they only make the student run through someone else's code.

By Ahmed S E E

•

Sep 5, 2023

(+) Excellent info, presentation, and organization

(-) The practical part is not good (some hands-on exercises need to be added)

By Amlan P

•

Sep 18, 2023

Weeks 1 and 2 are great, but Week 3 isn't that exciting. I was expecting the course to be more technical.

By Jason M

•

Jul 30, 2023

Helpful introduction to LLMs, but I wish we had gotten the chance to go in-depth on implementation.

By Joel Ö

•

Jan 2, 2024

A lot of issues with the labs. I contacted support and waited a long time, but got no resolution.

By ATHARVA G

•

Jul 11, 2023

Not explained as clearly as you would expect from an Andrew Ng course.

By Nikita L

•

Aug 29, 2024

Not challenging and shallow; I wouldn't call it "intermediate".

By Bharat L

•

Mar 6, 2024

Not a very technical course, but it gives a decent overview.

By Hoss

•

Oct 10, 2023

Not too practical. Just a broad view of the subject.